Perceptions of Low-Cost Autonomous Driving - Tae Eun Choe, Xiaoshu Liu, Guang Chen, Weide Zhang, Yuliang Guo, and Ka Wai Tsoi
Pages 67-74

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.

From page 67...
... We also discuss perception modules for dynamic and stationary object detection, sensor fusion (using Dempster–Shafer theory), and virtual lane line and camera calibration.
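
The page names Dempster–Shafer theory as the basis for sensor fusion. As a generic illustration of Dempster's rule of combination, not the chapter's specific fusion pipeline, here is a minimal Python sketch; the camera and radar mass assignments and the two-class frame of discernment are hypothetical.

    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two mass functions (dict: frozenset -> mass) with Dempster's rule."""
        combined = {}
        conflict = 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass assigned to contradictory hypotheses
        if conflict >= 1.0:
            raise ValueError("sources are in total conflict")
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    # Hypothetical example: camera and radar evidence that a track is a vehicle.
    THETA = frozenset({"vehicle", "pedestrian"})  # full frame = "don't know"
    camera = {frozenset({"vehicle"}): 0.7, frozenset({"pedestrian"}): 0.1, THETA: 0.2}
    radar = {frozenset({"vehicle"}): 0.6, frozenset({"pedestrian"}): 0.1, THETA: 0.3}
    fused = dempster_combine(camera, radar)
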
From page 68...
... Reprinted courtesy of Baidu.
From page 69...
... illustrates the actual data distribution for different environments, and 2(b) shows the distribution of data after balanced data collection.
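
The balanced data collection the figure describes amounts to resampling so that rare driving environments are not swamped by common ones. A minimal sketch under that assumption; the environment tags and the target count per environment are hypothetical, not the chapter's actual categories.

    import random
    from collections import defaultdict

    def balance_by_environment(samples, target_per_env, seed=0):
        """Resample frames so each environment contributes ~target_per_env items.

        `samples` is a list of (frame_id, env_tag) pairs; env tags are
        hypothetical (e.g. "sunny", "rain", "night", "tunnel")."""
        rng = random.Random(seed)
        by_env = defaultdict(list)
        for frame_id, env in samples:
            by_env[env].append(frame_id)
        balanced = []
        for env, frames in by_env.items():
            if len(frames) >= target_per_env:
                balanced.extend(rng.sample(frames, target_per_env))    # downsample frequent environments
            else:
                balanced.extend(rng.choices(frames, k=target_per_env))  # oversample rare environments
        return balanced
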
From page 70...
... Reprinted courtesy of Baidu.
From page 71...
... Network Training and Optimization: Preprocessed images are transferred to a deep neural network for object detection and tracking, lane line and landmark detection, and other computer vision problems.
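
The page describes one deep network serving several perception tasks. Below is a minimal, hypothetical multi-task training step with a shared backbone, separate detection and lane heads, and a combined loss; the layer sizes, heads, and targets are placeholders, not the chapter's architecture.

    import torch
    import torch.nn as nn

    class MultiTaskPerceptionNet(nn.Module):
        """Shared convolutional backbone with separate heads for object
        classification and lane-line regression (shapes are illustrative)."""
        def __init__(self, num_classes=4, lane_points=32):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.det_head = nn.Linear(32, num_classes)   # object class logits
            self.lane_head = nn.Linear(32, lane_points)  # lane-line offsets

        def forward(self, x):
            feat = self.backbone(x)
            return self.det_head(feat), self.lane_head(feat)

    net = MultiTaskPerceptionNet()
    opt = torch.optim.SGD(net.parameters(), lr=1e-3)
    images = torch.randn(2, 3, 128, 128)        # placeholder preprocessed images
    cls_target = torch.tensor([1, 2])           # placeholder object labels
    lane_target = torch.randn(2, 32)            # placeholder lane offsets
    cls_logits, lane_pred = net(images)
    loss = (nn.functional.cross_entropy(cls_logits, cls_target)
            + nn.functional.mse_loss(lane_pred, lane_target))
    loss.backward()
    opt.step()
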
From page 72...
... Lane Detection: Among stationary objects, the lane is the key one for both longitudinal and lateral control. An "ego-lane" monitor guides lateral control, and any dynamic object in the lane determines longitudinal control.
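
One way to read the ego-lane rule above: lateral control follows the detected lane boundaries, while the nearest dynamic object inside those boundaries sets the longitudinal target. A small sketch under that reading; the boundary interface and the object positions are hypothetical.

    def nearest_in_lane_object(objects, left_boundary, right_boundary):
        """Return the closest dynamic object inside the ego-lane, or None.

        `objects` is a list of (lateral_offset_m, longitudinal_distance_m) in
        the ego-vehicle frame; the boundaries are callables mapping
        longitudinal distance to the lane edges' lateral offsets."""
        in_lane = [
            (lon, lat) for lat, lon in objects
            if left_boundary(lon) <= lat <= right_boundary(lon)
        ]
        return min(in_lane, default=None)

    # Hypothetical straight lane 3.5 m wide, centered on the ego vehicle.
    left = lambda lon: -1.75
    right = lambda lon: 1.75
    lead = nearest_in_lane_object([(0.3, 25.0), (4.0, 10.0)], left, right)
    # -> (25.0, 0.3): the object 25 m ahead governs longitudinal control;
    #    the object at 4.0 m lateral offset lies outside the ego-lane.
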
From page 73...
... calibration: At the factory, we estimate intrinsic and extrinsic camera parameters using fixed targets. However, the camera position changes over time, and therefore the parameters need to be updated frequently.
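
Once the intrinsics are fixed, re-estimating the extrinsic parameters can be posed as a pose-from-known-points problem. Below is a generic OpenCV solvePnP sketch under that assumption; the intrinsic matrix, distortion coefficients, and reference points are made up, and this is not the chapter's own calibration procedure.

    import numpy as np
    import cv2

    def estimate_extrinsics(object_points, image_points, K, dist_coeffs):
        """Re-estimate camera extrinsics (rotation, translation) from known 3-D
        reference points and their detected image projections, given intrinsics K."""
        ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
        if not ok:
            raise RuntimeError("PnP failed")
        R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
        return R, tvec

    # Hypothetical intrinsics and four coplanar reference points (meters / pixels).
    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)
    obj = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float32)
    img = np.array([[600, 400], [700, 398], [702, 300], [598, 302]], dtype=np.float32)
    R, t = estimate_extrinsics(obj, img, K, dist)
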
From page 74...
... CONCLUSION: We have shown perception algorithms for low-cost autonomous driving using a camera and radar. As deep neural networks are the key tool for solving perception issues, data collection and labeling have become more important tasks.

