LiDAR vs. Visual Perception: Which Will Dominate Automated Driving?

The field of automated driving is divided into two factions: the LiDAR faction and the pure visual perception faction.


The LiDAR faction combines mechanical LiDAR with millimeter-wave radar, ultrasonic radar, and multiple cameras to enable automated driving. LiDAR is a sensor that accurately measures the 3D position of an object, allowing it to map out the three-dimensional structure of a target.
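To make that geometry concrete, the sketch below converts a single idealized LiDAR return (a measured range plus the beam's azimuth and elevation angles) into a 3D point in the sensor's frame. The function name and coordinate conventions are illustrative assumptions, not any particular vendor's API:

```python
import math

def lidar_return_to_xyz(range_m, azimuth_rad, elevation_rad):
    """Convert one idealized LiDAR return (range + beam angles) to a 3D point.

    Assumes a sensor-centered frame: x forward, y left, z up.
    """
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

# Example: a return 20 m ahead, 10 degrees to the left, 2 degrees above the horizon.
point = lidar_return_to_xyz(20.0, math.radians(10), math.radians(2))
print(point)  # one point of the 3D point cloud that maps the target's structure
```

Repeating this conversion over millions of returns per second is what yields the dense point cloud that gives LiDAR its precise 3D picture of the scene.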


The vision faction believes that since a human can become a qualified driver using only visual information plus brain processing, a combination of cameras, deep neural networks, and computing hardware can achieve a similar effect. With at least 8 cameras covering 360°, the system's perception range is wider than a human's, and therefore safer.
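As a rough illustration of that architecture, the sketch below fans eight camera frames out to a detector and merges the results into one 360° view. Here `run_detector` is a hypothetical stand-in for a trained neural network, and the 45° sector layout is an assumption made for the example:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    bearing_deg: float  # direction of the object relative to the vehicle

def run_detector(frame, camera_heading_deg):
    """Hypothetical stand-in: a real system would run a trained neural
    network on each camera frame here (frame is unused in this stub)."""
    return [Detection("vehicle", camera_heading_deg)]

def perceive_360(frames):
    """Fuse detections from 8 cameras, each covering a 45-degree sector (8 * 45 = 360)."""
    detections = []
    for cam_index, frame in enumerate(frames):
        heading = cam_index * 45.0  # sector centers: 0, 45, ..., 315 degrees
        detections.extend(run_detector(frame, heading))
    return detections

# Example: 8 placeholder frames -> one unified 360-degree list of detections.
print(perceive_360([object()] * 8))
```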


Because the two technology routes each have their own advantages and disadvantages, the prevailing view in the industry is that a car capable of L2-and-above self-driving needs multiple sensors and a large amount of redundant design to ensure the safety and reliability of the product. Intelligent driving at the L2 level requires 9-19 sensors, including ultrasonic radar, long- and short-range radar, and surround-view cameras; the L3 level is expected to require 19-27, and may also need LiDAR plus high-precision navigation and positioning. After all, safety is the cornerstone and bottom line of automated driving: with safety guaranteed, the intermingling and winnowing of the various technology routes will push the probability of safe operation ever closer to 100%.
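The arithmetic behind that redundancy argument can be sketched directly: if sensors fail independently, the chance that all of them miss the same obstacle is the product of their individual miss rates. The numbers below are illustrative only, not measured sensor statistics:

```python
def miss_probability(per_sensor_miss_rates):
    """Probability that every redundant sensor misses the same obstacle,
    assuming the sensors fail independently (an idealized assumption)."""
    p = 1.0
    for rate in per_sensor_miss_rates:
        p *= rate
    return p

# Illustrative numbers only: camera, radar, and LiDAR each missing 1% of the time.
single = miss_probability([0.01])
fused = miss_probability([0.01, 0.01, 0.01])
print(f"single sensor miss rate: {single:.2%}")   # 1.00%
print(f"redundant trio miss rate: {fused:.6%}")   # 0.000100%
```

Real sensor failures are not fully independent (fog can degrade cameras and LiDAR at once), which is exactly why mixing dissimilar technologies, rather than duplicating one, is the redundancy strategy the industry favors.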

 
