Visual recognition, the eyes of the driverless car, is improving ever faster
Driverless systems are typically classified into five levels of autonomy, but every level involves environmental perception, planning and decision-making, and execution control. The main environmental-perception methods are visual recognition, millimeter-wave radar, and lidar.
The two fatal accidents involving Tesla vehicles in 2016, one in China and one in the United States, both occurred while the cars were in self-driving mode, and both were essentially caused by shortcomings in visual recognition technology.
In the US crash, the millimeter-wave radar on the Tesla was mounted too low to detect the high body of the crossing truck, while the camera should in principle have been able to detect it. During the drive, however, the fusion of the two sensors' outputs apparently failed, so the truck's position was never identified and the collision followed.
In the Chinese accident, the car ahead of the Tesla suddenly changed lanes, revealing a slow-moving road-maintenance vehicle. The gap closed rapidly, the millimeter-wave radar could not scan the offset vehicle at short range, and the camera captured only part of its body, so visual recognition failed to respond in time and the Tesla crashed into it.
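The fusion failure described in both accidents can be illustrated with a toy late-fusion rule. This is a hypothetical sketch, not Tesla's actual pipeline: the function name, confidence scores, and threshold are all illustrative assumptions. It shows how a conservative rule that requires both sensors to agree can report no obstacle at all when each sensor returns only a weak, partial detection.

```python
# Toy late-fusion sketch (illustrative only, not any real system):
# each sensor reports detections with a confidence score, and the
# fusion stage accepts only obstacles that BOTH sensors confirm.

def fuse_detections(radar_hits, camera_hits, threshold=0.5):
    """Keep only obstacles seen by both sensors above the threshold."""
    fused = []
    for obj, r_conf in radar_hits.items():
        c_conf = camera_hits.get(obj, 0.0)
        if r_conf >= threshold and c_conf >= threshold:
            fused.append(obj)
    return fused

# The failure mode in the accidents: radar misses the truck's high body,
# the camera sees only part of it, so the fused result contains nothing.
radar = {"truck": 0.2}    # radar mounted too low, weak return
camera = {"truck": 0.4}   # only part of the body in frame
print(fuse_detections(radar, camera))  # → [] : no obstacle reported
```

A rule this strict suppresses false alarms but, as the accidents show, it can also suppress a real obstacle that neither sensor sees clearly on its own.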
These two accidents are enough to show that engaging autopilot is very dangerous while visual recognition technology remains immature. By the same token, the importance of visual recognition to autopilot and driverless technology is self-evident.
The shift from static to dynamic targets is one of the biggest challenges for visual recognition in the automotive sector
Common applications of traditional visual recognition include text transcription, face recognition, and fingerprint recognition. These technologies share one characteristic: the target being recognized is static. In the automotive field, visual recognition differs from the traditional kind in both the content to be recognized and the performance required, since the targets are moving and the system must respond in real time.
Precisely because visual recognition in the automotive field must balance cost against performance while handling far more complex content, its challenges are especially prominent there.
Deep learning takes visual recognition to a higher level
Deep learning can be regarded as one of the biggest breakthroughs in artificial intelligence in recent years. With a good enough algorithm and a large enough sample size, detection accuracy can reach 99.9%, whereas traditional visual algorithms top out at roughly 93%. Integrating deep learning into the visual recognition system can therefore make driverless technology considerably more reliable.