Researchers at Incheon National University (INU) in South Korea have developed a deep learning-based detection system for autonomous vehicles (AVs). The system, which draws on the Internet of Things, is designed to improve the detection capabilities of AVs, including in unfavourable conditions.
AVs employ smart sensors such as LiDAR (Light Detection and Ranging) for a 3D view of their surroundings and depth information, RADAR (Radio Detection and Ranging) for detecting objects at night and in cloudy weather, and a set of cameras providing RGB images and a 360-degree view, collectively forming a comprehensive dataset known as a point cloud. However, current detection methods can suffer from diminished capabilities due to factors such as bad weather, unstructured roads, or occlusion.
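The fused sensor inputs described above can be pictured with a minimal data structure. The sketch below is purely illustrative; the field names and units are assumptions, not details from the INU system:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Hypothetical container for one fused sensor frame. Field names and
# units are illustrative, not taken from the INU system.
@dataclass
class SensorFrame:
    # LiDAR returns: (x, y, z) coordinates in metres plus reflectance
    point_cloud: List[Tuple[float, float, float, float]]
    # RADAR targets: (range in metres, azimuth in radians, radial velocity in m/s)
    radar_targets: List[Tuple[float, float, float]]
    # RGB images from the camera ring, keyed by camera position
    images: Dict[str, str] = field(default_factory=dict)

frame = SensorFrame(
    point_cloud=[(12.4, -1.8, 0.3, 0.55), (30.1, 4.2, 1.1, 0.20)],
    radar_targets=[(28.7, 0.14, -6.2)],
    images={"front": "front.png"},
)
print(len(frame.point_cloud))  # number of LiDAR points in this frame
```

A real system would hold decoded image arrays and calibrated coordinate transforms per sensor, but the grouping of LiDAR, RADAR, and camera data per timestamp is the essential idea.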
In order to overcome these shortcomings, an international team of researchers led by Professor Gwanggil Jeon from the Department of Embedded Systems Engineering at INU, Korea, has developed an Internet-of-Things-enabled, deep learning-based, end-to-end 3D object detection system.
“Our proposed system operates in real time, enhancing the object detection capabilities of autonomous vehicles, making navigation through traffic smoother and safer,” stated Prof. Jeon.
The proposed system is built on YOLOv3 (You Only Look Once), a widely used state-of-the-art deep-learning technique for 2D visual detection. The researchers first applied the model to 2D object detection and then modified YOLOv3 to detect 3D objects. Using both point cloud data and RGB images as input, the system outputs bounding boxes with confidence scores and labels for visible obstacles.
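The bounding boxes, confidence scores, and labels described above are typically produced by filtering raw detections by confidence and then applying non-maximum suppression (NMS), the standard YOLO-style post-processing step. The sketch below illustrates that general technique in 2D; the thresholds and data are illustrative and not taken from the paper:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def postprocess(detections, conf_thresh=0.5, iou_thresh=0.45):
    """Confidence filtering followed by greedy non-maximum suppression.
    detections: list of (box, confidence, label) tuples."""
    kept = []
    candidates = sorted((d for d in detections if d[1] >= conf_thresh),
                        key=lambda d: d[1], reverse=True)
    for box, conf, label in candidates:
        # Keep a box only if it does not heavily overlap an already-kept
        # box of the same label.
        if all(label != kl or iou(box, kb) < iou_thresh
               for kb, _, kl in kept):
            kept.append((box, conf, label))
    return kept

dets = [((0, 0, 10, 10), 0.9, "car"),
        ((1, 1, 11, 11), 0.8, "car"),        # overlaps the first -> suppressed
        ((50, 50, 60, 60), 0.7, "pedestrian"),
        ((0, 0, 5, 5), 0.3, "car")]          # below confidence threshold
print(postprocess(dets))  # two boxes survive: one car, one pedestrian
```

Extending this to 3D replaces the rectangle overlap with an oriented-3D-box overlap, but the filter-then-suppress structure is the same.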
The team assessed the system’s performance in experiments on the Lyft dataset, which consists of road information captured by 20 AVs travelling a predetermined route in Palo Alto, California, over a four-month period. The INU team say the results demonstrated that YOLOv3 exhibits high accuracy, surpassing other state-of-the-art architectures. Notably, the overall accuracies for 2D and 3D object detection were 96% and 97%, respectively.
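An overall detection-accuracy figure of this kind is commonly computed by matching predicted boxes against ground-truth boxes via intersection-over-union (IoU). The sketch below shows one simple, assumed version of such a metric; it is illustrative and not necessarily the exact evaluation protocol used in the study:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def detection_accuracy(preds, truths, iou_thresh=0.5):
    """Fraction of ground-truth boxes matched by a same-label prediction
    with IoU >= iou_thresh. Each prediction matches at most one truth."""
    matched, used = 0, set()
    for t_box, t_label in truths:
        for i, (p_box, p_label) in enumerate(preds):
            if i in used or p_label != t_label:
                continue
            if iou(p_box, t_box) >= iou_thresh:
                matched += 1
                used.add(i)
                break
    return matched / len(truths) if truths else 0.0

preds = [((0, 0, 10, 10), "car"), ((48, 50, 60, 61), "pedestrian")]
truths = [((1, 0, 10, 11), "car"), ((50, 50, 60, 60), "pedestrian"),
          ((80, 80, 90, 90), "cyclist")]
print(detection_accuracy(preds, truths))  # 2 of 3 ground-truth boxes matched
```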
Prof. Jeon explained the potential importance of this enhanced detection capability: “By improving detection capabilities, this system could propel autonomous vehicles into the mainstream. The introduction of autonomous vehicles has the potential to transform the transportation and logistics industry, offering economic benefits through reduced dependence on human drivers and the introduction of more efficient transportation methods.”
Furthermore, INU expects the presented work to drive research and development in various technological fields such as sensors, robotics, and artificial intelligence. Moving ahead, the team aims to explore additional deep-learning algorithms for 3D object detection, recognising that current development focuses largely on 2D images. In short, INU believes this study could pave the way for widespread adoption of AVs.