HUANG Yuanxian, LI Bijun, HUANG Qi, ZHOU Jian, WANG Lanlan, ZHU Jialin. Camera-LiDAR Fusion for Object Detection, Tracking and Prediction[J]. Geomatics and Information Science of Wuhan University. doi: 10.13203/j.whugis20210614

Camera-LiDAR Fusion for Object Detection, Tracking and Prediction

doi: 10.13203/j.whugis20210614
Funds:

The 14th Five-Year Plan National Key R&D Program "New Energy Vehicles" Key Special 2021 Annual Project (2021YFB2501100)

The National Natural Science Foundation of China (42101448).

  • A real-time and robust 3D dynamic object perception module is a key component of an autonomous driving system. This paper fuses a monocular camera and LiDAR to detect 3D objects. First, we use a convolutional neural network (CNN) to detect 2D bounding boxes in the image and generate 3D frustum regions of interest (ROIs) according to the geometric projection relation between the LiDAR and the camera. Then, we cluster the point cloud within each frustum ROI and fit the 3D bounding box of the object. After detecting 3D objects, we re-identify objects between adjacent frames using appearance features and the Hungarian algorithm, and propose a tracker management model based on a quad-state machine. Finally, a novel prediction model is proposed that leverages lane lines to constrain vehicle trajectories. Experimental results demonstrate that the algorithm is both effective and efficient: the whole pipeline takes only approximately 25 ms per frame, meeting real-time requirements.
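The two fusion steps outlined in the abstract — selecting the LiDAR points that fall inside a detected 2D box (the frustum ROI) and matching tracks to detections with the Hungarian algorithm — can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the intrinsic matrix `K`, the assumption that LiDAR points have already been transformed into the camera frame by the extrinsic calibration, and the use of SciPy's `linear_sum_assignment` as the Hungarian solver are all illustrative choices.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def frustum_roi_points(points_cam, K, box2d):
    """Return the LiDAR points (assumed already in the camera frame)
    whose image projection falls inside a 2D bounding box (x1, y1, x2, y2)."""
    pts = points_cam[points_cam[:, 2] > 0]   # keep points in front of the camera
    uv = (K @ pts.T).T                       # pinhole projection with intrinsics K
    uv = uv[:, :2] / uv[:, 2:3]              # perspective divide -> pixel coordinates
    x1, y1, x2, y2 = box2d
    inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & \
             (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    return pts[inside]


def associate(cost_matrix):
    """Optimal (Hungarian) assignment between existing tracks (rows) and
    new detections (columns), given an appearance-distance cost matrix."""
    rows, cols = linear_sum_assignment(cost_matrix)
    return list(zip(rows.tolist(), cols.tolist()))
```

The clustering and L-shape box fitting inside each frustum, and the quad-state tracker management, would build on these primitives.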

