Objectives Incomplete data in vehicle-mounted laser point clouds and the large number of overlapping objects across consecutive image frames pose great challenges to the extraction of continuous and complete road boundaries.
Methods To address these challenges, we propose a road boundary extraction and vectorization method that takes full advantage of both point clouds and panoramic images. First, initial road boundaries are extracted from the point clouds and the panoramic images separately. Then, the extracted road boundaries are accurately fused at the result level using an improved Snake model. The fusion procedure consists of three main steps: feature map generation, mathematical model formulation, and model solving. By fusing the road boundaries from the two data modalities, the model finally generates complete and continuous vectorized road boundaries.
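The abstract does not detail the improved Snake model; for context, the classical active-contour (Snake) energy that such formulations typically build on minimizes, over a parametric curve $v(s) = (x(s), y(s))$, an internal smoothness term plus an external term derived from a feature map (the specific improvements in this work are not given here):

$$
E(v) = \int_{0}^{1} \left[ \frac{1}{2}\left( \alpha \lvert v'(s) \rvert^{2} + \beta \lvert v''(s) \rvert^{2} \right) + E_{\mathrm{ext}}\big(v(s)\big) \right] \mathrm{d}s,
$$

where $\alpha$ and $\beta$ weight the curve's elasticity and rigidity, and $E_{\mathrm{ext}}$ attracts the curve toward boundary evidence in the fused feature map.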
Results The effectiveness of the proposed method is demonstrated on two typical urban scene datasets. Experiments show that the proposed method can effectively extract complete and continuous vectorized road boundaries with diverse structures and shapes, achieving precision, recall, and F1 scores of at least 95.43%, 89.27%, and 93.38%, respectively.
Conclusions Compared to single-data-source methods, the proposed multimodal data fusion method fully leverages the precise geometry of 3D point clouds and the rich texture of panoramic images. The method is robust to data incompleteness caused by occlusion and by overlapping objects in multi-frame images. Consequently, the extracted vectorized road boundaries are more accurate, complete, and smoother than those produced by single-source methods, and can directly support downstream applications such as high-definition map generation.