Objectives High-precision positioning and navigation services play a crucial role in emerging fields such as mobile robots, drones, and autonomous driving. Compared with visual-inertial algorithms, visual-inertial-LiDAR (VIL) fusion algorithms can exploit the spatial structure and texture information of the environment to achieve more robust pose estimation, but they still suffer from error accumulation in large-scale scenes. We therefore propose a global navigation satellite system (GNSS) precise point positioning (PPP)/vision/inertial/LiDAR tightly-coupled fusion algorithm (GVIL).
Methods First, the algorithm performs a joint initialization of the four sensors, unifying their spatial reference frames. Second, the raw vision, inertial, and LiDAR observations are combined with the dual-frequency ionosphere-free combination of GNSS pseudorange and carrier-phase observations to construct error factors. Finally, the algorithm achieves global pose estimation through factor graph optimization based on a keyframe strategy and sliding windows.
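For reference, a minimal sketch of the dual-frequency ionosphere-free (IF) combination and of a generic sliding-window factor graph objective corresponding to this step is given below. The notation (carrier frequencies \(f_1, f_2\), pseudorange \(P\), carrier phase \(L\), and residual terms \(r\)) is standard textbook form rather than the paper's exact definitions:

\[
P_{\mathrm{IF}} = \frac{f_1^2 P_1 - f_2^2 P_2}{f_1^2 - f_2^2}, \qquad
L_{\mathrm{IF}} = \frac{f_1^2 L_1 - f_2^2 L_2}{f_1^2 - f_2^2}
\]

\[
\hat{\mathcal{X}} = \arg\min_{\mathcal{X}} \Big\{ \left\| r_{\mathrm{prior}} \right\|^2
+ \sum \left\| r_{\mathrm{IMU}} \right\|_{\Sigma_{\mathrm{IMU}}}^2
+ \sum \left\| r_{\mathrm{vis}} \right\|_{\Sigma_{\mathrm{vis}}}^2
+ \sum \left\| r_{\mathrm{LiDAR}} \right\|_{\Sigma_{\mathrm{LiDAR}}}^2
+ \sum \left\| r_{\mathrm{GNSS}} \right\|_{\Sigma_{\mathrm{GNSS}}}^2 \Big\}
\]

Here \(\mathcal{X}\) denotes the states of the keyframes in the current sliding window, and each sum runs over the factors of the corresponding sensor attached to those states; the exact residual definitions and weightings are those of the proposed algorithm, not reproduced here.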
Results Vehicle-borne experimental results show that even under GNSS-constrained observation conditions, the proposed four-sensor tightly-coupled algorithm improves position estimation accuracy by more than 84% and attitude estimation accuracy by more than 66% compared with the VIL fusion algorithm.
Conclusions The results demonstrate that by fusing the raw observations from the four sensor types, the GVIL algorithm significantly enhances the accuracy, continuity, and reliability of pose estimation in complex environments, enabling continuous navigation.