LIAO Jianchi, LI Xingxing, FENG Shaoquan. GVIL: Tightly-Coupled GNSS PPP/Visual/INS/LiDAR SLAM Based on Graph Optimization[J]. Geomatics and Information Science of Wuhan University, 2023, 48(7): 1204-1215. DOI: 10.13203/j.whugis20230119

GVIL: Tightly-Coupled GNSS PPP/Visual/INS/LiDAR SLAM Based on Graph Optimization


    Abstract:
      Objectives  High-precision positioning and navigation services play a crucial role in emerging fields such as mobile robots, drones, and autonomous driving. Compared with visual/inertial fusion algorithms, visual/inertial/LiDAR fusion algorithms can exploit both the spatial structure and the texture information of the environment to achieve more robust pose estimation, but they still suffer from error accumulation in large-scale scenes. Therefore, we propose a tightly-coupled global navigation satellite system (GNSS) precise point positioning (PPP)/visual/inertial/LiDAR fusion algorithm (GVIL).
      Methods  First, a joint initialization of the four sensors unifies their spatial reference frames. Then, the raw visual, inertial, and LiDAR observations, together with dual-frequency ionosphere-free combinations of GNSS pseudorange and carrier-phase observations, are used to build the error factors (the standard ionosphere-free combination is written out after the abstract). Finally, accurate and robust global pose estimation is achieved through factor graph optimization based on keyframes and a sliding window.
      Results  The vehicle-borne experiments show that, even under constrained GNSS observation conditions, the proposed four-sensor tightly-coupled algorithm improves the position estimation accuracy by more than 84% and the attitude estimation accuracy by more than 66% compared with the visual/inertial/LiDAR (VIL) fusion algorithm.
      Conclusions  The results demonstrate that, by fusing the raw observations of the four sensors, the GVIL algorithm can significantly enhance the accuracy, continuity, and reliability of pose estimation in complex environments and achieve seamless navigation.
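A minimal sketch of two standard building blocks named in the Methods paragraph: the dual-frequency ionosphere-free (IF) combination below is the textbook form, and the sliding-window objective is a generic factor-graph cost; the residual and covariance symbols are introduced here for illustration and are not the paper's exact formulation.

$$
P_{\mathrm{IF}} = \frac{f_1^{2} P_1 - f_2^{2} P_2}{f_1^{2} - f_2^{2}}, \qquad
L_{\mathrm{IF}} = \frac{f_1^{2} L_1 - f_2^{2} L_2}{f_1^{2} - f_2^{2}}
$$

where $P_i$ and $L_i$ are the pseudorange and carrier-phase observations on frequency $f_i$; the combination eliminates the first-order ionospheric delay. Over the keyframe states $\mathcal{X}$ kept in the sliding window, a generic factor-graph objective then takes the form

$$
\hat{\mathcal{X}} = \arg\min_{\mathcal{X}} \left(
\sum_{k} \| \mathbf{r}_{\mathrm{IMU},k} \|^{2}_{\boldsymbol{\Sigma}_{\mathrm{IMU}}} +
\sum_{j} \| \mathbf{r}_{\mathrm{vis},j} \|^{2}_{\boldsymbol{\Sigma}_{\mathrm{vis}}} +
\sum_{l} \| \mathbf{r}_{\mathrm{LiDAR},l} \|^{2}_{\boldsymbol{\Sigma}_{\mathrm{LiDAR}}} +
\sum_{s} \| \mathbf{r}_{\mathrm{GNSS},s} \|^{2}_{\boldsymbol{\Sigma}_{\mathrm{GNSS}}}
\right)
$$

where each $\mathbf{r}$ is the residual of the corresponding factor (IMU preintegration, visual reprojection, LiDAR feature, and IF pseudorange/phase) and each $\boldsymbol{\Sigma}$ its covariance.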

     
