A Visual SLAM Method Coupling Adaptive Point-Line Features and an IMU

  • Abstract: Objectives: In indoor scenes with low or weak texture and poor illumination, visual-inertial SLAM achieves markedly better localization accuracy than purely visual SLAM. Nonetheless, most existing point-feature-based visual-inertial SLAM methods struggle to detect and track enough features in such scenes, and the prior measurements of the inertial measurement unit (IMU) are often underutilized, which degrades overall localization accuracy and robustness. To address these problems, a visual SLAM method coupling adaptive point-line features and an IMU is constructed. Methods: First, an adaptive-threshold FAST corner detection algorithm is designed to improve the robustness of feature point detection. In addition, the LSD line detector tends to produce short and fragmented segments, and illumination changes readily cause "over-extraction" or erroneous extraction of line features; accordingly, an adaptive line feature extraction algorithm is built on edge-detection binary images, and the extracted segments are filtered and clustered using the geometric properties of vanishing points. Next, the reprojection errors of the point and line features and the IMU pre-integrated pose estimates are combined in a loosely coupled manner to provide the front end with robust pose estimates and reliable initialization parameters. The back end then tightly couples the visual and IMU measurements into a unified nonlinear minimization residual function, which is optimized to obtain accurate frame poses. Results: The proposed method is validated on public benchmark datasets through ablation studies and qualitative and quantitative comparisons against several state-of-the-art (SOTA) visual-inertial SLAM methods. Conclusions: The results show that the proposed SLAM method improves average localization accuracy by at least 12% while maintaining strong robustness.
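As an illustration of the adaptive-threshold FAST detection described above, the sketch below derives the FAST threshold from simple image contrast statistics using OpenCV. The adaptation rule (threshold proportional to the gray-level standard deviation, clamped to [5, 40]) and the function name detectAdaptiveFast are illustrative assumptions, not the paper's exact scheme.

```cpp
// Minimal sketch of an adaptive-threshold FAST detector (OpenCV).
// Assumption: the threshold is tied to the frame's gray-level standard
// deviation; the paper's exact adaptation rule may differ.
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

std::vector<cv::KeyPoint> detectAdaptiveFast(const cv::Mat& gray)
{
    CV_Assert(gray.type() == CV_8UC1);

    // Estimate the global contrast of the frame.
    cv::Scalar mean, stddev;
    cv::meanStdDev(gray, mean, stddev);

    // Low-texture or dim frames get a lower threshold so that enough
    // corners survive; high-contrast frames get a stricter threshold.
    int threshold = static_cast<int>(std::round(0.3 * stddev[0]));
    threshold = std::max(5, std::min(threshold, 40));

    std::vector<cv::KeyPoint> keypoints;
    cv::FAST(gray, keypoints, threshold, /*nonmaxSuppression=*/true);
    return keypoints;
}
```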

     
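The vanishing-point-based filtering of line features can be sketched as a geometric consistency test: a segment is kept and assigned to a vanishing point only if the infinite line through the segment passes sufficiently close to that point. The 3-pixel distance threshold and the helper names below are hypothetical and only illustrate the kind of test the method relies on.

```cpp
// Sketch: grouping line segments by vanishing-point consistency.
// Segments supported by no vanishing point are labeled -1 and can be dropped.
#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

struct Segment { cv::Point2d p, q; };

// Perpendicular distance from a point to the infinite line through a segment.
static double pointLineDistance(const cv::Point2d& x, const Segment& s)
{
    const cv::Point2d d = s.q - s.p;
    const double len = std::hypot(d.x, d.y);
    if (len < 1e-9) return std::hypot(x.x - s.p.x, x.y - s.p.y);
    // 2D cross product of the segment direction with (x - p), divided by |d|.
    return std::abs(d.x * (x.y - s.p.y) - d.y * (x.x - s.p.x)) / len;
}

// For each segment, return the index of the supporting vanishing point, or -1.
std::vector<int> clusterByVanishingPoints(const std::vector<Segment>& segments,
                                          const std::vector<cv::Point2d>& vps,
                                          double maxDistPx = 3.0)
{
    std::vector<int> labels(segments.size(), -1);
    for (size_t i = 0; i < segments.size(); ++i) {
        double best = maxDistPx;
        for (size_t j = 0; j < vps.size(); ++j) {
            const double d = pointLineDistance(vps[j], segments[i]);
            if (d < best) { best = d; labels[i] = static_cast<int>(j); }
        }
    }
    return labels;
}
```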

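The tightly coupled back-end objective can be written, under a standard visual-inertial formulation (the notation below is generic, not the paper's exact symbols), as the sum of a marginalization prior, IMU pre-integration residuals, and point and line reprojection residuals over the state $\mathcal{X}$ (poses, velocities, biases, and landmark parameters):

$$
\min_{\mathcal{X}}\;
\left\|\mathbf{r}_{p}-\mathbf{H}_{p}\mathcal{X}\right\|^{2}
+\sum_{k\in\mathcal{B}}
\left\|\mathbf{r}_{\mathcal{B}}\!\left(\hat{\mathbf{z}}_{b_{k}b_{k+1}},\mathcal{X}\right)\right\|_{\mathbf{P}_{b_{k}b_{k+1}}}^{2}
+\sum_{(l,j)\in\mathcal{F}}
\rho\!\left(\left\|\mathbf{r}_{\mathcal{F}}\!\left(\hat{\mathbf{z}}_{l}^{j},\mathcal{X}\right)\right\|_{\mathbf{P}_{l}^{j}}^{2}\right)
+\sum_{(m,j)\in\mathcal{L}}
\rho\!\left(\left\|\mathbf{r}_{\mathcal{L}}\!\left(\hat{\mathbf{z}}_{m}^{j},\mathcal{X}\right)\right\|_{\mathbf{P}_{m}^{j}}^{2}\right)
$$

Here $\mathbf{r}_{p}$, $\mathbf{H}_{p}$ denote the marginalization prior, $\mathbf{r}_{\mathcal{B}}$ the IMU pre-integration residual between consecutive keyframes $b_{k}$ and $b_{k+1}$, $\mathbf{r}_{\mathcal{F}}$ and $\mathbf{r}_{\mathcal{L}}$ the reprojection residuals of point feature $l$ and line feature $m$ observed in frame $j$, and $\rho(\cdot)$ a robust kernel such as Huber. In practice such an objective is minimized over a sliding window with an iterative nonlinear least-squares solver.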
