A Deep-Feature-Based Method for Matching Thermal Infrared Images with Visible Images

  • Abstract: To address the difficulty of matching unmanned aerial vehicle (UAV) thermal infrared images with optical satellite images, a deep local feature matching method based on learning from a heterogeneous landmark dataset is proposed. First, a generative adversarial network is used to learn the gray-level distributions of thermal infrared and visible images, and a thermal infrared landmark dataset is synthesized for training the feature extraction model. Next, deep invariant features are learned from the dataset by a residual network combined with an attention mechanism. Finally, the invariant features are matched and purified to obtain correct matching points for each image pair. Experiments evaluated the performance of the method and compared it with KAZE, the detect-and-describe network, and the deep local features model. The results show that the proposed method adapts well to variations in gray level, texture, overlap rate, and geometry, and matches efficiently; it can thus provide support for UAV visual navigation.


    Abstract:
      Objectives  To address the problem of matching unmanned aerial vehicle (UAV) thermal infrared images with optical satellite images, a deep local feature matching method based on learning from a heterogeneous landmark dataset is proposed.
      Methods  First, the gray-level distributions of thermal infrared and visible images are learned by a generative adversarial network, and a landmark dataset of thermal infrared images is synthesized for training the feature extraction model. Second, deep invariant features are learned from this multi-modal landmark dataset by a residual network combined with an attention mechanism. Finally, correct matching points of image pairs are obtained by matching and purifying the invariant features.
      Results  The performance of the method was tested experimentally and compared with KAZE, the detect-and-describe network, and the deep local features model. The results show that the method adapts better to gray-level, texture, overlap-rate, and geometric variations, and matches more efficiently.
      Conclusions  The effectiveness of the method is demonstrated by multiple sets of experiments; it can therefore provide support for UAV visual navigation.
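The abstract does not detail the final matching-and-purification step. As an illustration only, the following minimal NumPy sketch performs mutual nearest-neighbour matching of descriptors with a ratio test, then purifies the matches with a RANSAC-style loop under a 2-D similarity-transform model (the transform model, thresholds, and function names are assumptions; the paper's actual purification strategy may differ):

```python
import numpy as np

def mutual_nn_match(da, db, ratio=0.9):
    """Mutual nearest-neighbour matching with a ratio test.

    da, db: (N, D) descriptor arrays; returns a list of index pairs (i, j).
    """
    da = da / np.linalg.norm(da, axis=1, keepdims=True)
    db = db / np.linalg.norm(db, axis=1, keepdims=True)
    sim = da @ db.T                      # cosine similarity matrix
    nn12 = sim.argmax(1)                 # best match a -> b
    nn21 = sim.argmax(0)                 # best match b -> a
    matches = []
    for i, j in enumerate(nn12):
        if nn21[j] != i:                 # keep only mutual matches
            continue
        row = np.sort(sim[i])[::-1]
        # Euclidean distance of unit vectors: d = sqrt(2 - 2*cos_sim)
        d1 = np.sqrt(max(2.0 - 2.0 * row[0], 0.0))
        d2 = np.sqrt(max(2.0 - 2.0 * row[1], 0.0)) + 1e-12
        if d1 / d2 < ratio:              # ratio test against 2nd-best
            matches.append((i, int(j)))
    return matches

def ransac_similarity(pa, pb, iters=500, thresh=3.0, rng=None):
    """RANSAC purification assuming a 2-D similarity transform pb ~ R*pa + t.

    pa, pb: (N, 2) matched point arrays; returns a boolean inlier mask.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(pa), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(pa), 2, replace=False)
        va, vb = pa[j] - pa[i], pb[j] - pb[i]
        na = np.linalg.norm(va)
        if na < 1e-9:
            continue
        # scale + rotation from the two sampled correspondences
        s = np.linalg.norm(vb) / na
        ang = np.arctan2(vb[1], vb[0]) - np.arctan2(va[1], va[0])
        c, si = s * np.cos(ang), s * np.sin(ang)
        R = np.array([[c, -si], [si, c]])
        t = pb[i] - R @ pa[i]
        err = np.linalg.norm(pa @ R.T + t - pb, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

In practice the purification step is often done with `cv2.findHomography(..., cv2.RANSAC)`; the hand-rolled loop above only makes the inlier-voting logic explicit.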

     

