Region-Feature-Based Multi-scale Fusion Method for Thermal Infrared and Visible Images

  • Abstract: Traditional fusion methods for thermal infrared and visible images yield low contrast and tend to lose or weaken edge details and target information. To address this, a multi-scale fusion method for thermal infrared and visible images that accounts for regional feature differences is proposed. First, an image segmentation approach combining an adaptive PCNN (pulse coupled neural network) model with two-dimensional Rényi entropy is used to partition the infrared and visible images into regions. The source images are then decomposed at multiple scales and in multiple directions with the nonsubsampled contourlet transform, and fusion rules tailored to the regional feature differences are applied to fuse the thermal infrared and visible images. Experimental results show that the method not only fuses the target features of the thermal infrared image effectively but also retains more of the rich background information of the visible image; the fused images have high contrast and outperform those of traditional fusion methods in both visual quality and objective evaluation.
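The segmentation stage relies on a pulse coupled neural network whose firing behaviour groups pixels with similar intensity and neighbourhood activity. The sketch below is a minimal, simplified PCNN in Python for illustration only: the linking kernel, the fixed parameters (beta, alpha_theta, v_theta, and so on) and the returned firing-count map are generic textbook choices, not the adaptive parameter selection or the two-dimensional Rényi entropy criterion described in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def pcnn_firing_map(img, iters=30, beta=0.2, alpha_l=1.0, v_l=1.0,
                    alpha_theta=0.2, v_theta=20.0):
    """Simplified pulse coupled neural network (PCNN).

    img is a single-channel float array normalized to [0, 1]; the returned
    map counts how often each neuron fired and can be thresholded into
    regions. Parameters are illustrative, not the paper's adaptive values.
    """
    s = img.astype(np.float64)
    # 3x3 linking kernel weighted roughly by inverse distance (common choice)
    w = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    f = s.copy()                    # feeding input: constant pixel stimulus
    l = np.zeros_like(s)            # linking input from neighbouring pulses
    y = np.zeros_like(s)            # binary pulse output
    theta = np.ones_like(s)         # dynamic threshold
    fire_count = np.zeros_like(s)
    for _ in range(iters):
        l = np.exp(-alpha_l) * l + v_l * convolve2d(y, w, mode="same")
        u = f * (1.0 + beta * l)    # internal activity: feeding modulated by linking
        y = (u > theta).astype(np.float64)
        # threshold decays over time and jumps wherever a neuron just fired
        theta = np.exp(-alpha_theta) * theta + v_theta * y
        fire_count += y
    return fire_count
```

Thresholding the firing-count map (for example, keeping the most frequently firing neurons) gives a coarse target/background partition; in the paper this thresholding and the PCNN parameters are driven by the two-dimensional Rényi entropy criterion, which the sketch does not reproduce.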

     

    Abstract: To overcome the shortcomings of existing algorithms, in which target information and edge details are easily lost and the contrast of the fused image is low, a novel fusion method for thermal infrared and visible images that combines region features with a multi-scale transform is proposed in this paper. Firstly, the source infrared and visible images are segmented using an adaptive pulse coupled neural network (PCNN) and two-dimensional Rényi entropy, and a joint segmentation map is obtained by a region joint operation. Then the source images are decomposed at multiple scales and in multiple directions by the nonsubsampled contourlet transform (NSCT). Fusion rules are designed in the NSCT domain according to the regional feature differences. Finally, the fused image is reconstructed by the inverse NSCT. Experimental results show that the proposed method can effectively fuse the infrared target features, preserve as much background information as possible, and achieve good contrast. It outperforms traditional methods in both subjective and objective evaluation.
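To make the decompose/fuse/reconstruct pipeline concrete, the following sketch fuses an infrared and a visible image with a region-dependent rule in a multi-scale domain. It is a simplification under stated assumptions: a Laplacian pyramid (via OpenCV) stands in for the NSCT, the binary target_mask is assumed to come from a segmentation step such as the PCNN sketch above, and the rule "keep IR coefficients inside target regions, larger-magnitude coefficient elsewhere" is only a stand-in for the paper's region-feature-based fusion rules.

```python
import cv2
import numpy as np

def build_laplacian_pyramid(img, levels=4):
    """Laplacian pyramid used here as a simple stand-in for the NSCT."""
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = []
    for i in range(levels):
        up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
        lp.append(gp[i] - up)
    lp.append(gp[-1])               # coarsest approximation band
    return lp

def fuse_region_aware(ir, vis, target_mask, levels=4):
    """Fuse 8-bit single-channel IR and visible images of the same size.

    target_mask is a binary map of IR target regions (e.g. from a PCNN
    segmentation); inside targets IR coefficients are kept, elsewhere the
    coefficient with the larger magnitude (a proxy for local energy) wins.
    """
    lp_ir = build_laplacian_pyramid(ir, levels)
    lp_vis = build_laplacian_pyramid(vis, levels)
    mask = target_mask.astype(np.float32)
    fused = []
    # detail bands: target mask or larger-magnitude coefficient decides
    for a, b in zip(lp_ir[:-1], lp_vis[:-1]):
        m = cv2.resize(mask, (a.shape[1], a.shape[0]),
                       interpolation=cv2.INTER_NEAREST)
        choose_ir = np.logical_or(m > 0.5, np.abs(a) >= np.abs(b))
        fused.append(np.where(choose_ir, a, b))
    # approximation band: keep IR inside targets, average elsewhere
    a, b = lp_ir[-1], lp_vis[-1]
    m = cv2.resize(mask, (a.shape[1], a.shape[0]),
                   interpolation=cv2.INTER_NEAREST)
    fused.append(np.where(m > 0.5, a, 0.5 * (a + b)))
    # reconstruct by collapsing the pyramid from coarse to fine
    out = fused[-1]
    for band in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(band.shape[1], band.shape[0])) + band
    return np.clip(out, 0, 255).astype(np.uint8)
```

Treating the coarsest approximation band separately (IR inside targets, average elsewhere) avoids the brightness bias that a pure max-magnitude rule would introduce in the low-frequency band; the paper's own rules additionally weight coefficients by regional feature differences, which this sketch does not attempt.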

     
