Remote Sensing Image Fusion Based on Low-Level Visual Features and PAPCNN in NSST Domain
Abstract

Objectives: To address the single-feature construction of activity measures in fusion rules and the subjectivity of parameter setting in the pulse coupled neural network (PCNN), this paper proposes a remote sensing image fusion method that combines low-level visual features with a parameter-adaptive PCNN (PAPCNN) in the non-subsampled shearlet transform (NSST) domain.

Methods: First, the panchromatic image and the luminance component Y of the multispectral image in YUV color space are decomposed by NSST into high- and low-frequency sub-bands. Second, a fusion rule based on low-level visual features is applied to the low-frequency sub-bands: a new activity measure is constructed by combining three low-level features, namely local phase congruency, local abrupt measure, and local energy. Then, the PAPCNN model is used to fuse the high-frequency sub-bands, with the multi-scale morphological gradient serving as the model's external input signal. Finally, the fused image is obtained by applying the inverse NSST and inverse YUV transforms in turn.

Results: Experimental results show that the proposed method performs well on remote sensing images from different platforms and with different ground features, and performs excellently on all evaluation indexes when compared with 11 other methods.

Conclusions: The proposed method better preserves the spatial and spectral information of the original images and can provide a fused image with complementary advantages.
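The processing chain described in the abstract can be outlined in code. The following is a minimal illustrative sketch, not the paper's implementation: a Gaussian low-pass split stands in for the NSST decomposition (no standard Python NSST library is assumed here), a local-energy rule stands in for the full three-feature activity measure, and an absolute-max rule stands in for the PAPCNN firing decision.

```python
import numpy as np

def gaussian_blur(img, k=5, sigma=1.5):
    """Separable-kernel Gaussian blur via direct convolution (stand-in for NSST low-pass)."""
    ax = np.arange(k) - k // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    kern = np.outer(g, g)
    kern /= kern.sum()
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + k, j:j + k] * kern).sum()
    return out

def rgb_to_y(rgb):
    """Luminance Y of the YUV color space (BT.601 coefficients)."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def fuse(pan, ms_y):
    """Fuse panchromatic image with the multispectral luminance component."""
    # Decompose each input into low- and high-frequency parts
    low_p, low_m = gaussian_blur(pan), gaussian_blur(ms_y)
    high_p, high_m = pan - low_p, ms_y - low_m
    # Low band: choose by local energy (one of the paper's three low-level features)
    e_p, e_m = gaussian_blur(low_p**2), gaussian_blur(low_m**2)
    low_f = np.where(e_p >= e_m, low_p, low_m)
    # High band: absolute-max selection stands in for the PAPCNN firing decision
    high_f = np.where(np.abs(high_p) >= np.abs(high_m), high_p, high_m)
    # Reconstruction (the paper applies inverse NSST, then inverse YUV)
    return low_f + high_f
```

In the full method, the fused Y component would replace the original luminance before the inverse YUV transform restores the multispectral color channels.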
Table 1 Details of the Five Groups of Images

| Satellite | Nadir GSD (PAN)/m | Nadir GSD (MS)/m | Main land-cover type | Launch date |
|---|---|---|---|---|
| QuickBird | 0.61 | 2.44 | Urban buildings | 2001-10-18 |
| SPOT-6 | 1.5 | 6 | Suburban farmland | 2012-09-09 |
| WorldView-2 | 0.45 | 1.88 | Mountainous terrain | 2009-10-08 |
| IKONOS | 1 | 4 | Water and wetland | 1999-09-24 |
| Pleiades | 0.5 | 2 | Roads | 2012-12-01 |
Table 2 Objective Evaluation of the Five Image Datasets
| Image | Method | IE | MI | AG | SF | SD | ERGAS | VIFF | Runtime/s |
|---|---|---|---|---|---|---|---|---|---|
| QuickBird | Curvelet | 3.988 | 7.977 | 1.505 | 3.208 | 2.338 | 10.673 | 0.587 | 6.712 |
| QuickBird | DTCWT | 4.005 | 8.010 | 1.541 | 3.318 | 2.438 | 11.224 | 0.591 | 1.739 |
| QuickBird | CNN | 3.812 | 7.625 | 1.524 | 3.288 | 4.214 | 24.836 | 0.444 | 354.199 |
| QuickBird | CSE | 3.739 | 7.479 | 1.495 | 3.284 | 4.338 | 26.453 | 0.332 | 2.052 |
| QuickBird | ASR | 3.799 | 7.599 | 0.822 | 2.025 | 2.297 | 10.998 | 0.470 | 423.17 |
| QuickBird | CSR | 3.944 | 7.888 | 1.149 | 2.632 | 2.268 | 10.363 | 0.561 | 574.712 |
| QuickBird | CSMCA | 4.019 | 8.037 | 1.330 | 2.962 | 2.385 | 10.626 | 0.631 | 4683.296 |
| QuickBird | RGF | 4.051 | 8.102 | 1.558 | 3.318 | 2.049 | 8.213 | 0.559 | 130.615 |
| QuickBird | MLGCF | 4.017 | 8.026 | 1.521 | 3.228 | 2.284 | 10.099 | 0.631 | 1178.951 |
| QuickBird | WLE-PAPCNN | 4.262 | 8.514 | 1.552 | 3.359 | 1.720 | 7.025 | 0.715 | 208.240 |
| QuickBird | EA-PAPCNN | 4.007 | 8.015 | 1.524 | 3.322 | 2.560 | 12.127 | 0.520 | 105.871 |
| QuickBird | Proposed | 4.308 | 8.616 | 1.598 | 3.382 | 1.268 | 5.442 | 0.899 | 287.332 |
| SPOT-6 | Curvelet | 3.620 | 7.200 | 0.763 | 1.562 | 2.001 | 11.129 | 0.550 | 7.058 |
| SPOT-6 | DTCWT | 3.622 | 7.213 | 0.787 | 1.666 | 2.036 | 11.415 | 0.564 | 1.939 |
| SPOT-6 | CNN | 3.273 | 6.546 | 0.776 | 1.631 | 3.526 | 16.812 | 0.337 | 333.146 |
| SPOT-6 | CSE | 3.601 | 5.931 | 0.677 | 1.447 | 1.991 | 10.730 | 0.641 | 2.604 |
| SPOT-6 | ASR | 3.600 | 7.202 | 0.350 | 0.925 | 1.920 | 10.650 | 0.461 | 400.027 |
| SPOT-6 | CSR | 3.533 | 7.066 | 0.475 | 1.194 | 1.937 | 10.985 | 0.501 | 636.415 |
| SPOT-6 | CSMCA | 3.584 | 7.168 | 0.590 | 1.398 | 1.970 | 11.010 | 0.609 | 4696.782 |
| SPOT-6 | RGF | 3.468 | 6.937 | 0.792 | 1.655 | 2.222 | 12.057 | 0.452 | 120.543 |
| SPOT-6 | MLGCF | 3.625 | 7.251 | 0.772 | 1.579 | 1.919 | 10.160 | 0.589 | 1179.240 |
| SPOT-6 | WLE-PAPCNN | 4.002 | 8.004 | 0.799 | 1.695 | 0.848 | 3.883 | 0.842 | 181.919 |
| SPOT-6 | EA-PAPCNN | 3.638 | 7.276 | 0.779 | 1.661 | 1.969 | 8.625 | 0.541 | 97.684 |
| SPOT-6 | Proposed | 4.021 | 8.042 | 0.809 | 1.712 | 0.630 | 3.441 | 0.958 | 281.141 |
| WorldView-2 | Curvelet | 5.385 | 10.771 | 2.667 | 4.778 | 8.608 | 4.095 | 0.563 | 7.467 |
| WorldView-2 | DTCWT | 5.409 | 10.818 | 2.802 | 5.040 | 8.707 | 4.146 | 0.594 | 2.378 |
| WorldView-2 | CNN | 5.209 | 10.417 | 2.796 | 5.037 | 16.035 | 6.942 | 0.519 | 348.949 |
| WorldView-2 | CSE | 5.083 | 10.166 | 2.784 | 5.019 | 16.337 | 9.168 | 0.471 | 2.135 |
| WorldView-2 | ASR | 5.685 | 11.370 | 1.362 | 2.961 | 7.203 | 3.273 | 0.723 | 420.006 |
| WorldView-2 | CSR | 5.360 | 10.720 | 1.977 | 3.804 | 8.605 | 4.047 | 0.530 | 545.683 |
| WorldView-2 | CSMCA | 5.386 | 10.771 | 2.450 | 4.560 | 8.786 | 4.116 | 0.553 | 4932.366 |
| WorldView-2 | RGF | 5.116 | 10.234 | 2.799 | 5.041 | 8.729 | 4.081 | 0.474 | 132.490 |
| WorldView-2 | MLGCF | 4.413 | 10.820 | 2.627 | 4.709 | 8.433 | 3.975 | 0.580 | 1163.904 |
| WorldView-2 | WLE-PAPCNN | 5.735 | 11.470 | 2.823 | 5.079 | 2.898 | 1.969 | 0.594 | 189.573 |
| WorldView-2 | EA-PAPCNN | 5.435 | 10.873 | 2.809 | 5.057 | 8.629 | 3.590 | 0.635 | 91.971 |
| WorldView-2 | Proposed | 5.758 | 11.516 | 2.839 | 5.102 | 3.059 | 1.482 | 0.809 | 288.734 |
| IKONOS | Curvelet | 5.289 | 10.579 | 3.735 | 7.082 | 5.584 | 7.312 | 0.605 | 6.595 |
| IKONOS | DTCWT | 5.191 | 10.607 | 3.806 | 7.284 | 5.777 | 7.571 | 0.624 | 1.650 |
| IKONOS | CNN | 5.295 | 10.590 | 6.678 | 7.256 | 8.500 | 12.189 | 0.701 | 375.058 |
| IKONOS | CSE | 5.241 | 10.481 | 3.676 | 7.272 | 9.940 | 14.293 | 0.608 | 3.246 |
| IKONOS | ASR | 5.220 | 10.440 | 2.063 | 5.039 | 4.582 | 5.541 | 0.601 | 444.017 |
| IKONOS | CSR | 5.230 | 10.460 | 2.650 | 5.698 | 5.568 | 7.210 | 0.591 | 554.677 |
| IKONOS | CSMCA | 5.284 | 10.568 | 3.305 | 6.684 | 6.283 | 8.209 | 0.609 | 4763.214 |
| IKONOS | RGF | 5.274 | 10.549 | 3.672 | 7.227 | 9.447 | 13.237 | 0.634 | 133.230 |
| IKONOS | MLGCF | 5.314 | 10.629 | 3.700 | 7.047 | 5.652 | 7.378 | 0.636 | 1155.392 |
| IKONOS | WLE-PAPCNN | 5.329 | 10.647 | 3.822 | 7.330 | 6.060 | 7.946 | 0.609 | 212.151 |
| IKONOS | EA-PAPCNN | 5.324 | 10.659 | 3.849 | 7.343 | 3.972 | 5.656 | 0.644 | 95.768 |
| IKONOS | Proposed | 5.378 | 10.756 | 3.874 | 7.378 | 4.326 | 6.167 | 0.685 | 310.367 |
| Pleiades | Curvelet | 4.089 | 8.177 | 1.276 | 2.582 | 2.484 | 8.004 | 0.453 | 6.741 |
| Pleiades | DTCWT | 4.116 | 8.231 | 1.330 | 2.720 | 2.571 | 8.206 | 0.458 | 1.789 |
| Pleiades | CNN | 4.522 | 9.043 | 1.350 | 2.732 | 1.932 | 6.370 | 0.657 | 358.731 |
| Pleiades | CSE | 3.586 | 7.172 | 1.265 | 2.608 | 4.707 | 16.998 | 0.148 | 2.105 |
| Pleiades | ASR | 3.929 | 7.858 | 0.679 | 1.773 | 2.341 | 8.183 | 0.446 | 416.783 |
| Pleiades | CSR | 4.037 | 8.073 | 0.911 | 2.120 | 2.416 | 7.875 | 0.454 | 618.300 |
| Pleiades | CSMCA | 4.191 | 8.383 | 1.094 | 2.343 | 2.384 | 7.477 | 0.611 | 4868.276 |
| Pleiades | RGF | 4.048 | 8.096 | 1.367 | 2.719 | 3.817 | 9.207 | 0.488 | 126.641 |
| Pleiades | MLGCF | 4.135 | 8.271 | 1.282 | 2.587 | 2.344 | 7.241 | 0.508 | 1152.555 |
| Pleiades | WLE-PAPCNN | 4.408 | 8.818 | 1.349 | 2.753 | 2.008 | 5.020 | 0.641 | 195.489 |
| Pleiades | EA-PAPCNN | 4.135 | 8.269 | 1.309 | 2.688 | 2.608 | 8.162 | 0.408 | 90.912 |
| Pleiades | Proposed | 4.523 | 9.047 | 1.427 | 2.872 | 1.404 | 3.907 | 0.915 | 293.282 |

Note: Bold indicates the best value; underline indicates the second-best value.
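For reference, several of the evaluation indexes in Table 2 can be computed along the following lines. These functions follow common literature definitions (a single-band form of ERGAS is shown); the exact implementations behind Table 2 may differ in detail, and MI and VIFF are omitted here.

```python
import numpy as np

def entropy(img, bins=256):
    """IE: Shannon entropy of the grey-level histogram (img values in [0, 1])."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 1))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def avg_gradient(img):
    """AG: mean magnitude of horizontal/vertical finite differences."""
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    return np.sqrt((gx**2 + gy**2) / 2).mean()

def spatial_frequency(img):
    """SF: combined row frequency and column frequency."""
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return np.sqrt(rf**2 + cf**2)

def ergas(fused, ref, ratio=4):
    """ERGAS for a single band (lower is better); ratio is the PAN/MS resolution ratio."""
    rmse = np.sqrt(np.mean((fused - ref) ** 2))
    return 100.0 / ratio * np.sqrt((rmse / ref.mean()) ** 2)
```

For IE, AG, SF, and VIFF, higher values indicate a better fusion result, while for SD (as used here) and ERGAS, lower values indicate smaller spectral distortion relative to the reference.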