Building Damage Assessment from Satellite Images Combining Global-Local Features and Dynamic Error Supervision

  • Abstract: After a disaster, rapidly and accurately assessing the extent and severity of the affected area is critical for subsequent rescue and reconstruction. Current deep learning methods for building damage assessment from remote sensing imagery suffer from insufficient modeling of feature differences, inadequate utilization of global-local features, and a lack of difficult-sample perception ability. To address these problems, a building damage assessment method for bitemporal remote sensing imagery is proposed, based on a global-local feature fusion and dynamic error supervision network (GLESNet). In the encoding stage, a shared-weight encoder extracts features from the bitemporal images, which are fed into a difference enhancement fusion module (DEFM) to enhance the differences between the features and obtain fused features. In the decoding stage, the fused features pass successively through a global-local feature fusion module (GLFFM) and a dynamic error-aware decoder (DEAD) to produce the assessment result, realizing decoding that balances global and local features together with difficult-sample-aware learning. Experiments on xBD, currently the largest global-scale high-resolution remote sensing imagery dataset for building damage assessment, show that GLESNet achieves an F1-score of 86.03% for building extraction, 75.20% for damage classification, and 78.45% overall, outperforming multiple comparison methods on the overall metrics. Transfer experiments and change detection experiments on the Ida-BD and LEVIR-CD datasets verify the generalization of GLESNet and its applicability to different tasks.
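For reference, the overall F1-score in the xBD challenge is defined as a weighted combination of the localization (building extraction) and damage-classification F1-scores, with weights 0.3 and 0.7; the figures reported above are consistent with this convention:

\[ \mathrm{F1}_{\mathrm{overall}} = 0.3\,\mathrm{F1}_{\mathrm{loc}} + 0.7\,\mathrm{F1}_{\mathrm{dmg}} = 0.3 \times 86.03\% + 0.7 \times 75.20\% \approx 78.45\% \]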


    Abstract: Objectives: After a disaster, it is essential to quickly and accurately assess the extent and severity of the affected area for subsequent humanitarian relief and reconstruction. Traditional damage assessment methods are constrained by time efficiency, labor cost, and site accessibility. In contrast, satellite images can quickly capture the actual conditions over wide disaster areas and have gradually become an important data source for building damage assessment. Automated building damage assessment from satellite images relies on deep learning, but current deep learning methods for this task face challenges such as insufficient modeling of feature differences, inadequate utilization of global-local features, and a lack of difficult-sample perception ability. Methods: To address these problems, a building damage assessment method based on a global-local feature fusion and dynamic error supervision network (GLESNet) is proposed. In the encoding stage, dual-temporal image features are extracted by a shared-weight backbone and fed into the difference enhancement fusion module (DEFM) to enhance the differences between the features, filter out spurious changes, and obtain the fused features. In the decoding stage, the fused features pass through the vertical and horizontal global-local feature fusion modules (GLFFM) and the dynamic error-aware decoder (DEAD) to fuse the global and local features and perceive difficult samples. Results: The proposed GLESNet achieves an F1-score of 86.03% for building extraction, 75.20% for damage classification, and 78.45% overall on xBD, the largest global-scale high-resolution satellite image dataset for building damage assessment. Conclusions: The quantitative evaluation and visualization results are better than those of other advanced comparison methods, and an ablation study verifies the effectiveness of each module. Transfer experiments and change detection experiments carried out on the Ida-BD and LEVIR-CD datasets verify the generalization of the proposed GLESNet to different data and tasks.
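To make the described pipeline concrete, below is a minimal PyTorch sketch of the data flow. The internals of DEFM, GLFFM, and DEAD are not specified in the abstract, so the module bodies here are simplified placeholders (plain convolutions plus an absolute-difference branch); only the overall structure (shared-weight bitemporal encoding, difference-enhanced fusion, global-local decoding, and separate localization and damage heads) follows the description. All class and parameter names are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the GLESNet data flow, assuming simplified stand-in modules.
import torch
import torch.nn as nn


class DEFM(nn.Module):
    """Difference enhancement fusion module (simplified stand-in)."""
    def __init__(self, channels: int):
        super().__init__()
        # Fuse concatenated pre/post features with their absolute difference.
        self.fuse = nn.Conv2d(channels * 3, channels, kernel_size=3, padding=1)

    def forward(self, f_pre: torch.Tensor, f_post: torch.Tensor) -> torch.Tensor:
        diff = torch.abs(f_post - f_pre)  # emphasize bitemporal change
        return self.fuse(torch.cat([f_pre, f_post, diff], dim=1))


class GLESNet(nn.Module):
    def __init__(self, in_ch: int = 3, feat_ch: int = 64, num_damage_classes: int = 5):
        super().__init__()
        # Shared-weight encoder: the same module is applied to both epochs.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.defm = DEFM(feat_ch)
        # Stand-ins for GLFFM (global-local fusion) and DEAD (error-aware decoding).
        self.glffm = nn.Conv2d(feat_ch, feat_ch, 3, padding=1)
        self.dead = nn.Conv2d(feat_ch, feat_ch, 3, padding=1)
        # Two output heads: building localization and per-pixel damage class.
        self.loc_head = nn.Conv2d(feat_ch, 1, 1)
        self.dmg_head = nn.Conv2d(feat_ch, num_damage_classes, 1)

    def forward(self, pre: torch.Tensor, post: torch.Tensor):
        f_pre, f_post = self.encoder(pre), self.encoder(post)  # weight sharing
        fused = self.defm(f_pre, f_post)
        decoded = self.dead(self.glffm(fused))
        return self.loc_head(decoded), self.dmg_head(decoded)


if __name__ == "__main__":
    net = GLESNet()
    pre = torch.randn(1, 3, 256, 256)   # pre-disaster image
    post = torch.randn(1, 3, 256, 256)  # post-disaster image
    loc, dmg = net(pre, post)
    print(loc.shape, dmg.shape)         # (1, 1, 256, 256), (1, 5, 256, 256)

This sketch omits the multi-scale skip connections and the dynamic error supervision loss that the full method presumably uses; it only illustrates how the two image epochs are encoded with shared weights, fused through a difference-aware module, and decoded into localization and damage predictions.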
