Abstract:
Objectives: After a disaster, it is essential to assess the extent and severity of damage in the affected area quickly and accurately to support subsequent humanitarian relief and reconstruction. Traditional damage assessment methods are constrained by time efficiency, labor cost, and accessibility. In contrast, satellite imagery can rapidly capture ground conditions across wide disaster areas and has gradually become an important data source for building damage assessment. Automated building damage assessment from satellite images relies on deep learning methods, but current methods face challenges such as insufficient modeling of feature differences, inadequate utilization of global and local features, and a limited ability to perceive difficult samples.
Methods: To address these problems, a building damage assessment method based on a global-local feature fusion and dynamic error supervision network (GLESNet) is proposed. At the encoding stage, dual-temporal image features are extracted by a shared-weight backbone and sent to a difference enhancement fusion module (DEFM), which enhances the differences between the features and filters out spurious changes to obtain fused features. At the decoding stage, the fused features are passed through vertical and horizontal global-local feature fusion modules (GLFFM) and a dynamic error-aware decoder (DEAD), which fuse global and local features and perceive difficult samples.
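To make the described pipeline concrete, the following PyTorch-style skeleton is a minimal sketch of how the components named above could be wired together. The internals of the backbone, DEFM, GLFFM, and DEAD are not specified here, so every module body below is a placeholder assumption for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DEFM(nn.Module):
    """Placeholder difference enhancement fusion module (internals assumed)."""
    def __init__(self, channels):
        super().__init__()
        # Fuse pre/post features plus their absolute difference; the real DEFM differs.
        self.fuse = nn.Conv2d(channels * 3, channels, kernel_size=1)

    def forward(self, f_pre, f_post):
        diff = torch.abs(f_post - f_pre)  # emphasize genuine change between the two dates
        return self.fuse(torch.cat([f_pre, f_post, diff], dim=1))

class GLESNetSketch(nn.Module):
    """Minimal encoder-decoder wiring of the pipeline named in the abstract."""
    def __init__(self, channels=64, num_damage_classes=4):
        super().__init__()
        # Shared-weight backbone: the same encoder processes both temporal images.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.defm = DEFM(channels)
        # GLFFM / DEAD stand-ins: the real modules fuse global and local features
        # and dynamically emphasize difficult samples; plain convs used here.
        self.glffm = nn.Conv2d(channels, channels, 3, padding=1)
        self.dead = nn.Conv2d(channels, num_damage_classes, 1)

    def forward(self, img_pre, img_post):
        f_pre = self.backbone(img_pre)    # pre-disaster features
        f_post = self.backbone(img_post)  # post-disaster features (shared weights)
        fused = self.defm(f_pre, f_post)  # difference-enhanced fusion
        fused = self.glffm(fused)         # global-local feature fusion (placeholder)
        return self.dead(fused)           # per-pixel damage logits

# Example: two 3-channel 256x256 temporal images -> damage map logits
pre = torch.randn(1, 3, 256, 256)
post = torch.randn(1, 3, 256, 256)
logits = GLESNetSketch()(pre, post)
print(logits.shape)  # torch.Size([1, 4, 256, 256])
```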
Results: The proposed GLESNet achieves an F1-score of 86.03% for building extraction, 75.20% for damage classification, and an overall F1-score of 78.45% on xBD, the largest global-scale high-resolution satellite imagery dataset for building damage assessment.
Conclusions: Both the quantitative evaluation and the visualization results surpass those of other state-of-the-art comparison methods, and an ablation study verifies the effectiveness of each module. Transfer and change detection experiments on the IdaBD and LEVIR-CD datasets confirm the generalization of the proposed GLESNet to different data and tasks.