Abstract:
Objectives: This paper addresses the prevailing challenges in remote sensing scene image classification, specifically those associated with exploiting heterogeneous data and achieving cross-domain classification. Conventional deep learning methods, while effective, are often limited by spatial scale and resolution, data sources, model assumptions, and the inherent diversity of scene data when transferring features and reusing models.
Methods: To overcome these obstacles, we adopt task-oriented alignment for unsupervised domain adaptation (ToAlign UDA), an approach originating in computer vision, to enhance cross-domain remote sensing scene image classification. We explain the principles and optimization mechanism of the algorithm and evaluate its classification performance through comparative experiments.
Results: In the experiments, ToAlign UDA is trained on the source-domain dataset and tested on three target datasets: NWPU-RESISC45, AID, and PatternNet. When the source and target domains are highly similar in spatial distribution, spectral characteristics, scale, and other properties, ToAlign UDA achieves overall classification accuracies of 95.16% on NWPU-RESISC45, 96.17% on AID, and 99.28% on PatternNet.
Conclusions: The results indicate that ToAlign UDA outperforms most scene classification algorithms in classification accuracy for remote sensing scene image analysis. It therefore holds significant potential for advancing remote sensing image classification, particularly where heterogeneous data must be exploited and cross-domain classification is required.