Abstract:
Objectives In land cover classification from multi-source remote sensing images, domain adaptation methods align images, or the features extracted from them, across the source and target domains. This improves the generalization ability of deep learning models and plays an important role in intelligent remote sensing image interpretation.
Methods A self-training domain adaptation method is proposed that corrects pseudo labels with uncertainty estimation based on information entropy; its core is an entropy uncertainty loss function for land cover classification across remote sensing images from different sources. First, a land cover classification model is pretrained on the labeled source-domain training set and applied to the unlabeled target-domain images to generate pseudo labels. Then, the model is further trained on these pseudo labels: the information entropy of each prediction is computed and used as an uncertainty estimate to correct the pseudo labels during self-training, yielding model weights better suited to the target-domain dataset. Finally, cross-domain classification experiments were conducted on three datasets: the WHU building change detection dataset, the ISPRS 2D semantic labeling contest dataset, and the Wuhan land cover classification dataset. A minimal sketch of one plausible reading of the entropy-weighted self-training step is given below.
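The abstract names the entropy uncertainty loss but does not specify it, so the following PyTorch-style sketch is an assumption about its form, not the authors' implementation: the per-pixel information entropy of the softmax prediction down-weights uncertain pseudo labels, and pixels above an entropy threshold are masked out entirely, which acts as the pseudo-label correction. All function names, the threshold value, and the weighting scheme are illustrative.

```python
import torch
import torch.nn.functional as F

def entropy_weighted_self_training_loss(logits, pseudo_labels, entropy_threshold=0.5):
    """Cross-entropy on pseudo labels, down-weighted by prediction entropy.

    logits:        (B, C, H, W) raw scores from the segmentation model
    pseudo_labels: (B, H, W)    argmax predictions from the pretrained model
    """
    probs = F.softmax(logits, dim=1)
    num_classes = logits.shape[1]
    # Per-pixel information entropy, normalized to [0, 1] by dividing by log(C).
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1) / torch.log(
        torch.tensor(float(num_classes))
    )
    # High-entropy (uncertain) pseudo labels contribute less; pixels above the
    # threshold are dropped from the loss, i.e. their pseudo labels are rejected.
    weights = (1.0 - entropy) * (entropy < entropy_threshold).float()
    per_pixel_ce = F.cross_entropy(logits, pseudo_labels, reduction="none")
    return (weights * per_pixel_ce).sum() / weights.sum().clamp(min=1.0)

# Illustrative self-training step on an unlabeled target-domain batch:
# model.eval()
# with torch.no_grad():
#     pseudo_labels = model(target_images).argmax(dim=1)
# model.train()
# loss = entropy_weighted_self_training_loss(model(target_images), pseudo_labels)
# loss.backward()
```

Because the weighting is computed from the model's own outputs, this scheme adds no modules or parameters to the segmentation network, consistent with the claim in the conclusions.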
Results Experimental results show that: (1) The proposed method improves the mean intersection over union (mIoU) and overall accuracy (OA) of the semantic segmentation network by 0.3%-3.1% and 1.2%-4.5%, respectively. (2) Compared with the traditional self-training method, the proposed method improves mIoU and OA by 0.1%-1.5%. (3) Compared with a recent uncertainty estimation method based on Kullback-Leibler divergence, the proposed method improves mIoU and OA by about 0.6% on average.
Conclusions The proposed method can further improve the performance of a trained segmentation model on target-domain images without requiring target labels. Moreover, it introduces no additional modules or parameters into the existing segmentation model.