Abstract:
Objectives The digital elevation model (DEM) serves as fundamental data infrastructure for urban modeling, disaster assessment, and high-fidelity three-dimensional visualization. As an input, DEM data can significantly enhance the accuracy and reliability of diverse geographic information system applications. To address the inherent limitations of traditional interpolation methods and standard deep learning models in terrain reconstruction, we develop a super-resolution generative adversarial network with adaptive deformable residual convolutions and a U-Net discriminator (SRDCGAN), which suppresses reconstruction artifacts and noise while simultaneously improving the precision of DEM reconstruction.
Methods The proposed SRDCGAN method features several technical innovations: (1) The feature extraction layer of the generator incorporates a multi-level residual-in-residual dense block (RRDB) coupled with an adaptive deformable residual convolution mechanism. The RRDB structure effectively mitigates grid artifacts and high-frequency noise by nesting dense connections within a high-level residual framework. (2) To capture complex geomorphic features, adaptive deformable convolutional layers are integrated into the network, allowing the model to adaptively adjust sampling offsets according to local terrain complexity and providing superior flexibility in learning irregular terrain structures. (3) The discriminator is redesigned as a U-Net architecture augmented with an attention block to facilitate multi-scale feature fusion and capture extensive global receptive fields.
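The abstract does not give implementation details for the deformable convolution mechanism; the following is a minimal numpy sketch of the core idea only, assuming the standard formulation in which each kernel tap of a 3×3 convolution samples the input at its regular grid position plus a learned fractional offset, resolved by bilinear interpolation. All function names and the fixed-offset inputs are illustrative, not taken from the paper.

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Bilinearly interpolate a 2-D array at fractional coordinates (y, x)."""
    h, w = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    # Clamp neighbor indices to the grid (border replication).
    y0c, y1c = np.clip(y0, 0, h - 1), np.clip(y0 + 1, 0, h - 1)
    x0c, x1c = np.clip(x0, 0, w - 1), np.clip(x0 + 1, 0, w - 1)
    return ((1 - dy) * (1 - dx) * img[y0c, x0c] + (1 - dy) * dx * img[y0c, x1c]
            + dy * (1 - dx) * img[y1c, x0c] + dy * dx * img[y1c, x1c])

def deformable_conv_pixel(img, weights, offsets, cy, cx):
    """One output pixel of a 3x3 deformable convolution: each of the nine
    kernel taps samples at its grid position plus a per-tap (dy, dx) offset.
    In the network these offsets are predicted per location; here they are
    passed in as fixed values for illustration."""
    out, k = 0.0, 0
    for i in (-1, 0, 1):
        for j in (-1, 0, 1):
            dy, dx = offsets[k]
            out += weights[k] * bilinear_sample(img, cy + i + dy, cx + j + dx)
            k += 1
    return out
```

With all offsets set to zero this reduces to an ordinary 3×3 convolution; nonzero offsets let the kernel deform its sampling footprint to follow irregular terrain structures such as ridgelines and gullies.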
Results The performance of SRDCGAN is rigorously validated across four distinct terrain datasets. Quantitative analysis indicates that, compared to the baseline super-resolution generative adversarial network (SRGAN) model, the mean values of the root mean square error and the mean absolute error of the proposed method are reduced by 0.3%-12.6% and 3.2%-14.5%, respectively. Furthermore, terrain derivative accuracy shows marked improvements, with the mean values of the slope error and the aspect error declining by 1.9%-18.3% and 2.7%-7.1%, respectively. Comparative evaluations demonstrate that the proposed SRDCGAN method effectively eliminates visual artifacts and preserves intricate topographic textures.
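The evaluation metrics named above (RMSE, MAE, and errors in the slope and aspect terrain derivatives) can be computed from a reconstructed DEM and its reference grid. The sketch below assumes the standard gradient-based slope and aspect definitions on a regular grid; the paper's exact derivative scheme (e.g. Horn's method) is not specified in the abstract, so this is illustrative rather than the authors' implementation.

```python
import numpy as np

def rmse(pred, ref):
    """Root mean square error between reconstructed and reference DEMs."""
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def mae(pred, ref):
    """Mean absolute error between reconstructed and reference DEMs."""
    return float(np.mean(np.abs(pred - ref)))

def slope_aspect(dem, cell=1.0):
    """Gradient-based slope and aspect (both in degrees) of a DEM grid
    with square cells of size `cell`. Aspect is measured clockwise from
    north; flat cells yield an arbitrary aspect value."""
    gy, gx = np.gradient(dem, cell)          # elevation change per unit distance
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    aspect = np.degrees(np.arctan2(-gx, gy)) % 360.0
    return slope, aspect
```

Slope and aspect errors are then obtained by applying `rmse` or `mae` to the derivative fields of the reconstructed and reference DEMs, which is how terrain-derivative fidelity is typically compared.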
Conclusions The experimental findings verify that the proposed SRDCGAN method establishes a robust mapping from low-resolution to high-resolution DEM data and shows strong generalization capabilities across diverse geographical regions.