Objectives Deep learning-based 3D point cloud semantic segmentation methods often overlook global contextual information and do not fully leverage the synergy between the local geometric structure, color information, and high-level semantic features of point clouds. It is essential to effectively capture the geometric structure, color variations, and semantic features of point clouds while retaining global contextual information.
Methods This paper proposes a point cloud semantic segmentation model that integrates local feature encoding and dense connectivity. First, a local feature extraction module enables the model to concurrently capture spatial geometric structure, color information, and semantic features. Second, a local feature aggregation module preserves the rich geometric information of the original point cloud, minimizing information loss during feature extraction. Finally, a dense connectivity module aggregates contextual semantic information and promotes synergy between low-level features and high-level semantic features.
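The dense connectivity idea described above can be illustrated with a minimal sketch: each layer receives the concatenation of all earlier layers' outputs, so low-level geometry and high-level semantics remain jointly available. This is a toy illustration with made-up sizes and a plain NumPy MLP, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w):
    # shared MLP: linear map followed by ReLU
    return np.maximum(x @ w, 0.0)

N, C = 1024, 16          # toy sizes: points, feature channels
x = rng.normal(size=(N, C))

# Dense connectivity: layer i takes the concatenation of the input
# and all previous layers' outputs as its input.
features = [x]
for _ in range(3):
    inp = np.concatenate(features, axis=1)        # concat all earlier outputs
    w = rng.normal(size=(inp.shape[1], C)) * 0.1  # toy layer weights
    features.append(mlp(inp, w))

dense_out = np.concatenate(features, axis=1)
print(dense_out.shape)   # (1024, 64): 16 input + 3 * 16 layer channels
```

The concatenation (rather than summation, as in residual connections) is what lets later layers see early geometric features unchanged alongside higher-level semantics.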
Results The proposed model is benchmarked on two large datasets, S3DIS and Semantic3D. The results show that the proposed model achieves an overall accuracy (OA) of 88.3% and a mean intersection over union (mIoU) of 71.8% on the S3DIS dataset, improving on the RandLA-Net baseline by 0.3% and 1.8%, respectively. On the Semantic3D dataset, it registers an OA of 94.9% and an mIoU of 77.8%, marking respective improvements of 0.1% and 0.4% over RandLA-Net.
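Both reported metrics follow from a per-class confusion matrix: OA is the fraction of correctly labeled points, and mIoU averages, over classes, the ratio of true positives to the union of predictions and ground truth. A minimal sketch (the function name `oa_and_miou` and the toy matrix are ours, not from the paper):

```python
import numpy as np

def oa_and_miou(conf):
    """OA and mean IoU from a KxK confusion matrix
    (rows = ground truth, columns = prediction)."""
    conf = np.asarray(conf, dtype=float)
    oa = np.trace(conf) / conf.sum()              # correct points / all points
    tp = np.diag(conf)                            # true positives per class
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp
    return oa, (tp / union).mean()

# toy 3-class confusion matrix
conf = [[50,  2,  3],
        [ 4, 40,  1],
        [ 2,  3, 45]]
oa, miou = oa_and_miou(conf)
print(round(oa, 3), round(miou, 3))   # 0.9 0.818
```

Note that mIoU penalizes both false positives and false negatives per class, which is why the mIoU gains (1.8% on S3DIS) can be larger than the OA gains even on the same predictions.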
Conclusions The proposed model effectively preserves local geometric and color information through local feature encoding. The local feature aggregation module refines point proximity along boundaries to align with similar feature domains, and the dense connections successfully integrate global context and key geometric features. Overall, the proposed model delivers more accurate semantic labels and a superior geometric feature representation, enhancing the precision of local segmentation.