Objectives Point clouds lack topological structure, so current deep learning semantic segmentation algorithms struggle to capture the geometric features implied in irregular points. In addition, point clouds reside in three-dimensional space and involve large amounts of data; naively enlarging the receptive field when extracting neighborhood information increases the number of model parameters and makes model training difficult.
Methods We propose a point cloud semantic segmentation model that is based on dilated convolution and takes elementary geometric features such as angles as the model input. First, during feature extraction, basic geometric features, including the relative coordinates, distance, and angle between each centroid and its neighboring points, are used as the model input to mine geometric information. Second, when building local neighborhoods, we extend the image dilated convolution operator to point cloud processing; the resulting point cloud dilated operator enlarges the receptive field without increasing the number of model parameters. Finally, the dilated convolution operator, the multi-geometric feature encoding module, and a U-Net architecture are combined into a complete point cloud semantic segmentation model.
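To make the two components concrete, the following minimal NumPy sketch illustrates one plausible realization: a dilated k-nearest-neighbor search that keeps every dilation-th neighbor from an enlarged candidate set, and a per-neighbor encoding of relative coordinates, distance, and an angle term. The function names (dilated_knn, geometric_features), the parameter values (k = 16, dilation = 2), and the specific angle definition (measured against the mean neighbor direction) are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def dilated_knn(points, k=16, dilation=2):
    # Dilated k-NN sketch: search k*dilation nearest neighbors, then keep
    # every dilation-th one, enlarging the receptive field at no parameter cost.
    diff = points[:, None, :] - points[None, :, :]      # (N, N, 3)
    dist2 = np.sum(diff ** 2, axis=-1)                  # pairwise squared distances
    order = np.argsort(dist2, axis=1)[:, 1:k * dilation + 1]  # drop the point itself
    return order[:, ::dilation]                          # (N, k) dilated neighbor indices

def geometric_features(points, neighbor_idx):
    # Per-neighbor geometric encoding: centroid coordinates, relative coordinates,
    # Euclidean distance, and an angle term. The angle is measured against the mean
    # neighbor direction here, which is an assumed definition for illustration only.
    centroids = points[:, None, :]                       # (N, 1, 3)
    neighbors = points[neighbor_idx]                     # (N, k, 3)
    rel = neighbors - centroids                          # relative coordinates
    dist = np.linalg.norm(rel, axis=-1, keepdims=True)   # (N, k, 1)
    mean_dir = rel.mean(axis=1, keepdims=True)           # (N, 1, 3)
    cos = np.sum(rel * mean_dir, axis=-1, keepdims=True) / (
        dist * np.linalg.norm(mean_dir, axis=-1, keepdims=True) + 1e-8)
    angle = np.arccos(np.clip(cos, -1.0, 1.0))           # (N, k, 1)
    # feature per (centroid, neighbor) pair: 3 + 3 + 1 + 1 = 8 channels
    return np.concatenate(
        [np.broadcast_to(centroids, rel.shape), rel, dist, angle], axis=-1)

# toy usage: 1024 random points, 16 neighbors with dilation rate 2
pts = np.random.rand(1024, 3).astype(np.float32)
idx = dilated_knn(pts, k=16, dilation=2)
feat = geometric_features(pts, idx)
print(feat.shape)   # (1024, 16, 8)

In an actual network these per-neighbor features would be fed to shared MLPs and aggregated, for example within a U-Net style encoder-decoder as described above.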
Results The results show that, compared with the traditional neighborhood structure, the dilated neighborhood structure increases overall accuracy (OA) by 1.4%. Compared with a model that uses only coordinates as input, the multi-geometric feature encoding module yields a 10.7% improvement. The final model combining the two proposed algorithms achieves a mean intersection over union and an OA of 91.2% and 68.2%, respectively.
Conclusions The dilated neighborhood structure can effectively extract point cloud information over a larger range without increasing the number of model parameters, and the multi-geometric feature encoding module maximizes the capture of shape information within the neighborhood.