Bus Passenger Flow Detection Model Based on Image Cross-Scale Feature Fusion and Data Augmentation

  • Abstract: To address the difficulty of detecting bus passengers whose images differ greatly in scale between the front and rear rows and are heavily occluded, an improved object detection model, YOLOv5s_P, is proposed. Built on the basic YOLOv5 architecture, the model replaces the path aggregation network (PANet) in YOLOv5 with a repeated weighted bidirectional feature pyramid network (BiFPN), whose weighting mechanism fuses features of different scales bidirectionally and strengthens the extraction of complex target features, thereby handling the large scale gap between passengers. To cope with occlusion, the Mixup data augmentation method is used to expand the training samples of occluded and overlapping images, improving the model's generalization ability and reducing missed detections caused by incomplete passenger images. To verify the performance of YOLOv5s_P, the model is applied to real bus scenes and compared with four other models: Faster R-CNN, SSD300, RetinaNet, and YOLOv5s. Experimental results show that, without affecting detection speed, YOLOv5s_P achieves a mean average precision of 96.9% and reduces the average missed detection rate by 3.43% compared with YOLOv5s, improving the accuracy of bus passenger flow detection.
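The weighted bidirectional cross-scale fusion described above can be illustrated with a minimal sketch of BiFPN-style fast normalized fusion, assuming a PyTorch setting; the module name, channel count, and activation choice are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Fast normalized fusion of two same-shaped feature maps, as in BiFPN.

    Each input receives a learnable non-negative weight; the output is the
    weighted average followed by a small convolution. Channel count and
    layer choices are illustrative assumptions.
    """

    def __init__(self, channels: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(2))  # one weight per input branch
        self.eps = eps
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.SiLU()

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        w = torch.relu(self.weights)           # keep the learned weights non-negative
        w = w / (w.sum() + self.eps)           # fast normalized fusion
        fused = w[0] * feat_a + w[1] * feat_b  # weighted cross-scale sum
        return self.conv(self.act(fused))

# Usage: fuse a shallow feature map with an upsampled deep feature map
# of the same shape, e.g. out = WeightedFusion(256)(shallow, upsampled_deep).
```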

     

    Abstract:
    Objectives To resolve the large scale gap between front- and rear-row passengers and the serious occlusion and overlap in bus passenger flow object detection, an improved object detection model, YOLOv5s_P, is proposed.
    Methods The PANet structure in the YOLOv5 model is replaced with a BiFPN structure that fuses feature maps of different scales with learnable weights, strengthening the extraction of complex target features. At the same time, the Mixup data augmentation method (see the sketch after this abstract) is used to enlarge the training samples of occluded and overlapping images, improving the model's generalization ability and reducing missed detections caused by incomplete passenger images. To verify the performance of the YOLOv5s_P model, it is compared with four other models, Faster R-CNN, SSD300, RetinaNet, and YOLOv5s, on bus passenger flow detection in real bus scenarios, in which the image sets are labeled on the upper body of each passenger rather than the head.
    Results Experimental results show that the mean average precision of the YOLOv5s_P model reaches 96.9% without affecting the detection speed, and the average missed detection rate is reduced by 3.43% compared with the YOLOv5s model, improving the detection accuracy of bus passenger flow.
    Conclusions Future research will integrate an attention mechanism to further improve detection accuracy and combine the model with a tracking algorithm to handle fluctuation in the counted number of bus passengers.
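The Mixup augmentation mentioned in Methods can be illustrated with a minimal sketch for detection-style samples, assuming NumPy arrays and the common convention of blending pixels while keeping the boxes of both images; the function name and Beta-distribution parameter are assumptions, not the paper's exact pipeline.

```python
import numpy as np

def mixup_detection(img_a, boxes_a, img_b, boxes_b, alpha: float = 1.5):
    """Blend two training images and keep both images' bounding boxes.

    img_a, img_b: float arrays of identical shape (H, W, 3).
    boxes_a, boxes_b: arrays of shape (N, 5) as [x1, y1, x2, y2, class].
    Returns the mixed image, the concatenated boxes, and the mixing ratio.
    """
    lam = np.random.beta(alpha, alpha)          # mixing ratio in (0, 1)
    mixed = lam * img_a + (1.0 - lam) * img_b   # pixel-wise blend of the two images
    boxes = np.concatenate([boxes_a, boxes_b], axis=0)  # keep all targets
    return mixed.astype(img_a.dtype), boxes, lam
```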

     
