Abstract:
With the rapid advancement of big data, cloud computing, and artificial intelligence, spatiotemporal intelligence has emerged as a new research frontier that is reshaping how natural phenomena and human activities are understood. Multi-line LiDAR, a key spatial data acquisition technology, provides high-precision 3D dynamic perception for surveying, intelligent transportation, and autonomous driving. To overcome the high latency and 2D semantic limitations of traditional systems, this paper proposes an integrated AIoT-LiDAR-3D scene fusion perception framework built on an edge-cloud-terminal collaborative architecture. By acquiring 3D spatiotemporal point clouds in real time and tightly coupling them with multi-source 3D models (point cloud, mesh, and vector models), the framework establishes a seamless mapping from physical space to a dynamic digital twin space. A quantitative evaluation in a typical urban intersection scenario with a 64-line LiDAR shows a total end-to-end latency of only 56.41 ms, comprising 4.56 ms for edge preprocessing, 28.12 ms for transmission, and 23.73 ms for cloud processing. This is well within the 100 ms single-frame acquisition cycle (10 Hz) and corresponds to an approximately 80% reduction relative to existing centralized architectures such as CMM, whose latencies range from 285 ms to 335 ms. Bandwidth stress tests from 20 Mbps to 100 Mbps further confirm robust real-time performance, with mean latencies stable between 48.0 ms and 78.3 ms. The resulting dynamic 3D scenes are measurable, computable, and interactive, providing precise decision support for refined management and dynamic optimization in environmental monitoring, resource management, and emergency response.
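As a quick sanity check on the figures above, the following minimal Python sketch (illustrative only; the variable names are ours, not from the paper's implementation) reproduces the latency-budget arithmetic: the three stage latencies summing to the reported end-to-end total, the bound against the 10 Hz frame cycle, and the reduction relative to the CMM baseline.

```python
# Stage latencies (ms) as reported for the 64-line LiDAR intersection test.
edge_preprocessing = 4.56   # edge-side point cloud preprocessing
transmission       = 28.12  # edge-to-cloud transmission
cloud_processing   = 23.73  # cloud-side fusion and scene update

total = edge_preprocessing + transmission + cloud_processing
print(f"End-to-end latency: {total:.2f} ms")  # 56.41 ms

frame_cycle_ms = 100.0  # single-frame acquisition cycle at 10 Hz
assert total < frame_cycle_ms, "pipeline must keep pace with the sensor frame rate"

# Reduction relative to the centralized CMM baseline (285-335 ms).
for baseline in (285.0, 335.0):
    reduction = 1.0 - total / baseline
    print(f"Reduction vs. {baseline:.0f} ms baseline: {reduction:.1%}")
# Prints ~80.2% and ~83.2%, consistent with the ~80% reduction reported.
```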