2021 Vol. 46, No. 2
It is imperative to prevent interregional transmission in the early stages of an epidemic, both to control the epidemic and to ensure socioeconomic stability. The premise of such an exercise is knowing the present and upcoming spatial distribution of existing cases. During the coronavirus disease 2019 (COVID-19) epidemic, researchers have used location-based services data to extract the origins and destinations of travelers and thus analyze the spatial distribution of the epidemic. However, these data can only provide the positions of travelers' long-term stays, not their short-term stops or the vehicles they take, which are also common spaces of transmission. Hence it is necessary to introduce online transportation data, such as route recommendations and train timetables, to characterize the routes taken by interregional travelers when evaluating the distribution of existing cases. We propose an approach to support risk evaluation of regional epidemic spread and regional transportation control, aiming to improve spatial governing capabilities in the face of an epidemic. It involves estimating outflow cases using recent population flows and previous comparable flows, projecting the probable routes travelers will take using online map route recommendations and flight/train timetables, locating short-term stops according to the projected routes, and thus formulating transportation restriction policies to lower further regional transmission. The key and distinct step of this approach is locating the potential stops of regional travelers, which is achieved by combining the proportions of transportation mode choice with a minimum-time strategy. The effectiveness and necessity of introducing probable routes are verified with active case data, population flow data, and transportation data from January 2020.
Results show that introducing anticipated short-term stops significantly improves how well population flow data fit the spatial distribution of active COVID-19 cases.
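The stop-location step above can be sketched as follows. This is an illustrative toy only, not the paper's implementation: the mode shares, case count, route times, and station names are all hypothetical, and in the real approach the candidate routes would come from online map route recommendations and timetables.

```python
# Illustrative sketch: split an estimated outflow of cases across transport
# modes by mode-share proportions, then pick the minimum-time route for each
# mode; that route's intermediate nodes are the anticipated short-term stops.

def allocate_by_mode(outflow_cases, mode_shares):
    """Distribute estimated outflow cases across transport modes."""
    return {mode: outflow_cases * share for mode, share in mode_shares.items()}

def min_time_route(routes):
    """Pick the candidate route with the smallest total travel time."""
    return min(routes, key=lambda r: r["time_h"])

mode_shares = {"rail": 0.55, "road": 0.30, "air": 0.15}   # hypothetical shares
cases_by_mode = allocate_by_mode(200, mode_shares)

rail_routes = [                                            # hypothetical routes
    {"time_h": 5.1, "stops": ["StationA", "StationB"]},
    {"time_h": 4.3, "stops": ["StationC"]},
]
best = min_time_route(rail_routes)
print(cases_by_mode["rail"], best["stops"])
```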
The outbreak of coronavirus disease 2019 (COVID-19) in early 2020 severely affected people's lives, safety, and economic development around the world. Built on huge geospatial information databases, location-based services can provide convenient, real-time, accurate, and comprehensive geographic information services to their users. They are therefore well suited to events such as emergency disaster relief, large-scale personnel tracking, and on-site personnel management and control in complex environments. Practice has shown that the advantages of location-based services played an irreplaceable role in the prevention and control of COVID-19 in the spring of 2020. We elaborate on China's application of location-based services during the COVID-19 epidemic from multiple aspects, including the BeiDou-based vehicular ad hoc network, basic epidemic prevention and control services based on ubiquitous positioning, and intelligent navigation robots participating in the fight against the epidemic. Analyzing and summarizing these applications and their technical methods deepens the understanding of location-based service technologies and provides solutions for emergency responses to major public health events in the future.
Coronavirus disease 2019 (COVID-19) has drawn public attention to public health emergency management. Based on the wave effect gradient field, the production-inducing gradient is proposed to explore the wave effect of one industry on others. Combining economic space field theory with the exploratory spatial data analysis (ESDA) method, this paper studies the industrial wave effect and spatial distribution of the pharmaceutical industry and develops effective strategies for public health emergencies accordingly. The economic space field analysis of China's pharmaceutical industry reveals the wave effect between the pharmaceutical industry and each industrial sector in the industrial economic space, and identifies the industries closely related to pharmaceutical production. The response capability of each province in the Chinese mainland to public health emergencies can be evaluated based on spatial aggregation types, and emergency strategies and suggestions for developing the pharmaceutical industry are put forward accordingly. Considering the industrial wave effect together with the industrial spatial distribution can not only promote the rational development of the pharmaceutical industry in all provinces, but also help enhance the provinces' response capability to public health emergencies.
With the worldwide outbreak of coronavirus disease 2019 (COVID-19), research on the epidemic has been increasing constantly. However, current research focuses more on prediction and analysis; studies of prevention and control measures remain at the statistical level, and their model parameters lack a spatiotemporal evolution description. This paper introduces the granularity of a discrete grid and the virtual/real nature of its boundary lines to describe, respectively, the tightness of physical isolation measures and the connectivity or isolation of adjacent spaces, and designs a medical reception and cure model on the discrete grid based on the spatial autocorrelation between medical bed admission capacity and the grid. Furthermore, the LSEIR (logistic-susceptible-exposed-infected-removed) epidemic model is used to construct a model of the artificial prevention and control measures of physical isolation and medical reception and cure on the discrete grid, which provides an effective method to analyze and assess the impacts of these measures on the spread, prevention, and control of the epidemic.
The original spatiotemporal evolution of the COVID-19 epidemic in Wuhan, China was simulated together with early epidemic data from the United States, Germany, Spain, and the United Kingdom. The experimental analysis of the Wuhan data shows that physical isolation measures have a very obvious effect on reducing the peak number of infected people, advancing the inflection point, and shortening the duration of the epidemic. Medical reception and cure measures can effectively reduce the peak number of infected people in the early stage of the epidemic, but have no significant impact on advancing the inflection point or shortening the epidemic's duration. The model can analyze and assess the impacts of physical isolation and medical reception and cure measures on the epidemic from both quantitative and qualitative perspectives, and shows high rationality and correctness.
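For intuition about the compartment dynamics underlying such models, a plain SEIR model can be integrated as below. This is a minimal sketch, not the paper's LSEIR grid model: the logistic term, discrete-grid isolation boundaries, and medical reception capacity are omitted, and all parameter values are invented for illustration.

```python
# Forward-Euler integration of the standard SEIR equations. Halving beta
# would mimic a physical-isolation measure: the infected peak drops.

def seir(S, E, I, R, beta, sigma, gamma, days, dt=0.1):
    """Integrate SEIR for `days` days with step `dt`; returns final state."""
    N = S + E + I + R
    steps = int(days / dt)
    for _ in range(steps):
        new_exposed   = beta * S * I / N * dt   # S -> E (contact infection)
        new_infected  = sigma * E * dt          # E -> I (end of incubation)
        new_recovered = gamma * I * dt          # I -> R (removal)
        S -= new_exposed
        E += new_exposed - new_infected
        I += new_infected - new_recovered
        R += new_recovered
    return S, E, I, R

# Hypothetical population of 10 million with 10,000 initial infections.
S, E, I, R = seir(9_990_000, 0, 10_000, 0,
                  beta=0.6, sigma=1/5, gamma=1/10, days=60)
print(round(I))
```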
Since the coronavirus disease 2019 (COVID-19) epidemic was brought under control in China, scientific research on the patterns of virus transmission has become essential for disease control, and the demand for precise, structured trajectories of individual cases is increasing. Given the highly unstructured nature of the spatiotemporal trajectory source strings retrieved from official websites, it is difficult to obtain precise trajectories efficiently by either hand-crafted methods or fully automated algorithms. To address this trade-off between efficiency and precision in trajectory extraction, a human-computer interactive (HCI) trajectory extraction and validation approach is proposed based on natural language processing (NLP). The source string is first analyzed by NLP, and coarse trajectories are identified and extracted automatically; the trajectories are then confirmed or edited by a user, after which other users validate the trajectories by voting on their correctness. The essential technologies of the approach are also investigated, including the trajectory location segmentation and combination algorithm, the trajectory quality evaluation algorithm, and the trajectory extraction and validation workflow. A comparative experiment taking the clustered native cases in Harbin during April as a study case was conducted to evaluate the effectiveness and practicability of the proposed approach. The results show that the efficiency of the proposed approach is roughly double that of the extraction method without NLP. The trajectory credibility evaluation also suggests that the HCI extraction method can effectively reduce the missing locations and wrong positioning of trajectories extracted by NLP alone by 26.34%.
Furthermore, the validation results suggest that 92.63% of trajectories were assessed as reliable, and that the incorrect trajectory nodes were mainly created by the NLP algorithm rather than by the hand-crafted method. According to the experimental results, our proposed approach can effectively improve the efficiency and quality of trajectory extraction. In addition, our prototype system can serve as a potential tool for epidemiological investigations to assist doctors and patients.
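A toy illustration of the automatic first pass only: pulling (date, place) pairs out of a free-text itinerary with a regular expression. The paper's pipeline uses full NLP analysis plus human confirmation and voting, which this sketch does not attempt to reproduce; the input sentence and its format are invented.

```python
# Coarse trajectory extraction from an unstructured itinerary string.
import re

def extract_trajectory(text):
    """Return a list of (date, place) tuples found in the source string."""
    pattern = r"On (\w+ \d+), visited ([A-Za-z ]+?)(?:;|\.)"
    return re.findall(pattern, text)

report = "On April 9, visited Central Mall; On April 10, visited City Hospital."
print(extract_trajectory(report))
```

In the full approach, these coarse nodes would then be confirmed or corrected interactively and validated by other users' votes.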
ICESat-2's laser data has the highest elevation accuracy to date, and its observation range covers the global land surface, so it can serve as basic data for a high-precision global ground elevation reference. Based on the ICESat-2/ATLAS global laser data product ATL08, this paper obtains ICESat-2 laser points on global land, studies a method of extracting global elevation control points based on an elevation reference and attribute parameters, and uses reference elevation data to verify their accuracy. The obtained laser points were verified against airborne laser data in the Shandong and Henan experimental fields in China. The root mean square errors (RMSE) were 1.11 m and 1.39 m, respectively, before filtering. After filtering with elevation reference and slope constraints, the RMSE were 0.69 m and 0.57 m, respectively, with corresponding data retention rates of 61.38% and 60.00%, which proves that the proposed method can effectively improve elevation accuracy while maintaining the data retention rate. Airborne laser data from experimental fields in the western, central, and eastern United States were used to verify the elevation control points; the RMSE of each field was less than 0.9 m, which proves that the proposed extraction method can be used to extract elevation control points worldwide. The method can automatically extract global elevation control points with high density and high precision, supporting the stereo mapping of domestic high-resolution satellites with few or no ground control points and assisting in evaluating the quality of DEM/DSM products.
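The accuracy bookkeeping in this kind of validation can be sketched as follows: compute the RMSE of laser-point elevations against a reference, refilter with a simple elevation-difference threshold (a stand-in for the paper's elevation-reference and slope constraints), and report the retention rate. The elevation differences below are synthetic.

```python
# RMSE before/after filtering, plus the data retention rate.
import math

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def filter_points(errors, max_abs_diff):
    """Keep points whose elevation difference is within the threshold."""
    return [e for e in errors if abs(e) <= max_abs_diff]

diffs = [0.2, -0.4, 0.1, 3.5, -0.3, 0.5, -2.8, 0.0]   # laser minus reference, meters
kept = filter_points(diffs, max_abs_diff=1.0)
retention = len(kept) / len(diffs)
print(round(rmse(diffs), 2), round(rmse(kept), 2), retention)
```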
3D reconstruction technology is widely used in digital elevation model production, robot navigation, augmented reality, autonomous driving, and other fields. The disparity map is an important representation in 3D reconstruction, and stereo matching is the most widely used technique for obtaining one. In recent years, with the development of hardware, datasets, and algorithms, stereo matching methods based on deep learning have received extensive attention and achieved great success. However, these works are mainly validated on close-range images, and evaluations on aerial remote sensing images are scarce. This paper reviews deep learning methods for stereo matching and selects five representative models: GC-Net (geometry and context network), PSM-Net (pyramid stereo matching network), GWC-Net (group-wise correlation stereo network), GA-Net (guided aggregation network), and HSM-Net (hierarchical deep stereo matching network). It applies them to an open-source street-scene dataset (KITTI2015) and two aerial remote sensing image datasets (München, WHU). The networks are analyzed, and the performance of deep learning stereo matching methods is discussed and compared with traditional methods. The experimental results reveal that most of the deep learning methods exceed classic semi-global matching and show a powerful generalization ability in cross-dataset transfer.
3D geometric tree models are of great interest to many applications, such as digital city and digital forestry. Of late, the light detection and ranging (LiDAR) technique has been extensively used to capture the geometric shapes of trees in outdoor scenes. Despite two decades of research, tree modeling algorithms and the resulting tree models are still far from satisfactory. In this paper, we review most of the mainstream tree modeling algorithms based on ubiquitous point clouds. These algorithms can be roughly classified into five categories: clustering-based, graph-based, a priori assumption-based, Laplacian-based, and lightweight expression-based methods. In each category, we analyze the strengths and challenges of the tree modeling algorithms. Afterwards, some possible tree modeling methods and strategies are given to overcome the potential limitations in detailed skeleton representation, robustness and scalability, level of details (LoDs) representation, and tree modeling evaluation. We finally propose a few suggestions for future research topics in tree modeling.
In existing methods of indoor 3D model reconstruction, the indoor navigation elements that act as space separators are usually regarded as indivisible structures. However, the shape difference between the two surfaces of one wall causes loss of detail in room extraction during indoor 3D reconstruction, as well as difficulties in extracting doors and windows. To solve this problem, this paper proposes refining the space separators. By refining one wall into two wall surfaces, a region growing algorithm is applied to obtain the corner points of the inner wall, yielding a refined expression of the interior. The point cloud densities of corresponding areas on the two wall surfaces are compared, so that obstacles blocking a wall surface do not affect the door and window extraction results. The results show that the proposed method can effectively extract indoor doors and windows, providing an important basis for the generation of navigation networks.
The Tibetan Plateau has one of the most concentrated wetland distributions in China; it has always been a region sensitive to global change, and its wetland distribution and changes are of great value to the study of changes in China's water resources and environment. Based on Landsat 8 OLI (operational land imager) imagery, we adopt object-oriented classification and manual interpretation to obtain the wetland distribution of the Tibetan Plateau in 2016, together with the 2008 wetland classification data and auxiliary elevation and watershed boundary data. The present distribution of wetlands and the wetland changes in the Tibetan Plateau from 2008 to 2016 were analyzed. The results show that: ① In 2016, the total wetland area of the Tibetan Plateau was 115 584 km², comprising 48 737 km² of lake wetland, 34 698 km² of marsh wetland, 15 927 km² of river wetland, 15 035 km² of flood wetland, and 1 188 km² of constructed wetland. ② From 2008 to 2016, the total wetland area of the Tibetan Plateau increased by 3 867 km², mainly from increases in lake, river, and flood wetlands and a decrease in marsh wetland. ③ Across different watersheds and altitudes, the distribution and variation of wetlands in the Tibetan Plateau showed significant regional differences. ④ Both temperature and precipitation in the Tibetan Plateau showed upward trends, which were positively correlated with the overall change of the wetlands; the change of glacier area in drainage basins had a certain correlation with wetland change; and human activities mainly played a negative role in the wetland change of the Tibetan Plateau. This paper provides useful support for the study of environmental change and wetland conservation in the Tibetan Plateau.
The lunar core is stratified into an outer core and an inner core, which can be estimated from the second-degree gravity field model and the selenophysical libration parameters from lunar laser ranging (LLR). We use the high-resolution lunar gravity field model GL1500E to evaluate the size and density of the lunar core, inverted through nonlinear particle swarm optimization. A large number of inversions indicate that the outer radius of the outer core is about 469 km with a corresponding density of 4 613 kg/m³, the radius of the inner core is around 303 km with a corresponding density of 7 004 kg/m³, and the density of the mantle is around 3 340 kg/m³, quite close to the geological value of 3 360 kg/m³. The radii of the inner and outer cores are also quite close to other recent studies, so the size and density of the lunar core found here can be considered meaningful. If the lunar core is composed of pure iron and ferrous sulfide, our study shows that the inner core is mainly composed of pure iron whereas the outer core is largely composed of ferrous sulfide.
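A compact particle swarm optimization sketch illustrates the inversion machinery. The actual inversion minimizes the mismatch between modeled and observed gravity and libration quantities; the quadratic stand-in misfit below (with its minimum placed at illustrative radius/density values) and all swarm parameters are assumptions for demonstration.

```python
# Minimal PSO: inertia + cognitive + social velocity update, seeded for
# reproducibility. Not the paper's implementation.
import random

def pso(objective, bounds, n_particles=30, iters=200, seed=1):
    random.seed(seed)
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest = min(zip(pbest_val, pbest))[1][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, pos[i][:]
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest

# Stand-in misfit with its minimum at (radius=469 km, density=4613 kg/m^3).
misfit = lambda p: (p[0] - 469) ** 2 + ((p[1] - 4613) / 10) ** 2
best = pso(misfit, [(300, 600), (3000, 8000)])
print(round(best[0]), round(best[1]))
```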
An on-board atomic clock is affected by factors such as harsh space conditions and equipment aging, so there are often outliers in satellite clock error data, among which the additive outlier (AO) is a common class in clock error sequences. Based on the autoregressive moving average (ARMA) model, this paper proposes an AO detection algorithm that can accurately detect not only isolated AOs but also patches of AOs, overcoming the swamping and masking phenomena that often occur in other algorithms. Once the AOs of the clock error sequence are successfully detected, the algorithm can obtain an accurate ARMA model and then accurately predict the satellite clock error. To analyze the outlier detection and prediction performance of the algorithm, BeiDou satellite clock error data are used for verification. The results show that the proposed algorithm can accurately detect the AOs of the clock error sequence and performs well in short-term prediction of satellite clock errors.
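A naive residual-based detector shows both the idea and its weakness. The sketch fits an AR(1) model by least squares and flags points whose residual exceeds 3 sigma; the series and outlier are synthetic, and the paper's full ARMA algorithm with dedicated handling of outlier patches is not reproduced. Note that the plain residual test also flags the epoch after the AO (smearing), illustrating the kind of side effect the more careful method is designed to avoid.

```python
# AR(1) residual test for additive outliers on a synthetic clock series.
import random

def detect_ao(series):
    n = len(series)
    # Least-squares AR(1) coefficient: phi = sum(x[t]*x[t-1]) / sum(x[t-1]^2)
    num = sum(series[t] * series[t - 1] for t in range(1, n))
    den = sum(series[t - 1] ** 2 for t in range(1, n))
    phi = num / den
    resid = [series[t] - phi * series[t - 1] for t in range(1, n)]
    sigma = (sum(r * r for r in resid) / len(resid)) ** 0.5
    # Return the series indices whose one-step residual exceeds 3 sigma.
    return [i + 1 for i, r in enumerate(resid) if abs(r) > 3 * sigma]

random.seed(0)
clock = [0.0]
for _ in range(199):
    clock.append(0.9 * clock[-1] + random.gauss(0, 0.01))
clock[120] += 0.2   # inject one additive outlier at epoch 120
print(detect_ao(clock))
```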
The uncertainty of distance-dependent errors in global navigation satellite systems increases gradually with the distance between the reference stations and the rover station. Therefore, a dual-system network real-time kinematic (RTK) method combining the global positioning system (GPS) and the BeiDou navigation satellite system (BDS) is presented to meet the demand for high-precision long-range RTK positioning. First, the wide-lane ambiguities are fixed using multi-frequency GPS and BDS observations between long-range reference stations. Satellite clock errors are eliminated by the double-difference solution model, which simultaneously weakens atmospheric and satellite orbit errors. The double-difference carrier phase integer ambiguities are then fixed by a resolution model that includes the atmospheric error and carrier phase ambiguity. A classification method for error corrections within the long-range reference station network is used: the observation errors are classified according to their characteristics between the stations. The ionospheric and non-dispersive errors at the rover station are calculated using the reference stations' error corrections and regional error interpolation, so that the rover's atmospheric and satellite orbit errors can be weakened by interpolation. The GPS/BDS carrier phase observation errors of the rover station are then removed using the calculated corrections. The carrier phase integer ambiguities of the rover station are fixed by multi-frequency carrier phase ambiguity resolution, and the position of the rover station is obtained from the fixed ambiguities. The algorithm was validated with data from a long-range reference station network: three long-range reference stations and one rover station were tested in central China.
Centimeter-level positioning accuracy can be obtained by the dual-system GPS/BDS network RTK algorithm. A single system can also achieve centimeter-level positioning, with GPS performing better than BDS. The dual-system GPS/BDS network RTK method guarantees the positioning accuracy of the rover station. The experimental results indicate that GPS/BDS long-range network RTK is feasible and that centimeter-level positioning accuracy can be achieved by this algorithm.
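The double-difference step mentioned above can be sketched numerically. The between-station difference for one satellite cancels that satellite's clock error, and the between-satellite difference of those single differences cancels both receiver clocks. The observation values, satellite IDs, and clock offsets below are synthetic.

```python
# Forming a double-difference observable from two stations and two satellites.

def single_diff(obs_ref, obs_rov, sat):
    """Between-station difference for one satellite: cancels the satellite clock."""
    return obs_rov[sat] - obs_ref[sat]

def double_diff(obs_ref, obs_rov, sat, ref_sat):
    """Between-satellite difference of single differences: also cancels
    both receiver clock offsets."""
    return single_diff(obs_ref, obs_rov, sat) - single_diff(obs_ref, obs_rov, ref_sat)

# Hypothetical observations (meters). G07 carries a +2.0 satellite clock
# offset at both stations; the rover carries a +5.0 receiver clock offset.
ref_station = {"G01": 20_000_000.30, "G07": 21_500_000.80 + 2.0}
rover       = {"G01": 20_000_012.30 + 5.0, "G07": 21_500_009.80 + 5.0 + 2.0}

dd = double_diff(ref_station, rover, "G07", "G01")
print(dd)   # both clock offsets cancel; only the geometric difference remains
```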
A high-precision ionospheric model is of great significance for improving the positioning accuracy of navigation satellite systems. The rapid development of low earth orbit (LEO) satellites provides new opportunities for establishing such a model. This paper simulates LEO and GNSS (global navigation satellite system) observation data from January 1 to 30, 2017, for constellations of 60, 96, 192, and 288 LEO satellites. Based on these data, and taking the African region as an example, the coverage of GNSS and LEO ionospheric pierce points and the accuracy of joint modeling are studied. The results show that after LEO satellites are added, the distribution of ionospheric pierce points is significantly improved and their density rises noticeably. The range of ionospheric pierce points of a single LEO satellite is larger than that of a GNSS satellite, and the elevation and azimuth angles of LEO satellites change remarkably. As the number of LEO satellites increases, the accuracy of joint modeling also rises. At different latitudes along 30°E at UTC 12:00, the difference between GNSS-only and GNSS+288-LEO ionospheric modeling results is largest, reaching -1.6 TECU. As the modeling time increases, the difference between the joint modeling results and the GNSS-only results gradually decreases.
The diurnal variability of total electron content (TEC) in the midlatitude summer nighttime anomaly (MSNA) region varies seasonally. Whether the characteristics of MSNA can be effectively described is a key test of the accuracy of an empirical ionospheric TEC model. A new empirical TEC model named SSM-T2 (single station model type 2) is proposed for MSNA anomalies, and its effectiveness is verified at the ohi3 station on the Antarctic Peninsula, within the MSNA region. The SSM-T2 model consists of three parts: the diurnal variation component of TEC, the seasonal variation component, and the solar activity component. The model coefficients are obtained by least squares fitting. At the ohi3 station, the fitting tests show that the SSM-T2-ohi3 model fits the GPS-TEC modeling data well and describes the MSNA phenomenon well. Comparative analysis shows that SSM-T2-ohi3 agrees well with CODE GIMs (Center for Orbit Determination in Europe, global ionosphere maps) and SSM-month models at extrapolated epochs. It can effectively describe the characteristics of MSNA and has better prediction ability than the IRI2016 model.
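The fitting machinery can be sketched as follows: model the diurnal TEC variation as a truncated Fourier series in local time and solve the coefficients by least squares, as the SSM-T2 coefficients are obtained. The seasonal and solar activity components, the actual SSM-T2 basis functions, and real GPS-TEC data are not included; the synthetic series below has a known diurnal pattern that the fit should recover.

```python
# Least-squares fit of diurnal harmonics via the normal equations.
import math

def design_row(hour, n_harmonics=2):
    row = [1.0]
    for k in range(1, n_harmonics + 1):
        w = 2 * math.pi * k * hour / 24
        row += [math.cos(w), math.sin(w)]
    return row

def lstsq(A, y):
    """Solve (A^T A) x = A^T y by Gaussian elimination with partial pivoting."""
    m = len(A[0])
    N = [[sum(A[i][r] * A[i][c] for i in range(len(A))) for c in range(m)]
         for r in range(m)]
    b = [sum(A[i][r] * y[i] for i in range(len(A))) for r in range(m)]
    for col in range(m):                      # forward elimination
        piv = max(range(col, m), key=lambda r: abs(N[r][col]))
        N[col], N[piv] = N[piv], N[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = N[r][col] / N[col][col]
            for c in range(col, m):
                N[r][c] -= f * N[col][c]
            b[r] -= f * b[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):            # back substitution
        x[r] = (b[r] - sum(N[r][c] * x[c] for c in range(r + 1, m))) / N[r][r]
    return x

# Synthetic TEC: mean 10, 4*cos of the 24 h harmonic, 1.5*sin of the 12 h one.
hours = [h / 2 for h in range(48)]
tec = [10 + 4 * math.cos(2 * math.pi * h / 24)
       + 1.5 * math.sin(4 * math.pi * h / 24) for h in hours]
A = [design_row(h) for h in hours]
coeffs = lstsq(A, tec)
print([round(c, 3) for c in coeffs])
```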
As an extension of the traditional regression model, the regression prediction model not only involves the fixed-parameter estimation of the regression model, but also incorporates prediction into the adjustment, which is more in line with actual requirements. Focusing on predicted non-common points (independent variables) contaminated by errors and on inaccurate stochastic models, this paper proposes a complete solution that fully considers the errors of all variables based on the errors-in-variables (EIV) model. Using variance-covariance component estimation, the stochastic model and the prior cofactor matrix of the predicted non-common points are corrected. The corresponding formulas are derived and an iterative algorithm is presented. The experiments show that the presented approach can effectively estimate the variance components for various types of observations, providing a feasible means of obtaining more reasonable parameter estimates and higher prediction accuracy. In addition, the prediction performance of the presented approach is better than that of the other control schemes, especially when there is a certain correlation between the observed data and the random elements of the coefficient matrix.
Surface ruptures and deformation of large earthquakes are important for investigating earthquake mechanisms, fault activity, and continental deformation. With the improvement of satellite techniques, optical and radar images have been widely used in earthquake studies since the 1992 Landers earthquake. However, due to the lack of pre-earthquake images, historical earthquakes prior to the 1990s are rarely studied. The recent declassification of American KeyHole (KH) satellite images opens up new possibilities for investigating earthquakes back to the 1970s. Researchers have successfully applied KH-9 images to the 1978 Tabas-e-Golshan and 1979 Khuli-Boniabad earthquakes in Iran and gained new insights into fault behaviour. We first review the methodology and progress of using KH-9 images to measure earthquake deformation, then investigate the 1976 Chaldiran, Turkey earthquake by matching pre- and post-earthquake KH-9 images, obtaining an E-W (strike-slip) displacement of about (3.1±0.7) m, consistent with field measurements. KH-9 imagery provides a new means of investigating historical earthquakes in detail, but it has some limitations, which are briefly discussed at the end.
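A toy version of the image-matching idea: slide one 1-D brightness profile against another and take the offset with the best normalized cross-correlation. Real KH-9 processing correlates 2-D image patches with sub-pixel precision after careful co-registration; the synthetic profiles and the integer-pixel search here are for illustration only.

```python
# Displacement measurement by maximizing normalized cross-correlation.

def best_offset(ref, moved, max_shift):
    def ncc(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = sum((x - ma) ** 2 for x in a) ** 0.5
        db = sum((y - mb) ** 2 for y in b) ** 0.5
        return num / (da * db)
    scores = {}
    for s in range(-max_shift, max_shift + 1):
        a = ref[max_shift + s: len(ref) - max_shift + s]
        b = moved[max_shift: len(moved) - max_shift]
        scores[s] = ncc(a, b)
    return max(scores, key=scores.get)

profile = [((i * 7) % 13) * 1.0 for i in range(60)]   # synthetic ground texture
shifted = profile[3:] + profile[:3]                    # "post-event" profile, moved by 3
print(best_offset(profile, shifted, max_shift=5))
```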
The dialogue system has become an important human-computer interaction interface in artificial intelligence. Chatbot technology plays a vital role in the development of dialogue systems and represents their technological frontier. In this paper, we analyze the emergence of chatbots and recent trends in the field. In introducing the research progress of chatbot technology in academia and industry, special emphasis is placed on three key technologies: the multi-turn response selection model in retrieval-based chatbots, the response generation model in generation-based chatbots, and dialogue models based on the deep integration of retrieval and generation. Future development is discussed by analyzing the open problems in chatbot technology.
Watermarking provides a technical means for the copyright protection of digital audio. However, with the popularity of recording equipment, the recapturing attack has become an effective way to remove audio watermarks. To improve the security of the watermarking system, we propose a speech watermarking algorithm robust against recapturing attacks. First, we define the discrete cosine transform coefficients logarithm mean (DCT-CLM) feature and show that it changes very little under recapturing attacks. Second, the frame number and the watermark are embedded together in each frame by quantifying the DCT-CLM feature. The frame number is used to resynchronize the watermarked speech after the signal is subjected to de-synchronization attacks; once a watermarked frame is synchronized, watermark bits are extracted from it for source tracing. Compared with other speech watermarking algorithms, the proposed algorithm is robust not only against de-synchronization attacks but also against recapturing attacks.
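A minimal quantization-index-modulation sketch on a log-mean feature of DCT magnitudes, loosely modeled on the DCT-CLM idea described above. The actual embedding, frame-number synchronization, and recapture-robustness design in the paper are far more involved; the test signal, quantization step, and the gain-scaling embedding rule below are illustrative assumptions.

```python
# Embed one bit per frame by quantizing the log-mean of DCT magnitudes:
# scaling the frame by gain g shifts the feature by log(g), so we can place
# it in an even (bit 0) or odd (bit 1) quantization cell.
import math

def dct2(frame):
    """Naive DCT-II of a frame (O(n^2), fine for a demo)."""
    n = len(frame)
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(frame)) for k in range(n)]

def clm(frame):
    """Log-mean of DCT coefficient magnitudes (the quantized feature)."""
    coeffs = dct2(frame)
    return sum(math.log(abs(c) + 1e-12) for c in coeffs) / len(coeffs)

def embed_bit(frame, bit, step=0.1):
    f = clm(frame)
    # Center of the nearest cell whose index parity encodes the bit.
    target = (math.floor(f / step) // 2 * 2 + bit + 0.5) * step
    gain = math.exp(target - f)
    return [x * gain for x in frame]

def extract_bit(frame, step=0.1):
    return int(math.floor(clm(frame) / step)) % 2

frame = [math.sin(0.3 * i) + 0.5 * math.sin(1.1 * i) for i in range(64)]
marked = embed_bit(frame, 1)
print(extract_bit(marked))
```

Placing the feature at the center of its cell is what buys robustness: a small feature perturbation (e.g. from a mild attack) moves it within the cell without flipping the decoded parity.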