- Big Data GIS
- An Improved Cellular Automata Model for Simulating Pedestrian Evacuation
- Probability Estimation of Future Earthquakes in China Based on Improved PSHA Model
- Web GIS Component
- Big Data in Smart City
- Experimental Geography Based on Virtual Geographic Environments (VGEs)
- Research Advance and Application Prospect of Unmanned Aerial Vehicle Remote Sensing System
- Introduction to Urban Computing
- Detecting “Hot Spots” of Facility POIs Based on Kernel Density Estimation and Spatial Autocorrelation Technique
- Integrated Space-Air-Ground Early Detection, Monitoring and Warning System for Potential Catastrophic Geohazards
- The Spatial-Temporal Pattern Analysis of City Development in Countries along the Belt and Road Initiative Based on Nighttime Light Data
- On Construction of China’s Space Information Network
- Oblique Image Based Automatic Aerotriangulation and Its Application in 3D City Model Reconstruction
- Review of Change Detection Methods for Multi-temporal Remote Sensing Imagery
Objectives: The text in street view images is a key clue for perceiving and understanding scene information. Low-resolution street view images lack detail in text regions, leading to poor recognition accuracy. Super-resolution can be introduced as pre-processing to reconstruct the edge and texture details of text regions. To improve text recognition accuracy, we propose a text super-resolution network combining an attention mechanism and sequential units. Methods: A hybrid residual attention structure is proposed to extract the spatial and channel information of image text regions, learning a multi-level feature representation. A sequential unit is proposed to extract sequential prior information between characters in the image through bidirectional gated recurrent units. Using gradient prior knowledge as a constraint, a gradient prior loss is designed to sharpen character boundaries. Results and Conclusions: To verify the effectiveness of the proposed method, we carry out comparative experiments on real scene text images from TextZoom and on synthetic text images. The experimental results show that the proposed method reconstructs clear text edges and rich text texture details, and improves the text recognition accuracy of street view images.
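The gradient prior loss described above can be sketched as an L1 penalty on the difference between the spatial gradients of the reconstructed and ground-truth images (a minimal NumPy illustration of the idea only; the network architecture and the exact loss weighting are not specified here):

```python
import numpy as np

def gradient_prior_loss(sr, hr):
    """L1 loss between finite-difference gradients of a super-resolved
    image (sr) and its high-resolution target (hr), encouraging sharp
    character boundaries. Both inputs are 2-D grayscale arrays of the
    same shape."""
    def grads(img):
        gx = np.abs(np.diff(img, axis=1))  # horizontal gradients
        gy = np.abs(np.diff(img, axis=0))  # vertical gradients
        return gx, gy
    sr_gx, sr_gy = grads(sr)
    hr_gx, hr_gy = grads(hr)
    return np.mean(np.abs(sr_gx - hr_gx)) + np.mean(np.abs(sr_gy - hr_gy))
```

An image identical to the target yields zero loss, while an over-smoothed reconstruction with flattened edges is penalized.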
Objectives: Curve simplification is important in automated map generalization; nevertheless, the Douglas-Peucker algorithm (the DP algorithm) widely used in map generalization is not automatic, because a key parameter, the distance tolerance ε, must be supplied by experienced cartographers before the algorithm is executed. Methods: To solve this problem, this paper proposes a method to calculate ε automatically, by which the automation of the DP algorithm is achieved. The method consists of the following steps: (1) A formula based on the Hausdorff distance is constructed for calculating the similarity degree (Sim) between a curve at a larger scale and its simplified counterpart at a smaller scale. (2) 15 linear rivers are selected, and each is manually simplified to obtain counterparts at seven different scales. The Sim of each original river and each of its simplified counterparts is computed with the Hausdorff-distance formula, yielding 15×7=105 coordinate pairs (S, Sim), and a function between Sim and the scale S is constructed by curve fitting to these pairs. (3) Meanwhile, the 15 rivers are simplified using a range of ε values, and the Sim of each original river and each of its simplified counterparts is calculated in the same way. A number of coordinate pairs (ε, Sim) are thus obtained, and a function between ε and Sim is constructed by curve fitting. (4) From the function between Sim and S and that between ε and Sim, a formula relating ε to S is deduced. With this formula, ε can be calculated automatically, because S is usually known in a map generalization task. After this step, automation of the DP algorithm is achieved.
Results: The experiments show that (1) the proposed DP algorithm can automatically simplify the rivers in a specific geographical area to produce results at different scales; and (2) the river curves generated by the proposed DP algorithm are highly similar to those made by experienced cartographers, with an average similarity degree of 0.927. Conclusions: The proposed DP algorithm can simplify curve features on maps automatically, and its results are credible. Although only river data are tested in this paper, the principle of the proposed method can be extended to other linear features on maps. Our future work will focus on improving the accuracy of the proposed DP algorithm using more river data so that the algorithm can be used in practical map generalization projects.
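The two building blocks of the method, Douglas-Peucker simplification and a Hausdorff-distance-based similarity degree, can be sketched as follows (mapping the Hausdorff distance to a Sim value in [0, 1] via a normalizing distance `d_max` is an illustrative assumption, not the paper's actual formula):

```python
import numpy as np

def douglas_peucker(points, eps):
    """Recursively simplify a polyline with the Douglas-Peucker algorithm.
    points: (n, 2) array of vertices; eps: distance tolerance."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    dx, dy = end - start
    norm = np.hypot(dx, dy)
    if norm == 0:
        dists = np.hypot(points[:, 0] - start[0], points[:, 1] - start[1])
    else:
        # perpendicular distance of every vertex to the chord start-end
        dists = np.abs(dx * (points[:, 1] - start[1])
                       - dy * (points[:, 0] - start[0])) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > eps:
        left = douglas_peucker(points[: idx + 1], eps)
        right = douglas_peucker(points[idx:], eps)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

def hausdorff_similarity(a, b, d_max):
    """Similarity degree in [0, 1] derived from the discrete Hausdorff
    distance between two polylines, normalised by d_max (assumed form)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    dmat = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    h = max(dmat.min(axis=1).max(), dmat.min(axis=0).max())
    return max(0.0, 1.0 - h / d_max)
```

Fitting Sim against scale S and against ε, as in steps (2)-(3), then reduces choosing ε to evaluating the deduced ε(S) formula at the target scale.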
Refined seafloor topography models play an important role in fields such as submarine plate tectonics, underwater vehicle navigation support and marine resource exploration. This paper reviews the development of seafloor topography detection technology and model construction, and discusses the current research status and main challenges of global refined seafloor topography modeling. The developing trends of global seafloor topography modeling are also summarized: recovering seafloor topography from altimeter-derived marine gravity anomalies remains the main means of constructing global seafloor topography models. New altimeter missions such as dual-satellite tandem altimetry and SWOT (Surface Water and Ocean Topography) will provide data sources for further improving the accuracy of marine gravity field and seafloor topography models. Optimization of seafloor topographic inversion theory based on topographic complexity is expected to bring theoretical innovation, and the application of artificial intelligence techniques to global seafloor topography modeling is worth exploring.
Non-ideality of navigation satellite signals can cause ranging biases between different receivers, an important factor that may degrade the accuracy and integrity performance of global navigation satellite systems. In civil aviation and other high-integrity services, it is necessary to consider the non-ideality characteristics of the signal and define design constraints for the user receiver, so as to reduce the impact of non-ideality and ensure service safety. The B1C and B2a signals of the BeiDou Navigation Satellite System (BDS) are planned to join the international civil aviation standard, so their non-ideality characteristics must be studied and receiver design constraints defined. This paper analyzes the non-ideality characteristics of the BDS B1C and B2a signals. To avoid the influence of noise and multipath, a large-aperture antenna was used to collect the B1C and B2a signals broadcast by all in-orbit satellites (27 satellites) and obtain pure signal samples. A software receiver then processed the signal samples under various receiving parameters to obtain the ranging biases for different receiver front-end bandwidths and code discriminator spacings, and the range of the ranging bias and its variation with the receiving parameters were evaluated. Furthermore, taking the Dual-Frequency Multi-Constellation Satellite-Based Augmentation Service (DFMC SBAS) as an example, the receiver design constraints of the two signals were analyzed. The results show that the ranging biases introduced by the non-ideality of the B1C and B2a signals are less than 0.7 m and 0.4 m respectively over the parameter ranges commonly used by receivers. Under the requirement that the ranging bias be less than 0.1 m, the applicable parameter range of the receiver design constraints for the B1C and B2a signals is better than the relevant requirements of the International Civil Aviation Organization draft standard, leaving sufficient margin to further consider other constraints.
【Objective】Point cloud density is an important parameter of lidar technology and has an important impact on the extraction of remote sensing retrieval indices for forests. 【Methods】The experimental data, covering a 1600 m × 1450 m area, were obtained by UAV lidar and thinned by a graded random thinning method to simulate the different point cloud densities encountered in actual operation. The data were used to extract forest remote sensing retrieval indices such as canopy closure (CC), gap fraction (GF), leaf area index (LAI), height quantile variables and density quantile variables, which were then compared with the indices extracted from the raw data. 【Results】(1) The lower the point cloud density, the slightly lower the extracted canopy closure and the slightly higher the extracted gap fraction; overall, point cloud density has little influence on the extracted canopy closure and gap fraction. (2) When the point cloud density is high, it has little impact on the leaf area index, but when the density is low it has a great impact, and some areas may show sudden changes in the leaf area index. (3) When the point cloud density is large, its effect on the height and density quantile variables is not obvious, but when the density drops to 3.6 points/m², sudden changes in these variables may appear in some areas. 【Conclusions】In short, point cloud density has an important impact on the description of forest structural characteristics. An appropriate point cloud density is conducive to describing forest structure more accurately, while a low point cloud density degrades the extraction of forest remote sensing retrieval indices. This study provides guidance and reference for the selection of point cloud density when estimating remote sensing retrieval indices with UAV lidar in forestry.
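The thinning step used to simulate lower point cloud densities can be approximated by random subsampling to a target density (an illustrative sketch only; the paper's graded random thinning scheme preserves a spatial grading that plain random sampling does not):

```python
import numpy as np

def thin_point_cloud(points, target_density, area, seed=None):
    """Randomly thin a lidar point cloud (n x 3 array of x, y, z) so its
    average density (points per square metre over `area` m^2) roughly
    matches target_density. A simple stand-in for graded random thinning."""
    rng = np.random.default_rng(seed)
    n_keep = min(len(points), int(round(target_density * area)))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[np.sort(idx)]
```

The forest indices (CC, GF, LAI, quantile variables) would then be extracted from each thinned cloud and compared against those from the raw data.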
User emotions have a significant impact on spatial attention, spatial decision-making, and spatial memory. Emotions associated with landmarks can improve navigation efficiency while enhancing users' cognitive map abilities. Previous studies have focused on the role of emotional landmarks in navigation, but few have paid attention to landmark extraction methods in complex indoor environments. In this paper, we propose a quantitative salience model to automate the extraction of emotional landmarks in large shopping malls based on user-generated content (UGC). First, we obtain user comment data for a large shopping mall using web crawling. Second, we conduct sentiment analysis of the user comments based on SnowNLP and extend the results to landmark cognitive salience. Then, we combine the analytic hierarchy process (AHP) and criteria importance through intercriteria correlation (CRITIC) to calculate the weights of indoor landmark salience indicators and construct a quantitative evaluation model of emotional landmark salience. Finally, we extract hierarchical landmarks using hierarchical clustering algorithms, design multi-scale indoor navigation maps based on the hierarchical landmarks to match user cognition, and verify the usability of the landmark extraction method through user experiments. This work can promote the standardization of indoor navigation map design and provide a valuable complement to intelligent indoor navigation services.
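The CRITIC half of the weighting scheme can be sketched as follows: each salience indicator's objective weight combines its contrast intensity (standard deviation across landmarks) with its conflict with the other indicators (one minus the pairwise correlation). How these objective weights are combined with the subjective AHP weights is a design choice not reproduced here:

```python
import numpy as np

def critic_weights(X):
    """CRITIC objective weights for a decision matrix X with one row per
    alternative (landmark) and one column per criterion (salience indicator).
    Assumes benefit-type criteria and non-constant columns."""
    X = np.asarray(X, dtype=float)
    # min-max normalise each criterion to [0, 1]
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    sigma = Xn.std(axis=0, ddof=1)          # contrast intensity
    r = np.corrcoef(Xn, rowvar=False)       # pairwise correlations
    info = sigma * (1.0 - r).sum(axis=0)    # information content C_j
    return info / info.sum()                # normalised weights
```

Indicators that vary strongly and correlate weakly with the others receive larger weights.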
Objective: Deep Convolutional Neural Network (DCNN) is widely used in automatic road extraction from high-resolution Remote Sensing images (HRSIs). However, the existing methods are difficult to model the context relationship between pixels in the predicted results. To solve this problem, some studies have used Fully Connected Crf (FullCrf) to perform secondary optimization of semantic segmentation results combined with context information, but the discontinuity problem of road structure cannot be effectively improved. In order to improve the integrity of road structure, this study proposes a Short Range Conditional Random Filed (SRCRF) model combined with DCNN. Methods: SRCRF mainly includes unary potential function based on road pre-segmentation, binary potential function based on spectral spatial features and k-neighborhood mean field inference algorithm. Firstly, the priori knowledge of road pre-segmentation results is obtained by using the powerful feature extraction capability of DCNN as the unary potential function of SRCRF, Secondly, the dependence of the binary potential function defined by the linear combination of Gaussian kernel functions on the surrounding nodes is modeled. The binary potential function enables the classification results to have local consistency that is, adjacent pixels with similar spectral features have the same label. Finally, K-neighborhood mean field inference algorithm based on mean field approximation inference algorithm optimizes the inference range to make full use of the spatial context information and spectral feature context information of the road, and then calculates the optimal label corresponding to each pixel based on the space and spectral feature to optimize the road accurately. The convolution method is adopted to control the inference range of SRCRF within the radius of K in order to improve the proportion of feature vectors (road-road). 
Results: The experimental results show that SRCRF alleviates the over-smoothing of FullCRF and mitigates the structural discontinuity and incompleteness in road extraction results from high-resolution remote sensing images. On the Zimbawe-Roads and Cheng-Roads datasets, the F1 score of SRCRF increased by about 4.01% and 3.73% respectively compared with the DCNN, and by about 3.25% and 2.28% respectively compared with FullCRF. Conclusions: Compared with traditional deep learning methods for road extraction from high-resolution remote sensing images, this paper proposes a new road extraction scheme, SRCRF. It combines the advantages of the DCNN and replaces the fully connected structure of traditional conditional random fields with a k-neighborhood structure, which reduces the inference scope and increases the proportion of road-road feature vectors. Compared with FullCRF, SRCRF makes better use of image color and spatial features to accurately refine the road extraction results produced by deep learning. On the Cheng-Roads and Zimbawe-Roads datasets, the method outperforms both the DCNN and FullCRF, and its runtime is an order of magnitude shorter than that of FullCRF. In future work, we will further investigate the potential of learning Gaussian features and explore more complex CRF architectures to better capture global context. Finally, we are particularly interested in the application potential of SRCRF in other fields, such as building, vehicle, and lake extraction.
Objectives: Coverage and communication are of great significance to the accuracy, comprehensiveness and data transmission of sensor network monitoring. Especially when monitoring requirements vary in the vertical direction, traditional deployment methods struggle to achieve good coverage. We propose a node coverage deployment method based on 3D finite dominating sets to solve this problem. Methods: The node deployment problem in continuous space is transformed into a discrete maximum coverage location problem via 3D finite dominating sets. First, the continuous space is discretized into cubes, and each cube is weighted according to the actual monitoring needs. A 3D finite dominating set, which represents the infinite candidate positions in the continuous space, is extracted. Then, a maximum coverage model considering communication is constructed to obtain the optimal deployment locations of the sensors. Taking water quality monitoring as an example, an underwater sensor deployment simulation is carried out; the communication between sensors and the influence of the discretization size on the result are analyzed, and the coverage of this method is compared with that of other methods. Results: The results show that the proposed sensor deployment method can effectively improve coverage in continuous three-dimensional space, achieve higher coverage with fewer nodes, and ensure communication between sensors even when few sensors are deployed. In addition, when the discretization size is small, the solution time is long but the error between the model coverage and the actual coverage is small; conversely, when the discretization size is large, the solving efficiency is high but the error is relatively large.
Conclusions: The proposed method can effectively solve the sensor deployment problem in three-dimensional space and efficiently obtain data on monitored elements with different spatial distributions in the vertical direction.
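A maximum coverage location problem of the kind described is often solved exactly with integer programming; a greedy heuristic conveys the structure, with candidate sites covering weighted demand cubes. The data layout below is an illustrative assumption:

```python
def greedy_max_coverage(candidates, weights, k):
    """Greedy heuristic for the maximum coverage location problem.
    candidates: dict mapping each candidate sensor site to the set of
    demand-cube ids it covers; weights: dict of cube id -> monitoring
    weight; k: number of sensors to place. Returns the chosen sites and
    the total covered weight."""
    chosen, covered = [], set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for site, cubes in candidates.items():
            if site in chosen:
                continue
            gain = sum(weights[c] for c in cubes - covered)
            if gain > best_gain:
                best, best_gain = site, gain
        if best is None:  # no remaining site adds coverage
            break
        chosen.append(best)
        covered |= candidates[best]
    return chosen, sum(weights[c] for c in covered)
```

The greedy choice carries the classical (1 - 1/e) approximation guarantee for maximum coverage; a communication constraint between chosen sites, as in the paper's model, would be added as an admissibility check on each candidate.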
HY-2B, launched on 24 October 2018, is the first satellite of China's marine dynamic environment monitoring mission. It is mainly used to measure marine dynamic environmental parameters such as sea surface height, sea surface wind field, and the gravity field. Precise orbit tracking and determination is critical for the mission. Reduced-dynamic precise orbit determination for HY-2B is carried out using spaceborne GPS observations from January 2021, and the orbit accuracy is evaluated by four methods. The results show that: (1) the average carrier phase fitting residual is about 7.2 mm, and the three-dimensional position difference between adjacent 28-hour orbit determination arcs overlapping for 4 h is less than 1.5 cm; (2) compared with the precision orbit ephemerides (POE) issued by the French CNES, the average RMS values of the differences in the R, T, N and 3D directions are 1.5 cm, 2.0 cm, 1.5 cm and 3 cm respectively; (3) the SLR validation RMS is about 2 cm; (4) by applying a PCV model constructed by the residual method, the 3D RMS difference with the CNES POE product is reduced from 3 cm to 2.5 cm. In conclusion, high-precision HY-2B orbit products can be obtained based on the satellite's spaceborne GPS observations.
As woodland is an important natural and economic resource of China, understanding its distribution is important for the investigation and management of woodland resources. In this paper, we design a woodland extraction method combining a multi-scale attention mechanism and edge constraints to tackle the low accuracy and irregular boundaries of traditional forest extraction methods. First, an end-to-end multi-scale attentional neural network model is constructed to fully extract the contextual features of woodland in remote sensing images and semantically describe woodland at different scales, achieving high-precision pixel-level representation of woodland. Second, edge constraint rules are constructed to optimize the boundaries of the extraction results and improve their readability. To prove the effectiveness of the proposed method, Santai County, Mianyang City, Sichuan Province, is taken as the experimental area to establish datasets and carry out woodland extraction experiments. The results show that the extraction precision of this method is 81.9%, the recall is 75.6%, the F1 score is 78.1%, and the IoU (intersection over union) is 64.2%; the proposed method performs well in woodland extraction from remote sensing imagery.
A real-time and robust 3D dynamic object perception module is a key part of an autonomous driving system. This paper fuses a monocular camera and lidar to detect 3D objects. First, we use a convolutional neural network (CNN) to detect 2D bounding boxes in the image and generate 3D frustum regions of interest (ROIs) according to the geometric projection relation between the lidar and the camera. Then we cluster the point cloud in each frustum ROI and fit the 3D bounding box of the object. After detecting 3D objects, we re-identify objects between adjacent frames using appearance features and the Hungarian algorithm, and propose a tracker management model based on a four-state machine. Finally, a novel prediction model is proposed that leverages lane lines to constrain vehicle trajectories. The experimental results demonstrate that our algorithm is both effective and efficient: the whole pipeline takes only about 25 milliseconds, which meets real-time requirements.
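The frame-to-frame re-identification step can be sketched with SciPy's Hungarian solver. Here matching is done on centroid distance for simplicity; the paper matches on appearance features, so the cost matrix below is a stand-in:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections, max_dist=2.0):
    """Match existing track centroids to new detection centroids with the
    Hungarian algorithm; assignments whose cost exceeds max_dist are
    rejected. Returns (matches, unmatched_track_ids, unmatched_det_ids)."""
    if not tracks or not detections:
        return [], list(range(len(tracks))), list(range(len(detections)))
    cost = np.linalg.norm(
        np.asarray(tracks)[:, None, :] - np.asarray(detections)[None, :, :],
        axis=2)
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
    matched_r = {r for r, _ in matches}
    matched_c = {c for _, c in matches}
    unmatched_tracks = [i for i in range(len(tracks)) if i not in matched_r]
    unmatched_dets = [j for j in range(len(detections)) if j not in matched_c]
    return matches, unmatched_tracks, unmatched_dets
```

Unmatched tracks and detections would then drive the state transitions of the tracker management state machine (e.g. tentative, confirmed, lost, deleted).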
Objectives: Millimeter wave radar (mmWave radar) has been widely used in the automotive industry and other fields, but its application is mainly limited to environmental perception of obstacles or specific tasks; there has been little research on applying mmWave radar to navigation and positioning. Methods: This paper first studies the raw data processing principles of mmWave radar, and then designs an indoor ego-localization method that depends only on a low-cost mmWave radar. The process mainly includes extracting centroid feature points with the DBSCAN (density-based spatial clustering of applications with noise) algorithm, matching centroid feature point pairs by the nearest-neighbor criterion, constructing a nonlinear optimization function, and solving for the positioning results with the LM (Levenberg-Marquardt) method. Results: Experiments show that indoor navigation and positioning can be solved in real time using a low-cost mmWave radar. Under static conditions, the average horizontal positioning accuracy reaches sub-centimeter level (mean 0.83 cm, standard deviation 0.47 cm). Under dynamic conditions, the absolute trajectory error reaches 0.66 m and the average heading angle error 4.58°, demonstrating the feasibility of ego-localization with a low-cost mmWave radar. Conclusions: Finally, this paper discusses the remaining problems and feasible research directions for low-cost mmWave radar in navigation and positioning.
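Once centroid feature point pairs are matched between frames, the ego-motion can be solved by Levenberg-Marquardt as described. Below is a minimal 2-D sketch using SciPy; the residual formulation and the (tx, ty, yaw) parameterization are illustrative assumptions, not the paper's exact optimization function:

```python
import numpy as np
from scipy.optimize import least_squares

def solve_pose(prev_pts, curr_pts, x0=(0.0, 0.0, 0.0)):
    """Estimate the 2-D rigid motion (tx, ty, yaw) that maps matched
    centroid feature points of the current radar frame onto the previous
    frame, solved with the Levenberg-Marquardt method."""
    prev_pts = np.asarray(prev_pts, float)
    curr_pts = np.asarray(curr_pts, float)

    def residuals(p):
        tx, ty, yaw = p
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s], [s, c]])
        pred = curr_pts @ R.T + np.array([tx, ty])  # R @ x + t, row form
        return (pred - prev_pts).ravel()

    return least_squares(residuals, x0, method='lm').x
```

Accumulating these per-frame motions yields the radar's trajectory in the indoor frame.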
Objectives: GNSS-IR (global navigation satellite system interferometric reflectometry) is a new passive remote sensing technique for determining surface environment parameters, and plays an important part in the inversion of the Earth's surface properties, such as soil moisture monitoring, snow parameter retrieval, and vegetation remote sensing. GNSS-IR offers several benefits over traditional soil moisture inversion approaches, including all-weather capability, high temporal resolution, and low cost. Methods: Considering that existing soil moisture inversion algorithms use only a single feature of the GNSS reflected signal, and with a view to increasing data availability, this paper proposes a GNSS-IR soil moisture inversion approach that fuses multiple feature types: the phase, amplitude, and frequency extracted from GNSS signals reflected by the soil. The main work is to effectively filter all available features extracted from the original GNSS SNR observations. The feasibility and effect of the proposed method are compared and evaluated using three machine learning models: the least squares support vector machine (LSSVM), random forest (RF), and back propagation neural network (BPNN). Results: Comparing the inversion effects of the three models, the BPNN performs best, followed by the RF model, with the LSSVM model worst. The correlation coefficients between the reference values and the soil moisture inverted by the multi-feature fusion method with the LSSVM, RF, and BPNN models are 0.823, 0.944, and 0.955 respectively, and the corresponding root mean square errors (RMSE) are 0.045, 0.035 and 0.032 cm³·cm⁻³. Conclusions: Compared with the single-feature inversion method, the soil moisture inversion accuracy is increased by 6%-14%, and the correlation coefficient by 2%-7%. The results demonstrate that the proposed method has higher inversion accuracy and reliability than the single-feature inversion method.
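The multi-feature fusion idea, feeding phase, amplitude, and frequency features into a machine learning regressor, can be sketched with a random forest on synthetic data. The feature-to-moisture relationship below is invented purely for illustration and is not the paper's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 300
# Synthetic multipath features, one row per satellite track
phase = rng.uniform(0, 2 * np.pi, n)   # metres-to-radians phase of SNR arc
amp = rng.uniform(0.5, 2.0, n)         # SNR oscillation amplitude
freq = rng.uniform(10, 50, n)          # dominant multipath frequency
# Hypothetical ground truth: soil moisture as a smooth function of all three
sm = (0.05 + 0.1 * np.sin(phase) + 0.05 * amp + 0.002 * freq
      + rng.normal(0, 0.005, n))

X = np.column_stack([phase, amp, freq])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:240], sm[:240])           # train on 80% of the tracks
pred = model.predict(X[240:])          # evaluate on the held-out 20%
r = np.corrcoef(pred, sm[240:])[0, 1]
rmse = np.sqrt(np.mean((pred - sm[240:]) ** 2))
```

Swapping in an LSSVM or a BPNN at the `model` line reproduces the paper's three-way comparison structure.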
Objectives: Using battlefield text data for spatiotemporal topic analysis, we can obtain the spatial distribution patterns of geographical environment elements and their impact on battlefield activities, from the micro to the macro level and from scattered locations to gathering places, and mine the spatial distribution and development laws of battlefield events. This enriches the modes of battlefield environment perception, provides a new means of battlefield environment efficiency analysis, and is of great significance for an in-depth understanding of battlefield environmental knowledge. Methods: The key to improving the quality of spatiotemporal topic discovery in the geographical environment is to effectively construct an entity composite relationship network and integrate multi-dimensional heterogeneous relationships for topic clustering. First, a spatiotemporal topic tensor model integrating multi-dimensional relationships is constructed, and a complete expression of the topic relationships is given by the Tucker decomposition of the topic tensor. Then, the feature vector space of multi-dimensional relational clustering is constructed as the objective function of topic classification; block-valued matrix decomposition is used for joint clustering, and the core tensor matrix is used to alleviate data sparsity. Finally, the block-value matrix obtained by multi-dimensional relational clustering yields the association values between geographical environment elements and spatiotemporal topics. Results: The results show that: (1) The geographical environment entities and entity relationships are correctly clustered into a spatiotemporal topic structure, with accuracies of 88.4% and 86.9% respectively on the training set, and 87.3% and 85.8% respectively on the test set. (2) The number of entities and tags clustered under different topic structures decreases gently as the topic scale shrinks. The statistics show that the most frequent subject tags are maneuver, attack and interception, and the most frequent location tags are highlands, roads and villages. (3) Compared with the LDA algorithm, the multi-dimensional relationship joint clustering method mines a generally larger number of entities and labels, showing that an accurate and clear spatiotemporal topic structure can be obtained after integrating multi-dimensional relationships. (4) The block-value matrix obtained by multi-dimensional relationship clustering reflects the internal characteristic relationships of the spatiotemporal topic structure of the geographical environment, indicating that the spatiotemporal topics have strong cohesion. Conclusions: This method can effectively improve the quality and efficiency of spatiotemporal topic discovery, making the obtained topics better reflect the cohesive correlations among geographical environment elements, geographical locations and event topics, providing a basis for clearly tracing the evolution of spatiotemporal topics and supporting the analysis of event trends. Since this paper only takes the co-occurrence frequency of entity words as the weight when constructing the relationship matrix, there is some bias in the data analysis. In the future, we will combine attention mechanisms to mine multi-source text data in depth, improve the efficiency and accuracy of the analysis, and establish temporal and spatial correlations between different hot spots on the basis of effectively discovering cohesive hot spots, so as to infer event evolution and provide an important reference for the dynamic deduction of the battlefield environment.
Objectives: The emerging global navigation satellite system interferometric reflectometry (GNSS-IR) technique can obtain the properties of a reflector using geodetic GNSS receivers, and has been extensively applied to research on snow depth, tide level, soil moisture, and sea surface wind. However, multipath suppression techniques are often adopted in geodetic GNSS receivers, and the reflected signal is weakened or even eliminated, which undoubtedly reduces the performance of GNSS-IR. Therefore, the influence of different receivers and frequencies on GNSS-IR should be carefully investigated. Methods: Borrowing from the zero-baseline method, we designed a snow depth retrieval experiment using GNSS-IR at the Chinese Yellow River Station in Svalbard, and quantitatively illustrated the influence of receiver performance on the accuracy of snow depth retrievals. In the experiment, two different GNSS receivers were connected to the same antenna through a power divider, with the special benefit that the observations of the two receivers at three frequencies derive from exactly the same reflector. After collecting sufficient data, the daily average retrievals of the two receivers at the three frequencies were compared to verify the effectiveness of the retrieval strategy, and the differences in retrieval performance between the receivers were then discussed by comparing and analyzing individual retrievals. Results: The results show that the observations of each receiver at each frequency can be used to retrieve snow depth, with a bias within 3 cm. However, despite the identical reflector, differences between the snow depth retrievals of the two receivers at the three frequencies remain, especially when the snow depth changes rapidly. Conclusions: Different receivers use different methods and technologies to suppress the multipath effect, which affects the retrieval performance of GNSS-IR. Therefore, when using geodetic GNSS receivers for GNSS-IR retrievals, the differences introduced by the receivers cannot be ignored. If the multipath suppression method is properly accounted for in GNSS data processing, it should help improve the accuracy of GNSS-IR retrievals.
Ridesharing is an essential part of shared mobility for improving passengers' travel efficiency in cities. Existing studies usually determine shareable trips based on whether the arrival sequence of vehicles at multiple pick-up and drop-off points can satisfy predefined spatio-temporal constraints. Such a simple approach cannot quickly and comprehensively find all potential shareable trips in scenarios with large-scale ridesharing requests. Therefore, based on the space-time prism modeling method and a spatial-temporal expression of passengers' willingness to share, we propose a potential spatio-temporal path area model of travel. We then apply the topological relations between the space-time prisms of trips to identify ridesharing opportunities and quantify the ridesharing strength of trips. Finally, two ridesharing matching strategies are proposed to simulate ridesharing matching in a real-world travel environment. The results confirm that the proposed model can accurately delineate the potential space-time accessibility of vehicular travel, which makes it easier to discover all potential shareable trips and realize accurate and effective ridesharing identification. Our study will be helpful for vehicle resource scheduling and passenger ridesharing travel planning in urban shared mobility systems.
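The space-time prism reasoning can be illustrated in one spatial dimension: a trip's prism is the set of space-time points reachable between its origin and destination given a speed limit, and two trips are potentially shareable when their prisms intersect. This is a simplified sketch; the paper works with network space and passengers' willingness constraints:

```python
def prism_interval(x_o, t_o, x_d, t_d, v, t):
    """Spatial interval reachable at time t inside a 1-D space-time prism
    anchored at origin (x_o, t_o) and destination (x_d, t_d) with maximum
    speed v. Returns None when the prism is empty at time t."""
    if not (t_o <= t <= t_d):
        return None
    lo = max(x_o - v * (t - t_o), x_d - v * (t_d - t))
    hi = min(x_o + v * (t - t_o), x_d + v * (t_d - t))
    return (lo, hi) if lo <= hi else None

def prisms_intersect(p1, p2, n=50):
    """Sample the shared time window of two prisms (each given as
    (x_o, t_o, x_d, t_d, v)) and test for spatial overlap at any sampled
    instant -- a proxy for the shareability of two trips."""
    t0 = max(p1[1], p2[1])
    t1 = min(p1[3], p2[3])
    if t0 > t1:
        return False
    for i in range(n + 1):
        t = t0 + (t1 - t0) * i / n
        a = prism_interval(*p1, t)
        b = prism_interval(*p2, t)
        if a and b and a[0] <= b[1] and b[0] <= a[1]:
            return True
    return False
```

The overlap volume, rather than a boolean test, would give the ridesharing-strength quantification described above.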
Objectives: Receiver autonomous integrity monitoring (RAIM) guarantees highly reliable navigation and positioning for terminal users, and the development of low Earth orbit (LEO) satellites brings new opportunities for integrity monitoring. However, terminal RAIM performance may differ significantly under different LEO constellation augmentations. Methods: This paper systematically evaluates the RAIM availability and fault detection performance of the BeiDou Navigation Satellite System (BDS) under LEO augmentation, based on three typical LEO constellations: a high-inclination constellation (80 satellites), a mid-inclination constellation (120 satellites) and a mixed-inclination constellation (168 satellites). Results: The simulation results show that RAIM availability under high-inclination augmentation is best at high latitudes, while in mid- and low-latitude regions it is best under mid-inclination augmentation. Compared with BDS alone, global RAIM availability in the non-precision approach (NPA) phase is improved by 30.5%, 29.0%, and 41.0% after adding the high-inclination, mid-inclination, and mixed-inclination constellations, respectively. Conclusions: A hybrid constellation composed of different orbital inclinations can better compensate for gaps in the spatial coverage of visible satellites; its global RAIM availability enhancement is optimal, and the minimum detectable pseudorange bias under the augmented RAIM is reduced by 33.3 m compared with BDS alone.
Objectives: The urban agglomeration has become the main form of China's urbanization in the next stage, which can greatly affect the urban spatial pattern in China. Although research on the expansion of urban agglomerations is deepening and maturing, some problems remain, e.g., insufficient quantitative measurement of the expansion form of urban agglomerations, neglect of the spatial differentiation of expansion modes, and the lack of spatial interaction factors among the driving forces. In view of this, taking the Chang-Zhu-Tan Urban Agglomeration (CZT-UA), the earliest integration in central China and of strategic value to the rise of the central region, as the case study, a comprehensive study has been conducted on the spatial expansion of CZT-UA as well as the interaction between its cities. Methods: 30 m impervious surface data are first used to extract the time series of CZT-UA's built-up area from 2003 to 2018, owing to their time-sequence stability and accessibility. Then a collection of measurement methods, e.g., fractal dimension, expansion intensity index, Moran's I and Getis-Ord Gi*, is applied to quantitatively reveal the spatial structure, expansion process and spatio-temporal patterns of CZT-UA's built-up area. Finally, principal component analysis and geographically weighted regression (GWR) are integrated to explore the driving forces of the built-up area's expansion in CZT-UA. In this regression model, two special factors are established to represent the interaction inside the urban agglomeration, in addition to the traditional population, social economy and transportation accessibility factors. Results: The results show that: (1) CZT-UA's built-up area shows an obvious axial distribution, which generally follows the Xiangjiang River and the transportation network consisting of five vertical and five horizontal trunk lines.
(2) From 2003 to 2018, the built-up area and expansion speed of Changsha, Zhuzhou, and Xiangtan showed an overall upward trend. Zhuzhou and Xiangtan expanded more slowly than Changsha and the gap gradually widened, so Changsha's dominant position in the urban agglomeration continued to strengthen. (3) The expansion pattern analysis shows that the overall spatial differences in the expansion and distribution of built-up area in CZT-UA have gradually narrowed. In addition, the hotspot regions of urban expansion formed a kernel in central CZT-UA and exerted a driving force on the peripheral areas. (4) The GWR model demonstrates that three principal factors are involved in the expansion of CZT-UA's built-up area: socio-economic external connections, location and traffic accessibility, and administrative power. The model output indicates that the flow of residents and economic activity between cities, clear policy guidance from government, a convenient transportation network and the radiation effect of the central city jointly contribute to the expansion of built-up areas. However, each factor shows different importance: administrative power has the most positive impact, to which the northern parts of CZT-UA are more sensitive; the driving force of location and traffic accessibility decreases from north to south, while socio-economic external connections have a much stronger effect on the expansion of southern CZT-UA. Conclusions: Based on this analysis, this paper suggests that relevant decision-making departments should further promote the integration and sustainable development of CZT-UA by strengthening the allocation of resources and financial support to Zhuzhou and Xiangtan through appropriate policy tilt, and by giving full play to the radiation function and trickle-down effect of Changsha to narrow the regional gap in the urban agglomeration.
Given that expansion of the built-up area in CZT-UA has been significantly attracted by the road network, it is feasible to improve comprehensive transportation infrastructure such as intercity expressways in remote areas to enhance accessibility and interaction between regions.
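The spatially varying coefficients that GWR produces (e.g., administrative power mattering more in the north) come from fitting a weighted least-squares regression at each location, with weights that decay with distance. Below is a minimal sketch with one synthetic driving factor and a Gaussian kernel; the bandwidth and coordinates are hypothetical, not the calibrated values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(0, 100, size=(n, 2))      # sample locations (km), synthetic
x = rng.normal(size=n)                          # one driving-force factor
beta_true = 1.0 + 0.02 * coords[:, 0]           # coefficient drifts west -> east
y = beta_true * x + rng.normal(scale=0.1, size=n)

def gwr_slope(u, b=20.0):
    """Local WLS slope at location u with Gaussian distance-decay weights."""
    d = np.linalg.norm(coords - u, axis=1)
    w = np.exp(-((d / b) ** 2))                 # Gaussian kernel, bandwidth b
    X = np.column_stack([np.ones(n), x])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[1]

west = gwr_slope(np.array([10.0, 50.0]))
east = gwr_slope(np.array([90.0, 50.0]))
print(east > west)   # recovers the eastward-increasing coefficient
```

Mapping such local coefficients over the agglomeration is what reveals north-south differences in factor importance.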
Objectives: To solve the problems of low accuracy and single-task limitation in existing crack identification algorithms, this paper proposes a pavement crack identification method based on an improved Mask R-CNN model. Methods: First, a crack dataset is collected and labeled. Through training and optimization of the Mask R-CNN model, the crack pixels in the generated detection box are segmented while the crack is located. Second, to address the inaccurate detection of crack edges and the limited accuracy of the Mask R-CNN model, an improved C-Mask R-CNN is designed, which improves the quality of crack region proposal generation by cascading multi-threshold detectors and achieves accurate positioning under a high threshold. Finally, a series of parameter optimizations and experimental comparisons are carried out for the improved model, and its effectiveness is verified. Results: The experimental results show that the mean average precision of the detection part of the C-Mask R-CNN model is 95.4%, which is 9.7% higher than that of the conventional model, and the mean average precision of the segmentation part is 93.5%, which is 13.0% higher. Conclusions: The C-Mask R-CNN model proposed in this paper can locate and extract cracks with high identification accuracy.
Objectives: The Mw 4.9 Le Teil earthquake that occurred on November 11, 2019 is the most destructive earthquake recorded in the Rhône River Valley of France. Methods: In this study, we first used Sentinel-1 InSAR data to calculate the coseismic displacement field of the Le Teil earthquake with the GAMMA software package. We then obtained the fault geometric parameters and coseismic slip distribution based on Bayesian inversion and the steepest descent method (SDM). Finally, we quantified the effects of quarry extraction activity on the fault using digital elevation model (DEM) data acquired in 2000 and 2006-2011: we calculated the extraction volume and the Coulomb stress change on the fault plane based on the Boussinesq solution for a three-dimensional homogeneous elastic half-space. Results: The coseismic displacement field shows that the largest displacements in the line of sight of the ascending and descending orbits are 14.9 cm and 8.6 cm, respectively. We find that the seismogenic fault has a southeast dip angle of 72°, a strike of 54° and an average rake of 108°; the earthquake rupture reached the surface, with a rupture area of about 3413 m×1358 m and a depth of about 1.472 km. The slip exceeds 0.15 m and is concentrated at depths of 0-0.75 km, with a peak slip of 0.2 m. The geodetic magnitude is estimated to be Mw 4.79. The Coulomb stress change on the fault plane is 0.024 MPa over the 6-11 years after 2000. Conclusions: Rock extraction at the Le Teil quarry was active during 1833-2019 and became even more intense after 2007. The Coulomb stress change on the fault plane could reach up to 0.1 MPa, which is much larger than the local tectonic loading rate, suggesting that the Le Teil earthquake is strongly related to rock extraction activities.
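The unloading calculation above rests on the classical Boussinesq point-load solution for an elastic half-space: the vertical stress change at depth z below a surface load P is Δσ_z = 3Pz³/(2πR⁵), with quarry extraction acting as a negative (upward) load. The sketch below uses an illustrative extraction mass and depth, not the Le Teil inversion inputs; a real calculation would integrate this kernel over the excavated volume and then resolve the stress tensor onto the fault plane.

```python
import math

def boussinesq_sigma_z(P, x, y, z):
    """Vertical stress change (Pa) at (x, y, z) from a surface point load P (N).
    z is depth in m (positive down); unloading corresponds to negative P."""
    R = math.sqrt(x * x + y * y + z * z)
    return 3.0 * P * z ** 3 / (2.0 * math.pi * R ** 5)

# Hypothetical example: removing ~1e10 kg of rock above a point at 1 km depth.
P = -1.0e10 * 9.81                       # equivalent unloading force, N
d_sigma = boussinesq_sigma_z(P, 0.0, 0.0, 1000.0)
print(round(d_sigma / 1e6, 3), "MPa")    # -0.047 MPa
```

The resulting tenths-of-a-hundredth-MPa-scale stress reduction at kilometer depth is of the same order as the abstract's 0.024-0.1 MPa estimates, which is why decades of extraction can dominate the slow tectonic loading rate.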
Objectives: In recent years, great breakthroughs have been made in text-to-image generation based on generative adversarial networks (GAN), which can generate images corresponding to the semantic information of a text and has great application value. However, the generated images usually lack specific texture details and often suffer from problems such as mode collapse and lack of diversity. Methods: This paper proposes a determinantal point process for generative adversarial networks (GAN-DPP) to improve the quality of the generated samples, and uses two baseline models, StackGAN++ and ControlGAN, to implement GAN-DPP. During training, a determinantal point process kernel models the diversity of the real and synthetic data, and a penalty loss encourages the generator to generate data as diverse as the real data. This improves the clarity and diversity of generated samples and reduces problems such as mode collapse, with no extra computation added during training. Results: This paper compares the generated results through two metrics. For the Inception Score, a higher value indicates better image clarity and diversity. On the Oxford-102 dataset, the score of GAN-DPP-S is 3.1% higher than that of StackGAN++, and the score of GAN-DPP-C is 3.4% higher than that of ControlGAN; on the CUB dataset, the scores of GAN-DPP-S and GAN-DPP-C increase by 8.2% and 1.9%, respectively. For the Fréchet Inception Distance, a lower value indicates better image generation quality. On the Oxford-102 dataset, the scores of GAN-DPP-S and GAN-DPP-C are reduced by 11.1% and 11.2%, respectively; on the CUB dataset, they are reduced by 6.4% and 3.1%. Conclusions: The qualitative and quantitative comparative experiments show that the proposed GAN-DPP method improves the performance of the generative adversarial network models.
The image texture details generated by the models are richer, and their diversity is significantly improved.
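The DPP idea behind the penalty above can be illustrated with the log-determinant of a feature-similarity kernel: when a batch of generated features is diverse the kernel is close to the identity and the log-determinant is high, whereas a mode-collapsed batch yields a near-singular kernel and a very low log-determinant. The features below are random placeholders, not network activations, and the kernel construction is a simplified cosine-similarity variant.

```python
import numpy as np

def dpp_logdet(F, eps=1e-6):
    """log-det of an L-kernel built from L2-normalised feature rows."""
    F = F / np.linalg.norm(F, axis=1, keepdims=True)
    L = F @ F.T                                   # cosine-similarity kernel
    sign, logdet = np.linalg.slogdet(L + eps * np.eye(len(F)))
    return logdet

rng = np.random.default_rng(1)
diverse = rng.normal(size=(8, 64))                # spread-out synthetic batch
collapsed = np.tile(rng.normal(size=(1, 64)), (8, 1)) \
            + 0.01 * rng.normal(size=(8, 64))     # near-identical samples
print(dpp_logdet(diverse) > dpp_logdet(collapsed))   # collapse lowers log-det
```

A training penalty that pushes the generated-batch kernel statistics toward those of real batches therefore discourages mode collapse.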
Objectives: A major problem in Beijing is land subsidence caused by long-term over-exploitation of groundwater. Since the opening of the South-to-North Water Diversion Project, the problem of water shortage in Beijing has been greatly alleviated, and the project has mitigated land subsidence in Beijing to a certain extent. Methods: To analyze the development trend of land subsidence in Beijing after the start of the South-to-North Water Diversion, ascending and descending time-series interferometric synthetic aperture radar (InSAR) techniques are used to monitor land subsidence. First, the mean deformation velocity and cumulative deformation in the line of sight (LOS) over Beijing from January 2015 to December 2020 are obtained by the small baseline subset InSAR (SBAS-InSAR) method. Second, a robust least squares fitting method is used to fuse the ascending and descending deformation results, and the fused results are compared with global positioning system (GPS) monitoring data. Finally, the relationship between the fused deformation results and groundwater level changes is analyzed. Results: The deformation results show that the center of Beijing is basically stable and the deformation distribution is not uniform. The maximum annual deformation velocity and maximum cumulative deformation reach -134 mm/year and -697 mm on the ascending track, and -135 mm/year and -734 mm on the descending track. The fusion results obtained by the robust least squares fitting method show good reliability and accuracy. Conclusions: The subsidence rate in Beijing shows a decreasing trend as the groundwater level gradually rises. In general, the middle route of the South-to-North Water Diversion Project has greatly alleviated the expansion of land subsidence in Beijing.
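The ascending/descending fusion step above relies on the fact that the two viewing geometries project the same ground motion onto different LOS directions, so vertical and east-west components can be recovered jointly (north motion is poorly observed by SAR and is commonly neglected). The sketch below uses hypothetical Sentinel-1-like incidence angles and noise-free rates; the study's robust least squares fit plays the same role with many noisy pixels.

```python
import numpy as np

# LOS unit-vector coefficients [east, up]; the east-component sign flips
# between ascending and descending passes. Incidence angles are assumed values.
theta_asc, theta_desc = np.radians(39.0), np.radians(34.0)
A = np.array([[-np.sin(theta_asc), np.cos(theta_asc)],
              [ np.sin(theta_desc), np.cos(theta_desc)]])

d_true = np.array([5.0, -120.0])    # mm/yr: 5 eastward, 120 of subsidence
d_los = A @ d_true                  # simulated asc/desc LOS rates
d_est = np.linalg.solve(A, d_los)   # invert the two LOS observations
print(np.allclose(d_est, d_true))   # True
```

With redundant pixels or epochs the 2x2 solve becomes an (over-determined) least squares problem, and a robust loss down-weights unwrapping outliers.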
Building aggregation is one of the main tasks in map generalization at the micro scale, and is of great significance to map compilation and multi-scale representation of spatial data. To maintain the consistency of spatial characteristics, a building aggregation method considering spatial structure is proposed. First, based on distance-adaptive Delaunay triangles, the spatial structure relationships between adjacent buildings are divided into six types, which are further summarized as positive bridging and oblique bridging according to the bridging surfaces. Then, the bridging area is constructed from bridging triangles, which are identified by projective overlap lines between adjacent buildings, and a rectangularization method is applied to the bridging area to maintain the spatial structure relationships. At the same time, a bridging distance between adjacent buildings is proposed for building clustering, which meets mapping requirements and cognitive habits. Finally, the effectiveness and universality of the method are verified by several experiments. A comparative experiment shows that the method has advantages in maintaining spatial structure and geometrical characteristics, including area consistency and rectangularity.
Objectives: The digital twin city seeks to virtually restore geographic scenes close to reality and provide the audience an immersive experience, and the 3D real urban scene is one of the key technologies for achieving this goal. Nowadays, the combination of geographic information systems (GIS) and game engines has become an industrial trend and provides new opportunities for 3D real urban scenes. Oblique photogrammetric data, a fundamental data source for 3D urban modeling, can represent large-scale, high-precision scenes, while the game engine offers realistic effects and post-production systems that are greatly useful for geospatial data visualization. However, the diversity of specifications and formats of oblique photogrammetric data has led to many different data loading and transformation methods, which remains a problem to be solved for 3D real urban scenes. Methods: This study proposes a data transformation approach from oblique photogrammetric data to rendering resources in a game engine. Notably, the proposed approach allows massive amounts of oblique photogrammetric data to be loaded instantly in Unreal Engine, one of the most popular game engines. We further compared oblique photogrammetric data with rendering resources in the game engine and discussed the significance of such a data transformation. The approach re-organizes the original structure of the data by parsing it into a tree structure that better suits transformation purposes. It then transforms the geometry and texture data respectively while iterating over tree nodes, and considers the essential physical properties of rendering assets. All these features are combined into a packaged data format. Results: In the case study of Futian District, Shenzhen, the results show that the proposed approach successfully converts oblique photogrammetric data to rendering resources in Unreal Engine 4.
In the transformation process, the geometrical structure and collision properties of the triangular mesh are guaranteed, and the color, bump and other visual elements of the original data are kept in materials. The rendering resources are loaded to simulate real urban scenarios at different times of day in Unreal Engine 4. Conclusions: This paper devises a novel approach that converts oblique photogrammetric data into rendering resources for a game engine to load and render, while maintaining the consistency of the data's geometry and texture. Owing to its scalability, it also supports parallel processing of large-scale data transformation when the data are organized in the same directory. The proposed approach may be of great theoretical and practical value for other related 3D model data transformation problems.
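The parse-then-iterate pattern described above can be sketched as a depth-first pass over a tile tree, converting each node's geometry and texture payloads into engine-side assets. The `Node` type, payloads, and `transform` function below are purely illustrative stand-ins (the `upper()` call mimics a format conversion), not the oblique data schema or any Unreal Engine API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Hypothetical tile-tree node holding raw geometry/texture payloads."""
    geometry: bytes = b""
    texture: bytes = b""
    children: list = field(default_factory=list)

def transform(node, out):
    """Depth-first pass: convert each node's payloads into a rendering asset."""
    out.append({"mesh": node.geometry.upper(),       # placeholder geometry conversion
                "material": node.texture.upper()})   # placeholder texture conversion
    for child in node.children:
        transform(child, out)
    return out

tree = Node(b"root", b"t0", [Node(b"tile-a", b"t1"), Node(b"tile-b", b"t2")])
assets = transform(tree, [])
print(len(assets))  # 3
```

Because each subtree is independent once parsed, the same pass parallelizes naturally over sibling subtrees, which matches the scalability claim above.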
Objectives: The emergence of deepfake techniques has created a worldwide information security problem, as deepfake videos are used to manipulate and mislead the public. Although a variety of deepfake detection methods exist, the features they extract generally suffer from poor interpretability. To solve this problem, a deepfake video detection method using a 3D morphable model (3DMM) of the face is proposed in this work. Methods: The 3DMM is employed to estimate the shape, texture, expression, and gesture parameters of the face frame by frame, constituting the basic information for deepfake detection. A facial behavior feature extraction module and a static face appearance feature extraction module are designed to construct feature vectors on a sliding-window basis. The facial behavior feature vector is derived from the expression and gesture parameters, while the appearance feature vector is calculated from the shape and texture parameters. The consistency between the appearance and behavior feature vectors, measured by cosine distance, is the criterion for authenticating the face in each sliding window across the video. Results: The effectiveness of the proposed method is evaluated on three public datasets. The overall half total error rates (HTER) obtained on the FF++, DFD and Celeb-DF datasets are 1.33%, 4.93% and 3.92%, respectively. For the severely compressed videos (C40 of DFD), the HTER is 7.09%, showing good robustness against video compression. The model complexity is around 1/4 of that of the most closely related work. Conclusions: The proposed algorithm is person-specific and clearly interpretable. Compared with state-of-the-art methods in the literature, it demonstrates lower half total error rates, better resistance to video compression and less computational cost.
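The per-window consistency criterion above reduces to a cosine similarity between two feature vectors. The sketch below shows that final decision step only; the vectors and the decision threshold are illustrative placeholders, not learned values from the paper.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

appearance = [0.8, 0.1, 0.3]               # static shape/texture summary (toy)
behavior_genuine = [0.7, 0.2, 0.35]        # dynamics consistent with the face
behavior_forged = [-0.5, 0.9, -0.1]        # mismatched dynamics

threshold = 0.5                            # hypothetical decision threshold
print(cosine(appearance, behavior_genuine) > threshold)   # True  -> genuine
print(cosine(appearance, behavior_forged) > threshold)    # False -> flagged
```

Because both vectors are built from interpretable 3DMM parameters, a low similarity can be traced back to which expression or shape components disagree.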
Objectives: To withstand deep-water pressure, cylindrical and conical structures are often used for submarine pressure hulls, whose section roundness needs to be measured during construction and repair. Due to environmental constraints and the fusion of data from a variety of survey methods, the survey points are often non-uniformly distributed. This causes the results of the classical least squares circle fitting method to deviate from the actual situation, especially for the maximum deflection, which is used as the judgment criterion for roundness. Therefore, a method is needed to eliminate the influence of the non-uniform distribution of survey points. Methods: A non-uniform-sampling weighted least squares circle fitting method and its weighting rules are proposed based on the classical Pratt circle fitting method. The method has three steps. First, the survey points are fitted by the classical Pratt method. Then, each point's weight is calculated according to its corresponding central angle. Finally, the weighted survey points are fitted by the proposed method. Results: Numerical experiments on standard ellipse sampling were carried out. The results show that the maximum deflection is more affected by the non-uniformity of the sampling points than the fitted center and radius are. As the roundness evaluation of the submarine pressure hull is based on the maximum deflection, the fitting results of the proposed method are more accurate and reliable than those of the classical circle fitting method. Conclusions: The proposed method better eliminates the influence of non-uniform sampling on circle fitting, and has good accuracy and engineering practical value.
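The central-angle weighting idea above can be sketched with a weighted algebraic circle fit: each point is weighted by the angular arc it "covers" (half the gap to each neighbor), so a densely sampled arc does not dominate the solution. The paper builds its rule on the Pratt fit; the simpler Kåsa-style linear formulation below is used only to illustrate the weighting, with synthetic unit-circle data.

```python
import math
import numpy as np

def weighted_circle_fit(pts, w):
    """Weighted algebraic (Kåsa-style) circle fit: solve x*D + y*E + F ≈ x²+y²."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    sw = np.sqrt(w)
    sol, *_ = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)
    cx, cy = sol[0] / 2, sol[1] / 2
    r = math.sqrt(sol[2] + cx ** 2 + cy ** 2)
    return cx, cy, r

# Non-uniform sampling of a unit circle: dense on one arc, sparse elsewhere.
angles = np.concatenate([np.linspace(0, 0.5, 40), np.linspace(0.6, 2 * np.pi, 10)])
pts = np.column_stack([np.cos(angles), np.sin(angles)])

# Weight = half the angular gap to each point's neighbours (central-angle rule).
gaps = np.diff(np.concatenate([angles, [angles[0] + 2 * np.pi]]))
w = 0.5 * (gaps + np.roll(gaps, 1))

cx, cy, r = weighted_circle_fit(pts, w)
print(round(abs(cx), 6), round(abs(cy), 6), round(r, 6))  # recovers center (0,0), r = 1
```

On a true circle both weighted and unweighted fits agree; the weighting matters for the ellipse-like hull sections in the paper's experiments, where uniform weights bias the fitted maximum deflection toward the densely surveyed arc.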
Objectives: Lanzhou New District is an important window for the development of northwest China. In recent years, the large-scale mountain excavation and city construction project has led to varying degrees of land subsidence, which seriously threatens the economic and social development of the new district. Methods: Taking Lanzhou New District as the research object, this paper uses small baseline subset interferometric synthetic aperture radar (SBAS-InSAR) to process 58 ascending Sentinel-1A images covering the study area from May 2017 to February 2021. The distribution characteristics and spatio-temporal evolution of land subsidence in the study area were obtained, the SBAS-InSAR measurements were verified by field investigation, and the factors affecting land subsidence were discussed. Results: The results show that land subsidence in Lanzhou New District is mainly concentrated in the mountain excavation and city construction area, with an area of about 33.5 km² and a maximum subsidence rate of -68 mm/a. Spatially, as the excavation and filling projects develop, the land subsidence extends to the southeast of the new district, and the land surface exhibits a long-term continuous settlement process. Conclusions: Land subsidence is closely related to factors such as the mountain excavation and city construction project, the thickness of excavation and filling, geology and roads. Specifically, the mountain excavation and city construction project controls the spatial extent of land subsidence, and the filling thickness controls its magnitude. The nature of the artificial fill is an intrinsic factor causing land subsidence, while road construction also accelerates the process. The research results can provide a scientific basis for the sustainable and safe development, urban planning and construction of the new district.
Objectives: Vegetation plays an important role in ecological environment monitoring, and studying vegetation cover change can provide a reference for regional ecological environment protection. The Altay region in Xinjiang belongs to the ecological function zone of water-conservation mountain grassland, with rich natural resources and beautiful scenery. However, ecological environmental problems have gradually emerged under national development strategies such as One Belt One Road and the Silk Road Economic Belt. Therefore, it is necessary to monitor the dynamics of vegetation change in the area with multi-source remote sensing data to explore the relationship between economic development and ecological environment protection. Methods: This paper uses multi-source remote sensing long time-series data with BFAST (breaks for additive season and trend), topographic position and geostatistical analysis methods to monitor vegetation cover dynamics in Altay, Xinjiang, China during 2000-2019. Results: Processing a large amount of multi-source remote sensing data together with human activity data showed that the number of vegetation breakpoints increased year by year from 2003 to 2009, and then gradually decreased after 2009. Seven types of vegetation breakpoint trends were detected; among them, the degradation-to-growth type accounted for the largest number of breakpoints, while the disturbance-degradation and continuous-degradation types were fewer. The interannual change curve shows that vegetation coverage first decreased and then increased: it exhibited a degradation trend from 2000 to 2008 and a significant improvement trend from 2008 to 2019, with the degradation being greater than the improvement.
Meanwhile, east-facing slopes with an elevation of more than 900 m and a slope of more than 15° (from north to east and from south to east) are the dominant topographic locations of the vegetation degradation type, accounting for 62.4%. Conclusions: The protection of the ecological environment in the Altay area of Xinjiang still needs to be further strengthened, and effective protection measures should be taken.
Objectives: Accurately and automatically extracting buildings from high-resolution remote sensing images is of great significance in many applications, such as urban planning, map data updating and emergency response. However, existing fully convolutional networks (FCN) still suffer from missed and false detections of buildings and missing boundaries caused by spectral confusion. Methods: To overcome these limitations, a multi-feature fusion and object-boundary joint constraint network is presented in this paper. The method is based on an encoder-decoder structure. In the encoding stage, a continuous-atrous spatial pyramid module (CSPM) is positioned at the end to extract and combine multi-scale features without sacrificing too much useful information. In the decoding stage, more accurate building features are integrated into the network and the boundary is refined by a multi-output fusion constraint structure (MFCS) based on objects and boundaries. In the skip connections between the encoding and decoding stages, the convolutional block attention module (CBAM) is introduced to enhance effective features. Furthermore, the multi-level outputs of the decoding stage are used to build a piecewise multi-scale weighted loss function for fine network parameter updating. Results: Comparative experiments were performed on the WHU and Massachusetts building datasets. The results show that: (1) The building extraction results of the proposed method are closer to the ground truth. (2) The quantitative evaluation results are higher than those of five other state-of-the-art approaches; IoU and F1-score reach 90.44% and 94.98% on the WHU dataset, and 72.57% and 84.10% on the Massachusetts dataset, respectively. (3) The proposed model outperforms MFCNN and BRRNet in both complexity and efficiency.
Conclusions: The proposed method not only improves the accuracy and integrity of extraction results for buildings with spectral confusion, but also maintains good boundaries and strong scale robustness.
Integrating scene structure information into information annotation can effectively solve problems in urban augmented reality (AR) such as unclear information indication, confusing occlusion representation and overlapping view layouts. Aiming at the problem of missing scene structure in information annotation, a building scene structure extraction method for urban AR is proposed, which distinguishes geographical entities while balancing accuracy, efficiency and robustness. First, a scene perception network for building scene structure extraction is constructed to extract semantic labels, scene depth and surface normals from a single scene image. These are then transformed into structural features such as building facade corners and orientations, and the best match between them and the building outlines in a 2D map is calculated. Finally, the scene image is reconstructed according to geographical entities, and structure information such as region contours, scene depth and facade orientations is generated. In the experiments, 9470 sets of samples generated from Google Street View data are employed to train and test the scene perception network, while the effectiveness of scene structure extraction is tested on 32 sets of building scenes in Graz. The results show that the proposed method can extract the structure of building scenes in near real time, and the extracted facade contours are more regular. In the case of geo-registration errors or partial occlusion, the quality of facade extraction is significantly better than that of methods that analyze the scene image from the map alone, demonstrating better robustness.
The lander-ascender combination and the ascender of the Chang'E-5 (CE-5) probe successively landed on the Moon. Because the missions and landing modes of the two differ, we use different methods to determine their landing positions. The lunar positions are obtained by combining data from the UXB (unified X-band) tracking network and the VLBI (very long baseline interferometry) network. A joint statistical positioning method is investigated to determine the position of the CE-5 lander-ascender combination. Although the precision difference between the data-transmission signal and the DOR (differential one-way ranging) side-tone signal is marked, its effect on the positioning is not significant. Longer observation arcs help improve the position precision, but an arc length of 1 hour already satisfies the accuracy requirements. A kinematic statistical method based on polynomial fitting is used to determine the landing trajectory and lunar position of the CE-5 ascender, and is compared with the point positioning method; their differences are better than 40 m. The landing moment of the CE-5 ascender is also determined to millisecond accuracy. Our method provides a valuable reference for the lunar positioning of probes in the future Phase IV of the Chinese lunar exploration project.
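The kinematic statistical step above amounts to fitting a low-order polynomial to noisy trajectory samples and reading the landing state off the fit at the touchdown epoch. The sketch below uses an entirely synthetic descent profile (not CE-5 tracking data) to show the mechanics with `numpy.polyfit`.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 60.0, 120)                      # s before touchdown (synthetic)
z_true = 2000.0 - 40.0 * t + 0.05 * t ** 2           # descending altitude (m)
z_obs = z_true + rng.normal(scale=5.0, size=t.size)  # 5 m tracking noise

coef = np.polyfit(t, z_obs, deg=2)                   # kinematic polynomial fit
z_land = np.polyval(coef, 60.0)                      # evaluate at the landing epoch
z_ref = np.polyval([0.05, -40.0, 2000.0], 60.0)      # noise-free truth at touchdown
print(abs(z_land - z_ref) < 10.0)                    # True: fit averages out the noise
```

Fitting the whole arc statistically suppresses per-epoch tracking noise, which is why the trajectory-based position agrees with independent point positioning at the tens-of-meters level.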
Objectives: Mine slope instability is one of the main factors restricting the safe production of open-pit mines in China. Methods: Ground-based SAR (synthetic aperture radar) interferometry has gradually been introduced into slope safety monitoring and early-warning prediction in open-pit mines. However, the high-frequency rolling updates of ground-based radar interferometry data lead to large accumulated errors and obscure curve mutation characteristics. Processing the original data by dislocation subtraction and the velocity reciprocal method can effectively reduce the vibration of high-frequency data, improve the readability of critical sliding data, and highlight the trend characteristics of key deformation data. Based on the analysis of cumulative displacement curves, velocity curves and reciprocal velocity curve groups processed over different periods, three characteristic points are found in the curve group: the sudden deformation increase point, the velocity growth point and the stable vibration point. Results: Through these characteristic points, slope landslide disasters can be forecast in advance. Conclusions: By identifying the three characteristic points, a possible landslide can be effectively recognized in advance and the landslide time can be predicted, which provides a new technical path and solution for landslide early warning and prediction based on ground-based interferometric radar.
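The velocity reciprocal analysis above exploits a classical property of accelerating slopes: as failure approaches, 1/v tends linearly toward zero, so a straight-line fit to the reciprocal velocity curve forecasts the failure time at its zero crossing. The data below are synthetic and idealized (noise-free), purely to show the extrapolation step.

```python
import numpy as np

t = np.arange(0.0, 9.0)                  # monitoring epochs (days), synthetic
t_fail_true = 10.0
v = 1.0 / (t_fail_true - t)              # accelerating displacement rate
inv_v = 1.0 / v                          # reciprocal velocity: linear in t

slope, intercept = np.polyfit(t, inv_v, 1)
t_fail_pred = -intercept / slope         # where the 1/v line crosses zero
print(round(t_fail_pred, 3))             # 10.0
```

With real radar data the 1/v series is noisy, which is why the abstract's dislocation subtraction and smoothing steps are needed before the extrapolation becomes readable.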
Objectives: When performing plantation surveys with laser point cloud data, points are missing from the scanned point clouds due to occlusion and self-occlusion of trees during laser scanning, the felling of trees and other reasons. As a result, the locations of missing trees are inaccurate and the forest survey results have large errors. The key to solving this problem is to fill in the missing tree point clouds. Methods: This paper defines a concept named degree-of-collinearity, and constructs a method based on degree-of-collinearity combined with straight-line detection to fill in the missing data. Results: On simulated data, the average accuracy of the proposed algorithm is 97.28%; on plantation data, the algorithm detects the locations of 9 missing trees, and the degree-of-collinearity rises from 0.2193 to 0.2705. Conclusions: The experimental results show that this method can optimally infer the missing locations, strengthen the collinear relationships of the filled data, and can also be applied to counting the missing trees in plantation forests.
Objectives: The key phenological information of oilseed rape (Brassica napus L.) plays an important role in field management, viewing-time prediction and yield estimation, and is an important part of precision agriculture. Polarimetric synthetic aperture radar (PolSAR) shows great potential in phenological phase identification with its all-weather monitoring capability and its sensitivity to crop structural information. Methods: In this study, we identified the five phenological phases of oilseed rape in the test area with five time-series fully polarimetric Radarsat-2 acquisitions covering the whole growth period of the crop. Six typical Stokes parameters are extracted and applied to the identification of the phenological phases: the averaged intensity (g0), normalized average intensity (g0m), averaged degree of polarization (ρm), perimeter degree of the zero orientation route (PDor), inclination degree of the zero aperture route (IDap), and arc asymmetry degree of the zero aperture route (AADap). The phenological phases are identified by a decision tree (DT) algorithm based on a comparative analysis of the dynamic responses of the six Stokes parameters to the rape growth stages. Results: Among the extracted Stokes parameters, all except ρm and AADap show great sensitivity to changes in the oilseed rape phenological phases, and the decision tree algorithm performs well in the classification. Conclusions: The classification results agree well with the field-measured samples; the overall classification accuracy is 87.4%, while the highest classification accuracy for a single phenological phase is 94.3%.
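A decision tree classifier of the kind described above reduces to a cascade of threshold tests on the sensitive Stokes parameters. The sketch below is a hand-written, hypothetical tree: the thresholds, test order, and phase names are illustrative placeholders, not the trained tree or phase definitions from the study.

```python
def classify_phase(g0, g0m, pdor, idap):
    """Hypothetical DT: threshold tests on Stokes-derived parameters -> phase label."""
    if g0 < 0.10:                  # weak backscatter: sparse early canopy (assumed)
        return "seedling"
    if pdor > 0.60:                # strong orientation signature (assumed)
        return "bolting" if g0m < 0.45 else "flowering"
    return "podding" if idap > 0.30 else "ripening"

sample = {"g0": 0.25, "g0m": 0.40, "pdor": 0.75, "idap": 0.20}
print(classify_phase(**sample))    # bolting
```

A trained tree would learn these splits from the field-measured samples, typically excluding insensitive parameters such as ρm and AADap, as the comparative analysis suggests.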
Millimeter-level dynamic maintenance technology is essential to the realization of mm-level terrestrial reference frames. Current dynamic maintenance technology mainly comprises linear maintenance based on linear station velocities, nonlinear maintenance that comprehensively considers the nonlinear motion of stations and geocenter motion, and epoch reference frame technology. First, the development status of linear maintenance technology is summarized. Then, nonlinear maintenance technology and its research progress are discussed by reviewing methods for modeling nonlinear coordinate variations from both the influence-mechanism and data-driven perspectives. Next, the realization of epoch reference frames and their application in reference frame maintenance are introduced. Finally, based on the analysis of the status quo, several key issues that must be solved to achieve dynamic maintenance of a mm-level terrestrial reference frame are proposed.
Objectives: Over the past 40 years, the risk posed by sea level rise in China's coastal areas has further increased with the acceleration of sea level rise, especially in areas with serious land subsidence (e.g., Tianjin, Shanghai). However, it is difficult to know the real relative sea level (RSL) change along the Tianjin coast because of the limited time span and the subsidence correction of the public tidal data. To solve this issue, we propose a method for analyzing RSL change using satellite altimetry and Global Navigation Satellite System (GNSS) data. Methods: The method builds on the idea of collocated GNSS and tide gauge observations. To obtain the RSL in different areas of the Tianjin coast, we simulated four virtual GNSS/tide gauge co-located stations. First, the absolute sea level (ASL) change and the vertical land motion (VLM) at the tide gauge stations are determined from satellite altimetry data and co-located GNSS observations, respectively. Then, the relative sea level rise at Tanggu and at the four virtual tide gauge stations is calculated. Finally, the feasibility of our method is assessed against multi-year leveling data. Results: The results show that the RSL rate at the Tanggu tide station was 13.45±0.45 mm/a over the past 25 years, the RSL rates of the four virtual stations varied from 11.15±0.44 mm/a to 19.17±0.45 mm/a, and the mean rate along the Tianjin coast was 15.09±0.45 mm/a. Vertical land motion and its non-uniform distribution were the main factors driving the RSL rise and its regional differences, with a contribution rate of more than 70%. Conclusions: Our research provides a new and feasible method for analyzing the RSL rise along the Tianjin coast. Nevertheless, it is still necessary to densify the tide observation facilities along the coast and to retain and release the original tidal data, which would better serve sea level monitoring and research in the Tianjin coastal area.
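The decomposition described in the Methods above reduces to simple arithmetic: the relative sea level rate at a gauge is the altimetry-derived absolute rate minus the GNSS-derived vertical land motion rate (subsidence being a negative VLM rate). A minimal sketch, with illustrative placeholder numbers rather than the paper's station values:

```python
# Sketch of the RSL decomposition: RSL rate = ASL rate (altimetry) - VLM rate (GNSS).
# Subsidence (negative VLM) therefore amplifies the relative sea level rise.
# The rates below are illustrative assumptions, not values from the study.

def rsl_rate(asl_rate_mm_a: float, vlm_rate_mm_a: float) -> float:
    """Relative sea level rate (mm/a); subsidence is a negative VLM rate."""
    return asl_rate_mm_a - vlm_rate_mm_a

def vlm_contribution(asl_rate_mm_a: float, vlm_rate_mm_a: float) -> float:
    """Fraction of the RSL rate contributed by vertical land motion."""
    return abs(vlm_rate_mm_a) / abs(rsl_rate(asl_rate_mm_a, vlm_rate_mm_a))

asl = 3.0    # altimetry ASL rate, mm/a (illustrative)
vlm = -12.0  # GNSS vertical rate, mm/a (subsidence, illustrative)
print(rsl_rate(asl, vlm))          # 15.0 mm/a
print(vlm_contribution(asl, vlm))  # 0.8 -> VLM contributes 80% of the RSL rise
```

With these placeholder rates the VLM share is 80%, consistent in character with the >70% contribution the abstract reports for the Tianjin coast.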
Underground coal fires are widely distributed and repeatedly treated, causing waste of resources and ecological damage. China suffers the most serious coal spontaneous combustion disasters in the world: about 80% of its coal seams are prone to spontaneous combustion. Rapid, comprehensive, timely, and accurate detection of hidden fire sources in coalfields is the basis and prerequisite for fire prevention, extinguishing, and ecological management. Multi-source remote sensing has great potential for this application, but it must see through the surface and probe deep underground, and many bottlenecks remain to be solved. First, the problem of multi-source remote sensing detection of hidden coalfield fires is abstracted into three key nodes: same source (the same underground spontaneous combustion source), multiple phenomena (the various abnormal phenomena formed at the surface), and multiple images (the surface imagery of the abnormal information "photographed" by multi-source remote sensing). The research chain of "multiple phenomena from the same source, transmission from source to phenomena, mapping from phenomena to images, and recognition of the source from multiple images" is then analyzed. On this basis, the technical bottlenecks of multi-source remote sensing detection of concealed fire sources in coalfields are discussed. Second, based on case studies of concealed fire detection in the Fukang, Miquan, and Baoan coal fire areas of the Xinjiang Uygur Autonomous Region, China, the authors report the progress and effects of polarized time-series InSAR deformation detection in fire areas, fire area delineation by a spatio-temporal temperature threshold method, fire area identification combining thermal infrared, radar, and optical satellite remote sensing, and unmanned aerial vehicle fire area monitoring experiments.
Finally, future development directions are outlined for integrating multi-source satellite remote sensing imagery with space-air-ground-mine collaborative perception and cognition.
Taylor series expansion is often used in the downward continuation of potential fields, and its performance depends on the accuracy and reliability of the vertical or radial partial derivatives (VPDs or RPDs) of the potential field parameters. To avoid the singularity on the spherical boundary and the uncertainty introduced into the computational results when closed analytic kernel functions are used to evaluate the partial derivatives, and considering that all kinds of gravity observations behave as band-limited signals after filtering, this research expresses the kernel function of the Poisson integral for the external gravity anomaly as a spherical harmonic series, which is then truncated to a band-limited summation covering the same spectral range as the gravity observations. We then derive a set of band-limited formulas for the high-order RPDs, which are modified and applied to the downward continuation of the gravity anomaly by Taylor series expansion. The formulas are validated with the ultra-high-degree geopotential model EGM2008 in a two-stage procedure. Numerical tests of the band-limited formulas and of the Taylor-series downward continuation model show that the proposed band-limited formulas are reliable and valid, and are superior to other models in stability and accuracy.
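The two ingredients named above can be sketched as follows, in notation assumed here rather than taken from the paper: the truncated Taylor series for continuation, and a band-limited Legendre expansion replacing the closed Poisson kernel (the exponent n+2 reflects the common convention in which r·Δg is treated as harmonic).

```latex
% Sketch (assumed notation): downward continuation of the gravity anomaly
% from height h above the boundary sphere r = R by a truncated Taylor series,
\Delta g(R,\theta,\lambda) \;\approx\; \sum_{k=0}^{K} \frac{(-h)^{k}}{k!}\,
  \left.\frac{\partial^{k}\Delta g}{\partial r^{k}}\right|_{r=R+h},
% with the RPDs evaluated through a band-limited Poisson kernel, truncated to
% the spectral band [2, N] shared with the filtered observations:
K_{N}(r,\psi,R) \;=\; \sum_{n=2}^{N} (2n+1)\left(\frac{R}{r}\right)^{n+2} P_{n}(\cos\psi).
```

Differentiating the truncated kernel term by term with respect to r is what yields closed band-limited expressions for the high-order RPDs without the boundary singularity of the analytic kernel.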
Objectives: Distances are functions of spatial positions. Precisely revealing the functional relationship that quantitatively embodies the propagation of uncertainty from spatial positions to their distance, a key scientific problem in geoinformatics in urgent need of solution, has important theoretical and practical significance. Methods: To overcome the limitations of presently available solutions, and under the premise that the true position corresponding to the observed position of an uncertain point follows a complete spatial random (uniform) distribution within the error circle, this article derives the probability distribution function of distance uncertainty and the corresponding density function, both between a certain point and an uncertain point and between two uncertain points, in two-dimensional space. These functions are employed to explore how point uncertainty propagates into distance uncertainty, opening up a new way of studying and solving the distance uncertainty problem. Results: The results show that, in all cases: (1) When the radius of the error circle (corresponding to the point position accuracy) and the observed distance between the points change simultaneously, their ratio has a significant positive correlation with the distance uncertainty. (2) When the radius remains constant, the distance uncertainty has a significant negative correlation with the observed distance. (3) When the observed distance remains constant, the distance uncertainty has a significant positive correlation with the radius. Conclusions: Comparing the distance uncertainty involving one uncertain point with that between two uncertain points, the latter is clearly greater when the error-circle radius and the observed distance are the same for both; otherwise, the two cases are not comparable.
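The setting in the Methods, a true position uniformly distributed within the error circle, lends itself to a quick Monte Carlo check of the trends reported in the Results. This is an illustrative sketch, not the paper's analytical derivation:

```python
import math
import random

def sample_in_circle(cx, cy, r, rng):
    """Uniform sample inside an error circle (complete spatial randomness).
    The sqrt transform of the radius makes the areal density uniform."""
    rad = r * math.sqrt(rng.random())
    ang = 2 * math.pi * rng.random()
    return cx + rad * math.cos(ang), cy + rad * math.sin(ang)

def distance_std(d, r, n=50_000, seed=1):
    """Std of the distance between a certain point at the origin and an
    uncertain point observed at (d, 0) with error-circle radius r."""
    rng = random.Random(seed)
    dists = []
    for _ in range(n):
        x, y = sample_in_circle(d, 0.0, r, rng)
        dists.append(math.hypot(x, y))
    mean = sum(dists) / n
    return math.sqrt(sum((s - mean) ** 2 for s in dists) / n)

# Larger error-circle radius at a fixed observed distance -> larger distance
# uncertainty, matching finding (3) of the abstract:
print(distance_std(100.0, 1.0) < distance_std(100.0, 5.0))  # True
```

When the observed distance is much larger than the radius, the simulated standard deviation approaches r/2, the standard deviation of one coordinate of a uniform disk, which makes the positive correlation with r in the Results plausible.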
Objectives: Establishing a unified global height datum is one of the core objectives of the international geodetic community, and a necessary infrastructure for geoscience research at the global scale and for cross-border engineering applications. The International Association of Geodesy (IAG) released the definition of the International Height Reference System (IHRS) in 2015 and in 2019 set out the goal of establishing the International Height Reference Frame (IHRF). Methods: In this paper, the theory, methods, and practical problems related to the IHRS and IHRF are reviewed on the basis of the theoretical foundation and definition of the global height reference system. Four main aspects are considered: 1) determination of the gravity potential W0 of the global geoid; 2) determination of gravity potential from high-degree gravity field models; 3) determination of gravity potential from regional gravity field modeling; 4) two typical case studies: the IAG Colorado geoid model experiment and the realization of the IHRS in the 2020 Qomolangma Height Survey. Results: The two case studies demonstrate that the accuracy of a gravimetric geoid model can reach 1 cm (0.1 m²s⁻² in gravity potential) in flat areas and ordinary mountainous areas, and is expected to reach 2~3 cm (0.2~0.3 m²s⁻² in gravity potential) even in very complex mountainous areas such as Mount Qomolangma. Conclusions: Based on the results of the two case studies, together with observation technology, data resources, spatial distribution, and other factors, a preliminary strategy for establishing the IHRF is proposed, including the layout plan of IHRF reference stations, determination methods for the IHRF-related gravity potential, data requirements, the standards/conventions to be followed, and the expected accuracy. In addition, we discuss the potential contributions of optical atomic clocks and relativistic geodesy to the unification of the global height datum.
Objectives: With rapid urbanization, traffic congestion has become a common problem faced by big cities all over the world. Scientific analysis of road network carrying capacity and traffic impact factors is a prerequisite for optimizing the spatial allocation of road traffic resources. How to give full play to the advantages of spatial information technology, efficiently and accurately analyze the balance of regional road network carrying capacity, and uncover the spatial relationship between traffic state and its influencing factors is very important for alleviating urban traffic congestion. Methods: A grid model-based road geographically weighted regression (RG-GWR) analysis method is proposed for the first time. The carrying capacity ratio Q of the regional road network is calculated with a "nine-grid" model composed of two kinds of nested grids. By calculating the ratio Q of the central cell of the nine-grid and analyzing the Q value according to the law of flow conservation, areas with unbalanced road network configuration are identified. By analyzing the regression relationship between the traffic situation of a grid cell and its influencing factors, the spatio-temporal traffic operation situation is obtained. Taking Chengdu as an example, three grid models of 3 km×3 km, 1 km×1 km, and 1/3 km×1/3 km are constructed. Results: The identification results match the actual road conditions from AMap at rates of 62.5% and 87.5%. By further analyzing the traffic influencing factors, a 1 km×1 km RG-GWR model is constructed, and the goodness of fit of the traffic situation in different periods exceeds 80%. Conclusions: The results show that the grid model is an efficient and feasible method for analyzing road network carrying capacity and traffic impact factors from a spatial perspective, with broad prospects for serving intelligent platforms such as smart cities and intelligent transportation.
Ripley's K-function has been widely applied in fields such as ecology, criminology, and geography. Because spatial point processes are mathematically subtle, the corresponding papers contain many errors in measuring the distributions of spatial objects. This paper contributes to an improved understanding of the application of Ripley's K-function. The focus is first on the estimation methods of spatial point pattern analysis based on Ripley's K-function. The formula of Ripley's K-function is corrected, and then the various edge effect correction methods applied in K-function analysis are compared in detail. The relative merits of the various algorithms are identified by considering the dynamicity of simulated random point patterns, the desirability of the parameter definitions, and the estimation reliability of the edge correction methods. The modified algorithms are applied to the point pattern analysis of fruit plants in Xinping County, Yunnan Province. The results show that the number of Monte Carlo simulations has a great influence on the analysis of the observed point patterns. Owing to the dynamicity of the simulated random patterns, the significance tests of observed patterns change between runs, which is responsible for the uncertainty in Ripley's K-function measures produced by successive runs of the ArcGIS software package. The modified edge effect correction algorithms are advantageous: the rectangular study area is generalized to complex shapes with sufficient robustness.
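For reference, the basic estimator of Ripley's K discussed above can be sketched as follows. This is the common textbook form without edge correction, for illustration only; it is not the paper's corrected formula or any of its edge-corrected variants:

```python
import math

def ripley_k(points, d, area):
    """Naive (edge-uncorrected) estimator of Ripley's K at distance d:
    K(d) = A / (n (n-1)) * sum over ordered pairs i != j of 1[dist(i,j) <= d]."""
    n = len(points)
    count = 0
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            if i != j and math.hypot(xi - xj, yi - yj) <= d:
                count += 1
    return area * count / (n * (n - 1))

# Under complete spatial randomness K(d) is close to pi*d^2; clustering pushes
# it above. A tight 2x2 cluster with unit spacing in a 10x10 window:
pts = [(x, y) for x in (4, 5) for y in (4, 5)]
print(ripley_k(pts, 1.0, 100.0))  # ~66.7, far above pi*1^2 ~ 3.14 -> clustered
```

Points near the window boundary lose part of their distance-d neighborhood, which biases this naive estimator downward; that bias is exactly what the edge effect corrections compared in the paper are designed to remove.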
Objectives: With the continuous development of modern observation techniques, processing methods that consider only additive errors can no longer meet the requirements. Most existing methods for handling inequality constraints are based on additive error models, including Gauss-Markov models and errors-in-variables models, while there is little research on processing methods for mixed additive and multiplicative (MAM) random error models. Methods: Based on the least squares principle and the idea of zero and infinite weights, we construct a penalty function from the given inequality constraints, and derive a simple iterative method (SIM) for estimating the MAM parameters under inequality constraints. Building on the original SIM, we multiply the penalty term by a penalty factor that increases with the number of iterations, to remedy the defects of the original method. Results: Two sets of cases show that the improved SIM effectively solves the problem that the original method does not converge when applied to the MAM error model with inequality constraints. The structure of the improved SIM is simple and easy to implement. In addition, comparison with other schemes shows that this method obtains better parameter estimates. Conclusions: The feasibility and effectiveness of the improved SIM for parameter estimation of MAM error models with inequality constraints are verified, and the method is shown to be applicable to the processing of large batches of data.
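The increasing-penalty idea described in the Methods can be illustrated on the simplest possible problem. The sketch below is not the paper's SIM for MAM models; it only shows how a penalty factor that grows each iteration drives an unconstrained least-squares estimate toward an inequality bound:

```python
# Minimal sketch of penalized least squares with an increasing penalty factor:
# estimate a single parameter x from direct observations l_i = x + noise,
# subject to x <= c. A violated constraint is folded into the objective as
# mu * (x - c)^2, and mu grows each iteration.

def penalized_lsq_1d(obs, c, mu0=1.0, growth=10.0, iters=30):
    x = sum(obs) / len(obs)          # unconstrained least-squares start
    mu = mu0
    for _ in range(iters):
        if x <= c:                   # constraint inactive: solution stands
            break
        # minimize sum_i (l_i - x)^2 + mu * (x - c)^2  ->  closed-form update
        x = (sum(obs) + mu * c) / (len(obs) + mu)
        mu *= growth                 # increasing penalty pushes x toward c
    return x

obs = [2.9, 3.1, 3.0, 3.2]           # unconstrained estimate is 3.05
print(penalized_lsq_1d(obs, c=2.5))  # converges toward the bound 2.5
```

With a fixed penalty factor the iterate stalls at a biased compromise between the data and the bound; letting mu grow is what makes the violation shrink to zero, mirroring the convergence fix the abstract attributes to the improved SIM.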
In multi-baseline stereo image MVLL matching, the accurate elevation of an object-space point is searched along the ground plumb line, which is equivalent to an accurate parallax search in epipolar image space. Under these conditions, the MVLL matching measure can be computed and optimized under a semi-global constraint to obtain more reliable matching results, yielding an optimal solution of the multi-baseline stereo image MVLL matching method under the semi-global constraint. The effectiveness of the method is verified by experiments on and analysis of various terrain features and local image areas. The experimental results show that the method can optimize the object-space matching measure for different terrain features, obtain more reliable matching results, and achieve higher image matching performance.
Objectives: The stochastic model of observation information describes the accuracy of the observations and their correlations, and plays an important role in parameter estimation, quality control, and accuracy evaluation. In global navigation satellite system (GNSS) precise positioning, an accurate stochastic model is essential for improving the accuracy of the float solution, enhancing the success rate of ambiguity resolution, improving gross error detection, and obtaining accurate and reliable positioning results. An optimization method for the stochastic model of the BeiDou navigation satellite system (BDS) is proposed by analyzing the properties of four classic elevation-dependent stochastic models based on the relationship between BDS satellite observations and elevation. Methods: First, the accuracy of the BDS four-frequency observations is evaluated by a simplified Helmert variance component estimation method. Then, the model parameters are fitted to the estimated precision of the observations. Finally, the statistical characteristics of the four stochastic models, based on a piecewise function, a sine function, a cosine function, and an exponential function, are examined with the overall test and the ω-test. Results: The results show that the accuracy of pseudorange and carrier phase for BDS-3 satellites is related to elevation, and the degree of correlation differs by GNSS observation type. The exponential stochastic model performs best in terms of fitting error, the overall test, and the ω-test. The maximum fitting errors of pseudorange and carrier phase are 0.029 m and 5.484 mm, respectively. The false alarm rates of the overall test for the float solution and the fixed solution are 5.1% and 4.9%, respectively. In addition, the maximum false alarm rates of the ω-test are 6.8% for the pseudorange and 4.9% for the carrier phase.
Conclusions: The exponential stochastic model achieves the shortest convergence time and the highest positioning accuracy for BDS-3 quad-frequency precise point positioning. It can also accurately describe the precision of BDS observations and improve the accuracy and reliability of BDS precise positioning results.
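The four elevation-dependent model families named in the Methods can be sketched as follows. The parameterizations and coefficient values here are generic textbook forms assumed for illustration, not the paper's fitted BDS-3 parameters:

```python
import math

# Common forms of elevation-dependent standard deviation models for GNSS
# observations; E is the satellite elevation angle in degrees, and a, b, E0
# are illustrative coefficients (assumptions, not the paper's fitted values).

def sigma_sine(E, a, b):
    """Sine model: sigma^2 = a^2 + b^2 / sin^2(E)."""
    return math.sqrt(a ** 2 + b ** 2 / math.sin(math.radians(E)) ** 2)

def sigma_cosine(E, a, b):
    """Cosine model: sigma = a + b * cos(E)."""
    return a + b * math.cos(math.radians(E))

def sigma_exponential(E, a, b, E0):
    """Exponential model (Euler-Goad form): sigma = a + b * exp(-E / E0)."""
    return a + b * math.exp(-E / E0)

def sigma_piecewise(E, sigma_low, sigma_high, cutoff=30.0):
    """Piecewise model: one sigma below the cutoff elevation, another above."""
    return sigma_low if E < cutoff else sigma_high

# Every model assigns a larger sigma (smaller weight) to low-elevation
# satellites, which is the behavior the abstract relates to elevation:
print(sigma_sine(10.0, 0.3, 0.3) > sigma_sine(60.0, 0.3, 0.3))                # True
print(sigma_cosine(10.0, 0.3, 0.5) > sigma_cosine(60.0, 0.3, 0.5))            # True
print(sigma_exponential(10.0, 0.3, 0.5, 20.0) > sigma_exponential(60.0, 0.3, 0.5, 20.0))  # True
```

In a weighted least-squares adjustment the observation weight is then taken as 1/sigma(E)^2, so the choice among these functions directly shapes the float solution and the test statistics compared in the abstract.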
Objectives: The accuracy of the traditional gravity-geologic method (GGM) for the inversion of seafloor topography needs to be improved. Methods: This paper proposes an improved GGM (GGM2) that accounts for the nonlinear term. Results: The 1 arc-minute seafloor topography of the South China Sea is inverted by GGM2, and its accuracy is evaluated against check points to verify its effectiveness. Conclusions: The results show that neglecting the nonlinear term of seafloor topography produces a deviation of approximately 50 mGal in mountainous areas with an undulation of approximately 2 km. The nonlinear term of the seafloor topography can be recovered by the improved GGM from the short-wavelength gravity anomaly. Compared with the traditional GGM (GGM1), ETOPO1, and SIO V23.1, GGM2 has the best accuracy: the root mean square (RMS) of the deviations between GGM2 and the check points is 130.4 m. Compared with GGM1, the improvement of the proposed method is 10.8 m near the Huangyan seamount chain and 4.7 m near the Zhongsha Islands.
Objectives: Accurate and reliable prediction of watershed groundwater storage can help ensure the sustainable use of a watershed's aquifers for urban and rural water supply. Methods: On the basis of precipitation information, the gravity recovery and climate experiment (GRACE), and the global land data assimilation system (GLDAS), groundwater storage is predicted using seasonal adjustment and a non-linear autoregressive (NAR) neural network. The NAR model without seasonal adjustment, the autoregressive (AR) model, and the seasonal autoregressive integrated moving average (SARIMA) model are then compared. Results: Taking the Changjiang, Lena, Ob, and Yenisey basins as case studies, the results indicate that the deseasonalized precipitation and groundwater series obtained by seasonal adjustment are independent and well fitted by AR(1), which provides a basis for choosing the number of time delays of the NAR network. The performance of the NAR neural network with seasonal adjustment falls into the excellent category for each basin and is superior to the AR and SARIMA models, with a root mean square error (RMSE) of less than 1 cm and a correlation coefficient above 0.96. Conclusions: Integrating the seasonal adjustment technique with the NAR neural network not only improves the prediction accuracy of groundwater storage but also reduces the convergence time. The proposed method can effectively forecast groundwater storage, with performance improved by the seasonal adjustment, which reduces the complexity of the data.
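The seasonal-adjustment step that precedes the NAR network can be sketched as subtracting the monthly climatology and then checking the lag-1 autocorrelation of the residual. This is a generic illustration on synthetic data, not the paper's processing chain or its GRACE/GLDAS series:

```python
import math

# Sketch of seasonal adjustment for a monthly storage series: remove the
# month-of-year means, then estimate the AR(1) coefficient of the residual,
# the quantity used to justify a short time delay in the NAR network.

def deseasonalize(series, period=12):
    """Subtract the mean of each month-of-year (the seasonal climatology)."""
    clim = [0.0] * period
    counts = [0] * period
    for i, v in enumerate(series):
        clim[i % period] += v
        counts[i % period] += 1
    clim = [s / c for s, c in zip(clim, counts)]
    return [v - clim[i % period] for i, v in enumerate(series)]

def ar1_coefficient(x):
    """Lag-1 autocorrelation, a simple AR(1) coefficient estimate."""
    mean = sum(x) / len(x)
    num = sum((x[i] - mean) * (x[i - 1] - mean) for i in range(1, len(x)))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

# Synthetic 10-year monthly anomaly: annual cycle plus a slow trend
series = [10 * math.sin(2 * math.pi * t / 12) + 0.01 * t for t in range(120)]
resid = deseasonalize(series)
print(abs(sum(resid)) < 1e-6)  # True: each month's mean is removed exactly
```

The annual cycle cancels exactly because it repeats identically every year, while the remaining slow variation keeps a strong positive lag-1 autocorrelation, which is the AR(1)-like memory the abstract exploits when sizing the NAR time delay.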
Objectives: Coastal sea level rise poses direct threats to human livelihoods; understanding its causes is of scientific importance and can inform strategies for adapting to sea level rise. This paper investigates the causes of coastal sea level change using satellite altimetry, satellite time-variable gravity, and Argo floats. Methods: Given that time-variable gravity suffers from serious leakage over coastal areas, we use land mass variations from a mascon solution to simulate the leakage from land into the oceans, which is estimated to be 0.68 mm/a. Results: On seasonal and non-seasonal scales, satellite altimetry measurements are well explained by the sum of time-variable gravity and Argo floats, demonstrating closure of the coastal sea level budget. On the other hand, satellite altimetry suggests a coastal sea level rise of (3.32±0.45) mm/a, whereas the sum of time-variable gravity and Argo floats yields (2.25±0.51) mm/a. Conclusions: There is a discrepancy of about 1 mm/a in the coastal sea level budget, suggesting that closing the budget for the coastal zone in terms of trends is challenged by uncertainties. This is because in situ Argo measurements are sparse over the coastal zone, which may lead to an underestimated steric trend; furthermore, the leakage correction and vertical land motion may also introduce uncertainties.
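The budget check implied above is a one-line computation: the altimetry trend should equal the mass (gravity) plus steric (Argo) trend, and the residual is judged against the combined uncertainty. The rates are taken from the abstract; the uncorrelated-error propagation is our assumption:

```python
import math

# Sea level budget residual: altimetry trend minus (mass + steric) trend,
# with the 1-sigma uncertainty of the residual from uncorrelated errors.

def budget_residual(alt, alt_sig, mass_plus_steric, mps_sig):
    resid = alt - mass_plus_steric
    sigma = math.hypot(alt_sig, mps_sig)  # sqrt(sig1^2 + sig2^2)
    return resid, sigma

# Rates from the abstract, in mm/a:
resid, sigma = budget_residual(3.32, 0.45, 2.25, 0.51)
print(round(resid, 2), round(sigma, 2))  # 1.07 0.68
```

The residual (about 1.07 mm/a) exceeds the combined 1-sigma uncertainty (about 0.68 mm/a) but not twice it, consistent with the abstract's conclusion that trend closure is challenged by, rather than flatly contradicted by, the error budget.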
With the full completion of the BeiDou-3 satellite navigation system, its satellite atomic clocks have gradually entered stable operation. Performance evaluation of the satellite-borne atomic clocks is crucial to improving the system's global service performance. This paper establishes evaluation indices for the frequency characteristics of satellite atomic clocks and the corresponding calculation methods. The average frequency fitted hourly is used to reflect the frequency accuracy and frequency drift characteristics of the satellite-borne atomic clocks, and, considering the error characteristics of different clock bias products, a combined Hadamard deviation is proposed to accurately evaluate their frequency stability. The accuracy of the clock bias determination and the frequency performance of the satellite clocks were evaluated based on the BeiDou-3 multi-satellite precision orbit determination (MPOD) and two-way satellite time and frequency transfer (TWSTFT) clock products. The analysis demonstrates that the random noise level, represented by the root mean square (RMS) of the hourly fitting residuals, is 0.2~0.25 ns for the TWSTFT clock bias and no larger than 0.02 ns for the MPOD clock bias; the measurement accuracy, represented by the RMS of the daily fitting residuals, is 0.35~0.42 ns for the TWSTFT clock bias and 0.1~0.18 ns for the MPOD clock bias; the frequency accuracy of the satellite atomic clocks is on the order of 10⁻¹²~10⁻¹¹; the daily frequency drift of the passive hydrogen masers (PHMs) is on the order of 10⁻¹⁵~10⁻¹⁶, their 30-day frequency drift does not exceed the order of 10⁻¹³, and their daily frequency stability reaches the order of 10⁻¹⁵; the rubidium clocks, in contrast, generally show obvious frequency drift, with daily drift reaching the order of 10⁻¹³.
The frequency drift characteristics and long-term frequency stability of the BeiDou-3 PHMs are overall better than those of the rubidium clocks.
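The stability measure named in the abstract can be sketched with the standard overlapping Hadamard deviation on clock phase data; the "combined" variant the authors propose for mixed clock bias products is not reproduced here:

```python
import math

# Overlapping Hadamard deviation for clock phase (time) data x in seconds,
# sampled at interval tau0, evaluated at averaging time tau = m * tau0:
#   Hsigma_y(tau)^2 = sum_i (x[i+3m] - 3x[i+2m] + 3x[i+m] - x[i])^2
#                     / (6 tau^2 * number_of_terms)

def hadamard_deviation(phase, tau0, m=1):
    n = len(phase)
    terms = [
        (phase[i + 3 * m] - 3 * phase[i + 2 * m]
         + 3 * phase[i + m] - phase[i]) ** 2
        for i in range(n - 3 * m)
    ]
    tau = m * tau0
    return math.sqrt(sum(terms) / (6 * tau ** 2 * len(terms)))

# The third difference cancels a quadratic phase, i.e. a pure frequency drift,
# which is why Hadamard statistics suit drifting rubidium clocks:
drift_only = [0.5e-13 * (t * 3600.0) ** 2 for t in range(24)]
print(hadamard_deviation(drift_only, 3600.0) < 1e-18)  # True: drift cancels
```

The Allan deviation, by contrast, uses a second difference and is biased by linear frequency drift; choosing the Hadamard form is consistent with the abstract's finding of obvious drift in the rubidium clocks.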
Objectives: Incremental structure from motion (SfM) has become the widely used workflow for aerial triangulation (AT) of unmanned aerial vehicle (UAV) images. Recently, extensive research has been conducted to improve the efficiency, precision, and scalability of SfM-based AT for UAV images. Meanwhile, deep learning-based methods have been exploited for geometry processing in photogrammetry and computer vision and have shown large potential in the AT of UAV images. This paper reviews recent work on SfM-based AT for UAV images. Methods: First, the workflow of SfM-based AT is briefly presented in terms of feature matching and geometry solving, where the former aims to obtain sufficient and accurate correspondences and the latter solves for the unknown parameters. Second, the literature on feature matching and geometry solving is reviewed. For feature matching, classical hand-crafted and recent learning-based methods are presented from the aspects of feature extraction, feature matching, and outlier removal. For geometry solving, the principle of SfM-based AT is first presented together with well-known and widely used open-source SfM software. Efficiency improvements and large-scale processing are then summarized, focusing on improving the capability of SfM to process large sets of UAV images. Finally, future research directions are discussed from four aspects: changing data acquisition modes, scalability to large-scale scenes, developments in communication and hardware, and the fusion of deep learning-based methods. Results: The review demonstrates that existing research has pushed SfM-based AT toward high efficiency, high precision, and high robustness, and has driven the development of both commercial and open-source software packages.
Conclusions: Considering the characteristics of UAV images, the efficiency, precision, and robustness of SfM-based AT and its applications need further improvement and exploitation. This review provides a comprehensive summary and a useful reference for researchers in related fields.
Objectives: High resolution satellite images (HRSIs) provide spectral observations of ground objects at low cost and high frequency, while light detection and ranging (LiDAR) point clouds provide fine geometric structure. Fusing the two kinds of data exploits their complementary advantages and further improves the accuracy and automation of ground object classification and information extraction. Geometric registration with sub-pixel accuracy is the premise of such fusion. Methods: In this paper, a fast registration method based on a line element distance transformation model is proposed. The point cloud is used as the control source; typical line elements, such as building edges in the point cloud, are projected into image space through the initial RPC parameters of the satellite image, and iterative closest point registration is carried out with the line elements in the satellite image, achieving geometric registration by refining the RPC parameters. In this method, the distance transformation model serves as the lookup table for the iterative closest point search, which greatly improves efficiency. Furthermore, a progressive robust solution strategy is adopted during the closest point iteration to ensure the robustness of registration in the presence of heavy noise. Registration experiments are carried out on GeoEye-2, Gaofen-7, and WorldView-3 data with LiDAR point clouds. Results: Using accurately measured ground control points and the operator's internal control points as checkpoints, the results prove that the proposed method achieves a registration accuracy of 0.4-0.7 m on the three kinds of images. Conclusions: The method is significantly better than the strategy of mapping the point cloud to a 2D image and then registering through multi-modal matching.
Objectives: Individual driving destination prediction has important value in location-based services such as personalized service recommendation and intelligent transportation. However, most existing deep learning methods construct travel features from densely sampled trajectory points, which generates data redundancy with only limited information gain and in turn harms model training. Road intersection sequences can simplify the expression of driving trajectories, which are restricted to the road network, and reduce the model training cost. Meanwhile, transfer preferences between intersections and the current move mode imply the spatio-temporal correlation between road intersections and destinations, and can represent individual travel intention to some extent, but few studies have applied them to destination prediction. Methods: This paper proposes a novel individual driving destination prediction method that takes intersection transfer preference and current move mode into account, named ITP-CMM. First, it constructs input features for each intersection and adopts a graph attention network (GAT) to learn the individual-level transfer coefficients between intersections in different time slots; it then uses long short-term memory (LSTM) to capture the long-term dependence of the transfer preferences. Second, ITP-CMM constructs cyclic time encoding features and driving features to learn the current move mode using a two-layer LSTM. Third, the intersection transfer preferences and the current move mode are fused and weighted through feature crossing and attention mechanisms, respectively, and stacked residual networks are introduced to output the prediction results. Based on the trajectory data of twelve private car drivers in Shenzhen for the whole of 2019, a group of experiments verifies the feasibility of the proposed ITP-CMM method.
Results and Conclusions: We verify the effectiveness of the proposed method through ablation experiments and by comparing the prediction accuracy and stability of ITP-CMM with four baselines: the hidden Markov model (HMM), LSTM, the distant neighboring dependencies model (DND), and the attention-aware LSTM considering location semantics and location importance (LSI-LSTM). In addition, we demonstrate the value of transfer preference in capturing the spatial correlation between intersections through visual analytics, and explore the impact of the number of transfer intersections on prediction accuracy.
Objectives: Near surface air temperature (NSAT) is a key parameter in land-atmosphere interaction processes. Sparse NSAT observations from in situ stations usually cannot fully describe the spatial distribution of NSAT, so estimating NSAT from land surface temperature (LST) and auxiliary variables has become an effective approach to obtaining its spatial distribution. Although several LST products are available, e.g., the LST from MODIS, Landsat, and the Global Land Data Assimilation System (GLDAS), the applicability of each LST product to NSAT estimation still needs further investigation. Methods: Taking the Yellow River Basin as the study region, summer NSAT from 2003 to 2019 was estimated with the random forest (RF) algorithm on the Google Earth Engine (GEE) platform; the mean, maximum, and minimum NSAT were estimated at two scales (30 m and 1000 m) using three LST sources (Landsat, MODIS, and GLDAS). The NSAT observed at in situ stations over the Yellow River Basin was compared with the estimated NSAT by ten-fold cross-validation to evaluate the accuracy of the different LST sources for NSAT estimation. Results: The results indicate that: 1) For the mean NSAT, the differences in accuracy among the three LST sources are small. 2) For the maximum and minimum NSAT, the GLDAS LST shows significantly higher accuracy than the MODIS and Landsat LST. 3) For the same LST source, the RMSEs of the estimated mean NSAT are smaller than those of the maximum and minimum NSAT. 4) In terms of the spatial distribution of accuracy, the stations with larger errors are mainly located in the southern and western parts of the study region. Conclusions: The temporal resolution of the LST source is significantly important in NSAT estimation. The GLDAS LST shows the highest accuracy in this study, especially for extreme NSAT estimation. Moreover, for each LST source, the mean NSAT is estimated with higher accuracy than the maximum or minimum NSAT.
Sea-land segmentation is of great significance for tasks such as ocean target detection and coastline extraction in SAR images. To solve the problem of sea-land segmentation of multi-resolution SAR images in practical applications, this paper presents a sea-land segmentation method based on context and edge attention. The method uses the channel attention mechanism to fuse context features of different scales and levels, and designs an edge extraction branch to provide edge information that further improves the segmentation of boundary areas. In addition, a sea-land segmentation dataset of multi-resolution SAR images based on Gaofen-3 satellite data is provided. The dataset covers images of multiple resolutions, including various sea-land boundary types such as ports and islands. Experimental results show that the proposed method works well for the sea-land segmentation task: the average classification accuracy and mean intersection over union (mIoU) reach 98.18% and 96.41%, respectively.
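The channel-attention idea used to fuse context features can be illustrated with a minimal squeeze-and-excitation style gate in NumPy; the layer sizes and random weights are placeholders, and the paper's actual attention block may differ:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Reweight feature channels by globally pooled descriptors.

    feat: (C, H, W) feature map; w1: (C//r, C); w2: (C, C//r).
    A generic squeeze-and-excitation style gate, not the paper's exact block.
    """
    squeeze = feat.mean(axis=(1, 2))          # global average pooling -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)    # bottleneck + ReLU
    gate = sigmoid(w2 @ hidden)               # per-channel weights in (0, 1)
    return feat * gate[:, None, None]         # rescale each channel

rng = np.random.default_rng(0)
feat = rng.normal(size=(16, 8, 8))            # toy feature map
w1 = rng.normal(size=(4, 16)) * 0.1           # reduction ratio r = 4
w2 = rng.normal(size=(16, 4)) * 0.1
out = channel_attention(feat, w1, w2)
print(out.shape)
```

Because the gate is bounded in (0, 1), each channel is attenuated rather than amplified, which is how such gates emphasize informative context scales during fusion.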
Objectives: As a complex and giant system, the BeiDou navigation satellite system (BDS) demands a scientific, complete and efficient design and engineering implementation of its time-frequency system. Methods: In the design of the time and frequency architecture for BDS-3, the first step is the establishment, maintenance and synchronization of the time and frequency architecture, followed by the unification of BeiDou system time (BDT) with other navigation systems. The internal time synchronization of BDS-3 comprises time comparison and synchronization among satellite clocks based on inter-satellite links, precise time comparison and synchronization between satellites and the ground based on satellite-ground links, and comparison and precise synchronization between the master station on the ground and its subsystems based on satellite two-way and ground wired two-way time comparison links. The BDT signal is generated by a combined clock group and the integrated atomic time method. Finally, the unification of BDT with other navigation systems is realized through direct or indirect traceability comparison and time difference monitoring technology. Results: Long-term monitoring of BDS-3 signals indicates that the daily frequency stability of BDT reaches 4.6×10⁻¹⁵, the local time accuracy of the satellite clocks reaches 1.25×10⁻¹¹, the 1000 s frequency stability of the satellite clocks reaches 1.65×10⁻¹⁴, and the time difference between BDS and other navigation satellite systems is maintained within 50 ns. Conclusions: The operation of BDS-3 further proves that its time and frequency architecture is complete in function design, scientific in organizational structure and advanced in system indexes, and can fully support the global service capability of BDS-3.
Map generalization is one of the core technologies of cartography and multi-scale spatial data transformation. Since the 1960s, research on the automated generalization of map data has gradually developed and made great progress, and many intelligent solutions for map generalization have been proposed. However, due to the limitations of earlier artificial intelligence technology, these solutions are not really intelligent or practical. In the past 10 years, deep learning, as a representative artificial intelligence technology, has been applied in many fields, and deep-learning-based research has achieved remarkable results. Many new attempts have thus been made in the intelligent research of map generalization. First, based on analyzing and abstracting models of automated map generalization, the necessity of intelligent research on map generalization is pointed out. Then, combined with the development of artificial intelligence, intelligent map generalization is reviewed: research on intelligent map generalization based on traditional machine learning and on deep learning is sorted and analyzed, and two common strategies of intelligent map generalization are summarized. Finally, focusing on some hot issues of intelligent map generalization, its development tendency is discussed.
Due to the joint constraints of boundary conditions, including seabed topography, the driving water level at the open boundary (DWLOB) and the bottom friction coefficient (BFC), the accuracy of tidal numerical modeling in coastal and offshore waters is relatively low. This study synchronously optimizes these multiple boundary conditions to improve the accuracy of tidal numerical modeling in China's coastal and offshore waters for hydrographic surveying and mapping. We simulate the tide in Haizhou Bay of the Yellow Sea, China, using a two-dimensional tidal numerical model (2D-MIKE21) based on the synchronously optimized boundary conditions. Water depth with higher resolution and accuracy than the charted depth is used as the seabed topography. The DWLOB is calculated from 12 tidal constituents (including the long-period constituent Sa) of the regional tidal model of the China seas, CST1. The calculation of the BFC takes into account the spatial variation of water depth. For validation, we compare the simulated model with 1-year tide tables from 6 tide gauges in Haizhou Bay and with the CST1 model at 24 randomly selected points, obtaining total root sum squares over the 12 tidal constituents of 5.52 cm and 7.10 cm, respectively. The simulated tide model and the CST1 model are also compared with 1-month observations at two tide gauges in Haizhou Bay, and the former has a smaller mean square error than the latter. The proposed strategy provides a new method for tidal numerical modeling in coastal and offshore waters, and this study also shows that it is feasible to obtain the astro-meteorological constituent Sa by tidal numerical modeling. It should be noted that the wind effect is not considered in this study due to its strong randomness and the difficulty of obtaining one year of wind data. Next, we will use the simulated water level heights in this article together with short-term wind velocity and direction as improved open boundary and initial conditions to carry out short-term tidal modeling in coastal and offshore waters, which will also resolve the residual water level (storm surge).
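The total root sum square used above to compare tidal constituents can be computed as below; the per-constituent RMS convention (vector difference divided by the square root of 2) is one common choice and an assumption here, and the amplitudes and phases are illustrative, not from the study:

```python
import numpy as np

def constituent_vector_diff(h1, g1, h2, g2):
    """Vector difference of one tidal constituent (amplitudes h, phases g in degrees)."""
    z1 = np.asarray(h1) * np.exp(1j * np.deg2rad(np.asarray(g1)))
    z2 = np.asarray(h2) * np.exp(1j * np.deg2rad(np.asarray(g2)))
    return np.abs(z1 - z2)

def total_rss(h_model, g_model, h_obs, g_obs):
    """Total root sum square over all constituents, using the common
    RMS_i = d_i / sqrt(2) convention (an assumption; the paper's exact
    formula may differ)."""
    d = constituent_vector_diff(h_model, g_model, h_obs, g_obs)
    return float(np.sqrt(np.sum(d ** 2 / 2.0)))

# Illustrative numbers for 4 of the 12 constituents (not from the paper)
h_m = [120.0, 45.0, 30.0, 12.0]; g_m = [110.0, 140.0, 35.0, 80.0]
h_o = [118.5, 46.2, 29.1, 12.4]; g_o = [111.0, 139.2, 36.5, 79.0]
print(f"total RSS: {total_rss(h_m, g_m, h_o, g_o):.2f} cm")
```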
Marine and airborne gravimetry are two principal means of obtaining marine gravity field information. To meet the practical requirements of building and expanding a civil-military integrated system of marine and airborne gravity surveying technology, we have been devoted to research on the theory and methods of marine and airborne gravimetry and their application. A detailed analysis and review is made of the research results of theoretical significance and practical value achieved by our research team, mainly covering demand demonstration and top-level design, observation data reduction and accuracy evaluation, observation error analysis, processing, separation and compensation, upward continuation of surface gravity observations, downward continuation of airborne gravity, construction of marine gravity data models, approximation of the Earth's external gravity field and geoid refinement, etc. The background, thinking and breakthrough points of each study topic, and the application prospects of the research results, are analyzed and summarized. We pay particular attention to the design of the technical framework, the building of variation character models of the marine gravity field and survey-line layout for marine gravity surveys, the design of key technical targets for marine and airborne gravimetry, the generation mechanism and compensation of marine and airborne gravity surveying errors, the integrated application of marine and airborne gravity measurement data, and so on. This work can provide a reference for the future development of this field.
Objectives: Mountain peak extraction technology determines the accuracy of hill position classification results and the efficiency of automatic classification of micro-landforms. Because of the limitations of elevation and contour lines, such as loss of local mountain peaks and incomplete removal of false peaks, other landform factors must be mined to express the distribution features around mountain peaks. Methods: Based on the uniform distribution of aspect around mountain peaks, an efficient mountain peak extraction model is proposed in this paper, in which the aspect is centered on the peak and increases gradually clockwise. Moreover, according to the ridge line fitting method and the recursive idea of the depth-first search algorithm, false peaks in the extraction results are removed while real peaks are retained. Results: Considering the negative influence of the terrain fragmentation of real DEMs, this paper experiments with simulated DEMs and real DEMs respectively. The results show that the proposed method overcomes the uncertainty of subjective thresholds when extracting mountain peaks based on closed contour lines. The accuracy of extracting peaks from the simulated DEM reaches 100% owing to its continuity and smoothness. Because the terrain fragmentation of real DEMs makes the problem ill-posed, we adjust the aspect distribution constraint and obtain an average extraction accuracy of 96.1%. Conclusions: An efficient mountain peak extraction model is proposed based on digital terrain analysis and the uniform distribution of aspect around mountain peaks, extracting topographic feature points from the perspective of terrain geometry. Compared with traditional peak extraction methods, the method based on the aspect distribution feature is relatively accurate and simple, effectively reducing the false peaks appearing in the results.
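The premise that aspects are distributed uniformly around a true peak can be checked on a simulated DEM; the aspect convention and the eight-sector check below are one common choice, not necessarily the paper's exact model:

```python
import numpy as np

def aspect_deg(dem, cellsize=1.0):
    """Aspect (downslope direction, degrees in [0, 360)) from a DEM grid.

    The reference direction convention varies between GIS packages;
    this is one common choice.
    """
    dz_dy, dz_dx = np.gradient(dem, cellsize)     # rows ~ y, cols ~ x
    asp = np.degrees(np.arctan2(-dz_dx, dz_dy))
    return (asp + 360.0) % 360.0

# Simulated DEM: a single Gaussian hill, so aspects around the summit
# should spread over all directions
x, y = np.meshgrid(np.arange(64), np.arange(64))
dem = 100.0 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 200.0)
asp = aspect_deg(dem)

# Count distinct 45-degree sectors in a ring around the summit:
# a true peak is surrounded by all 8 aspect sectors
r = np.hypot(x - 32, y - 32)
ring = (r > 3) & (r < 8)
sectors = np.unique((asp[ring] // 45).astype(int))
print(len(sectors))
```

A false peak on a ridge flank would leave some sectors empty, which is the kind of signal the paper's aspect-distribution constraint exploits.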
Objectives: Coseismic deformation monitoring is of great significance for interpreting coseismic deformation characteristics and intuitively understanding fault geometry. For large earthquakes with surface rupture, the Global Navigation Satellite System (GNSS) technique has low spatial resolution, and interferometric synthetic aperture radar (InSAR) suffers phase decorrelation due to large deformation gradients, so the deformation close to the fault cannot be obtained. Optical image correlation (OIC) and pixel offset tracking (POT) based on sub-pixel cross-correlation can solve these problems well. Methods: The main idea of the sub-pixel cross-correlation method is to use the pre-event image as a reference, compare the post-event image with it to evaluate the similarity between the two, and then retrieve the displacement between homonymous points. OIC yields east-west and north-south deformation, and POT yields range and azimuth deformation. In this paper, the deformation components of the Kaikoura earthquake obtained from Sentinel-1 and Sentinel-2 data are assembled into different combinations, and the least squares method is used to calculate the three-dimensional deformation. Results: The combination OIC+POT_As_Des provides the best constraint. The north-south deformation obtained by OIC+POT_Range is less accurate than that of OIC+POT_As_Des, but more accurate than that of POT_As_Des alone, and its deformation on the right side of the Kekerengu fault and the left side of the Papatea fault is also better resolved. Northeast of the Kaikoura earthquake epicenter, very complex and large surface deformation and multiple ruptures were detected in two nearby areas, and the vertical deformation was mainly uplift. Conclusions: For Sentinel-1 and Sentinel-2, the combination OIC+POT_As_Des is most suitable for obtaining the three-dimensional deformation of the Kaikoura earthquake. This earthquake is a dextral strike-slip event with a reverse component.
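The least squares combination of projected observations described above can be sketched as follows; the projection vectors and displacement values are examples, not the actual Sentinel-1/Sentinel-2 geometry:

```python
import numpy as np

# True 3D displacement (east, north, up) in metres -- illustrative only
d_true = np.array([1.2, -0.8, 0.3])

# Unit projection vectors for each observation type (example values):
# OIC gives E and N directly; POT gives range (LOS) and azimuth projections.
A = np.array([
    [1.0, 0.0, 0.0],        # OIC east-west
    [0.0, 1.0, 0.0],        # OIC north-south
    [0.62, -0.11, 0.78],    # ascending range (LOS)
    [-0.10, -0.99, 0.0],    # ascending azimuth
    [-0.60, -0.12, 0.79],   # descending range (LOS)
])
rng = np.random.default_rng(1)
obs = A @ d_true + rng.normal(0, 0.01, A.shape[0])   # projected obs + noise

# Least squares: d = argmin ||A d - obs||^2 (unit weights here; the paper
# can weight each observation type by its precision)
d_hat, *_ = np.linalg.lstsq(A, obs, rcond=None)
print(np.round(d_hat, 2))
```

Each extra observation geometry adds a row to A; the quality of the 3D solution depends on how well the rows span east, north and up, which is why the OIC+POT_As_Des combination constrains the solution best.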
With the rapid development of reality acquisition technologies such as laser scanning and structured light scanning, the point cloud has become a high-precision three-dimensional holographic representation of the physical world. As the third important data source, point clouds are well suited to representing 3D models and geospatial information, and exert an immense influence on smart cities, autonomous driving and augmented reality. However, the massive volume, unstructured nature and uneven density of point cloud data bring challenges to onboard and offboard storage as well as real-time transmission. Hence, efficient compression methods that balance bit rate and quality are mandatory for the storage and transmission of such data. This paper summarizes the state of the art of domestic and foreign static point cloud compression algorithms, the standard specifications released by the Moving Picture Experts Group (MPEG) and evaluation metrics for point cloud compression. First, we describe the different families of approaches in detail and summarize the basic technologies that are usually used in 3D point cloud compression. Moreover, we provide detailed descriptions of three open-source point cloud codecs and their coding performance. Finally, the promising development tendency of static point cloud compression is summarized.
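A minimal voxel-grid quantization sketch conveys the basic geometry-compression idea behind the surveyed methods; real codecs such as MPEG G-PCC add octree structure and entropy coding on top of this:

```python
import numpy as np

def voxel_compress(points, voxel=0.05):
    """Lossy point-cloud compression by voxel-grid quantization.

    A geometry-only sketch: snap points to a grid and keep the unique
    occupied voxels; decoding returns voxel centres. No octree or
    entropy coding, unlike production codecs.
    """
    idx = np.floor(points / voxel).astype(np.int64)
    occupied = np.unique(idx, axis=0)        # duplicate voxels merged
    decoded = (occupied + 0.5) * voxel       # voxel centres as reconstruction
    return occupied, decoded

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(10000, 3))     # toy cloud in a unit cube
occ, dec = voxel_compress(pts, voxel=0.1)
print(len(pts), "->", len(occ), "voxels")
```

The voxel size directly trades bit rate against quality: the reconstruction error is bounded by half the voxel diagonal, which is the knob the rate-distortion evaluation metrics measure.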
In this study, we construct a high-quality three-dimensional (3D) coseismic deformation field of the 2017 Jiuzhaigou earthquake using ascending and descending synthetic aperture radar (SAR) data from the Sentinel-1A satellite with the constraint of an elastic dislocation model. First, the differential interferometry method is used to generate the line-of-sight (LOS) coseismic deformation field from the SAR images. Then, a two-step inversion algorithm is used to subdivide the fault plane and obtain the geometric parameters and the optimal fault slip distribution. With the constraint of this fault model, we take the variance component estimation approach to reconstruct the 3D coseismic deformation field from the InSAR (interferometric synthetic aperture radar) measurements. The results show that the coseismic deformation field is dominated by horizontal displacement. In the north-south (N) direction, the maximum displacement is -19.81 cm on the hanging wall and 14.38 cm on the footwall. In the east-west (E) direction, the maximum displacement is 18.37 cm on the northwestern hanging wall and -7.84 cm on the southeastern footwall. In the vertical (U) direction, there is slight uplift in the north of the fault, with a maximum of about 3 cm. Finally, the derived north-south and east-west displacements are compared with GNSS (Global Navigation Satellite System) investigations, indicating that combining InSAR measurements and an elastic dislocation model to reconstruct the 3D coseismic deformation field is feasible and effective; this overcomes the problem of insufficient geodetic data and offers extensive future usage for measuring earthquake deformation.
The straight baseline, which has the advantages of easy management, weak correlation with changes of the low-tide line, and being conducive to obtaining a larger jurisdiction area, is widely used by coastal countries in selecting baseline points and determining the baselines of the territorial sea. Aiming at the problems in baseline point selection under the current straight baseline system, and based on the determination of ideal baseline points by a convex hull construction algorithm combined with the baseline length limitation in the United Nations Convention on the Law of the Sea, the theory of baseline graph construction and prioritization based on alternative baseline points is analyzed first. Then, by establishing an intervisibility judgment principle between baseline points, a fast judgment and optimal selection model of alternative baseline points under the principle of maximum internal water area is established, and a procedure for ordering alternative baseline points toward the optimal baseline graph is designed. Finally, the paper proposes an optimal selection algorithm for territorial sea baseline points under a baseline length threshold, which realizes the rapid selection of alternative baseline points under the constraints of the length threshold and internal water area maximization. The results show that the algorithm minimizes the polygon area of the baseline graph (maximizing the area of internal waters) constructed from the optimal base points under the baseline length limitation, with high efficiency; it can provide technical support for the selection of territorial sea baseline points of coastal countries, especially archipelagic countries, and for the determination of the straight baseline.
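The convex hull construction step for ideal baseline points can be sketched with Andrew's monotone chain; the point coordinates below are hypothetical, and the paper's subsequent length-threshold and intervisibility logic is not shown:

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order.

    Candidate ideal baseline points lie on this hull; interior points
    can never maximize the enclosed (internal water) area.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # drop duplicated endpoints

# Hypothetical alternative baseline points (projected coordinates, km)
pts = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1), (1, 2), (3, 2)]
print(convex_hull(pts))   # only the four corner points survive
```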
From the perspective of map cognition, this paper analyzes the important role of the mental map in a successful family-search event. First, several basic concepts of map cognition, such as spatial cognition, mental map and cognitive mapping, are clarified, and the position and function of the mental map in the cartographic triangle proposed by Academician Gao Jun are expounded. A detailed empirical analysis is made of the cognitive processes of both the seeker and the sought in this case. Three geospatial cognition problems arising from the interview are discussed: the differing suitability of map tools for different stages of spatial cognition, the matching of mental maps in spatial scale and temporal characteristics, and the top-down influence of the cognitive subject's knowledge and experience on cognitive results. Finally, the paper puts forward the development trend of geospatial cognition entering a new era driven by brain science and artificial intelligence research.
Precise point positioning (PPP) combines the advantages of standard point positioning (SPP) and relative positioning, and can achieve centimeter-level positioning with a single station. With the development of the BeiDou navigation satellite system (BDS), more and more BDS satellites provide global positioning, navigation and timing (PNT) services, which also promotes the development of multi-frequency and multi-system PPP. For a long time, because of atmospheric delays and the hardware delays of satellites and receivers, the ambiguities of PPP are not integers, and PPP needs a long time to converge, which greatly limits its application. Previous results show that the ambiguities can be restored to integers and the convergence time can be shortened with the help of fractional cycle biases (FCBs). To improve the overall performance of BDS precise point positioning ambiguity resolution (PPP-AR), we estimate the FCBs of GPS and BDS based on observation data of globally distributed stations from August 1 to August 31, 2020. Single differencing between satellites is used to eliminate the influence of hardware delays at the receivers, and the single-difference ambiguity vector is solved by whole-network adjustment to obtain the FCB estimate of each satellite. We mainly analyze the time series of BDS-3 wide-lane (WL) and narrow-lane (NL) FCBs. The results show that the WL FCBs have long-term stability: the change of BDS-3 WL FCBs over 31 days is less than 0.2 cycles, and the change of GPS WL FCBs is less than 0.1 cycles. The BDS-3 NL FCBs remain stable over shorter periods, with changes of less than 0.1 cycles. The percentages of GPS WL and NL FCB residuals within 0.15 cycles are 99.8% and 99.3%, respectively; for BDS-3 they are 99.7% and 98.1%. To assess the improvement brought by FCBs to PPP, static and dynamic PPP-AR tests were carried out at 8 stations around the world. The results show that under static conditions, the average fixing time and convergence time of BDS-3 are 31.5 min and 24.9 min, respectively, 24.8% shorter than float PPP; the errors in E, N and U are 1.03 cm, 0.60 cm and 1.72 cm, respectively, and the fixing rate is 89.8%. Under dynamic conditions, the average fixing time and convergence time of BDS-3 are 33.3 min and 50.7 min, respectively, 17.4% shorter than float PPP; the errors in E, N and U are 2.57 cm, 2.29 cm and 3.71 cm, respectively, and the fixing rate is 83.9%. Taking FCB products as prior information for PPP-AR can shorten the convergence time of PPP to a certain extent, but the improvement of positioning accuracy after complete convergence is not obvious. The stability of BDS-3 FCBs is limited by the precision products and observation data, and their PPP-AR performance is slightly worse than that of GPS.
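The core idea behind FCB estimation, that single-differenced float WL ambiguities across stations share a common fractional part, can be sketched as follows; this single-satellite-pair averaging is a deliberate simplification of the paper's whole-network adjustment:

```python
import numpy as np

def estimate_fcb(float_ambs):
    """Estimate one satellite-pair FCB (in cycles) from single-differenced
    float WL ambiguities observed at many stations.

    Sketch of the idea only: average the fractional parts after aligning
    them to a common reference, so integer jumps between stations do not
    corrupt the mean. The paper's network adjustment solves all satellites
    and stations jointly.
    """
    fracs = np.asarray(float_ambs) - np.round(float_ambs)  # in (-0.5, 0.5]
    ref = fracs[0]
    aligned = fracs - np.round(fracs - ref)                # unwrap toward ref
    fcb = float(np.mean(aligned))
    return fcb - round(fcb)                                # back to (-0.5, 0.5]

# Simulated float ambiguities: random integers + a common FCB of 0.31
# cycles + observation noise (values are illustrative)
rng = np.random.default_rng(2)
ints = rng.integers(-20, 20, size=50)
ambs = ints + 0.31 + rng.normal(0, 0.03, size=50)
print(f"estimated FCB: {estimate_fcb(ambs):.3f} cycles")
```

Once the FCB is removed from each float ambiguity, the remainder is close to an integer and can be fixed, which is what shortens the PPP convergence reported above.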
Objectives: The national first-order leveling results are valuable data for studying the vertical movement of the crust in the Chinese mainland. The trend of vertical movement over the past 50 years is analyzed with three epochs of results of the first-order leveling network of China. Methods: First, on the same reference datum, a static adjustment method is used to calculate the heights of the first-order leveling points. By analyzing the locations, height differences and records of leveling points, the coincident points of the different epochs are obtained, and the velocities of these coincident points are calculated from the height differences and time spans. Second, the residual distributions at checking points and fitting points are used to evaluate the pros and cons of interpolation methods such as Hardy function interpolation, Kriging interpolation and the inverse distance weighted method, one of which is chosen to establish the vertical movement model. Finally, the vertical movement model of the Chinese mainland is obtained with Kriging interpolation. Results: The residual distributions at checking points and fitting points show that the fitting precision of the Hardy function is slightly lower but more stable than the other methods, while the precision of the inverse distance weighted method is slightly higher but produces some outliers in the fitting results. The valid fitting region of triangulation with linear interpolation is the smallest convex polygon formed by the outermost leveling points, so blank areas exist near the boundary of the Chinese mainland. The precision of Kriging interpolation and minimum curvature interpolation is basically equal, but from the perspective of residuals, Kriging interpolation is slightly better. Conclusions: From the vertical movement model, the characteristics of vertical movement are qualitatively analyzed: over the past 50 years, the North China Plain, the Jiangsu-Shanghai area, the Fen-Wei basin, Xinjiang and Hainan have been subsiding. Among them, the North China Plain and the Jiangsu-Shanghai area are subsiding severely, with velocities of more than 40 mm/a. The south of Tibet, Northeast China, Fujian and the north of Shaanxi are uplifting, and the uplift is pronounced in the south of Tibet and the east of Northeast China. Other areas, such as South China, are relatively stable.
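Of the interpolators compared above, the inverse distance weighted method is the simplest to sketch; the point velocities below are synthetic, not the leveling results:

```python
import numpy as np

def idw(xy_known, v_known, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation of point velocities.

    Each query value is a convex combination of the known values, with
    weights 1/d^power; eps keeps exact hits finite (and effectively
    returns the known value there).
    """
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    w /= w.sum(axis=1, keepdims=True)
    return w @ v_known

# Hypothetical leveling-point velocities (mm/a) at scattered locations,
# following a simple west-east subsidence trend for illustration
rng = np.random.default_rng(3)
xy = rng.uniform(0, 100, size=(30, 2))
vel = -0.4 * xy[:, 0] + 10.0

q = np.array([[50.0, 50.0], [20.0, 80.0]])
print(np.round(idw(xy, vel, q), 1))
```

Because the result is a convex combination, IDW can never extrapolate beyond the observed velocity range; this boundedness is one reason it behaves differently from Kriging near outliers.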
On September 16, 2021, General Secretary Xi Jinping pointed out in his congratulatory letter to the first international summit on BeiDou scale application that global digitalization is accelerating, and spatiotemporal information, positioning and navigation services have become important new infrastructure. With the advent of the new infrastructure era, geomatics professionals should bear in mind the country's needs and concerns, and devote themselves to the wave of the "second centenary" goals and the "new infrastructure". First, this paper expounds the definition of new infrastructure and analyzes the differences between new infrastructure and traditional infrastructure. Second, it discusses the mission of geo-spatial information science in the three systems of new infrastructure: information infrastructure, integration infrastructure and innovation infrastructure. The authors believe that we have moved beyond the so-called "traditional surveying and mapping" that mainly serves topographic maps, and developed into "ubiquitous surveying and mapping", the service of geo-spatial information. At present, we should seize the opportunity and take on the new mission of geo-spatial information science in the new infrastructure era: provide spatiotemporal data with good integrity, strong currency and high accuracy for the new infrastructure, and realize digital industrialization, industry digitization and intelligence.
The study of the spread of major animal diseases and the evolution of associated public opinion is of great significance for improving epidemic prevention and public opinion guidance. With the development of Web 2.0 technology and the popularity of smartphones, various social media platforms have become important channels for obtaining, sharing and discussing hot topics. The large number of texts with geographical location information generated on them provides a new avenue for the research of animal epidemics and other emergencies. Taking Sina Weibo data during the spread of African swine fever (ASF) in China from 2018 to 2019 as a case study, the objective of this work is to establish a model for spatio-temporal spread characteristics analysis and public opinion mining. First, the Mann-Kendall mutation detection method is introduced to objectively divide the epidemic transmission cycle and investigate the spatial distribution characteristics of the different stages. Then, the latent Dirichlet allocation (LDA) topic clustering model is used to describe the evolution of public opinion topics across the ASF epidemic stages. Finally, the primary factors influencing public opinion attention are explored with the geographical detector method. The results show that the spread of African swine fever in China displayed a trend of spreading from northeast to southwest, and experienced four stages: incubation, outbreak, fluctuation and recession. At each stage, public opinion centered on the outbreak itself and specific derivative topics; the derivative topics became more abundant as the epidemic developed, and public sentiment gradually changed from negative to positive. Regional awareness of ASF is strongly influenced by pork consumption and production, rather than by local education and urbanization levels.
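The Mann-Kendall statistic underlying the mutation-detection step can be sketched as follows; the sequential (UF/UB curve) variant actually used for change-point detection runs this statistic cumulatively and is omitted here for brevity:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend statistic S and standard normal score Z.

    Basic no-ties form: S counts concordant minus discordant pairs;
    Z applies the usual continuity correction.
    """
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var)
    elif s < 0:
        z = (s + 1) / math.sqrt(var)
    else:
        z = 0.0
    return s, z

rising = list(range(30))       # monotone series -> strongly positive Z
s, z = mann_kendall(rising)
print(s, round(z, 2))
```

In the mutation-detection setting, the crossing point of the forward and backward sequential Z curves marks a stage boundary, which is how the four epidemic stages were separated.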
Geographic information systems (GIS) have made great progress in theory and technology over the past 60 years. The application field of GIS has expanded to all aspects of society, and its social influence keeps growing. The architecture, development mode and service mode of GIS have undergone profound changes. To promote the further development of GISystems, this paper focuses on the following three issues, based on summarizing the three meanings of the "S" in "GIS", namely "System", "Science" and "Service", and the fruitful achievements of GIS over the past 60 years. The first issue is how to understand GISystems. This should be analyzed from two aspects: one is to discuss the connotation of geographic information systems by analyzing the three key words of GIS (system, information and geographic); the other is to analyze the relationship between GIS and maps, computer mapping and map databases. This paper holds that GIS originates from and goes beyond maps, computer mapping and map databases, and that GIS has the characteristics of equipment. The second issue is how GIS evolves. This issue concerns the social demand, technical and disciplinary backgrounds of the evolution from "geographic information system" to "geographic information service", as well as the main manifestations of this evolution in application fields, the expansion of data resources and functions, architecture, development mode and service mode. The third issue is where the future development of GIS will go. We discuss the limits of the expansion of GIS application domains and point out that the future development of GIS must face the urgent needs of national economic construction and national defense construction. Based on the analysis and comparison of three existing GISystem service modes, we consider that the "hybrid" technology system of "grid integration" and "elastic cloud" is the best choice for the GISystem service mode. Finally, this paper puts forward six key technical problems that must be solved in implementing the "hybrid" spatiotemporal big data platform technology based on "grid integration" and "elastic cloud", and designs the application mode of the "spatiotemporal big data platform".
Objectives: Global navigation satellite system (GNSS) coordinate time series provide important data support for the study of crustal movement and deformation, and plate tectonics. Due to noise caused by various external factors, GNSS coordinate time series cannot fully reflect the real motion of a station. To effectively reduce the noise in GNSS time series, we adopt a noise-reduction method, GA-VMD, combining a genetic algorithm (GA) and variational mode decomposition (VMD). Methods: First, the genetic algorithm is used to optimize the VMD parameters, with the envelope entropy of the input signal as the fitness function, to find the optimal VMD parameter combination for the signal. With the optimized parameters, the signal is decomposed by VMD into a series of modal components. We then calculate the multi-scale permutation entropy (MPE) of each component and use the MPE as the criterion for identifying noise components. Finally, according to the MPE, the noise components are identified and removed, and the remaining components are reconstructed to obtain the denoised signal. The noise reduction effect of GA-VMD is analyzed on both simulated signals and observation data, and compared with wavelet denoising (WD) and empirical mode decomposition (EMD). Results: The results show that: (1) On simulated signals, WD and EMD suffer from incomplete and excessive noise reduction, respectively, whereas GA-VMD can effectively eliminate noise while retaining the effective signal. In terms of evaluation indexes, compared with WD and EMD, GA-VMD increases the signal-to-noise ratio by 5.18 dB and 2.91 dB, and the correlation coefficient by 0.05 and 0.02, respectively. (2) For the complex observation data, we use the noise and velocity uncertainty as accuracy indicators to evaluate the noise reduction effects of the three methods. The results show that WD can extract only part of the white noise, while EMD and GA-VMD can remove the white noise completely. GA-VMD can reduce the flicker noise to the range of 0 to 6 mm·a^-0.25. For the velocity uncertainty, the average gain rates of GA-VMD relative to WD and EMD are 69% and 15.33%, respectively. GA-VMD achieves average correction rates of 79.89% and 84.46% for the velocity uncertainty and flicker noise of GNSS coordinate time series. Conclusions: Therefore, GA-VMD is the most effective of the three noise reduction methods, and can effectively reduce the noise in GNSS time series and improve their accuracy. However, this paper only discusses the effect of GA on VMD parameter optimization without comparing it with other optimization methods. Studying the advantages and shortcomings of other optimization algorithms for VMD parameter selection, and further improving the accuracy of GNSS time series, remain key future work.
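The permutation entropy used above as the noise criterion can be sketched as follows; the multi-scale version (MPE) applies the same measure to coarse-grained copies of the series at several scales, which is omitted here:

```python
import math
import random

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy in [0, 1]; higher = more noise-like.

    Counts ordinal patterns of length `order` and returns the Shannon
    entropy of their distribution, normalized by log(order!).
    """
    patterns = {}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = tuple(x[i + j * delay] for j in range(order))
        rank = tuple(sorted(range(order), key=lambda k: window[k]))
        patterns[rank] = patterns.get(rank, 0) + 1
    probs = [c / n for c in patterns.values()]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order))

random.seed(0)
noise = [random.random() for _ in range(2000)]   # noise-like: entropy near 1
trend = [0.01 * i for i in range(2000)]          # one ordinal pattern: entropy 0
print(round(permutation_entropy(noise), 2), round(permutation_entropy(trend), 2))
```

This is why MPE separates VMD components well: modes dominated by noise occupy nearly all ordinal patterns, while signal-dominated modes concentrate on a few.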
Objectives: With the improvement of geodetic observation accuracy, higher requirements are put forward for seismic inversion algorithms. Methods: In view of this problem, we develop a novel Grey Wolf Optimization (GWO) algorithm to invert seismic source parameters. The weighted distance Grey Wolf Optimization (wdGWO) algorithm is proposed, in which a nonlinearly decreasing convergence factor based on the cosine law replaces the original linearly decreasing one. Subsequently, a combination of the improved wdGWO algorithm and the Simplex algorithm is configured, where the latter is introduced to stabilize the performance of the proposed wdGWO algorithm, so that the combination algorithm has advantages in both convergence and stability. Finally, we conduct synthetic tests to evaluate the performance of the basic wdGWO algorithm, the genetic algorithm and the combination algorithm. Results: The simulation results show that the estimation of seismic source parameters by the combination algorithm is superior to that of the wdGWO algorithm, exhibiting excellent stability and accuracy. The stability of the estimated source parameters is also compared between the combination algorithm and the genetic algorithm, and the combination algorithm proves superior. Furthermore, the applicability of the combination algorithm is tested with the 2014 Napa earthquake and the 2017 Bodrum-Kos earthquake. The results show that the combination algorithm can reach the inversion precision of the genetic algorithm while exhibiting better parameter stability. Conclusions: Considering that the accuracy and stability of the inversion results are particularly important for the accurate determination of seismic source parameters, the combination algorithm has potential applications in the inversion of seismic source parameters.
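The cosine-law convergence factor at the heart of the wdGWO can be sketched as follows (the abstract does not give the exact expression, so the particular cosine form below is an assumption for illustration):

```python
import math

T = 500  # total number of iterations (illustrative)

def a_linear(t, T=T):
    """Original GWO convergence factor: decreases linearly from 2 to 0."""
    return 2 * (1 - t / T)

def a_cosine(t, T=T):
    """Nonlinear decrease following a cosine law (one plausible form;
    the paper's exact expression may differ)."""
    return 2 * math.cos(math.pi / 2 * t / T)

# Both start at 2 and end near 0, but the cosine factor stays larger early on
# (favoring global exploration) and drops faster late (favoring exploitation).
assert abs(a_linear(0) - 2) < 1e-12 and abs(a_cosine(0) - 2) < 1e-12
assert abs(a_linear(T)) < 1e-12 and abs(a_cosine(T)) < 1e-12
assert all(a_cosine(t) >= a_linear(t) for t in range(T + 1))
```

Because the cosine is concave on [0, π/2], it lies above the linear chord everywhere between the endpoints, which is what delays the exploration-to-exploitation transition.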
Leveling and GNSS results provide important data for studying vertical crustal movement in the Chinese mainland. Giving full play to the advantages of high-precision leveling points and the uniform distribution of GNSS points helps improve the reliability of the vertical movement model. In the fusion process, the lack of coincident points between leveling and GNSS causes the joint adjustment of velocity fusion to fail, so a fusion method based on the final models is proposed. The method consists of two steps. Firstly, separate vertical movement models are established from leveling and GNSS with the inverse-distance-to-a-power method. Secondly, the two models are fused by weighted averaging according to the grid-point precision and the nearest-point principle, depending on the distance between grid points and measured points. To address the problem that the weights associated with distance and velocity precision both affect the final model when inverse distance to a power is applied, a method of multiplying the weights is proposed to determine a reasonable weight for each factor. The vertical movement model of the Chinese mainland is established by comprehensively using the national first-order leveling results, the national GNSS geodetic control network, and other data. To measure the improvement of GNSS results on the vertical movement model, 20% of the modeling points, selected uniformly from the leveling and GNSS points, are used to assess the precision of the leveling-only and fused vertical movement models. The results show that the precision of the fused vertical movement model has improved by 35.3% in Tibet, 53.6% in other regions of China, and 50.8% in the Chinese mainland as a whole. Therefore, GNSS results can improve the precision and accuracy of the leveling vertical movement model, and the improvement is especially obvious inside the leveling loops.
According to the fused vertical movement model, the characteristics of vertical movement are analyzed: the North China Plain and the Jiangsu-Shanghai area are subsiding severely, with velocities in individual areas up to 100 mm/a; Northeast China and Tibet are uplifting, with maximum velocities exceeding 5 mm/a in some local areas; the vertical movement in other areas is relatively stable.
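A hedged sketch of the fusion idea: each grid point carries a leveling-based and a GNSS-based velocity with associated precisions, and the fused value weights each by its precision; the "multiplied weight" combines the inverse-distance and precision factors (the paper's exact formulas may differ):

```python
def fuse_velocity(v_level, sigma_level, v_gnss, sigma_gnss):
    """Fuse two grid-point velocity estimates by inverse-variance weighting.
    sigma_* are the grid-point precisions (standard deviations) of each model."""
    w_level = 1.0 / sigma_level ** 2
    w_gnss = 1.0 / sigma_gnss ** 2
    return (w_level * v_level + w_gnss * v_gnss) / (w_level + w_gnss)

def idw_weight(distance, sigma, power=2):
    """Multiplied weight combining inverse distance and velocity precision,
    a hedged reading of the paper's 'multiplying the weights' idea."""
    return (1.0 / distance ** power) * (1.0 / sigma ** 2)

# The more precise leveling estimate dominates the fused value
v = fuse_velocity(v_level=-2.0, sigma_level=0.5, v_gnss=-1.0, sigma_gnss=2.0)
assert -2.0 < v < -1.0 and abs(v - (-2.0)) < abs(v - (-1.0))
```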
An undifferenced and uncombined integer ambiguity resolution method between BDS long-range reference stations is proposed. Firstly, the error observation equation is established directly from the observations at different frequencies, and the relative zenith tropospheric wet delay and ionospheric delay are estimated with a random-walk strategy to increase the constraints between epochs. Then a real-time linear calculation method for the undifferenced integer ambiguity is used to obtain the undifferenced integer ambiguities of all satellites of the reference network in the current epoch. This solves the problem that ambiguities need to be inherited or re-superimposed onto the normal equations when the reference satellite changes. Because the information of each frequency's observations is fully utilized, the amplification of noise by linear combinations is avoided in integer ambiguity fixing, and the ambiguity fixing success rate of the uncombined method is much higher than that of the ionosphere-free combination method. The results show that the average fixing time of the ambiguities at each reference station is 20 epochs, so the integer ambiguities of the carrier phases at the reference stations can be resolved quickly.
As the BeiDou Navigation Satellite System-3 (BDS-3) starts to provide service for global users, it is possible to obtain global-coverage, all-time positioning service for space applications using BDS alone. The performance of space-borne BDS positioning is thoroughly analyzed with the in-orbit data of the GNSS Occultation Sounder (GNOS) aboard the FengYun-3D (FY-3D) satellite. Firstly, the visibility and position dilution of precision (PDOP) of BDS satellites in different LEO orbits are calculated based on real BDS ephemeris, and the orbit and clock errors of the broadcast ephemeris and the signal-in-space range error (SISRE) are studied. The results show that the global coverage usability from the ground up to a 2000 km orbit height has already reached 100%. The mean number of visible BDS satellites across the world is 50% larger than that of GPS. For the BDS broadcast ephemeris, the 3-D orbit error is 1.5 m and the clock error is 2.4 ns. The SISRE is about 0.79 m, and the clock accuracy of BDS-3 has reached the same level as GPS. Secondly, the actual visible satellite number, signal strength, pseudo-range precision and position accuracy are verified with the GNOS measurement data, with a focus on the code biases of the BDS-2 satellites. The in-orbit data show that GNOS on FY-3D achieves 100% positioning with BDS-2 signals in the service areas, and the 3-D position accuracy is 5.53 m. All BDS-2 satellites, including the geosynchronous Earth orbit (GEO), inclined geosynchronous orbit and medium Earth orbit satellites, have code biases. For elevations below 40°, the code bias of the GEO satellites is measured directly for the first time. The total electron content above the 836 km LEO orbit, measured using BDS dual-frequency observations, can cause a relative pseudo-range delay of about 0.6 m. The research in this paper is of great significance to the space-based application of BDS and lays a foundation for the design of space-based BDS receivers.
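The PDOP figures quoted above follow from the satellite geometry alone; a dependency-free Python sketch of the standard computation (our illustration, not the authors' code) is:

```python
import math

def pdop(unit_vectors):
    """PDOP from receiver-to-satellite unit line-of-sight vectors.
    Builds G = [ex ey ez 1] per satellite and returns the square root of the
    trace of the position block of (G^T G)^-1, using a small Gauss-Jordan
    inversion to stay dependency-free."""
    G = [[ex, ey, ez, 1.0] for ex, ey, ez in unit_vectors]
    # Normal matrix N = G^T G (4 x 4)
    N = [[sum(r[i] * r[j] for r in G) for j in range(4)] for i in range(4)]
    # Gauss-Jordan inversion of N with partial pivoting
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(4)] for i, row in enumerate(N)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(4):
            if r != col:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    Ninv = [row[4:] for row in A]
    return math.sqrt(Ninv[0][0] + Ninv[1][1] + Ninv[2][2])

# Four well-spread satellites: three on the horizon plane, one at zenith
sats = [(1, 0, 0), (-0.5, 0.866, 0), (-0.5, -0.866, 0), (0, 0, 1)]
assert 1.0 < pdop(sats) < 3.0  # well-conditioned geometry gives a modest PDOP
```

More visible satellites generally enlarge G and shrink the PDOP, which is why the denser BDS visibility reported above matters for LEO users.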
With the advance of artificial intelligence, using high-resolution images to detect geological hazards has gradually become a research hotspot. Visual interpretation of landslides heavily relies on expert experience, and conventional automatic landslide detection approaches are sensitive to the presence of bare land, roads and other ground objects. To address these issues, a Mask R-CNN with simulated hard samples is presented in this paper for landslide detection and segmentation. Based on existing landslide samples, hard landslide samples are simulated by utilizing the shapes, colors, textures, and other characteristics of landslides, so that each sample has a more complicated background. The original imagery and the simulated hard samples are then fed into the Mask R-CNN for landslide detection and segmentation. Since the number of landslides is often limited in reality, small-sample learning in the frequency domain is also presented in this paper to reduce the number of input samples while ensuring the accuracy of detection and segmentation. The experimental results in Bijie, Guizhou Province, show that the detection accuracy and the average pixel segmentation accuracy of the proposed Mask R-CNN method with simulated hard samples are 94.0% and 90.3%, respectively. The proposed method thus achieves high performance on landslide detection and segmentation with low false alarm rates. In addition, the proposed small-sample learning method in the frequency domain maintains improved performance even with only half of the input data. The effectiveness of the proposed Mask R-CNN method is further proved by the successful detection of the Tianshui landslides in Gansu Province.
Penguins are representative organisms of Antarctica. Monitoring the population and distribution of penguins is of great significance to the study of environmental changes in Antarctica. In past studies, due to the limitation of medium- and high-resolution images, the accuracy of penguin recognition was difficult to improve further, and the existing time-series analyses of penguin distribution and population were based on indirect identification methods. Penguin Island in East Antarctica was selected as the study area, where the Chinese Antarctic Scientific Research Team made aerial observations with a remote sensing UAV in January 2017, January 2018 and December 2019, obtaining centimeter-level resolution images. Based on object-oriented classification, the shadow pixels of penguins in the 3 images were extracted, the penguin habitats were marked, and the penguin population was calculated, with an overall accuracy of 91%. The experimental results show the dynamic changes of the penguin population: the distribution of penguin habitat was relatively fixed, but the number of penguins fluctuated, with 1,068 pairs, 1,003 pairs and 1,081 pairs in the 3 images, respectively.
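Counting individuals from classified shadow pixels amounts to grouping connected pixels into objects; a minimal stand-in using 4-connected component labeling (the actual object-oriented classification pipeline is far richer):

```python
from collections import deque

def count_blobs(mask):
    """Count 4-connected components in a binary grid, a minimal stand-in for
    grouping classified penguin-shadow pixels into individual penguins."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                blobs += 1                     # new blob found; flood-fill it
                q = deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return blobs

mask = [
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 1, 1],
]
assert count_blobs(mask) == 3
```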
Tianwen-1 is China's first independent interplanetary mission; it will complete orbiting, landing, and roving tasks in one operation. Exploiting tracking data gathered during its extended mission, this paper explores possible ways to improve the Mars gravity field model through simulations. We designed two types of orbits, a polar orbit and a near-equatorial orbit with large eccentricity, and recovered six gravity solutions considering various error sources. The power spectra of these gravity models were analyzed and evaluated, and we found that one month of tracking data from the polar orbit, or from the combined polar and near-equatorial orbits, could properly reconstruct the Mars gravity field model up to degree and order 42 under 0.1 mm/s measurement noise. The results show that, after considering the influence of comprehensive errors, the accuracies of the gravity field solutions from the two types of orbits are similar. Nevertheless, the near-equatorial orbit with large eccentricity has a slightly stronger constraint on the coefficients above degree and order 35.
Aiming at mixed observation data drawn from multiple distribution forms, a p-norm distribution mixture model is established. Considering that the mixture labels in the model constitute incomplete data, the EM algorithm is introduced to estimate the parameters of the mixture model; the iterative estimation formulas for the p-norm mixture model parameters are derived in detail, and the corresponding iteration steps are given. Finally, mixed Gaussian distribution data, mixed Laplacian and Gaussian distribution data, and the residuals of measured GPS observations are used to verify the correctness and adaptability of the formulas in this paper. The example results show that, compared with a single probability distribution, the p-norm mixture model can accurately reflect the actual distribution of the data, and the model parameters estimated by the EM algorithm have higher accuracy.
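As an illustration of the EM iteration for a mixture model, here is a two-component Gaussian mixture in pure Python (the Gaussian is the p = 2 special case of the p-norm family; the paper's E- and M-steps differ in the density and the parameter updates):

```python
import math, random

def em_gaussian_mixture(x, iters=200):
    """EM for a two-component Gaussian mixture: a simplified stand-in for the
    p-norm mixture estimation (the Gaussian is the p = 2 case)."""
    mu = [min(x), max(x)]          # crude initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]                # mixture proportions (the 'incomplete data')
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        resp = []
        for xi in x:
            dens = [pi[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate proportions, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = sum(r[k] * (xi - mu[k]) ** 2 for r, xi in zip(resp, x)) / nk + 1e-9
    return pi, mu, var

random.seed(0)
data = [random.gauss(0, 1) for _ in range(300)] + [random.gauss(8, 1) for _ in range(300)]
pi, mu, var = em_gaussian_mixture(data)
assert abs(pi[0] - 0.5) < 0.1
assert abs(mu[0] - 0) < 0.5 and abs(mu[1] - 8) < 0.5
```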
Previous studies have revealed that the Chandler and annual wobbles (CW and AW) in polar motion (PM) are unstable over time. In order to further enhance the prediction accuracy of PM, a harmonic model with time-varying coefficients for the linear trend, CW and AW is developed in this paper. The developed model takes into account the variations in both the amplitudes and the phases of the CW and AW. The PM predictions are calculated by means of two schemes to validate the effectiveness of the developed model: the least-squares extrapolation of the variable harmonic model for the linear trend and time-varying CW and AW, in combination with the autoregressive technique, denoted as VLS+AR; and the combination of the least-squares extrapolation of the invariable harmonic model and AR filtering, referred to as LS+AR. The results show that the accuracies of the PM predictions obtained by VLS+AR are better than those generated by LS+AR, especially for medium- and long-term predictions.
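The harmonic building block of such models can be illustrated with a fixed-frequency least-squares fit; with an integer number of periods the LS solution reduces to the Fourier projection below (the paper's model additionally lets amplitude and phase vary with time, which this sketch does not do):

```python
import math

def harmonic_coeffs(y, period, dt=1.0):
    """Least-squares amplitude and phase of one harmonic from regularly
    sampled data. With an integer number of periods the LS normal equations
    reduce to this Fourier projection."""
    n = len(y)
    w = 2 * math.pi / period
    c = 2 / n * sum(yi * math.cos(w * i * dt) for i, yi in enumerate(y))
    s = 2 / n * sum(yi * math.sin(w * i * dt) for i, yi in enumerate(y))
    amp = math.hypot(c, s)
    phase = math.atan2(s, c)          # y ~ amp * cos(w*t - phase)
    return amp, phase

# Synthetic Chandler-like term: 433-day period, amplitude 0.15 arcsec
period = 433.0
y = [0.15 * math.cos(2 * math.pi * t / period - 0.7) for t in range(4330)]  # 10 periods
amp, phase = harmonic_coeffs(y, period)
assert abs(amp - 0.15) < 1e-6
assert abs(phase - 0.7) < 1e-6
```

A time-varying scheme would repeat such a fit in a sliding window, so that the recovered amplitude and phase become functions of time.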
Objectives: In the process of digital elevation model (DEM) modeling, the existing interpolation methods do not take into account the local topographic characteristics near breaklines, which smooths the elevations in the local area of the breakline and thus leads to topographic distortion. Methods: A weight function that considers the characteristics of breaklines is constructed, and a weighted radial basis function (RBF) method is proposed. The new method makes full use of the gradient and direction information of the sampling points near the breakline. First, the distance between each sampling point and the point to be interpolated is calculated adaptively by capturing the structure tensor of each sampling point; then this distance is used to assign a suitable weight to each sampling point; finally, DEM modeling is realized by weighted interpolation. Results: Real-world examples of DEM construction with airborne LiDAR point clouds on 10 public datasets and 1 private dataset indicate that: (1) The accuracy of each interpolation method gradually decreases with the decrease of sampling points. (2) Compared with RBF and the classical interpolation methods, including inverse distance weighting (IDW), ordinary Kriging (OK) and the constrained triangulated irregular network (TIN), our method has a better ability to maintain terrain features in the breakline area. Regardless of sample density, our method is always more accurate than the other methods. Conclusions: Overall, the proposed method, with the merit of terrain feature preservation, is helpful for the construction of high-accuracy DEMs, which play an important role in geoscience applications where data quality is the most important factor.
Very long baseline interferometry (VLBI) is one of the main space geodetic techniques for monitoring the Earth orientation parameters (EOP). China is building three VLBI Global Observing System (VGOS) antennas. In order to improve the EOP measurement accuracies, it is necessary to optimize the VGOS observing network by extending the domestic VGOS network into an international one. We set the three Chinese VGOS stations located at Shanghai, Urumqi and Beijing as core stations. By adding two international stations selected from four candidate sites, Hobart in Australia or Bandung in Indonesia, and Johannesburg in South Africa or Hawaii in the USA, we could form four different 5-station VGOS networks. The EOP measurement accuracy of each network was analyzed based on the generation of bulk observing schedules and subsequent large-scale Monte Carlo simulations. We use the repeatability, defined as the standard deviation of the EOP estimates, as an indicator to evaluate the performance of each schedule and each network. We also compared the EOP formal errors of current VGOS observing sessions to our simulation results. The results show: (1) The EOP measurement capabilities of the expanded 5-station networks are all much better than that of the 3-station domestic network. (2) The optimized 5-station network, which consists of the 3 domestic antennas, Johannesburg in South Africa and Hobart in Australia, gives the best EOP measurement results. Compared to the 3-station domestic network, the repeatabilities of dUT1 and the polar motion XP and YP components are improved by factors of 5.7, 2.8 and 18.3, respectively. (3) The optimized 5-station network could reach equal or even better EOP estimates than the current IVS VGOS observing networks. We demonstrated that the EOP measurement accuracies can be improved by optimizing the observing network based on Monte Carlo simulations.
The simulation results can serve as a starting point for the future development of a high-precision EOP observing program in China.
Satellite-based augmentation systems (SBAS) improve positioning accuracy and integrity by broadcasting ephemeris corrections and associated integrity parameters through geostationary Earth orbit (GEO) satellites. An SBAS GEO satellite can also be used as a ranging source together with Global Positioning System (GPS) satellites to improve system performance. The user range error (URE) of the GEO satellites and its effect on positioning results are investigated. The URE of SBAS GEO and GPS satellites is determined by weighting the observation residuals derived with fixed station coordinates. SBAS messages are applied to correct the orbit and clock errors contained in the broadcast ephemeris and the ionospheric delay. Ranging data from GEO satellites are incorporated into the SBAS positioning process to explore the impact on positioning accuracy, integrity and availability. SBAS messages broadcast by the Wide Area Augmentation System (WAAS), the BeiDou Satellite Based Augmentation System (BDSBAS), the GPS Aided Geo Augmented Navigation (GAGAN) and the MTSAT (Multi-functional Transport Satellite) Satellite-based Augmentation System (MSAS), together with real data from International GNSS Service (IGS) stations, are applied to perform the assessment. The European Geostationary Navigation Overlay Service (EGNOS) and the System for Differential Corrections and Monitoring (SDCM) are not included because their GEO satellites lack ranging capability. It was found that the WAAS GEO satellites have the best performance, with a ranging accuracy better than 1.6 m. The 99.9% error bound is less than 6.8 m while the broadcast user differential range error (UDRE) for the GEO satellites is 7.5 m, which meets the integrity requirement. The 3 GEO satellites of BDSBAS show ranging biases of 14.32 m, 12.64 m and 17.44 m, respectively, and the accuracy is better than 2.9 m. After removing the biases, the related 99.9% error bounds are 8.60 m, 7.80 m and 11.60 m, which suggests a UDRE index of 11-12.
A URA index of 15 is broadcast in message type 9 for the BDSBAS GEO satellites. The URE of the GAGAN GEO satellites is better than 13.9 m, and for MSAS it is better than 3.2 m; the UDRE index of GAGAN and MSAS is 14. The URE of GPS satellites after SBAS augmentation is also calculated for comparison. The ranging accuracy of GPS is 0.60 m, 0.53 m, 0.21 m and 0.34 m for WAAS, BDSBAS, GAGAN and MSAS, respectively. The WAAS GEO satellite, whose UDRE index is less than 14, is selected to perform the positioning analysis so that it can be weighted properly in the solution. Engaging GEO satellites in SBAS positioning leads to a lower position dilution of precision (PDOP) and reduces the protection level, especially under signal blockage. The system availability of the Localizer Performance with Vertical guidance 200 (LPV200) approach is improved from 99.984% to 99.997% with the contribution of the observations of 3 GEO satellites. With sufficient GPS satellites, including GEO satellites will decrease the positioning accuracy because of their relatively larger range errors. The results suggest that SBAS GEO ranging data should be included in the SBAS solution for aviation users.
Objectives: The polar motion (PM) is an important part of the Earth rotation parameters (ERP), and the prediction error of the ERP can be effectively reduced by improving the prediction accuracy of PM. Methods: Aiming at the complex time-varying characteristics of PM, a high-precision prediction method based on the Volterra adaptive algorithm is proposed for the first time, which treats the PM series as chaotic. Firstly, the maximum Lyapunov exponent was calculated using the small-data-sets method; this analysis shows that PM has chaotic characteristics. Then two experiments were performed with the second-order Volterra adaptive algorithm. Results: The experimental results were compared with the Earth Orientation Parameters Prediction Comparison Campaign (EOP PCC) and with Bulletin A, the official forecast product of the IERS. The results show that the prediction accuracy of this method is higher than that of the EOP PCC: the Xp component prediction accuracy is improved significantly, and the Yp component is also slightly improved. Compared with Bulletin A, each of the two sets of forecast results has its own advantages. Conclusions: The examples further prove that our method can obtain good results in short-term polar motion forecasting, and its accuracy advantage is more pronounced over the longer prediction intervals within the short term.
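A second-order Volterra filter adapted by the LMS rule, the core ingredient of such a predictor, can be sketched as follows (a simplified illustration with a two-sample memory; the paper's input dimension and adaptation rule may differ):

```python
import math

def volterra2_predict(series, mu=0.05):
    """One-step prediction with a second-order Volterra filter adapted by LMS.
    Input vector at time t: the linear terms [x_t, x_{t-1}] plus the quadratic
    terms [x_t^2, x_t*x_{t-1}, x_{t-1}^2]; weights are updated from each error."""
    w = [0.0] * 5
    errors = []
    for t in range(1, len(series) - 1):
        x0, x1 = series[t], series[t - 1]
        phi = [x0, x1, x0 * x0, x0 * x1, x1 * x1]
        pred = sum(wi * pi for wi, pi in zip(w, phi))
        e = series[t + 1] - pred          # prediction error for x_{t+1}
        errors.append(e)
        w = [wi + mu * e * pi for wi, pi in zip(w, phi)]
    return w, errors

# A noiseless sinusoid is predictable; the LMS error shrinks as weights adapt
x = [math.sin(0.2 * t) for t in range(2000)]
w, errs = volterra2_predict(x)
early = sum(abs(e) for e in errs[:100]) / 100
late = sum(abs(e) for e in errs[-100:]) / 100
assert late < early
```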
Global navigation satellite system (GNSS) position time series usually include both tectonic and non-tectonic deformation information; they have complex components, are difficult to model, and the non-tectonic signals are difficult to separate effectively from the original series. How to remove the non-tectonic deformation information is very important for the accurate and effective application of the observation data. Empirical mode decomposition (EMD) is an adaptive time-frequency processing method, and we use it to correct the periodic terms of the time series of 24 continuous GNSS stations in the Sichuan and Yunnan areas. The results show that the correction of the periodic terms is necessary; the EMD method is able to extract periodic components of different frequencies and amplitudes adaptively according to each station's own characteristics, which is also more in line with the actual situation. The average root mean square error (RMS) is reduced by 19.96%, 11.57% and 38.50% compared with the original time series in the N, E and U directions, respectively. It is a more accurate and effective method than harmonic model correction. Then, we use the corrected continuous station time series to simulate mobile observations, and find that relatively reliable motion velocities can be obtained after 5-6 years of periodic observations. The stability and reliability of the EMD method were verified by correcting the periodic terms of mobile stations with nearby continuous observation stations, which also provides a reference and theoretical basis for the implementation of mobile GNSS observations and the correction of their data.
Based on the method of time-differenced carrier phase, the mathematical model of multi-GNSS velocity estimation is introduced and the error sources are analyzed. Using real data, the accuracy and performance of velocity determination at each GNSS frequency and with multi-GNSS combinations are tested and analyzed. The results show that the accuracy of velocity determination differs among frequencies: the velocity estimates at the B1I, B1C and B3I frequencies of BDS and the E1, E5a, E6, E5b and E5 frequencies of Galileo have the same precision, with the horizontal direction better than 1.5 mm/s and the vertical direction better than 3 mm/s; the velocity estimation accuracy at the B2I frequency of BDS is the same as that at the L1, L2 and L5 frequencies of GPS, with a horizontal accuracy of 1.5-2 mm/s and a vertical accuracy of 3-4 mm/s; the velocity estimation accuracy at the G1 and G2 frequencies of GLONASS is the worst, with 3-4 mm/s in the horizontal direction and 5-5.5 mm/s in the vertical direction. The accuracy of the dual-frequency ionosphere-free combination is lower than that of a single frequency because of the amplification of the observation noise. In addition, the combination of multiple GNSS increases the number of visible satellites, reduces the PDOP value, and can significantly improve the velocity measurement accuracy. Compared with single-system GPS, the accuracy of GPS/BDS/GLONASS/Galileo velocity estimation is improved by 40% in the horizontal direction and 46% in the vertical direction. At a 40° elevation cut-off, the availability rate is improved from 48% to 98% under the condition that the horizontal velocity accuracy is better than 1 cm/s and the vertical better than 2 cm/s.
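The basic observable behind these results: differencing the carrier phase between epochs cancels the constant integer ambiguity, leaving a range-rate measurement per satellite. A minimal sketch with an assumed GPS L1 wavelength:

```python
def range_rate_from_phase(phase_cycles, wavelength, dt):
    """Range-rate (m/s) from time-differenced carrier phase: the integer
    ambiguity cancels in the epoch difference as long as no cycle slip occurs."""
    return [wavelength * (phase_cycles[i + 1] - phase_cycles[i]) / dt
            for i in range(len(phase_cycles) - 1)]

# GPS L1 wavelength ~0.1903 m; a steady 10 cycles/s phase drift over 1 s epochs
lam = 0.1903
phases = [1000.0 + 10.0 * t for t in range(5)]  # cycles, ambiguity included
rates = range_rate_from_phase(phases, lam, dt=1.0)
assert all(abs(r - 1.903) < 1e-9 for r in rates)
```

Stacking such range-rates from four or more satellites and solving a least-squares system along the line-of-sight directions yields the 3-D receiver velocity.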
Spatial co-location pattern discovery aims to mine subsets of geographic features that are frequently located in close proximity. Due to spatial heterogeneity, the frequency with which different features are co-located usually varies across space, forming regional co-location patterns. Currently, most methods for discovering regional co-location patterns focus on planar geographic space and can hardly support the corresponding analysis in network spaces such as urban roads. Therefore, this paper proposes a regional network co-location pattern mining method based on spatial scan statistics. First, a network-constrained path expansion method is developed to detect the candidate paths where co-location patterns could occur. These candidates are then validated with a significance test, in which the null model is constructed from a network-constrained bivariate Poisson distribution. Experiments using simulated data and taxi datasets show that the proposed method is more effective for discovering regional co-location patterns in network space than a baseline method.
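The significance-testing step can be illustrated with a Monte Carlo p-value under a Poisson null (a simplified univariate stand-in for the paper's network-constrained bivariate Poisson null model):

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's method for a Poisson draw (adequate for small lambda)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def scan_p_value(observed, lam, n_sim=5000, seed=1):
    """Monte Carlo p-value for a candidate path: how often a Poisson null
    with expected count lam produces a co-location count >= the observed one."""
    rng = random.Random(seed)
    hits = sum(poisson_sample(lam, rng) >= observed for _ in range(n_sim))
    return (hits + 1) / (n_sim + 1)

# 12 co-located instances on a path where the null expects 3 is significant;
# 3 observed against an expectation of 3 is not
assert scan_p_value(observed=12, lam=3.0) < 0.01
assert scan_p_value(observed=3, lam=3.0) > 0.1
```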
Objectives: To solve the problems of high cost and high complexity in traditional estimation methods for the attitude angles of moving vehicles, an innovative vehicle heading and pitch estimation model based on an optimized time-differenced carrier phase is proposed. Methods: Firstly, this model uses the observation data of only one GNSS receiver to obtain an accurate vehicle displacement vector with a low-complexity algorithm, the time-differenced carrier phase. Then, the vehicle heading and pitch are estimated from the accurate displacement vector. In order to improve the estimation efficiency, the model optimizes the traditional time-differenced carrier phase algorithm. Results: Static and dynamic tests show that: (1) The optimized time-differenced carrier phase is more efficient than the general time-differenced carrier phase, saving about 40% of the processing time. (2) The proposed estimation model can provide accurate heading and pitch, with a root mean square error of less than 0.2° and a maximum error of less than 1.5°. (3) The accuracy of the model is not affected by the accumulation of errors within one hour. Conclusions: This model uses only one receiver to obtain the heading and pitch of moving vehicles, and has the advantages of high accuracy, low cost, low complexity and high efficiency.
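Once the displacement vector is in hand, the heading and pitch follow from elementary trigonometry on its east-north-up (ENU) components; a minimal sketch of the standard geometry (our illustration, not the authors' code):

```python
import math

def heading_pitch(dE, dN, dU):
    """Heading (degrees clockwise from north) and pitch (degrees) from an
    ENU displacement vector, e.g. one obtained by time-differenced carrier phase."""
    heading = math.degrees(math.atan2(dE, dN)) % 360
    pitch = math.degrees(math.atan2(dU, math.hypot(dE, dN)))
    return heading, pitch

# Moving north-east on a 10% climb
h, p = heading_pitch(dE=1.0, dN=1.0, dU=math.hypot(1.0, 1.0) * 0.1)
assert abs(h - 45.0) < 1e-9
assert abs(p - math.degrees(math.atan(0.1))) < 1e-9
```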
Following the first asteroid exploration plan announced by the China National Space Administration on April 19, 2019, we develop an optical orbit determination software package for the main-belt comet 133P/Elst-Pizarro (7968), which is one of the mission targets. The ground-based optical observation data of 133P/Elst-Pizarro from July 24, 1979 to October 28, 2019 are analyzed. Compared with the well-known OrbFit software system, the residual distribution is consistent, the statistical residual RMS of the measurements is less than 0.01″, and the internal coincidence accuracy of the orbit determination is also consistent. These results suggest that our software is reliable. Furthermore, we carry out a simulated orbit determination analysis of 133P/Elst-Pizarro to discuss the orbit determination accuracy achievable from ground-based optical data. When we use 20 years of optical observations measured once a month from the Yunnan and Chile stations, with added Gaussian white noise close to the current actual observation level, the results show that the optical orbit determination accuracy for the asteroid is at the 50 km level. They also show that the optical orbit determination accuracy can be effectively improved by increasing the amount of observation data or reducing the observation noise.
The pollution of soil by heavy metals has become increasingly serious in recent years. The accumulation of heavy metals in soil threatens the ecological balance and human health. Therefore, we need to obtain the heavy metal content of soil quickly and accurately. The development of hyperspectral techniques makes it possible to estimate heavy metal content fast and at low cost from spectral data. Because field spectra tend to be affected by environmental factors, and the number of samples is insufficient in existing studies, this paper proposes a method of constructing the model by combining field spectra with laboratory spectra. First of all, the direct standardization (DS) algorithm was employed to eliminate the environmental effects in the field spectra. Secondly, laboratory spectra were introduced into joint modeling to enhance the diversity of the samples. Finally, the characteristic spectra of iron oxide were extracted for modeling to increase the rationality of the model. This method was validated with the spectral data of 70 soil samples from the Xiongan farming area in Hebei Province. The model established from field spectra without DS correction and characteristic spectrum extraction obtained an accuracy (R2) of only 0.22, whereas the R2 of the proposed method reached 0.91, showing excellent estimation ability. This indicates that spectral modeling of Pb content can be significantly improved by removing the influence of environmental factors on the field spectra and extracting the iron oxide characteristic spectra from the combined field and laboratory spectra.
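The DS step can be illustrated with a simplified per-band linear transfer from field to laboratory spectra (full DS estimates a banded transformation matrix across neighboring bands; the diagonal version below is an assumption for illustration):

```python
def per_band_ds(field, lab):
    """Per-band linear transfer (slope, offset) from field to laboratory
    spectra, fitted by least squares over paired calibration samples."""
    n_bands = len(field[0])
    coeffs = []
    for b in range(n_bands):
        x = [s[b] for s in field]
        y = [s[b] for s in lab]
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        slope = sxy / sxx
        coeffs.append((slope, my - slope * mx))
    return coeffs

def apply_ds(spectrum, coeffs):
    """Map one field spectrum into the laboratory domain."""
    return [a * v + b for v, (a, b) in zip(spectrum, coeffs)]

# Field spectra distorted by a gain of 0.8 and an offset of 0.05 per band
lab = [[0.2, 0.4, 0.6], [0.3, 0.5, 0.7], [0.25, 0.45, 0.65]]
field = [[0.8 * v + 0.05 for v in s] for s in lab]
coeffs = per_band_ds(field, lab)
corrected = apply_ds(field[0], coeffs)
assert all(abs(c - t) < 1e-9 for c, t in zip(corrected, lab[0]))
```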
Objectives: The performance of GNSS spaceborne atomic clocks affects the entire navigation system, with a direct impact on GNSS measurement quality, ranging precision, clock prediction and satellite autonomous navigation capabilities. Different time synchronization systems may have different impacts on the evaluation of spaceborne atomic clocks. Precise clock bias data determined by the inter-satellite link (ISL), two-way time transfer (TWTT) and orbit determination and time synchronization (ODTS) systems are used to further evaluate the performance of the BeiDou-3 on-orbit atomic clocks. Methods: A quadratic polynomial model and the total Allan/Hadamard variance are used to analyze the BeiDou-3 satellite clock data from the three different time synchronization systems. Results: The results show that the frequency accuracy and frequency drift obtained from the three clock bias determination systems are consistent. The frequency accuracy of all satellites is within the range of (-4~2)×10^(-11), and the frequency accuracy of the hydrogen clocks is better than that of the rubidium clocks. The frequency drift based on the ISL system is slightly better than that of the ODTS. The three clock determination systems have their own advantages in evaluating the stability of the atomic clocks. For short-term stability, the ODTS, whose 3000 s stability reaches 3×10^(-14), is better than the ISL, and the hydrogen clocks are better than the rubidium clocks. For medium- and long-term stability, when the averaging time is greater than 1×10^4 s, the result of the ISL is closer to the actual condition of the BeiDou-3 spaceborne clocks; for long-term stability beyond 7 days, the broadcast ephemeris bias based on the TWTT system can be used for rapid evaluation, whose results are close to those of the ODTS and ISL. Conclusions: The three clock bias determination systems can be used to evaluate the frequency accuracy and frequency drift of on-orbit atomic clocks with basically consistent statistical results.
The three clock bias determination systems each have advantages in assessing the stability of on-orbit satellite clocks at different averaging times.
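The stability analysis above relies on Allan-type variances. A minimal overlapping Allan deviation computed from clock bias (phase) samples might look like the following sketch; the paper uses the total Allan/Hadamard variances, which differ in how they handle drift and data edges:

```python
import numpy as np

def allan_deviation(phase, tau0, m):
    """Overlapping Allan deviation at averaging time m*tau0.
    phase: clock bias samples [s]; tau0: sampling interval [s]."""
    x = np.asarray(phase, dtype=float)
    n = len(x)
    tau = m * tau0
    # overlapping second differences of the phase at stride m
    d2 = x[2 * m:] - 2 * x[m:n - m] + x[:n - 2 * m]
    avar = np.sum(d2 ** 2) / (2 * (n - 2 * m) * tau ** 2)
    return np.sqrt(avar)

# a clock with only a constant frequency offset has (near-)zero Allan deviation
bias = 1e-9 * np.arange(1000)          # 1 ns/s deterministic drift in phase
adev = allan_deviation(bias, 1.0, 10)  # essentially zero (no noise)
```

Second-differencing removes the linear phase ramp, which is why a pure frequency offset does not contribute to the Allan deviation.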
The global constellation networking of the BeiDou Navigation Satellite System (BDS) has been completed, which means that BDS-3 has entered a new era of providing high-quality positioning, navigation and timing services for global users. To comprehensively compare the performance of BDS-3 uncombined precise point positioning (PPP) with that of the other global navigation satellite systems (GNSS), three aspects are examined: the consistency of BDS-3 precise orbit and clock products among different analysis centers; the satellite availability of BDS-3/GNSS; and the positioning performance of BDS-3/GNSS single-system and multi-system PPP. Based on the precise orbit and clock products from five analysis centers, the three-dimensional root mean square error of BDS-3 static PPP is about 2.31 cm to 4.00 cm, and its convergence is significantly slower than that of the other GNSS. Among the BDS-3/GNSS dual-system joint PPP solutions, introducing GPS observations achieves the most obvious improvement. Besides, quad-constellation observations can effectively shorten the PPP convergence time and improve the positioning accuracy in kinematic mode.
Objectives: In a large-scale virtual reality scene, it is difficult to load all graphics data into video memory for rendering. Removing occluded objects in advance with visible query technology can reduce the amount of data loaded on the display end and improve rendering efficiency. The research of visible query methods for regional objects therefore has important application value for real-time rendering of large-scale urban scenes. Methods: In this paper, we put forward a distributed visible query method based on Map-Reduce. In the map phase, we apply a hierarchical axis-aligned bounding box as the viewpoint space partition. When the number of 3D objects in a viewpoint space partition exceeds a threshold, the axis-aligned bounding box continues to be divided into sub-boxes. The map tasks then produce GeoTuples with the VSPID as the key and the visible query candidate set as the value. In the reduce phase, a viewpoint is created for each leaf axis-aligned bounding box, where binary space partitioning trees are built and the visible set is calculated with a real-time occlusion algorithm. Results: The study experimented on a building compound in Shenzhen, China, containing more than 200,000 geometric solids. The experimental results show that: (1) There is no simple linear relationship between the running time of the distributed visible query and the number of viewpoint space partitions. (2) Running time and parallelism are not simply inversely proportional: the computational efficiency of each process first increases and then decreases with increasing parallelism, peaking at a parallelism of about 48. (3) Whether the distributed approach is better than the traditional approach depends on the number of 3D objects; once the number of 3D objects reaches about 40,000, the distributed algorithm begins to outperform the traditional algorithm.
Conclusions: The computational experiments reveal that the proposed algorithm outperforms its competitors in processing efficiency and feasibility, and can meet the requirements of visible query in large-scale scenes.
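The map-phase bounding box hierarchy described above can be sketched as a recursive octree-style subdivision that splits a box once it holds more objects than a threshold; the threshold, depth limit and random object centers below are assumptions for illustration:

```python
import random

def subdivide(points, lo, hi, max_pts=4, depth=0, max_depth=10):
    """Recursively split an axis-aligned box into 8 sub-boxes until each leaf
    holds at most max_pts object centers (or the depth limit is reached).
    Returns (lo, hi, points) tuples for the leaf boxes."""
    if len(points) <= max_pts or depth == max_depth:
        return [(lo, hi, points)]
    mid = tuple((l + h) / 2 for l, h in zip(lo, hi))
    leaves = []
    for octant in range(8):
        nlo = tuple(mid[a] if octant >> a & 1 else lo[a] for a in range(3))
        nhi = tuple(hi[a] if octant >> a & 1 else mid[a] for a in range(3))
        # half-open intervals: each interior point falls in exactly one octant
        sub = [p for p in points if all(nlo[a] <= p[a] < nhi[a] for a in range(3))]
        if sub:
            leaves += subdivide(sub, nlo, nhi, max_pts, depth + 1, max_depth)
    return leaves

random.seed(1)
pts = [(random.random(), random.random(), random.random()) for _ in range(20)]
leaves = subdivide(pts, (0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
```

In the paper's pipeline, each resulting leaf box would correspond to one VSPID key whose candidate set is emitted by the map task.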
Objectives: Rockfalls occur frequently in the mountainous areas of southwest China and can easily cause heavy casualties and property losses, so a rapid simulation method for the collapse process is urgently needed. Commonly used collapse simulation software still has obvious shortcomings, such as low terrain accuracy, failure to consider the structural characteristics of the rock mass, and inability to model block collision and fragmentation; moreover, logic operations based only on the central processing unit (CPU) limit the computation speed. This paper proposes a new method for rapid simulation of the three-dimensional motion process of a collapse. Methods: Unmanned aerial vehicle (UAV) photogrammetry combined with field investigation is used to obtain the slope surface model and determine the characteristic parameters of the rockfall. Simulation software for large-scale collapse movement is developed on the Unity3D platform, which integrates the PhysX physics engine and CPU-GPU (graphics processing unit) parallel computing. Results: The software can reproduce the whole collapse-impact-fragmentation-accumulation process and output the three-dimensional trajectory, velocity, energy and bounce height of the collapse, providing a reliable basis for the design of collapse prevention and control. Finally, taking the Xiejiayan collapse in Nayong, Guizhou province as a prototype case, a three-dimensional simulation and verification of the collapse movement process were carried out. The simulated rockfall accumulation range agrees well with the range found by field investigation at the bottom of the slope, and the movement of the simulated single blocks conforms to real physical laws, which indicates the feasibility and practicability of the method.
Conclusions: The method solves the simulation, analysis and visualization of the whole three-dimensional movement process of rockfalls.
Machine learning has become an ideal modeling tool in the field of dam health monitoring owing to its powerful nonlinear data mining ability. However, minimizing the fitting mean square error (MSE) is usually the only optimization objective in the traditional modeling process, which is likely to cause over-fitting. To overcome this problem, a probabilistic prediction model based on the relevance vector machine (RVM) is established under a double optimization objective that integrates the deformation spatial association and the MSE. The deformation spatial association is first quantified by a shape similarity index (SSI). The double objective is then formed by combining the MSE and the SSI, making the MSE as small as possible while the SSI is as large as possible. An engineering example of the Jinping-I arch dam shows that the root mean square error (RMSE) and maximum absolute error (ME) of the proposed double-objective RVM model decrease by 31.2% and 24.8% on average, respectively, and the prediction performance can be further improved by using multi-kernel functions. The prediction confidence bandwidth of the RVM model is significantly smaller than that of the traditional multiple linear regression model, with an average decrease of 75.1%. Therefore, the multi-kernel double-objective RVM model established for the displacement of super-high arch dams can effectively improve prediction performance and reduce uncertainty.
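The abstract does not define the SSI; assuming a correlation-style shape similarity, the double objective can be sketched as a single scalar to minimize, where the trade-off weight `lam` is a hypothetical parameter:

```python
import numpy as np

def shape_similarity(pred, obs):
    """A plausible shape similarity index: here the Pearson correlation of the
    two deformation series (the paper's exact SSI definition may differ)."""
    p, o = pred - pred.mean(), obs - obs.mean()
    return float(p @ o / (np.linalg.norm(p) * np.linalg.norm(o)))

def double_objective(pred, obs, lam=0.5):
    """Combined objective: minimize MSE while maximizing SSI (smaller is better)."""
    mse = float(np.mean((pred - obs) ** 2))
    return mse - lam * shape_similarity(pred, obs)
```

A prediction that tracks the shape of the observed series is rewarded even when its amplitude is slightly off, which is the intuition behind constraining the fit with spatial association rather than MSE alone.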
Objectives: In order to explore the impact of reduced human activity on air quality in China during the corona virus disease 2019 (COVID-19) period, the temporal and spatial anomalies of aerosol optical depth (AOD), precipitable water vapor (PWV) and temperature (T) were analyzed, and the impact of human activities on air quality was revealed. Methods: Firstly, the accuracy of AOD, PWV and T is verified by comparison with the AOD provided by AERONET and the PWV and T provided by radiosondes. Then, the long-term trends of AOD, PWV and T on weekends and weekdays are analyzed, which shows that human activities have a certain impact on air quality. Secondly, the temporal and spatial changes of AOD, PWV and T during the COVID-19 period are studied, which confirms a good correlation between human activities and air quality. Finally, 184 cities of different grades in China are selected for further analysis to determine the impact of population density on AOD, PWV and T, and to further reveal the specific response of air quality to human activities. Results: Verification shows that the data selected in this paper have high accuracy and can be used in this experimental study. By analyzing the changes of PWV, AOD and T during COVID-19, we found that all three were affected by the epidemic. Conclusions: Due to the influence of COVID-19, AOD, PWV and T show different trends, and the main reason for this phenomenon is the change in the intensity of human activities.
To solve the problems of pedestrian dead reckoning algorithms for indoor positioning, in which the step recognition accuracy of step detection is not high enough, the synchronous control is not precise enough, and the location deviation is large, an improved finite state machine step detection algorithm for the activity of holding a smartphone flat was proposed. A finite number of states were set to correspond to the trend of resultant acceleration variation during walking. Step detection and step cycle estimation were realized based on the difference of adjacent resultant accelerations and several thresholds on climbing and descending times. Experiments were conducted by two testers walking 211 m of corridors while holding a smartphone flat. The results show that the step detection accuracy of both tests is 100% with the improved algorithm, the detected step time is on average 0.004 s earlier than the actual time, and the average location error is 0.384 m. Compared with the auto-correction analysis and acceleration-differential finite state machine algorithms, the accuracy of step recognition, synchronous control and location estimation is improved by at least 0.7%, 60% and 21.15%, respectively; the proposed algorithm thus outperforms the existing algorithms in all three aspects.
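A minimal version of the finite-state idea, with hypothetical thresholds on the adjacent resultant-acceleration difference and on the number of consecutive climbing/descending samples, could look like this:

```python
def count_steps(acc, diff_th=0.3, climb_n=2, descend_n=2):
    """Count steps from resultant acceleration samples: a step is one sustained
    rise (CLIMB state) followed by a sustained fall; small differences are
    treated as noise and leave the state unchanged. Thresholds are assumed."""
    state, rises, falls, steps = "idle", 0, 0, 0
    for prev, cur in zip(acc, acc[1:]):
        d = cur - prev
        if d > diff_th:                 # climbing sample
            rises, falls = rises + 1, 0
            if state == "idle" and rises >= climb_n:
                state = "climb"
        elif d < -diff_th:              # descending sample
            falls, rises = falls + 1, 0
            if state == "climb" and falls >= descend_n:
                steps += 1              # one full climb-descend cycle = one step
                state = "idle"
        # |d| <= diff_th: noise, keep current state
    return steps

# synthetic walk: five triangular bumps on a 9.8 m/s^2 baseline
pattern = [9.8, 10.3, 10.8, 11.3, 10.8, 10.3, 9.8]
steps = count_steps(pattern * 5)
```

Requiring several consecutive rises and falls before a transition is what suppresses spurious steps from sensor noise, at the cost of a small detection latency.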
Integration of GNSS and INS can provide continuous and accurate positioning information for vehicles. However, the accuracy of low-cost GNSS/INS integrated vehicle navigation systems is unreliable during GNSS outages, which are common in urban areas. Therefore, a long short-term memory (LSTM) network-aided GNSS/INS integrated navigation system based on the extended Kalman filter (EKF) is proposed in this paper. The LSTM networks are trained to learn the relationship between the position error and the INS output when GNSS is available. When a GNSS outage occurs, the LSTM networks predict and correct the errors of the integrated navigation system to improve positioning precision. Experiments show that the north and east errors of the EKF-based GNSS/INS integrated navigation system are 1.93 m and 13.92 m during a 15 s GNSS outage, while those of the LSTM-EKF-based system are 1.17 m and 0.84 m. The comparison indicates that the proposed system can effectively improve positioning precision during GNSS outages.
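The filtering side of such a scheme can be sketched as a tiny error-state EKF whose measurement is the GNSS-INS position difference when GNSS is available, and the LSTM's predicted error during an outage. The random-walk error model and noise levels below are assumptions, and any callable can stand in for the trained network:

```python
import numpy as np

class ErrorStateEKF:
    """Minimal error-state EKF: the state is the 2-D INS position error
    [north, east]. The measurement z is either GNSS - INS (GNSS available)
    or a network-predicted error (outage); a sketch, not the paper's filter."""
    def __init__(self, q=0.01, r=1.0):
        self.x = np.zeros(2)          # estimated position error
        self.P = np.eye(2)            # state covariance
        self.Q = q * np.eye(2)        # process noise (assumed random walk)
        self.R = r * np.eye(2)        # measurement noise (assumed)

    def step(self, z):
        self.P = self.P + self.Q                      # predict
        K = self.P @ np.linalg.inv(self.P + self.R)   # Kalman gain (H = I)
        self.x = self.x + K @ (z - self.x)            # update
        self.P = (np.eye(2) - K) @ self.P
        return self.x

ekf = ErrorStateEKF()
z = np.array([1.0, 2.0])   # stand-in for GNSS-INS difference or LSTM output
for _ in range(50):
    est = ekf.step(z)      # estimate converges toward z
```

The estimated error is then subtracted from the INS position, which is how the LSTM pseudo-measurements keep the solution bounded during the outage.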
With the development of China's deep-space technology, a Jupiter exploration program has been added to the schedule. Precise orbit determination (POD) and gravity field recovery play an important role in Jupiter exploration. This paper focuses on the precise orbit determination of Juno and the recovery of the low-degree gravity field of Jupiter. First of all, the coordinate system and dynamic model of the Jupiter probe are given, and the JPL precise ephemeris of Juno is used for verification: the dynamical fitting position difference is on the order of 10 m, and the velocity difference is less than 6 mm/s. Then, the deep-space Doppler measurement model is presented and the trajectory of Juno is calculated from the tracking data; the difference from the JPL reference orbit is better than 1 km. Finally, simulated data are used to verify the reliability of the gravity field solution, and measured data near four perijove points of Juno are used to estimate the gravity field coefficients of Jupiter, yielding zonal coefficients up to degree 8.
Traditional remote sensing-based air temperature (Ta) estimation methods usually use global models, which ignore the effects of spatiotemporal heterogeneity, especially in studies over large regions. Taking the Yangtze River Economic Zone as a typical study area, this paper introduces a geographically and temporally weighted neural network for high-precision Ta estimation. The influence of spatiotemporal heterogeneity is considered by establishing local models within a generalized regression neural network. Remote sensing data, assimilation data and station data were fused to obtain spatially continuous near-surface Ta. Model performance was evaluated by site-based ten-fold cross-validation. The results show that the geographically and temporally weighted neural network effectively improves estimation accuracy, with RMSE = 1.899 ℃, MAE = 1.310 ℃ and R = 0.976. Compared with the multiple linear regression method and the traditional global neural network, the MAE decreases by 1.112 ℃ and 0.378 ℃, respectively. The Ta mapping results indicate that the model can well reflect spatial distribution differences, suggesting that this study may provide a new way to estimate Ta with high precision.
Objectives: Differential interferometric synthetic aperture radar (D-InSAR) has been widely used in large-scale surface deformation monitoring. However, surface deformation obtained from spaceborne SAR data is easily affected by atmospheric noise, and the long revisit period leads to incoherence. To effectively reduce these effects, this paper proposes a method for monitoring highway slope deformation with a car-borne InSAR system. Methods: Owing to the trajectory control of the car-borne dual-antenna InSAR system, the spatial baseline of the interferometric pairs is close to zero. Therefore, when D-InSAR data from this system are used to extract deformation information, the flat-earth phase can be avoided, which greatly simplifies differential interferometric processing. Results: An area of Wuhan, Hubei province was chosen as the experimental area. Seven corner reflectors were deployed in the test area to evaluate the accuracy of the deformation information extracted with the car-borne dual-antenna InSAR system. The deformation of the seven reflectors was calculated by the proposed method; the root mean square error between the true and calculated deformation values is 2.206 mm. Conclusions: A zero-spatial-baseline deformation monitoring method using a car-borne dual-antenna InSAR system is proposed and verified in Wuhan. The zero-baseline D-InSAR method avoids flat-earth phase removal. At the same time, the system is small, and its trajectory can be designed flexibly according to actual needs, which is very practical for small-scale highway slope deformation monitoring. Because of the short revisit period, the car-borne InSAR data do not suffer phase errors due to atmospheric delay, and the deformation measurement accuracy is high.
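With a zero spatial baseline and no flat-earth term, the differential interferometric phase maps directly to line-of-sight deformation. A one-line conversion (the sign convention and the ~3.1 cm wavelength below are assumptions, not values from the paper) is:

```python
import math

def phase_to_deformation(dphi, wavelength):
    """Convert differential interferometric phase [rad] to line-of-sight
    deformation [m]; the factor 4*pi accounts for the two-way radar path."""
    return -wavelength * dphi / (4 * math.pi)

# example: a full -4*pi phase cycle at a 0.031 m wavelength
d = phase_to_deformation(-4 * math.pi, 0.031)   # one wavelength of motion
```

One full 2π fringe thus corresponds to half a wavelength of line-of-sight motion, which is why millimetre-level sensitivity is achievable at centimetre wavelengths.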
Objectives: A point cloud has no topological structure, so current deep learning semantic segmentation algorithms have difficulty capturing the geometric features implied in irregular points. In addition, point clouds lie in three-dimensional space and are large in data volume; blindly expanding the receptive field when extracting neighborhood information increases the number of model parameters and makes training difficult. Methods: To this end, we propose a point cloud semantic segmentation model based on dilated convolution that takes elementary geometric features such as angles as model input. First, during feature extraction, basic geometric features such as the relative coordinates, distances and angles between the centroid and its neighboring points are used as model input to mine geometric information. Secondly, when building local neighborhoods, we extend the image dilated convolution operator to point cloud processing; the point cloud dilated operator enlarges the receptive field without increasing the number of model parameters. The dilated convolution operator, multi-geometric feature encoding modules and a U-Net architecture are then combined to form a complete point cloud semantic segmentation model. Results: The Semantic3D dataset is used to verify the proposed algorithms. The results show that, compared with the traditional neighborhood structure, the dilated neighborhood structure increases the OA by 1.4%; compared with a model that uses only coordinates as input, the multi-geometric feature encoding module increases it by 10.7%. The final model combining the two proposed algorithms achieves an OA of 91.2% and an mIoU of 68.2%. Conclusions: The dilated neighborhood structure can effectively extract point cloud information over a larger range without increasing the number of model parameters.
The multi-geometric feature encoding module maximizes the capture of shape information in the neighborhood.
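The dilated neighborhood described above can be sketched as sampling every d-th point among the k·d nearest neighbors, which enlarges the receptive field while keeping k features per point (a simplified interpretation; the paper's operator sits inside a learned convolution):

```python
import numpy as np

def dilated_knn(points, center, k, d):
    """Dilated k-NN: take every d-th neighbor among the k*d nearest points.
    With d = 1 this reduces to ordinary k-NN."""
    dist = np.linalg.norm(points - center, axis=1)
    nearest = np.argsort(dist)[:k * d]   # indices of the k*d closest points
    return nearest[::d]                  # keep every d-th -> k indices

# toy demo: points spaced 1 m apart along the x-axis, query at the origin
pts = np.array([[i + 1.0, 0.0, 0.0] for i in range(20)])
idx = dilated_knn(pts, np.zeros(3), k=3, d=2)
```

With k = 3 and d = 2 the query reaches points up to 5 m away instead of 3 m, without changing the number of neighbors fed to the network.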
To find a high-accuracy method for satellite clock bias (SCB) prediction based on the characteristics of SCB data, a preprocessing strategy (WMAD) combining the wavelet threshold method with the median absolute deviation (MAD) is proposed, aimed at the small, median-magnitude errors in SCB data. Firstly, the wavelet threshold method is used to decompose the SCB data into high-frequency and low-frequency coefficients. Then the MAD method is applied to the high-frequency coefficients of each layer that affect the threshold setting, and the processed high-frequency coefficients are used to calculate the threshold, so as to improve the ability of the wavelet threshold method to eliminate small outliers. Finally, clock bias data of BeiDou-2 satellites are used for verification. The results show that this method can effectively eliminate the small, median-magnitude errors in the historical clock bias observation sequence and benefits satellite clock bias prediction.
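A one-level Haar sketch of the WMAD idea — estimate the noise scale of the high-frequency (detail) coefficients with MAD, then soft-threshold them — is shown below. The wavelet choice, the single decomposition level and the universal threshold are simplifying assumptions; the paper processes several levels of the clock bias series:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform (even-length input assumed)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)    # approximation (low frequency)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)    # detail (high frequency)
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def mad_threshold_denoise(x):
    """Estimate the noise scale of the detail coefficients with MAD, then
    soft-threshold them with the universal threshold and reconstruct."""
    a, d = haar_dwt(x)
    sigma = np.median(np.abs(d - np.median(d))) / 0.6745   # robust noise scale
    t = sigma * np.sqrt(2 * np.log(len(x)))                # universal threshold
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)        # soft thresholding
    return haar_idwt(a, d)
```

MAD is used instead of the standard deviation precisely because a few large outliers in the detail coefficients would otherwise inflate the noise estimate and the threshold.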
On July 31, 2020, China's third-generation BeiDou navigation satellite system (BDS-3), which operates independently, was fully completed and officially opened for service. The signal system of BDS-3 has been redesigned to provide public service signals at five frequencies: B1I, B3I, B1C, B2a and B2b. Based on observations from international GNSS monitoring and assessment service (iGMAS) and Multi-GNSS Experiment (MGEX) tracking stations, this paper analyzes the pseudorange multipath error, signal-to-noise ratio and geometry-free ionosphere-free (GFIF) combined observations of the new signals, as well as the precise orbit determination (POD) performance of the BDS-3 satellites. The results show that the multipath noise level of the BDS-3 signals is better than that of BDS-2, with no systematic deviation related to elevation angle, although B1C is more significantly affected by multipath and noise; the GFIF sequences of different BDS-3 signal combinations show satellite-dependent periodic systematic errors with a peak of about 2 cm. For the BDS-3 satellites, the "one-step" POD method is used with the B1I & B3I and B1C & B2a dual-frequency ionosphere-free combinations, respectively. Orbit accuracy is checked by orbit boundary discontinuities and satellite laser ranging (SLR); the results show that although fewer observations are available than for B1I & B3I, the orbit accuracy of B1C & B2a is comparable, with radial internal coincidence accuracies of 6.1 cm and 6.6 cm, respectively.
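Pseudorange multipath analyses of this kind typically use the classical MP combination of one code and two carrier-phase observables; a sketch is given below (the frequencies shown are the published B1I/B3I values, the synthetic check ignores the constant ambiguity term that the combination also carries):

```python
def mp_combination(P1, L1, L2, f1, f2):
    """Pseudorange multipath (MP) combination; L1, L2 are carrier phases in
    meters. Geometry and first-order ionosphere cancel, leaving code multipath,
    noise and a constant ambiguity offset."""
    alpha = (f1 / f2) ** 2
    return P1 - (1 + 2 / (alpha - 1)) * L1 + (2 / (alpha - 1)) * L2
```

Because the ambiguity term is constant over a continuous arc, the MP series is usually de-meaned per arc before its scatter is interpreted as multipath plus code noise.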
Objectives: Most characteristics of land subsidence are analyzed from the temporal or the spatial perspective separately, so the hidden information and possible laws in the data cannot be discovered simultaneously. Temporal principal component analysis (TPCA) can extract the temporal and spatial characteristics of spatio-temporal data in the geosciences. Land subsidence in the Beijing Plain has typical temporal and spatial characteristics; TPCA therefore makes full use of the long time coverage of land subsidence obtained by InSAR measurement. Methods: 1. Permanent scatterer interferometric synthetic aperture radar (PS-InSAR) provides a convenient way to measure land subsidence with sub-centimeter precision. 51 Envisat ASAR scenes acquired over the Beijing Plain from 2003 to 2010 were used to produce 50 interferograms and obtain time-series deformation with a nonlinear model. The critical steps include preprocessing, such as registration of the master and slave images and of control points; differential interferometry; extraction of permanent scatterers (PS); removal of the atmospheric phase; and unwrapping to obtain the final deformation. 2. Based on the land subsidence of about 100,000 points over 51 epochs, the original data matrix X(100000×51) is constructed, the correlation coefficient matrix is calculated, and the TPCA method is used to analyze the temporal and spatial evolution characteristics of land subsidence in the Beijing Plain. The eigenvectors from TPCA are time series that represent the correlation between the principal component (PC) spatial patterns and the subsidence. The principal component scores are the spatial patterns obtained by the TPCA decomposition, representing different spatial characteristics; the characteristics of these new variables are analyzed further. Results: It is found that: (1) The eigenvalues determine the amount of information explained by each component.
The information explained by the first three principal components (variance contribution rates) is 86.18%, 8.66% and 2.37%, respectively. (2) The eigenvector represents the degree of correlation between the principal component scores after linear combination and the original variables, and also represents the time trend of the principal component. The eigenvector of PC1 remained stable around 0.15, indicating that the development trend of land subsidence remained consistent during this period. The feature vectors of PC2 and PC3 vary over a larger range than that of PC1, and with further calculation PC2 and PC3 can reveal some seasonal variation characteristics of land subsidence. (3) The first principal component represents the long-term development trend of the spatial distribution of land subsidence. (4) The area where the second principal component is positive correlates spatially with the area where the compressible layer is thicker than 130 m. (5) The PS points whose first principal component scores are negative and whose second principal component scores are positive are distributed in the severe subsidence area above 30 mm/a. Within the severe subsidence area there is an obvious north-south difference in land subsidence and its seasonal variation: in the northern subsidence area the subsidence in spring and summer is larger than in autumn and winter, while the southern subsidence area shows the opposite variation. Conclusions: In general, the temporal and spatial variation of land subsidence can be studied by TPCA for urban safety monitoring; it can also identify the main spatial characteristics and the law of temporal and spatial evolution. Since TPCA is a linear combination that projects onto the directions with the largest variance, the resulting variables are merely uncorrelated, not independent of each other.
PCA uses only the second-order statistical information of the original data and ignores higher-order statistics. Therefore, it is necessary to optimize by rotating the principal components to find more physical meaning in them.
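The TPCA decomposition described above amounts to an eigen-decomposition of the inter-epoch correlation matrix of the points-by-epochs subsidence matrix; a compact numpy sketch (standardization details are assumed) is:

```python
import numpy as np

def temporal_pca(X):
    """TPCA-style decomposition of a points-by-epochs matrix X:
    eigenvalues give the variance explained by each component,
    eigenvectors are temporal patterns, scores are spatial patterns."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each epoch
    R = (Z.T @ Z) / len(Z)                     # inter-epoch correlation matrix
    w, V = np.linalg.eigh(R)                   # eigh: R is symmetric
    order = np.argsort(w)[::-1]                # sort components by variance
    w, V = w[order], V[:, order]
    return w / w.sum(), V, Z @ V               # explained ratio, eigenvectors, scores
```

When one dominant deformation process drives all epochs, as in the Beijing Plain case, the first component captures most of the variance and its score map is the long-term subsidence pattern.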
The side-scan sonar shipwreck detection method based on the YOLOv3 model suffers from a high miss rate for small targets, heavy model weights, and detection speed that fails to meet real-time requirements. This paper introduces the YOLOv5 algorithm and proposes a model tailored to the characteristics of the side-scan sonar shipwreck dataset. Eight model structures of different depth and width (YOLOv5a, YOLOv5b, YOLOv5c, YOLOv5d, YOLOv5s, YOLOv5m, YOLOv5l and YOLOv5x) are tried under the basic YOLOv5 framework; the best structure is chosen, the GA+K (genetic algorithm + K-means) algorithm is used to optimize the anchor boxes, and the loss function is improved with CIOU_Loss. The experimental results show that the improved YOLOv5a model is 0.3% and 0.6% higher than the original model in AP_0.5 and AP_0.5-0.9, and substantially better than the YOLOv3 model, with AP_0.5 and AP_0.5-0.9 improved by 4.2% and 6.1%; the detection speed reaches 426 frames per second, nearly double that of YOLOv3, which is more conducive to practical applications and engineering deployment.
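The CIoU loss mentioned above augments 1 - IoU with center-distance and aspect-ratio penalty terms; a scalar sketch for axis-aligned (x1, y1, x2, y2) boxes is (the small epsilon is an assumed numerical guard):

```python
import math

def ciou_loss(box1, box2):
    """CIoU loss between two boxes (x1, y1, x2, y2):
    1 - IoU + center-distance penalty + aspect-ratio penalty.
    Identical boxes give a loss of 0."""
    x1, y1, x2, y2 = box1
    X1, Y1, X2, Y2 = box2
    iw = max(0.0, min(x2, X2) - max(x1, X1))     # intersection width
    ih = max(0.0, min(y2, Y2) - max(y1, Y1))     # intersection height
    inter = iw * ih
    a1, a2 = (x2 - x1) * (y2 - y1), (X2 - X1) * (Y2 - Y1)
    iou = inter / (a1 + a2 - inter)
    # squared center distance over squared enclosing-box diagonal
    rho2 = ((x1 + x2 - X1 - X2) ** 2 + (y1 + y2 - Y1 - Y2) ** 2) / 4
    cw, ch = max(x2, X2) - min(x1, X1), max(y2, Y2) - min(y1, Y1)
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((X2 - X1) / (Y2 - Y1))
                              - math.atan((x2 - x1) / (y2 - y1))) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v
```

Unlike plain IoU loss, the distance term keeps the gradient informative even for non-overlapping boxes, which helps the small, sparse targets typical of sonar imagery converge.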
Objectives: Researchers have noticed that the quality of training samples affects the training phase and, further, the overall classification accuracy in the testing phase. The representativeness, or typicalness, of training samples reflects their quality to some extent. The currently popular deep learning methods in particular need thousands or millions of training samples, so reducing the number of training samples they require is a very important problem; from a practical standpoint, collecting samples is also very expensive. We therefore study a method of reducing the training samples as far as possible based on their representativeness. Methods: A training sample selection method based on the oblique factor model is proposed; it relaxes the independence condition among common factors in the orthogonal factor model and can better describe the real world. Results: Experimental results show that the proposed method is feasible and effective: it selects more representative training samples than selection based on the orthogonal factor model and achieves better overall classification accuracy and stability. The distribution of the selected samples becomes more decentralized and reasonable, and the overall classification accuracy improves by about 3% on average. Conclusions: The proposed method not only supports the theory of optimizing data capture, but can also guide how to capture data effectively in practical applications.
Objectives: As an important part of cultural heritage, immovable cultural relics face increasing risks from rainstorm and flood disasters. To improve the prevention capabilities for immovable cultural relics against rainstorm and flood disasters, this study proposes a risk assessment method based on the theory of natural disaster risk assessment that considers the seasonal variation of rainstorm and flood disasters, taking 24 national ancient sites in 18 counties of Fujian province as an example. Methods: The regional L-moment method was used to derive rainfall values for various return periods in different seasons and to analyze the characteristics of seasonal rainstorm and flood risk. We estimated the risk from three aspects: hazard factors, the disaster-forming environment, and the hazard-bearing bodies of the cultural heritage. The coefficient of variation method, the entropy weight method and the Delphi method were adopted to weight these three aspects. Results: The results show an obvious seasonal difference in the risk assessment results, which indicates that the assessment method is feasible. The rainstorm and flood risk was highest in the second quarter, followed by the first and third quarters, and the spatial distribution of risk differed significantly between seasons. In the first and second quarters, the risk was high in coastal and northern counties and low in central counties; in the third and fourth quarters, it was high in coastal areas and low inland. Conclusions: The proposed model for immovable cultural relics can well reflect the seasonal difference of rainstorm and flood risk and is suitable for regions with large seasonal differences in rainstorm and flood disasters. The seasonal results can provide a scientific reference for disaster prevention and mitigation planning for immovable cultural relics.
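The regional L-moment method builds on sample L-moments estimated from probability-weighted moments; the first two are computed as below (higher orders, the L-moment ratios and the regional pooling step are omitted from this sketch):

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments via probability-weighted moments:
    l1 is the L-location (mean), l2 the L-scale; t2 = l2/l1 is the L-CV
    used in regional rainfall frequency analysis."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()                                  # PWM beta_0
    b1 = np.sum(np.arange(n) * x) / (n * (n - 1))  # PWM beta_1
    l1 = b0
    l2 = 2 * b1 - b0
    return l1, l2

l1, l2 = sample_l_moments([1.0, 2.0, 3.0, 4.0])
```

L-moments are preferred over conventional moments for rainfall extremes because, being linear in the ordered data, they are far less sensitive to outliers in short records.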
Discovering a moving target and obtaining its geographic position and speed is an important, high-accuracy application of optical video satellites. Although existing stabilization approaches for optical video satellites can produce stable and smooth videos, the lack of geometric information in the stabilized videos makes it difficult for them to meet the above application requirements. To solve this problem, a geo-coded stabilization approach for optical video satellites in object space is proposed in this paper. The proposed approach takes full advantage of the orientation parameters of the satellite video frames. An interframe motion model based on the video frame orientation model is first established. Then, the orientation parameters of the auxiliary frames are motion-compensated to achieve geometric consistency between the video frames. Finally, all video frames are geo-coded in object space to generate a geo-coded smooth video. Experimental results on a ZhuHai-1 satellite video show that the proposed approach effectively eliminates the influence of satellite jitter and of satellite position and attitude errors; both the interframe geometric accuracy and the video stabilization accuracy reach better than 0.3 pixels.
Objectives: BDS-3 officially opened its global service on July 31, 2020. To evaluate the global positioning performance of BDS-3 in detail, multi-day measured data from 16 MGEX tracking stations around the world are used. Methods: The Net_Diff software was used to carry out global BDS-3 pseudorange single point positioning experiments with single-frequency, dual-frequency ionosphere-free combined and dual-frequency uncombined models, and with triple-frequency uncombined and triple-frequency pairwise ionosphere-free combined models; the results were compared with GPS and Galileo on selected frequencies. Results: The results show that the number of visible satellites and the spatial geometry of BDS-3 in Asia, Europe and Africa are better than those of GPS and Galileo. For BDS-3 single-frequency pseudorange single point positioning on the four frequencies B1C, B1I, B2a and B3I, the accuracy is better than 0.5 m in the E direction, better than 1 m in the N direction and better than 2 m in the U direction. Compared with GPS and Galileo, the positioning accuracy ranks: B1C > B1I > L1 > B3I > B2a > E1 > L2 > E5a. For BDS-3 dual-frequency combined pseudorange single point positioning, the B2aB3I combination performs poorly and is not suitable for positioning; the B1CB2a, B1CB3I, B1IB2a and B1IB3I combinations achieve accuracies better than 1 m in the E and N directions and better than 2 m in the U direction. Compared with GPS and Galileo, the ranking is: B1CB2a > B1CB3I > L1L2 > B1IB3I > B1IB2a > E1E5a > B2aB3I. For BDS-3 triple-frequency combined pseudorange single point positioning, the accuracies of B1IB2aB3I and B1CB2aB3I are better than 1 m in the E and N directions and better than 2 m in the U direction. Compared with GPS and Galileo, the ranking is: B1CB2aB3I > B1IB2aB3I > L1L2L5 > E1E5aE5b.
The B1CB3I, B1IB3I, B1CB2a and B1IB2a dual-frequency combinations are best positioned with the uncombined model, B2aB3I with the ionosphere-free model, and B1IB2aB3I and B1CB2aB3I with the uncombined model. Conclusions: The experimental results show that BDS-3 achieves good positioning performance on a global scale; at some frequencies its positioning performance is even better than that of GPS and Galileo. These findings can provide a reference for future BDS-3 related research.
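The dual-frequency ionosphere-free combination evaluated above can be sketched in a few lines. This is a minimal illustration, not the Net_Diff implementation; only the BDS-3 carrier frequencies (B1C = 1575.42 MHz, B2a = 1176.45 MHz) are real, and the pseudorange values are invented.

```python
# Sketch of the dual-frequency ionosphere-free pseudorange combination.
F_B1C = 1575.42e6   # Hz (published BDS-3 B1C carrier frequency)
F_B2A = 1176.45e6   # Hz (published BDS-3 B2a carrier frequency)

def iono_free(p1, p2, f1, f2):
    """Ionosphere-free combination of two pseudoranges (metres).
    First-order ionospheric delay scales as 1/f**2, so this weighted
    difference cancels it while keeping the geometric range."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# A 23,000 km range with frequency-dependent ionospheric delay added:
rho = 23_000_000.0
i1 = 5.0                              # invented delay on B1C (m)
i2 = i1 * (F_B1C / F_B2A) ** 2        # corresponding delay on B2a (m)
p_if = iono_free(rho + i1, rho + i2, F_B1C, F_B2A)
print(abs(p_if - rho) < 1e-6)  # True: the ionospheric delay cancels
```

The price of the combination is amplified pseudorange noise, which is one reason the uncombined models also evaluated above are attractive.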
Objectives: In line matching and match checking, the geometric attributes of individual line segments are weakly stable, and non one-to-one matching results are difficult to check. To address these problems, a line matching algorithm based on pair-wise geometric features and individual line descriptor constraints is proposed. Methods: In the matching stage, two line segments that satisfy certain geometric constraints in a neighborhood are first grouped into a line pair and matched as a whole. Second, the epipolar constraint on the intersection point of the line pair is used to determine the matching range. Then the characteristic angles within line pairs, the distance ratio between line segments, and the radiometric information of the neighborhood of the line pairs are used to narrow the set of matching candidates. Finally, the matched pairs are obtained by calculating the gray-level similarity of the triangular region. In the checking stage, the correspondence between individual lines is first established according to the angle between each line and the epipolar line and the slope of the line. Each corresponding line pair is then split into two groups of corresponding individual lines, descriptors are built for the two lines in each group, and the similarity between the two line descriptors is calculated. Finally, collinear geometry and descriptor similarity are combined to check the matching results and eliminate false matches; collinear lines in the results are merged so that one-to-one matching results are obtained. Results: Aerial images with typical texture features and close-range images with different transformation types were selected for the experiments. The results demonstrate that the proposed algorithm has a high matching accuracy: the matching accuracy is higher than 95% in complex scenes with similar texture, perspective change, rotation change, scale change and illumination change.
Conclusions: The proposed algorithm is robust to different types of images and also performs well in checking complex line matching results.
With the ability to provide real-time precise point positioning (PPP) service for China and surrounding areas, the PPP-B2b signal of the BeiDou Navigation Satellite System (BDS-3) has seen rapidly growing application in recent years. Performance evaluation is a vital issue for large-scale deployment in the future, including the accuracy of the broadcast orbit and clock corrections as well as the precision and convergence time of PPP. Based on observations collected in September at the Chinese stations of the international GNSS Monitoring and Assessment System (iGMAS), the accuracy of the PPP-B2b orbit and clock corrections and the positioning accuracy of the B1I+B3I and B1C+B2a signal combinations are carefully evaluated in this paper. The results show that the average accuracy of the PPP-B2b orbit products in the R, A and C directions is 0.1 m, 0.31 m and 0.3 m, respectively, and the root mean square (RMS) of the clock correction is 2.26 ns. In terms of PPP convergence, the B1I+B3I combination using GBM products converges fastest and reaches the highest final accuracy, followed by B1I+B3I (PPP-B2b), with B1C+B2a (PPP-B2b) last. In summary, the BDS-3 PPP-B2b signal is capable of providing a regional PPP service in China.
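Orbit errors such as the R/A/C values above are obtained by rotating an ECEF position difference into the radial/along-track/cross-track frame of the satellite. A minimal NumPy sketch of that rotation (the orbit values below are a toy circular-orbit example, not iGMAS data):

```python
import numpy as np

def ecef_to_rac(r, v, d_ecef):
    """Rotate an ECEF orbit-error vector d_ecef into the radial /
    along-track / cross-track (R, A, C) frame defined by the satellite
    position r and velocity v."""
    r = np.asarray(r, float)
    v = np.asarray(v, float)
    e_r = r / np.linalg.norm(r)                       # radial unit vector
    e_c = np.cross(r, v)
    e_c = e_c / np.linalg.norm(e_c)                   # cross-track (orbit normal)
    e_a = np.cross(e_c, e_r)                          # along-track completes the triad
    return np.array([e_r @ d_ecef, e_a @ d_ecef, e_c @ d_ecef])

# Toy check: circular equatorial orbit, position on the x-axis, velocity +y.
r = [26_560e3, 0, 0]
v = [0, 3.87e3, 0]
print(ecef_to_rac(r, v, [1.0, 0.0, 0.0]))  # a purely radial 1 m error
```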
Objectives: Land subsidence in urban areas causes loss of ground elevation, damages urban infrastructure and buildings, and affects surface runoff and the hydrological cycle. Monitoring the status of land subsidence and revealing its formation mechanism are of great significance for sustainable urban development. Methods: Using ALOS-PALSAR images from 2007 to 2011 and Radarsat-2 images from 2015 to 2019 as data sources, SBAS-InSAR technology was applied to obtain the land subsidence rate and time series, and geographical detectors were used to reveal the dominant driving factors of land subsidence and the interactions between them at the scale of planning units. Results: The results show that: ① The average land subsidence rates from 2007 to 2011 and from 2015 to 2019 were -3.53 mm/year and -1.48 mm/year, respectively. The hot spots of land subsidence from 2007 to 2011 were Hankou, the shore and north of Shahu Lake, the west of Nanhu Lake, and the Baishazhou area; from 2015 to 2019, they were Hankou, the north of Shahu Lake and the Baishazhou area. ② The temporal and spatial evolution of land subsidence in Wuhan is localized and staged: rapid subsidence occurs only in certain regions, shows different trends at different stages, and is closely related to regional natural conditions and human activities. ③ Hydrogeological conditions are a necessary condition for forming the spatial-temporal pattern of land subsidence in Wuhan, acting through interaction with factors such as ground load, underground space development and engineering construction. The interactive effects between engineering construction and hydrogeological conditions from 2007 to 2011 are significant, as are those between ground load and hydrogeological conditions from 2015 to 2019. Conclusions: The geographical detector can quantitatively identify the driving factors of land subsidence and the interactions between them.
The interactions between hydrogeological conditions and engineering construction, and between hydrogeological conditions and ground load, largely shaped the spatial variation of land subsidence from 2007 to 2011 and from 2015 to 2019, respectively. In the future, continuous monitoring of land subsidence and multi-scale research on its formation mechanism should be carried out to further enrich the theory and methodology of land subsidence research.
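The core of the geographical detector is the q statistic, which measures how much of the variance of a response (here, subsidence rate) a stratification by a candidate factor explains. A minimal sketch with invented numbers:

```python
import numpy as np

def factor_detector_q(y, strata):
    """Geographical-detector q statistic: share of the variance of y
    explained by a stratification, q = 1 - SSW / SST, where SSW is the
    within-strata sum of squares and SST the total sum of squares."""
    y = np.asarray(y, float)
    strata = np.asarray(strata)
    sst = y.size * y.var()
    ssw = sum(y[strata == s].size * y[strata == s].var()
              for s in np.unique(strata))
    return 1.0 - ssw / sst

# Toy example: subsidence rates perfectly separated by a factor -> q = 1.
y = np.array([1.0, 1.0, 5.0, 5.0])
strata = np.array([0, 0, 1, 1])
print(factor_detector_q(y, strata))  # 1.0
```

Interaction detection then compares q of the overlaid stratification of two factors against their individual q values.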
Discrete global grid systems are a preferred data model for supporting multi-source geospatial information fusion. Hexagonal grids have become popular in many applications because of their uniform adjacency. We design a uniform tile hierarchy on the surface of the icosahedron according to the characteristics of 4-aperture hexagonal refinement, using complex numbers to build a unified coding and operation model. We also design algorithms for interconverting geographic coordinates and codes and for querying neighborhood codes. Compared with traditional methods, the principle of the uniform tile hierarchy is simple and easy to understand, the algorithmic complexity is low, and the execution efficiency is high. The experiments indicate that the efficiency of converting geographic coordinates to codes and codes to geographic coordinates with the proposed algorithm is approximately 2.74 and 1.73 times that of the traditional method, respectively, and that the neighborhood code query efficiency is approximately 7.46 times that of the traditional method. As the grid level rises, the advantages of the proposed method become more obvious. The results of this paper are expected to provide theoretical and technical support for the unified organization, management, processing and analysis of multi-source Earth observation data.
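The icosahedral 4-aperture coding itself is more involved, but the complex-number view of hexagonal adjacency that underlies it can be sketched on the plane: cell centers are complex numbers, and the six neighbors are rotations of a unit step by 60 degrees, which is why every neighbor is equidistant.

```python
import cmath

# Six unit directions of a hexagonal lattice, as successive 60-degree
# rotations of 1 in the complex plane.
DIRECTIONS = [cmath.exp(1j * cmath.pi * k / 3) for k in range(6)]

def hex_neighbors(center, spacing=1.0):
    """Centers of the six cells adjacent to `center` (a complex number)."""
    return [center + spacing * d for d in DIRECTIONS]

nbrs = hex_neighbors(0 + 0j)
# Uniform adjacency: all six neighbors sit at the same distance.
print([round(abs(n), 6) for n in nbrs])  # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```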
Objectives: National parks are the main body of China's natural protected area system, and land cover mapping in national parks plays an important role in understanding the status of natural resources, identifying existing ecological security threats and responding to them quickly. This study attempts to develop an accurate and cost-effective model for mapping land cover in national parks. Methods: Two land cover classification methods were developed in the Google Earth Engine (GEE) environment by combining Sentinel-1 and Sentinel-2 images with topographic and textural derivatives to map land cover types in the Qianjiangyuan National Park. One used a pixel-based random forest (RF) classification algorithm; the other used object-oriented simple non-iterative clustering (SNIC) segmentation combined with the RF algorithm, in which the input features were first segmented into superpixel image objects and then classified by RF. To optimize the classification features, comparative experiments were designed on the cloud computing platform to find the feature combination with the highest classification accuracy, followed by a recursive feature elimination method to further screen the features. Results: Experimental results showed that both the pixel-based method and the object-oriented method performed well in land cover classification, with highest overall classification accuracies of 92.37% and 93.98%, respectively. Furthermore, integrating synthetic aperture radar (SAR) data into the classification substantially improved the accuracy of the pixel-based method, but had no apparent effect on the object-oriented classification.
Conclusions: The experiments showed that the land cover classification map generated by the SNIC+RF algorithm on the GEE platform was more complete, and the algorithm requires fewer features (15 features, including multispectral bands and spectral indices) and runs quickly on the GEE platform. This algorithm therefore deserves to be popularized in national park management practice.
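The recursive feature elimination step mentioned above can be sketched generically (this is a plain least-squares RFE in NumPy on synthetic data, not the GEE/RF implementation): fit a model, drop the weakest feature, and repeat until the target count remains.

```python
import numpy as np

def rfe(X, y, n_keep):
    """Minimal recursive feature elimination: repeatedly fit a
    least-squares model on standardized features and drop the feature
    with the smallest |coefficient| until n_keep features remain."""
    idx = list(range(X.shape[1]))
    while len(idx) > n_keep:
        Xs = X[:, idx]
        Xs = (Xs - Xs.mean(0)) / Xs.std(0)         # comparable coefficients
        coef, *_ = np.linalg.lstsq(Xs, y - y.mean(), rcond=None)
        idx.pop(int(np.argmin(np.abs(coef))))      # eliminate the weakest
    return idx

# Synthetic stand-in for a feature stack: only features 0 and 3 matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = 3 * X[:, 0] - 2 * X[:, 3] + 0.1 * rng.normal(size=200)
print(rfe(X, y, 2))  # [0, 3]: the two informative features survive
```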
Differential code biases (DCBs) have a great influence on ionosphere estimation with GPS observations and should be precisely calibrated when deriving the ionospheric slant total electron content (STEC). So far, the estimation of GPS satellite DCBs has mainly been based on ground-based GPS observation data. With the increasing number of low Earth orbit (LEO) satellites, DCB estimation for LEO receivers is becoming particularly important for topside ionosphere research. In this study, onboard GPS observations from the Swarm constellation are used to estimate GPS satellite DCBs and receiver DCBs. Since the constellation contains three satellites, two estimation schemes are designed: individual estimation and combined estimation. Compared with the individual estimation strategy, the combined estimation strategy yields more stable satellite DCBs, with a stability improvement of 16.6% for GPS satellites, and its satellite DCBs show better consistency with the reference DCBs provided by the two analysis centers.
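A structural point of any DCB adjustment is that the observations only constrain sums of receiver and satellite biases, so a datum such as a zero-mean condition over the satellite DCBs must be added. A minimal sketch with invented bias values (not Swarm data):

```python
import numpy as np

# Geometry-free observations give only b_rec + b_sat, so the design
# matrix is rank deficient by one; a zero-mean condition over the
# satellite DCBs is appended as a pseudo-observation to fix the datum.
n_rec, n_sat = 3, 5
rng = np.random.default_rng(1)
b_rec = rng.normal(0, 5, n_rec)
b_sat = rng.normal(0, 5, n_sat)
b_sat -= b_sat.mean()                      # the truth satisfies the datum

rows, obs = [], []
for i in range(n_rec):
    for j in range(n_sat):
        a = np.zeros(n_rec + n_sat)
        a[i] = 1.0                          # receiver bias
        a[n_rec + j] = 1.0                  # satellite bias
        rows.append(a)
        obs.append(b_rec[i] + b_sat[j])     # noise-free "P4" observation

constraint = np.r_[np.zeros(n_rec), np.ones(n_sat)]   # sum(b_sat) = 0
A = np.vstack(rows + [constraint])
y_obs = np.array(obs + [0.0])
x, *_ = np.linalg.lstsq(A, y_obs, rcond=None)
print(np.allclose(x[n_rec:], b_sat))  # True: satellite DCBs recovered
```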
The backscatter coefficient (sigma0) is one of the observations of satellite radar altimetry. It is related to the physical and geometric characteristics of the land surface under the influence of global and regional climate change. Sigma0 can be used for monitoring surface features under climate change, for calibrating and validating satellite altimeters, for inverting surface parameters (e.g. soil moisture, snow thickness), and in other fields. GlobeLand30, global land cover data with a resolution of 30 m, was produced in China. The Geophysical Data Record (GDR) data of Jason-2 were used to extract the Ku-band sigma0 data of the Tibetan Plateau (TP). Using the 2020 version of GlobeLand30 as the basis for surface classification, surface attributes were assigned to the latitude-longitude paired sigma0 data, yielding time-varying sigma0 sequences for different surface types. Singular spectrum analysis (SSA) interpolation was used to fill in missing data. The trend and period information of sigma0 for the entire TP and for the different surface types were extracted with SSA, and the periods were analyzed by FFT. The time-varying sequences of sigma0 under different surface attributes show different characteristics: (1) Sigma0 is higher in water and wetland areas and lower in permanent snow and ice areas. (2) There are stable annual, semi-annual and quarterly signals in sigma0 over the TP. The surface properties of artificial surfaces, bare land and shrubland are stable, and their annual sigma0 changes are not significant; sigma0 in the other regions has significant annual and semi-annual periods. The amplitude of the semi-annual and quarterly signals varies with the nature of the surface. (3) Sigma0 over the TP shows an increasing trend, caused by climate change on the TP and surface wetting.
The sigma0 data of forest, grassland and shrubland show an increasing trend, while the sigma0 of wetland shows a decreasing trend. Besides geophysics and ocean dynamics research, satellite radar altimetry is also feasible for monitoring the land environment. The sigma0 obtained by the altimeter is closely related to ground properties and climate change. The effects of different geographical attributes on sigma0 in the TP show different time-varying behavior: (1) the dominant period of sigma0 change is the annual cycle, and different land surface states have different periodic attributes, which is related to the response of different land surfaces to climate change; (2) the sigma0 value in water and wetland areas is significantly higher than in other areas, which may be caused by differences in the complex dielectric constant of the surface.
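The FFT period identification described above can be illustrated on a synthetic monthly backscatter series (invented amplitudes, not Jason-2 data): the annual and semi-annual lines appear as the two strongest spectral peaks.

```python
import numpy as np

# Synthetic 10-year monthly sigma0 series with annual and semi-annual terms.
n_months = 120
t = np.arange(n_months)
sigma0 = 10 + 2 * np.sin(2 * np.pi * t / 12) + 0.5 * np.sin(2 * np.pi * t / 6)

spec = np.abs(np.fft.rfft(sigma0 - sigma0.mean()))
freqs = np.fft.rfftfreq(n_months, d=1.0)        # cycles per month
peaks = freqs[np.argsort(spec)[-2:]]            # two strongest spectral lines
print(sorted(1.0 / peaks))                      # periods of ~6 and ~12 months
```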
Objectives: Terrain simplification, which uses a minimal amount of effective terrain information to express the overall terrain, can resolve the contradiction between massive terrain data and limited computer hardware, and meet the needs of multi-scale terrain applications. However, most existing terrain simplification algorithms find it difficult to account for local fluctuations and the overall characteristics of the terrain at the same time. To address these deficiencies, a terrain-adaptive simplification algorithm based on the centroidal Voronoi diagram is proposed. Methods: The centroidal Voronoi diagram has the property that its sites converge toward regions of higher density, while in terrain simplification more data points should be assigned to regions with larger relief and rougher surfaces. Therefore, a centroidal Voronoi diagram generated with topographic relief as the density function is used to simplify the terrain adaptively. First, based on the density function given by the topographic relief extracted from the digital elevation model (DEM), the centroidal Voronoi diagram is generated using Lloyd's algorithm. Second, the DEM is simplified and reconstructed from the sites of the centroidal Voronoi diagram, which are distributed in areas with large topographic relief, and from the Voronoi vertices, which mostly lie on terrain feature lines. Third, the effect of the simplification is verified qualitatively by comparing feature lines, such as collection waterlines, ridge and valley lines and contours, of the reconstructed and the original terrain. Finally, the root mean square errors (RMSE) of the proposed algorithm and the 3D Douglas-Peucker algorithm are computed and compared to assess the simplification quantitatively. Results: (1) Feature lines, such as collection waterlines, ridge and valley lines and contours, extracted from the simplified terrain and from the original terrain overlap to a high degree.
The simplified terrain therefore preserves the features of the original terrain well, owing to the use of Voronoi vertices that mostly lie on the terrain feature lines. (2) The RMSE of the proposed algorithm is lower than that of the 3D Douglas-Peucker algorithm at every level of simplification. This is mainly because the proposed algorithm considers both the centroidal Voronoi sites distributed in areas with large topographic relief and the centroidal Voronoi vertices mostly distributed on the terrain feature lines, while the 3D Douglas-Peucker algorithm only preserves points on the ridge and valley lines. Conclusions: The proposed centroidal Voronoi diagram-based algorithm simplifies terrain using points in areas with large topographic relief and points on the ridge and valley lines. Since both local fluctuations and the overall characteristics of the terrain are taken into consideration, the algorithm achieves higher accuracy than the 3D Douglas-Peucker algorithm while maintaining the overall features of the terrain.
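The density-driven behavior of Lloyd's algorithm can be sketched on a toy grid (a stand-in for a DEM, with "relief" as the weight; this is a plain discrete Lloyd iteration, not the authors' implementation): sites drift toward the high-weight region, which is exactly why relief-weighted centroidal Voronoi sites concentrate in rugged terrain.

```python
import numpy as np

def weighted_lloyd(points, weights, n_sites, iters=50, seed=0):
    """Discrete Lloyd iteration toward a centroidal Voronoi layout:
    each site moves to the weighted centroid of its Voronoi region,
    so sites concentrate where the density (terrain relief) is high."""
    rng = np.random.default_rng(seed)
    sites = points[rng.choice(len(points), n_sites, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - sites[None, :, :], axis=2)
        owner = d.argmin(axis=1)                 # nearest-site assignment
        for k in range(n_sites):
            m = owner == k
            if m.any():
                w = weights[m]
                sites[k] = (points[m] * w[:, None]).sum(0) / w.sum()
    return sites

# Toy "DEM": high relief (weight 10) on the right half of the unit square.
g = np.linspace(0, 1, 20)
pts = np.array([[x, y] for x in g for y in g])
wts = np.where(pts[:, 0] > 0.5, 10.0, 1.0)
sites = weighted_lloyd(pts, wts, 16)
print(sites[:, 0].mean() > 0.5)  # True: sites drawn to the high-relief half
```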
Sampling inspection is a fundamental approach to quality management and control of surveying and mapping products. The sampling scheme, which includes the sampling plan and the lot-judging rules, is the critical element of sampling inspection, as it directly determines the result of quality inspection. This paper focuses on sampling schemes and investigates the validity and suitability of sampling inspection by attributes. A new concept, the quality uncertain interval (QUI), which describes the characteristics of a sampling scheme, is constructed based on probability theory and hypothesis testing. Because one sampling scheme determines exactly one QUI, both the QUI length and the quality limits of Type I and Type II errors are used to evaluate the effectiveness and validity of sampling schemes. Four typical sampling schemes adopted in GB/T 24356-2009 and six simulated sampling schemes are selected for testing and analysis. Numerical results show that the QUI is a rational and appropriate tool for controlling inspection risk, both for geographic products and for attributes-based sampling of industrial products in general.
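The probabilistic core of an attributes sampling plan is the acceptance probability (the operating-characteristic curve), from which the Type I and Type II risks follow. A minimal sketch with an invented (n, c) plan:

```python
from math import comb

def accept_prob(n, c, p):
    """Probability of accepting a lot under an (n, c) attributes plan:
    accept when a random sample of n items contains at most c
    defectives, given true defective rate p (binomial model)."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

# The two sides of the same plan (n = 20, c = 1, values invented):
p_good = accept_prob(20, 1, 0.01)   # good lot: high acceptance probability
p_bad = accept_prob(20, 1, 0.15)    # bad lot: low acceptance probability
print(p_good > 0.95 > 0.5 > p_bad)  # True
```

The producer's risk is 1 minus the acceptance probability at the acceptable quality level; the consumer's risk is the acceptance probability at the limiting quality level.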
Objectives: The structure of the tourism flow network is of great significance for understanding the choices of tourists and the roles of attractions in the network. Previous studies mainly focused on the tourism flow network of all tourists, while the disparities between the tourism flow networks of different types of tourists have not been thoroughly studied. Therefore, we analyze tourism flow networks constructed from different types of tourism routes extracted from online travel notes. Methods: Based on online travel notes, text mining and social network analysis are used to construct and analyze tourism flow networks. First, text mining is used to extract the multi-dimensional preferences of tourists, and tourists are clustered into different groups. Second, the destination sequences of the different tourist groups are used to construct separate tourism flow networks. Finally, the structural characteristics of these networks and the role of each destination node are analyzed from multiple perspectives. Results: The experiment takes Yunnan Province as the case study area; tourists who travelled in Yunnan in 2019 are clustered into five groups, and five tourism flow networks are constructed. The results show that: (1) The tourism flow network structures of the five clusters are distinct, demonstrating disparities in the spatial interaction patterns among travel destinations and different degrees of network centralization. (2) The travel destinations of cost-sensitive and time-sensitive tourists are primarily a few popular attractions and some attractions around them, and the networks of these two types of tourists show a single-core structure. The travel destinations of the other types of tourists are more diverse, their travel routes have a larger spatial span, and their networks present a typical multi-core structure.
(3) Some travel destinations, such as Lugu Lake, Xizhou and Dian Lake, play opposite roles in the tourism flow networks of different clusters of tourists. Conclusions: Our research helps tourism management departments clarify the characteristics of tourism flows and optimize the cooperation mechanism of travel destinations in the tourism network. In future work, we will focus on exploring the factors that shape the characteristics of different tourism flow networks, and on applying the results to personalized tourism route recommendation.
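A core-versus-periphery reading of such networks rests on simple node centralities computed from the destination transitions. A toy sketch (the routes below are invented, only the place names echo the study area):

```python
from collections import Counter

# Invented destination transitions: each pair is one tourist moving
# from one destination to the next in a travel-note sequence.
routes = [
    ("Kunming", "Dali"), ("Dali", "Lijiang"), ("Kunming", "Lijiang"),
    ("Lijiang", "Lugu Lake"), ("Kunming", "Dali"), ("Dali", "Xizhou"),
]

# Degree centrality of the flow network: how often a destination
# participates in a transition, as origin or destination.
deg = Counter()
for src, dst in routes:
    deg[src] += 1
    deg[dst] += 1
print(deg.most_common(2))  # [('Dali', 4), ('Kunming', 3)]
```

A single-core network concentrates degree on one node; a multi-core network spreads it over several comparable hubs.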
Objectives: Spectral analysis studies the spectra of potential field anomalies with wavenumber as the independent variable, in order to solve problems of anomaly conversion, filtering, and forward and inverse modelling in practical work. The spectrum of a potential field anomaly carries information on the spatial distribution, lithology and tectonic characteristics of the subsurface strata, which aids the interpretation of geophysical data and provides information for further understanding the study area. The amplitude spectrum is directly related to the burial depth and width of the subsurface geological body, while the phase spectrum reflects its horizontal position and dip angle. In previous studies, however, most work exploited the fast computation of potential field data in the frequency domain, carrying out extensive forward modelling and applying the relationship between the spectrum and the parameters of the geological body for inversion; the spectral characteristics of different geological bodies and the specific effects of their parameters on the spectrum have been studied much less. Moreover, there are few studies on joint inversion using different spectra in frequency-domain inversion.
Methods: To better study gravity anomalies in the frequency domain, this paper exploits the simple, solvable, easily programmed and fast frequency-domain expressions for the forward and inverse modelling of gravity anomalies. The spectral curves of various models are calculated, their characteristics and the effects of changes in the model parameters are analyzed in detail, the interpretation of amplitude-spectrum and phase-spectrum inversion is discussed, and a frequency-domain joint inversion method is proposed to accurately invert the geometric parameters of the models. Results: The gravity anomaly spectrum of a horizontal cylinder is of the periodic fluctuation type, with a monotonically decreasing amplitude spectrum and a phase spectrum that is a straight line with negative slope; the gravity anomaly spectrum of a tilted thin plate fluctuates periodically, with a monotonically decreasing amplitude spectrum and a fluctuating phase spectrum; the gravity anomaly spectrum of a tilted thick plate fluctuates periodically, with both a fluctuating amplitude spectrum and a fluctuating, progressive phase spectrum. By analyzing the influence of changes in the thick plate parameters on the model spectrum, the trend of the subsurface geological body can be judged from the spectral changes. The amplitude spectrum determines the burial depth of the model, the change in the position of its first zero point reflects the horizontal extent of the model, and the phase spectrum is directly related to the model parameters. The frequency-domain joint inversion proposed in this paper effectively combines the advantages of amplitude-spectrum and phase-spectrum inversion to accurately calculate the geometric parameters of the model.
The method was applied to the inversion of the target stratum in the Wuqing Sag, where the precise depth and width of the target stratum along two measured sections were obtained, providing a basis for further seismic studies in the area.
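The depth-from-amplitude-spectrum idea for the horizontal cylinder can be demonstrated numerically (synthetic profile, not the Wuqing Sag data): the anomaly falls off as h/(x² + h²), its amplitude spectrum decays as exp(-h·k), so the burial depth is the slope of the log amplitude spectrum.

```python
import numpy as np

# Synthetic horizontal-cylinder gravity anomaly along a profile.
h = 3.0                                  # burial depth (profile units)
x = np.arange(-2048, 2048) * 0.1         # profile coordinates, dx = 0.1
dg = h / (x**2 + h**2)                   # anomaly shape (constants dropped)

spec = np.abs(np.fft.rfft(dg))
k = 2 * np.pi * np.fft.rfftfreq(x.size, d=0.1)   # angular wavenumber

# Fit ln|A(k)| = a - h*k over the low-wavenumber band to recover h.
sel = slice(1, 100)
slope = np.polyfit(k[sel], np.log(spec[sel]), 1)[0]
print(round(-slope, 1))  # 3.0 -- the recovered depth matches h
```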
Compared with traditional single-polarization SAR, fully polarimetric SAR obtains richer polarimetric scattering information and describes the geometric and physical characteristics of a target more comprehensively. To make full use of the polarimetric scattering information of ground objects, this paper uses the polarimetric coherency matrix, which describes distributed targets, and a polarimetric likelihood ratio test (PolLRT) based on the complex Wishart distribution to accurately evaluate the temporal similarity between master and slave image blocks. Compared with the traditional method, this method considers not only the correlation within each polarization channel but also the cross-correlation between different polarization channels, improving the matching performance of the time-series polarimetric information. In the real-data experiment, two fully polarimetric Unmanned Aerial Vehicle SAR (UAVSAR) acquisitions are used as experimental data, and external global positioning system (GPS) deformation data are used as reference. The experimental results show that the proposed algorithm has higher deformation extraction accuracy and more robust performance under different matching window sizes.
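One well-known form of such a complex-Wishart likelihood-ratio statistic (following Conradsen et al.; the paper's exact PolLRT formulation may differ) compares two multi-look coherency matrices, reaching its maximum (zero log) when they are identical:

```python
import numpy as np

def wishart_lnq(Sx, Sy, n, q=3):
    """Log likelihood-ratio for equality of two complex-Wishart
    distributed q x q polarimetric coherency matrices, each averaged
    over n looks (Sx, Sy are the sample coherency matrices)."""
    X, Y = n * Sx, n * Sy                       # back to Wishart sums
    ldet = lambda M: np.log(np.linalg.det(M).real)
    return n * (2 * q * np.log(2) + ldet(X) + ldet(Y) - 2 * ldet(X + Y))

I3 = np.eye(3, dtype=complex)
same = wishart_lnq(I3, I3, n=16)       # identical blocks: statistic ~ 0
diff = wishart_lnq(I3, 4 * I3, n=16)   # dissimilar blocks: strongly negative
print(abs(same) < 1e-9, diff < -10)    # True True
```

Thresholding this statistic gives the similarity decision between master and slave blocks.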
Exploring the relationship between the urban ecological environment and human activities is an important research topic in the current urbanization process. With the development of the big data era, the multi-source data ubiquitous on the Internet have been deeply mined, playing an important role in promoting research on the urban ecological environment. Based on multi-source data, this paper constructs human activity indicators (residential area walkability index, street vitality index, urban function mixing index) using points of interest (POIs), OpenStreetMap (OSM) and residential area data, and an urban ecological environment indicator (the remote sensing ecological index) using remote sensing images. Combining machine learning models such as polynomial regression (PLR), random forest regression (RFR), eXtreme Gradient Boosting regression (XGB) and support vector regression (SVR), regression analyses between the urban ecological environment and the human activity indicators are carried out. By comparing the performance of the different models on this data set, the relationship between the urban ecological environment and human activities is revealed. We demonstrate the application of our method in a case study of Nanchang city.
The results show that: ① The three human activity indexes are all high in the center and gradually decrease toward the surroundings, while the urban ecological environment indicator shows the opposite trend. ② Among the models tested on this data set, XGB has the best regression performance, followed by PLR. ③ There is a strong negative correlation between the urban ecological environment and human activities; the street vitality index and the urban function mixing index are more strongly related to the urban ecological environment, while the residential area walkability index is less related. ④ In areas where human activities have little impact, the urban ecological environment is disturbed by other factors, resulting in low prediction accuracy, while the prediction accuracy in areas with intense human activities is high.
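The model-comparison logic can be sketched with plain NumPy polynomial fits rather than the XGB/RFR/SVR models of the paper (synthetic data; the nonlinear negative relation is invented for illustration): fit models of different capacity and compare their R² on the same data.

```python
import numpy as np

# Synthetic stand-in: a nonlinear negative relation between a
# human-activity index and an ecological index.
rng = np.random.default_rng(42)
activity = rng.uniform(0, 1, 300)
ecology = 1.0 - activity**3 + 0.05 * rng.normal(size=300)

def r2(y, yhat):
    """Coefficient of determination."""
    return 1 - np.sum((y - yhat)**2) / np.sum((y - y.mean())**2)

scores = {}
for deg in (1, 3):                       # linear vs cubic polynomial fit
    coef = np.polyfit(activity, ecology, deg)
    scores[deg] = r2(ecology, np.polyval(coef, activity))
print(scores[3] > scores[1])  # True: the flexible model captures the curve
```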
High-precision registration of point cloud data is the key to ensuring the integrity of three-dimensional data on the surfaces of spatial objects. To handle the position, attitude and scale differences between the point cloud data of adjacent stations, a new registration method using a dual quaternion representation under point and planar feature constraints is proposed. First, dual quaternions are used to represent the rotation matrix and translation vector of the spatial similarity transformation, and on this basis the scale factor is taken into account. The vertical and parallel spatial topological relationships between the vectors constructed from in-plane and out-of-plane points and the normal vector of the plane are used as constraint conditions of the spatial similarity transformation, and the adjustment model is constructed under the least squares rule. Then, the Levenberg-Marquardt method is introduced to solve the adjustment model, avoiding the iteration divergence caused by poor initial values or by near-singularity of the symmetric matrices constructed from the Jacobian. Finally, two sets of experiments are compared with existing methods. The experimental results show that the method, which accounts for the scale factor and realizes the spatial similarity transformation with dual quaternions under point and planar feature constraints, has strong practical value.
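For reference, the same seven-parameter problem (scale, rotation, translation) has a classical closed-form SVD solution (Umeyama/Horn), shown below as a stand-in for the dual-quaternion adjustment; the simulated "station scans" are invented:

```python
import numpy as np

def similarity_transform(P, Q):
    """Closed-form scale/rotation/translation aligning source points P
    (n x 3) to target points Q, via SVD of the cross-covariance
    (the classical Umeyama solution of the 7-parameter problem)."""
    mp, mq = P.mean(0), Q.mean(0)
    Pc, Qc = P - mp, Q - mq
    U, S, Vt = np.linalg.svd(Qc.T @ Pc)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))   # guard against reflection
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (Pc**2).sum()
    t = mq - s * R @ mp
    return s, R, t

# Recover a known transform from simulated adjacent-station points.
rng = np.random.default_rng(3)
P = rng.normal(size=(50, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = 1.5 * P @ R_true.T + np.array([2.0, -1.0, 0.5])
s, R, t = similarity_transform(P, Q)
print(round(s, 6), np.allclose(R, R_true))  # 1.5 True
```

Unlike the iterative adjustment in the paper, this closed form needs no initial values, but it also cannot incorporate the planar-feature constraints.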
Objectives: Observation quality is one of the main indexes for evaluating the performance of a satellite navigation system. With the global BeiDou navigation satellite system (BDS-3) having officially opened its global services, high-precision and high-reliability positioning, navigation and timing (PNT) services are prerequisites for BeiDou innovation, development and global use. Since multipath is a major error in the processing of global navigation satellite system (GNSS) observations, rapid and high-precision multipath mitigation has always been a focus of GNSS research. Considering that the traditional multipath modelling strategy, namely fitting the trend term first and the random term afterwards, cannot obtain a globally optimal solution, this research proposes a real-time multipath correction model based on prior constraints. Methods: First, a regularization algorithm combined with a total variation term is used to denoise and refine the estimated BeiDou (BDS) multipath series. Second, the trend and random parts of the multipath delays are processed separately but modelled together, with the least squares plus autoregression (LS+AR) strategy used to estimate the model coefficients in one step. Third, to meet the requirements of real-time or near-real-time BDS applications, the multipath mitigation model is further optimized based on prior information. Results: Real-time BDS precise point positioning (PPP) experiments indicate that the accuracy of regional BDS (BDS-2) PPP can be improved by 10.6%-64.9%, 0.0%-59.1% and 12.6%-67.2% (B1I/B3I) in the east (E), north (N) and up (U) directions, respectively. Moreover, the accuracy of station coordinates using BDS-3 observations can be improved by at least 13.9%, 60.0% and 45.9% (B1I/B3I) and 19.1%, 46.5% and 23.9% (B1C/B2a) in the E, N and U directions, respectively.
Conclusions: The proposed strategy for mitigating multipath delays can therefore substantially improve BDS observation quality.
To address the high computational complexity of trajectory sub-segment similarity matching and the sensitivity of the results to trajectory noise, this paper proposes a multi-level trajectory code tree structure that integrates adaptive Hilbert geographic grid coding. A hierarchical organization and a sub-segment subordination structure are formed from the whole trajectory down to the smallest segment, and a sub-segment similarity matching algorithm is designed on the basis of the trajectory segment code tree, transforming complex spatial computation into string matching and greatly reducing the computational complexity of sub-segment similarity matching. Experiments on real trajectory data show that the proposed method achieves more than an order of magnitude improvement in efficiency over the classical distance-based similarity measure without affecting accuracy.
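The Hilbert grid coding that makes string matching possible can be sketched with the standard Hilbert-curve index function (a generic textbook implementation, not the paper's adaptive multi-level variant): nearby cells receive nearby one-dimensional codes.

```python
def xy2d(n, x, y):
    """Hilbert-curve index of cell (x, y) on an n x n grid (n a power
    of two) -- the kind of 1-D geographic code that lets sub-segment
    similarity be tested with string operations instead of geometry."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                     # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Consecutive Hilbert codes always belong to adjacent grid cells:
cells = sorted((xy2d(8, x, y), x, y) for x in range(8) for y in range(8))
steps = [abs(a[1] - b[1]) + abs(a[2] - b[2]) for a, b in zip(cells, cells[1:])]
print(set(steps))  # {1}: the code order traces a connected path
```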
Individual location prediction is of great significance in applications such as precise prevention and control of infectious diseases and scientific planning of public facilities. Existing location prediction algorithms mainly mine and model the longitudinal characteristics of an individual's own historical trajectory, and seldom consider the regular patterns of horizontally similar users. This paper therefore proposes an individual location prediction algorithm, built on a graph convolutional network (GCN) and long short-term memory (LSTM) framework, that combines the trajectory characteristics of horizontally similar users with the individual's vertical historical regularity. First, a user trajectory similarity measure is constructed to screen users highly similar to the target user; then, the graph convolution model extracts the trajectory features of these similar users; finally, the LSTM framework extracts historical trajectory features and fuses them with the similar-user features to predict the individual's next location. Experiments on more than 80,000 users in one city over four consecutive working days show that the accuracy of the proposed method decreases as the prediction time step increases, and that night-time predictions are significantly more accurate than daytime ones; nevertheless, the method outperforms all baseline models by more than 10%. With a 15-minute prediction time step, the model reaches an accuracy of 80.45%.
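The first step, screening horizontally similar users, can be illustrated with a simple set-based similarity over visited grid cells. This Jaccard measure and the toy user data are hypothetical stand-ins for the paper's actual trajectory similarity algorithm:

```python
def jaccard_similarity(traj_a, traj_b):
    """Similarity between two users as the Jaccard index of the sets
    of grid cells their trajectories visit."""
    a, b = set(traj_a), set(traj_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def screen_similar_users(target, candidates, k=2):
    """Return the ids of the k candidates most similar to the target
    trajectory -- these users would feed the GCN branch."""
    scored = sorted(candidates.items(),
                    key=lambda kv: jaccard_similarity(target, kv[1]),
                    reverse=True)
    return [uid for uid, _ in scored[:k]]

users = {
    "u1": ["c1", "c2", "c3", "c4"],
    "u2": ["c1", "c2", "c9", "c9"],
    "u3": ["c7", "c8"],
}
top = screen_similar_users(["c1", "c2", "c3"], users, k=2)
```

Only the screened users' trajectories enter the graph convolution, which keeps the horizontal branch focused on genuinely comparable movement patterns.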
To achieve global absolute positioning of drones in global navigation satellite system (GNSS)-denied environments, a reference satellite image retrieval method that aggregates deep learning features is proposed. First, a pre-trained deep learning model extracts local convolutional features from the drone and satellite images. Then, a vector of locally aggregated descriptors (VLAD) produces a global representation of each image. Finally, similarity retrieval is performed on the global features, and a post-processing step of precise matching and re-ranking of the retrieval results further improves the retrieval accuracy. A new satellite reference image dataset for absolute drone positioning is designed and used for testing. The results show that the method retrieves the correct satellite reference image for a drone image in the adaptation area with an accuracy of 76.07%, providing a basis for subsequent vision-based absolute positioning of drones.
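The aggregation step, turning a set of local convolutional descriptors into one global vector, can be sketched with a plain NumPy VLAD encoding. The cluster count, descriptor dimension, and random inputs below are illustrative, not the paper's settings:

```python
import numpy as np

def vlad_encode(descriptors, centroids):
    """Aggregate local descriptors into a VLAD vector: sum the residuals
    to the nearest centroid per cluster, then power- and L2-normalize
    the flattened result."""
    k, d = centroids.shape
    # hard-assign each descriptor to its nearest centroid
    dist2 = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    assign = dist2.argmin(axis=1)
    v = np.zeros((k, d))
    for i, c in enumerate(assign):
        v[c] += descriptors[i] - centroids[c]
    v = np.sign(v) * np.sqrt(np.abs(v))       # power (signed sqrt) normalization
    flat = v.ravel()
    norm = np.linalg.norm(flat)
    return flat / norm if norm > 0 else flat

rng = np.random.default_rng(1)
desc = rng.normal(size=(200, 64))   # stand-in for local CNN features
cents = rng.normal(size=(8, 64))    # visual vocabulary (k-means centroids)
g = vlad_encode(desc, cents)
```

Because every image maps to a fixed-length unit vector, drone and satellite images can be compared by a simple dot product during retrieval, regardless of how many local features each image produced.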
Existing population spatialization methods mainly train a regression model on administrative-unit-level data and transfer it to the grid-cell level to allocate population. However, the large scale difference between the analysis units used for training and estimation causes cross-scale model transfer errors. Meanwhile, cell-level feature modelling considers only the attributes of the current cell, so the innate spatial association between cells is lost and each cell is treated in isolation. This paper therefore proposes a population spatialization method based on random forest that considers pixel-level attribute grading and spatial association (PAG-SA). In cell-level feature modelling, we first construct night-time light grading features embedded with building-category constraints based on natural breaks, and count the grid proportion of each grading level at the administrative-unit level as training input to reduce the cross-scale error; second, the influence of neighbouring points of interest (POIs) on the current cell, including its distance attenuation, is modelled with kernel density estimation; third, based on overlay analysis, the numbers of POIs within the contours of different building types are counted to refine the feature modelling. To verify the effectiveness of the proposed method, we selected Wuhan as the experimental area and compared its spatialization accuracy at the street scale with the WorldPop, GPW and PopulationGrid_China datasets. The results show that the mean absolute error of PAG-SA is only 1/6-1/3 that of the comparison datasets. The influence of feature composition, grid size and kernel density bandwidth on accuracy is also discussed.
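The second step, modelling the influence of neighbouring POIs with distance attenuation, can be sketched as a Gaussian kernel density evaluated at each cell centre. The bandwidth and the coordinates below are illustrative assumptions:

```python
import numpy as np

def poi_kernel_density(cell_centers, poi_coords, bandwidth):
    """Gaussian kernel density of POIs at each grid-cell centre:
    nearby POIs contribute strongly, distant ones decay smoothly,
    so the feature carries spatial association between cells."""
    d2 = ((cell_centers[:, None, :] - poi_coords[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2)).sum(axis=1)

cells = np.array([[0.0, 0.0],    # cell near the POI cluster
                  [5.0, 5.0]])   # cell far from it
pois = np.array([[0.5, 0.0], [1.0, 1.0]])
dens = poi_kernel_density(cells, pois, bandwidth=1.0)
```

The bandwidth controls how far a POI's influence reaches, which is why the abstract examines its effect on spatialization accuracy alongside grid size and feature composition.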
Because a wide-swath altimeter carries only a nadir-pointing radiometer, the wet tropospheric delay across the swath can only be corrected by models or substituted with the nadir radiometer measurement, which lowers the correction accuracy. To improve the accuracy of the wet tropospheric correction (WTC) across the swath, an optimum interpolation method that fuses the nadir radiometer WTC is proposed and verified using the SWOT (Surface Water and Ocean Topography) wide-swath altimeter as an example. Across the swath, when WTCs from the ERA5 (ECMWF Reanalysis, 5th generation) dataset are used, the residual WTC after optimum interpolation is reduced by 40% relative to simply substituting the nadir radiometer WTC. When simulated WTCs generated from the spectrum of radiometer-measured WTCs are used, the residual WTC after optimum interpolation is reduced by 80% at all latitudes relative to the nadir substitution. In addition, the optimum interpolation method is considerably more accurate than nadir substitution in regions of high water vapour variability.
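The core of the approach, optimum interpolation of nadir observations onto the cross-swath grid, can be sketched in one-dimensional form as the standard objective-analysis update. The Gaussian covariance model, length scale, and noise levels are illustrative assumptions, not the paper's values:

```python
import numpy as np

def optimum_interpolation(x_grid, x_obs, innovations, L=1.0,
                          sigma_b=1.0, sigma_o=0.1):
    """Spread observation innovations (obs minus background) onto grid
    points with the OI gain K = B_go (B_oo + R)^-1."""
    def gauss_cov(a, b):
        d = a[:, None] - b[None, :]
        return sigma_b**2 * np.exp(-0.5 * (d / L) ** 2)
    B_go = gauss_cov(x_grid, x_obs)           # grid-to-obs covariance
    B_oo = gauss_cov(x_obs, x_obs)            # obs-to-obs covariance
    R = sigma_o**2 * np.eye(len(x_obs))       # observation noise
    K = B_go @ np.linalg.inv(B_oo + R)
    return K @ innovations

# cross-track positions (hypothetical units) and two nadir-track obs
x_grid = np.linspace(0.0, 4.0, 9)
x_obs = np.array([1.0, 3.0])
innov = np.array([0.5, -0.2])   # radiometer WTC minus model WTC
update = optimum_interpolation(x_grid, x_obs, innov)
```

At a grid point coinciding with an observation, the update nearly reproduces the innovation (since the observation noise is small relative to the background variance), while points farther across the swath receive a smoothly attenuated correction instead of a flat nadir substitution.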