2018 Vol. 43, No. 8
As the basis of point-feature label placement, the candidate label position directly affects the implementation and quality of labeling algorithms. However, previous research has mainly used fixed-position and slider models, which restrict further improvement of labeling results when conflicts with point, line, and area background features are considered. In this context, a candidate label region model based on plane collision detection theory is proposed to accomplish conflict-free labeling by utilizing the zone surrounding the point feature. Compared with other studies, this algorithm greatly improves label quantity and quality to meet the demands of map production.
This paper presents a morphing method for polylines based on shape matching that preserves local neighborhood structures. Although the absolute distance between two points may change significantly during map generalization, the global context and the neighborhood structure of points are generally well preserved and more stable. We first introduce a shape matching method combining shape context and relaxation labeling, which takes advantage of both the global context and the local neighborhood structure. Using shape context descriptors, we obtain the matching costs between points, which are used to initialize the matching probability matrix for relaxation labeling. Afterwards, we iterate the support functions to update the matching probability matrix until the optimal matching result is reached. The two polylines are then divided into two groups of sub-segments according to the matching result. Finally, morphing is performed for every pair of corresponding sub-segments using linear interpolation. Extensive experiments show that our method preserves both the global context and the local neighborhood structures well and improves the accuracy of the morphing transformation.
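As an illustration of the final step only, the following is a minimal Python sketch of linearly interpolating one matched pair of sub-segments; the arc-length resampling scheme and the function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def morph_subsegment(src, dst, t, n_samples=20):
    """Linearly interpolate between two matched sub-segments.

    src, dst : (k, 2) vertex arrays of one matched sub-segment pair
    t        : morphing parameter in [0, 1] (0 = source shape, 1 = target shape)
    """
    def resample(line, n):
        # Resample a polyline to n points, evenly spaced by arc length
        seg = np.linalg.norm(np.diff(line, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])
        u = np.linspace(0.0, s[-1], n)
        return np.column_stack([np.interp(u, s, line[:, 0]),
                                np.interp(u, s, line[:, 1])])

    a = resample(np.asarray(src, float), n_samples)
    b = resample(np.asarray(dst, float), n_samples)
    # Straight-line (linear) interpolation of corresponding points
    return (1.0 - t) * a + t * b

# Example: one matched sub-segment pair at the halfway frame (t = 0.5)
mid_frame = morph_subsegment([(0, 0), (1, 0.2), (2, 0)],
                             [(0, 0), (1, 1.0), (2, 0)], 0.5)
```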
The traditional clustering method is inappropriate for the gradual merging of buildings, especially when there is a large span between the two scale datasets. To resolve this problem, this paper proposes a multilevel identification approach to structured building clusters by inserting a series of intermediate scales between the initial scale and the target scale. Based on spatial cognition and Gestalt principles, the structures of building groups are abstracted and summarized into five typical patterns, and an identification approach for building groups is presented based on a compactness network diagram and these five typical patterns. Firstly, the neighborhood relationships are captured and the compactness network diagram is constructed with a Delaunay triangulation, and strongly compact loops, weakly compact loops and extended lines are generated to detect the typical patterns. Then, the intermediate-scale datasets are obtained under the given constraints and thresholds, so as to achieve continuous visualization of multi-scale spatial data. Finally, experiments show that the identified results reflect the spatial distribution characteristics of buildings more clearly and are more consistent with human cognition.
Trajectory data have been extensively used in human mobility studies. Activities, especially those conducted in a static and local space, are basic elements of people's daily life, and they appear as stays in trajectories. Hence, detecting stays from trajectories has become a basis for many activity-oriented studies. The temporal sampling interval (TSI) of trajectory data can affect the result of stay detection; however, such impacts have not been systematically studied yet. This study proposes a probability-based framework that quantifies the probability that an activity with a specific duration can be detected as a stay under different TSIs. Moreover, this framework supports further analysis of how the daily movement network evolves with different TSIs. We demonstrate the impacts of TSIs on stay detection and movement network construction using a trip survey dataset and a mobile phone location dataset of Shenzhen, China, respectively. This study provides both methodological and empirical guidance for selecting a TSI as well as for estimating the results of activity-oriented studies.
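For context, the sketch below shows a common rule-based stay detector of the kind whose sensitivity to the TSI is analyzed here; the distance and duration thresholds are illustrative assumptions, and the paper's probability framework itself is not reproduced.

```python
import numpy as np

def detect_stays(times, xy, dist_thresh=200.0, dur_thresh=600.0):
    """Detect stays in a sampled trajectory.

    times : (n,) sorted timestamps in seconds
    xy    : (n, 2) projected coordinates in metres
    A stay is reported when consecutive points remain within `dist_thresh`
    metres of an anchor point for at least `dur_thresh` seconds.
    """
    stays, i, n = [], 0, len(times)
    while i < n:
        j = i
        # Grow the candidate stay while points remain near the anchor point
        while j + 1 < n and np.linalg.norm(xy[j + 1] - xy[i]) <= dist_thresh:
            j += 1
        if times[j] - times[i] >= dur_thresh:
            stays.append((times[i], times[j], xy[i:j + 1].mean(axis=0)))
            i = j + 1
        else:
            i += 1
    return stays  # list of (start_time, end_time, centroid)
```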
The automatic selection of roads is a core task in road network generalization. Current research tends to ignore the effects of neighboring nodes and road density when calculating the importance of road nodes. To address this shortcoming, this paper proposes a method based on a weighted PageRank algorithm. Firstly, road strokes are generated and treated as the basic calculation units. The road network is then modeled as a weighted directed graph in which each road stroke is a node and road junctions form the links between nodes; the stroke length is used as the link weight between two graph nodes. Once the road graph is built, the weighted PageRank algorithm is used to calculate the importance degree of each node, which represents the importance of the road and takes the effect of neighboring nodes into account. Next, to account for the influence of road density, the SpamRank method is adopted. SpamRank works contrary to PageRank and is used to correct the distortion of the importance degree caused by road density. After revision by SpamRank, the updated PageRank values are obtained, and road selection is based on them. Finally, experiments with Zhengzhou road data show that, compared with the network century method, this method effectively maintains the connectivity and overall structure of the original road network.
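A minimal sketch of weighted PageRank by power iteration over such a stroke graph is given below; the data structures and damping factor are illustrative assumptions, and the SpamRank correction is not included.

```python
import numpy as np

def weighted_pagerank(adj, weights, d=0.85, tol=1e-8, max_iter=200):
    """Weighted PageRank by power iteration.

    adj     : dict  node -> list of neighbouring nodes (stroke graph)
    weights : dict (u, v) -> link weight, e.g. stroke length
    """
    nodes = list(adj)
    idx = {u: i for i, u in enumerate(nodes)}
    n = len(nodes)
    pr = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = np.full(n, (1.0 - d) / n)
        for u in nodes:
            out = adj[u]
            total_w = sum(weights[(u, v)] for v in out)
            if total_w == 0:
                new += d * pr[idx[u]] / n            # dangling node: spread evenly
                continue
            for v in out:
                new[idx[v]] += d * pr[idx[u]] * weights[(u, v)] / total_w
        if np.abs(new - pr).sum() < tol:
            pr = new
            break
        pr = new
    return dict(zip(nodes, pr))
```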
To address the problems that the traditional probabilistic relaxation method adopts only geometric constraints as road matching criteria and cannot handle M:N matching patterns, we propose an improved probabilistic relaxation method that combines local and global optimization: geometric indicators are integrated with topological ones to achieve local optimization, and M:N matching patterns are identified by inserting virtual nodes to achieve a globally optimal result. We then design matching strategies and the corresponding algorithms for different matching patterns. The case study shows that the overall matching accuracy of each evaluation indicator reaches over 90%, an increase of 7%-14%; the evaluation indicators on both spatial and attribute properties increase by 3%-7%; and a proper buffer threshold can be defined as twice the average of the closest distances from all nodes in the candidate matching dataset.
The construction of intelligent public transport systems is an effective way to alleviate urban traffic problems and to facilitate residents' travel. Automatic fare collection (AFC) systems and vehicle GPS, which record passengers' trips and bus track data, are widely used in megacities. Using such bus big data efficiently to identify passengers' alighting stations is very important for urban transportation operation and organization. Based on AFC and GPS data, this paper presents an algorithm to identify passengers' alighting stations. We use a time matching method and density clustering to identify bus stops. Considering passengers' trip chains and trip sections, this paper proposes an algorithm that combines high-frequency sites and site heat to estimate the most probable alighting stations. The distance between the actual stations and the weights of the estimated points determine the accuracy of the prediction. The results illustrate the effectiveness and usefulness of the proposed method in identifying passengers' alighting stations.
Developing navigation applications using mobile augmented reality (AR) with virtual-real fusion in near-ground views is increasingly popular. Limited by complex outdoor environments, such as restricted visibility and the computing and storage capabilities of mobile devices, targets cannot be identified correctly under perspective distortion, narrow scenes and close view angles. Furthermore, the virtual-real fusion display is affected by long distances and large camera angles. In this paper, an acquisition strategy for registration viewpoints based on image templates is first analyzed, and a registration strategy for mobile AR based on cone-view partition of outdoor building environments is then proposed under the restricted conditions of occlusion and visibility. Finally, this strategy is applied in a mobile AR development kit, and the experimental results demonstrate its feasibility for typical campus buildings.
In spatial computation, a spatial object is often described by its minimum bounding rectangle (MBR), which makes rectangular constraints a key subset of spatial relationships. On the basis of rectangle algebra, we represent the 169 rectangular direction constraints with a 2×2 matrix, called the F-matrix, based on the interval relations between the projected intervals of the rectangles. According to the neighborhood relation between rectangular direction constraints, we build a neighbor grid for rectangular directions using a 4-dimensional coordinate system. We calculate the distance between rectangular directions via the shortest path between the corresponding vertices in the grid. This relational distance indicates the neighborhood of two relations and is used to analyze how one rectangular direction turns into another due to rectangle deformations such as scaling and translation. During a rectangular deformation, a set of new rectangular relations is created. From the initial and final rectangular constraints, we use the Cartesian products of the corresponding feature-value tuple intervals to calculate the F-matrices of the newly created rectangular relations. Besides, we explore and predict the corresponding rectangular directions during the deformation; for example, if the current constraint is meet, the next rectangular constraint must be disjoint or overlap. In the last section of the paper, we analyze and summarize the characteristics of the corresponding F-matrices during rectangle deformation. Given the current rectangular constraint and the impending deformation, more detailed predictions can be made.
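The correspondence between projected interval relations and the 169 (13×13) rectangular direction constraints can be sketched as follows; the relation names follow Allen's interval algebra and the function names are illustrative, not the paper's notation.

```python
def allen_relation(a, b):
    """Allen's interval relation between intervals a = (a1, a2) and b = (b1, b2)."""
    a1, a2 = a
    b1, b2 = b
    if a2 < b1:  return "before"
    if a2 == b1: return "meets"
    if a1 < b1 < a2 < b2:      return "overlaps"
    if a1 == b1 and a2 < b2:   return "starts"
    if b1 < a1 and a2 < b2:    return "during"
    if a1 > b1 and a2 == b2:   return "finishes"
    if a1 == b1 and a2 == b2:  return "equals"
    # The remaining six relations are the inverses of the first six
    return allen_relation(b, a) + "_inverse"

def rectangle_direction(r, s):
    """Direction constraint between MBRs r and s as a pair of interval relations.

    r, s : ((xmin, xmax), (ymin, ymax)); 13 x 13 = 169 possible combinations,
    one per axis, which is what the 2x2 F-matrix encodes.
    """
    return (allen_relation(r[0], s[0]), allen_relation(r[1], s[1]))
```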
As a main tool of spatial data mining, spatial point clustering offers effective methods for analyzing data. At present, the study of spatial point clustering is mature, and current methods can divide the initial data into different clusters. However, few methods consider geographical features with a linear distribution. Here, a novel spatial point clustering method using a rolling circle (SPCURC) is proposed, which derives from the rolling sphere method. SPCURC uses a circle with a known radius that rolls from an initial point to another point; the rolling does not stop until a stopping condition is met. A polygonal cluster or a linear cluster is then generated from the points contacted by the rolling circle. This paper also introduces the theory, the detailed calculation procedure and the algorithm complexity. To verify the proposed method, simulated and real-world experiments were performed, with the DBSCAN algorithm and hierarchical clustering as comparative methods. With two synthetic datasets, the simulated experiments show that SPCURC is superior to the comparative methods in acquiring linear clusters and can obtain different types of clusters regardless of whether the areas are of low or high density. With two real datasets, containing a residential area in the south and global seismic data, the experiments confirm that SPCURC is better suited than DBSCAN and hierarchical clustering to finding linearly distributed clusters. The results indicate that the proposed algorithm is feasible, effective and practical, providing linear clusters and clustering maps that tally with the initial data.
Pseudorange code biases influence high-precision data processing in the BeiDou navigation satellite system (BDS). Based on the Melbourne-Wübbena (MW) combination, we analyze the effect of code biases on baseline resolution according to the code bias correction models and the residuals of the double-differenced wide-lane ambiguities of the baselines. The results show that the influence of code biases on double-differenced wide-lane ambiguity resolution with the MW combination becomes increasingly obvious as the baseline grows longer; the impact on a 300 km baseline may reach up to 0.36 cycles. In addition, the results suggest that the code biases of geostationary earth orbit (GEO) satellites have a great influence on both short and long baselines. For inclined geosynchronous orbit (IGSO) and medium earth orbit (MEO) satellites, the code biases have little effect on the baseline solution when the baseline is within 300 km; however, the wide-lane ambiguity residuals of some satellites show obvious biases when the baseline exceeds 300 km. Furthermore, the fixing rates of the wide-lane ambiguities of these satellites are lower, so the influence of code biases on baseline resolution should be taken into account in this case.
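For reference, a minimal sketch of the MW combination and the derived wide-lane ambiguity is given below; the BDS B1I/B2I frequencies are used as illustrative values and all observations are assumed to be expressed in metres.

```python
# Minimal sketch of the Melbourne-Wübbena (MW) combination.
C = 299_792_458.0            # speed of light, m/s
F1 = 1561.098e6              # BDS B1I frequency, Hz (illustrative choice)
F2 = 1207.140e6              # BDS B2I frequency, Hz (illustrative choice)

def mw_combination(phi1, phi2, p1, p2):
    """Melbourne-Wübbena combination (metres).

    phi1, phi2 : carrier-phase observations on the two frequencies (metres)
    p1, p2     : pseudorange (code) observations on the two frequencies (metres)
    """
    wide_lane_phase = (F1 * phi1 - F2 * phi2) / (F1 - F2)
    narrow_lane_code = (F1 * p1 + F2 * p2) / (F1 + F2)
    return wide_lane_phase - narrow_lane_code

def wide_lane_ambiguity(phi1, phi2, p1, p2):
    """Wide-lane ambiguity in cycles, the quantity whose double-differenced
    residuals are examined in the paper."""
    lambda_wl = C / (F1 - F2)    # wide-lane wavelength (~0.847 m for B1I/B2I)
    return mw_combination(phi1, phi2, p1, p2) / lambda_wl
```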
A framework of distributed parallel computing is proposed and developed to meet the performance requirements of GNSS data processing. Global ionospheric modeling with distributed parallel estimation is performed based on this framework. The efficiency of global ionospheric modeling is tested and analyzed using a single computer with multiple threads and multiple computers with distributed parallel computing, respectively. The results indicate that multi-threaded parallel computing improves the efficiency significantly, and the efficiency reaches its maximum when the number of threads equals the number of CPU cores. The efficiency can be further enhanced by using multiple computers with distributed parallel computing: the modeling time is reduced by approximately 60% when 4 desktop computers are used instead of a single desktop computer, and by approximately 18% when 2 servers are used instead of one. Organizing multiple computers for distributed parallel computing thus improves the efficiency of global ionospheric modeling, which is helpful for the rapid release of ionospheric products, algorithm testing of the modeling, and so on. It also serves as a useful reference for multi-GNSS precise orbit determination and positioning as well as the processing of large GNSS networks.
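A minimal sketch of splitting such an estimation task across worker processes is shown below; the batch layout and the placeholder task are assumptions for illustration and do not reproduce the authors' framework.

```python
from multiprocessing import Pool, cpu_count

def process_batch(station_files):
    """Placeholder for forming partial normal equations from one batch of stations."""
    # ... read observations, form partial normal equations ...
    return len(station_files)        # stand-in for a partial result

if __name__ == "__main__":
    # Hypothetical file names, split into 4 batches of 25 stations each
    batches = [["stn%03d.obs" % i for i in range(j, j + 25)]
               for j in range(0, 100, 25)]
    # Efficiency peaks when the worker count matches the number of CPU cores
    with Pool(processes=cpu_count()) as pool:
        partial_results = pool.map(process_batch, batches)
    # ... combine partial normal equations and solve for ionospheric parameters ...
```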
Currently, five test satellites of the BeiDou global satellite navigation system are broadcasting new signals. Quality analysis of the test satellite observations is essential for the verification of the new signal design. Based on single-station measurements of the BeiDou test satellites, the code-minus-carrier combination (CC) and the multipath combination (MP) are employed to analyze the code noise and multipath errors of the civil signals and the Bs signal of the test satellites. The results show that the pseudorange measurement accuracy of the inclined geosynchronous orbit (IGSO) satellites is better than that of the medium earth orbit (MEO) satellites; the B2a+b signal has the highest pseudorange measurement accuracy as well as the best anti-multipath performance, while the B1C signal performs the worst in both aspects; and the pseudorange measurement accuracy of the Bs signal is relatively poor yet better than that of the B1C signal, while a systematic error related to elevation exists in the code multipath series of the Bs signal, with a maximum value of up to 0.5 m.
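A minimal sketch of the two combinations is given below, under the assumption that phase and code observations are expressed in metres and the two signal frequencies are passed in as parameters; the constant ambiguity term left in the MP series is removed in practice by subtracting the series mean.

```python
def code_minus_carrier(p1, phi1):
    """CC combination on one frequency: code noise, multipath, twice the
    ionospheric delay and a (constant) ambiguity term remain."""
    return p1 - phi1

def multipath_combination(p1, phi1, phi2, f1, f2):
    """Geometry-free, ionosphere-free MP combination for frequency 1.

    p1        : pseudorange on frequency 1 (metres)
    phi1, phi2: carrier phases on the two frequencies (metres)
    f1, f2    : signal frequencies (Hz)
    What remains is code multipath plus code noise and a constant ambiguity term.
    """
    a = (f1 ** 2 + f2 ** 2) / (f1 ** 2 - f2 ** 2)
    b = 2.0 * f2 ** 2 / (f1 ** 2 - f2 ** 2)
    return p1 - a * phi1 + b * phi2
```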
Pseudo-stochastic pulses can effectively improve the accuracy of LEO reduced-dynamic orbit determination, but the pseudo-stochastic pulse priors (i.e., time interval and a priori sigma) affect the estimated pulse values and thereby the orbit accuracy. In this paper, single-day GRACE reduced-dynamic orbit experiments show that when the time interval decreases from 240 min to 6 min and the a priori sigma increases from 1×10⁻⁴ mm/s to 1×10⁻¹ mm/s, the total magnitude of the pseudo-stochastic pulses increases from 1×10⁻² mm/s to 1×10¹ mm/s and the orbit accuracy improves from tens of centimeters to 2 cm; when the a priori sigma exceeds 1×10⁻¹ mm/s, further increases leave the pulse values unchanged and the orbit accuracy unimproved. Therefore, for a single-day orbit solution, reducing the time interval to 6 min and raising the a priori sigma to 1×10⁻¹ mm/s increases the pulse values and improves the orbit accuracy, whereas increasing the a priori sigma further leaves the pulse values unchanged and the orbit accuracy unimproved. Swarm orbit experiments at different altitudes are used to verify this conclusion.
Because of the angular response effect, a significant imbalance of backscatter strength always occurs in multibeam images, which seriously affects sonar image quality and limits further applications such as seabed target recognition and sediment classification. Existing methods are mostly based on mathematical interpolation or acoustic models, but they still show many deficiencies when dealing with complex situations. To solve these problems, this paper proposes a method for weakening the angular response effect in multibeam sonar images based on the angular backscatter characteristics of different seabed sediment types. Firstly, suitable angular response parameters are chosen, and the k-means unsupervised classification method is used to classify the angular response parameters into different sediment types. Secondly, the average angular backscatter strength curve of each sediment type is calculated to obtain its echo characteristic curve. Finally, the angular response effect is weakened by subtracting the echo characteristic curve of each sediment type from the original echo strength curve and adding back the average backscatter strength of that sediment type. Following these steps, the consistency of the echo intensity is achieved and the quality of the sonar image is improved. For the problem of choosing k in the k-means method, this paper provides an iterative selection method. In the experiment, multibeam sonar data measured in the waters of Jiaozhou Bay were used to verify the method, and the results prove that it can effectively weaken the angular response effect.
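The flattening step can be sketched as follows, assuming each backscatter sample already carries an incidence-angle bin and a sediment label from the k-means step; the array names are illustrative.

```python
import numpy as np

def flatten_angular_response(bs, angle_bin, sediment_label):
    """Weaken the angular response effect in multibeam backscatter.

    bs             : (n,) backscatter strength samples (dB)
    angle_bin      : (n,) incidence-angle bin index of each sample
    sediment_label : (n,) sediment class of each sample (e.g. from k-means)
    For every sediment class, the class-mean curve over angle is subtracted
    and the class-mean level is added back.
    """
    bs = np.asarray(bs, float)
    angle_bin = np.asarray(angle_bin)
    sediment_label = np.asarray(sediment_label)
    out = bs.copy()
    for c in np.unique(sediment_label):
        sel = sediment_label == c
        level = bs[sel].mean()                       # overall level of this class
        for a in np.unique(angle_bin[sel]):
            m = sel & (angle_bin == a)
            out[m] = bs[m] - bs[m].mean() + level    # remove the angular trend
    return out
```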
In this paper, a method for inverting the sound velocity profile in multibeam surveys is presented. Firstly, the sound velocity profile samples are analyzed with empirical orthogonal functions (EOF), and the reconstruction coefficients are calculated from the sound velocity perturbation matrix and the leading EOF orders. Secondly, the reconstruction coefficients of the EOF are estimated with a simulated annealing algorithm, where the objective function is constructed from the seafloor distortion obtained by multibeam sounding. Finally, the sound velocity profile is reconstructed from the inverted coefficients. The analysis of examples shows that the inverted sound velocity profile is close to the real sound velocity profile and that the seafloor corrected with the inverted profile is closer to the real seafloor.
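A minimal sketch of the EOF decomposition and reconstruction is shown below, with the coefficients taken as given rather than estimated by simulated annealing; the variable names are illustrative.

```python
import numpy as np

def eof_decompose(svp_samples, order=3):
    """EOF decomposition of historical sound velocity profiles.

    svp_samples : (m, k) matrix, m profiles sampled at k common depths
    Returns the mean profile and the first `order` EOFs (as columns).
    """
    mean_profile = svp_samples.mean(axis=0)
    perturbation = svp_samples - mean_profile          # sound velocity perturbation matrix
    # Rows of vt are the EOFs of the depth dimension
    _, _, vt = np.linalg.svd(perturbation, full_matrices=False)
    return mean_profile, vt[:order].T                  # shapes (k,), (k, order)

def reconstruct_svp(mean_profile, eofs, coeffs):
    """Rebuild a profile from EOF coefficients (estimated by simulated
    annealing in the paper; here they are simply given)."""
    return mean_profile + eofs @ np.asarray(coeffs)
```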
The outlier problem has always been a hot topic in surveying data processing. Owing to the complexity of detecting multiple outliers simultaneously, a more efficient method is highly desirable. Quasi-accurate detection of outliers is a method that identifies and locates outliers through estimates of the true errors and is relatively complete in principle, computation and application. The key step of this method is the selection of the quasi-accurate observations. With this in mind, a new method is proposed that selects quasi-accurate observations in two parts by combining the L1-norm minimization method with the median, and a criterion for determining quasi-accurate observations is established. Firstly, the L1-norm minimization method is used to obtain robust residuals, and the observations whose residuals are approximately zero are treated directly as the first part of the quasi-accurate observations. Then, a new vector is formed from the absolute values of the remaining residuals; the observations whose absolute residuals are less than the median of this vector form the second part of the quasi-accurate observations. A detailed analysis of a GPS network adjustment and a GNSS single point positioning example is conducted to assess the performance of the proposed method. The results show that the proposed method for selecting quasi-accurate observations is effective and feasible.
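The two-part selection rule can be sketched as follows, assuming the residual vector from the L1-norm adjustment is already available; the tolerance for "approximately zero" is an illustrative assumption.

```python
import numpy as np

def select_quasi_accurate(residuals, zero_tol=1e-6):
    """Two-part selection of quasi-accurate observations from L1-norm residuals.

    residuals : (n,) residual vector from the L1-norm minimization adjustment
    Part 1: observations whose residuals are (approximately) zero.
    Part 2: of the remaining observations, those whose absolute residuals fall
            below the median of the remaining absolute residuals.
    Returns a boolean mask marking the quasi-accurate observations.
    """
    r = np.abs(np.asarray(residuals, float))
    part1 = r <= zero_tol
    mask = part1.copy()
    remaining = r[~part1]
    if remaining.size:
        med = np.median(remaining)
        mask |= (~part1) & (r < med)
    return mask
```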
To efficiently obtain high-quality highway alignment parameters, a vehicle-borne position and orientation system is adopted to collect discrete coordinate and attitude information, and the alignment parameters are calculated through post-processing. This paper first presents a method of initial recognition based on the curvature characteristic, which is calculated from azimuths filtered by a fast median filtering algorithm. Secondly, the parameters of the straight-line and circular-curve segments are calculated and accurately recognized. Finally, the parameters of the approach curve are calculated from the adjusted line and circular curve parameters. The method has been tested on a highway, and the results show that it is practical.
Radiosity is one of the popular algorithms for radiation simulation within a virtual canopy. However, owing to the complexity of plant architecture, the enormous computation required for the form factors has become a serious burden. Thus, a new optimization strategy for radiosity based on CUDA and 3D voxel traversal was developed to improve computational efficiency. Taking the simulation of radiation transfer within a virtual loquat canopy as an example, our approach uses a uniform partition of the bounding box and voxel traversal along a 3D line to identify occlusion between the light source and the facets of the tree model, and uses the GPU with CUDA to compute the form factors in parallel. Furthermore, we adopt a reduction algorithm and shared memory to optimize the radiation flux calculation. Compared with a serial CPU implementation, the execution times are good, with speed-ups of about 150×. Comparative analysis with ray tracing and a traditional radiosity model (progressive refinement radiosity) shows that the simulated PAR distributions are similar and consistent. These comparisons show that the new method not only improves computational efficiency but also ensures accuracy.
Climate studies at regional and global scales require accurate descriptions of the light-reflecting behavior of the underlying surfaces at the atmospheric boundary layer. A regularized constraint retrieval method is proposed for this purpose. The key to the presented method is the determination of the regularization parameter (RP). In order to stabilize the quantitative retrievals of land surface reflective property (SRP) parameters, the optimized RP is obtained via the corner point of the L-curve. Numerical retrieval tests in the Beijing-Tianjin-Tangshan region and entropy reduction results suggest that the information indices in the visible-red and near-infrared channels reach 11.682 2 and 10.072 6; the average information indices in these two channels, before and after SRP retrievals using the L-curve-based regularization (RLC) method, are 0.440 0 and 0.354 6, with the greatest increases reaching 2.467 2 and 2.290 5, respectively. The advantage of the RLC method lies in its independence from a priori knowledge of the surface parameters. The RLC method is very effective and useful for retrieving surface parameters when satellite observations are insufficient.
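A minimal sketch of picking the RP at the L-curve corner for a generic Tikhonov-regularized problem is given below; the corner is located here by discrete Menger curvature, which is one common heuristic and not necessarily the paper's exact procedure.

```python
import numpy as np

def tikhonov_solve(A, y, lam):
    """Tikhonov-regularized least squares: min ||Ax - y||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ y)

def lcurve_corner(A, y, lambdas):
    """Pick the regularization parameter at the L-curve corner.

    For each candidate lambda, the residual norm and the solution norm are
    placed on log-log axes; the corner is taken as the interior point of
    maximum discrete (Menger) curvature.
    """
    pts = []
    for lam in lambdas:
        x = tikhonov_solve(A, y, lam)
        pts.append((np.log(np.linalg.norm(A @ x - y)), np.log(np.linalg.norm(x))))
    pts = np.array(pts)
    best_i, best_k = 1, -np.inf
    for i in range(1, len(lambdas) - 1):
        p, q, r = pts[i - 1], pts[i], pts[i + 1]
        area = abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])) / 2.0
        denom = (np.linalg.norm(q - p) * np.linalg.norm(r - q) * np.linalg.norm(r - p))
        k = 4.0 * area / denom if denom > 0 else 0.0   # Menger curvature of the triple
        if k > best_k:
            best_i, best_k = i, k
    return lambdas[best_i]
```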
Classical target detection methods may perform well only under certain assumptions. To overcome this shortcoming, this paper proposes a novel locally decision-adaptive information-theoretic metric learning (LA-ITML) target detector. Firstly, the proposed method uses the ITML objective function to learn a Mahalanobis distance that separates similar and dissimilar point pairs. Then, a locally decision-adaptive constraint is applied to shrink the distances between samples of similar pairs and to expand the distances between samples of dissimilar pairs. Finally, the detection decision is made by considering both the threshold and the changes in the distances before and after metric learning. The experimental results demonstrate that the proposed method clearly separates target samples from background samples and outperforms both state-of-the-art target detection algorithms and other classical metric learning methods.
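The detection decision can be sketched as follows, assuming the metric matrix M has already been learned; the (LA-)ITML learning step itself is not reproduced, and the threshold is an illustrative input.

```python
import numpy as np

def mahalanobis_distance(x, t, M):
    """Distance between a test sample x and a target reference t under the
    learned metric matrix M (positive semi-definite)."""
    d = np.asarray(x, float) - np.asarray(t, float)
    return float(np.sqrt(d @ M @ d))

def detect(samples, target, M, threshold):
    """Label each sample as target (True) when its learned-metric distance to
    the target reference falls below the threshold."""
    return np.array([mahalanobis_distance(s, target, M) < threshold
                     for s in samples])
```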
During on-orbit imaging, a wide-field remote sensing camera is affected by the earth's rotation, satellite jitter, attitude maneuvers and other factors, resulting in a decrease in image quality. Therefore, an image motion velocity model suitable for wide-field remote sensing cameras is put forward, which considers the effect of the off-axis angle on the calculation accuracy, to derive the image motion velocity and drift angle of an off-axis three-mirror camera. Taking a satellite as an example, the distribution of the image motion velocity and the drift angle over the focal plane is simulated for three typical imaging modes. The simulation results are consistent with the qualitative analysis and verify the validity of the image motion velocity model. On this basis, a corresponding image motion compensation strategy is proposed for the roll and pitch imaging modes. The compensation results show that, when the satellite images with roll and pitch angles both at 35°, the global-optimization drift angle matching strategy guarantees that the MTF over the whole focal plane is greater than 0.95 (16 integration stages), and the MTF of the focused observation target is greater than 0.95 (96 integration stages) with the local-optimization drift angle matching strategy. Using the proposed image motion velocity matching strategy, the MTF over the whole focal plane is greater than 0.95 (16 integration stages) when the row cycles are divided into 11 groups. The simulation results show that the proposed strategy can effectively solve the image quality degradation problem in roll and pitch imaging and can provide a reliable basis for image motion compensation of wide-field remote sensing cameras.