2016 Vol. 41, No. 10
This paper details a DSM (digital surface model) generation method for ZY-3 images based on object-space semi-global optimization. The method avoids the limitations imposed by building a match cost cube, as the standard method does. It combines semi-global optimization with an image pyramid to dynamically determine the search space of every pixel in the next pyramid layer according to the match result of the previous layer. By combining outlier detection with mutual information and CENSUS as matching cost functions, it realizes high-precision DSM generation from ZY-3 images. We analyzed the key factors affecting DSM accuracy through experiments.
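The paper's full pipeline (mutual-information cost, path-wise semi-global aggregation, outlier detection) is not reproduced here. Below is a minimal sketch, assuming a 5x5 CENSUS cost and a per-pixel disparity search restricted to a small window around the value predicted by the previous pyramid layer, which is the core idea behind avoiding a full cost cube; all names are illustrative.

```python
import numpy as np

def census_transform(img, w=2):
    """5x5 CENSUS transform: bit pattern of neighbors brighter than the
    center pixel (np.roll wraps at borders; acceptable for a sketch)."""
    codes = np.zeros(img.shape, dtype=np.uint32)
    for dy in range(-w, w + 1):
        for dx in range(-w, w + 1):
            if dy or dx:
                shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                codes = (codes << 1) | (shifted > img).astype(np.uint32)
    return codes

def match_with_prior(census_l, census_r, d_prev, radius=2):
    """Search each pixel's disparity only within +/-radius of the value
    predicted by the coarser pyramid layer, instead of a full cost cube."""
    H, W = census_l.shape
    disp = np.zeros((H, W), dtype=np.int32)
    for y in range(H):
        for x in range(W):
            best, best_cost = d_prev[y, x], np.inf
            for d in range(d_prev[y, x] - radius, d_prev[y, x] + radius + 1):
                if 0 <= x - d < W:
                    # Hamming distance between CENSUS codes as match cost
                    cost = bin(int(census_l[y, x]) ^ int(census_r[y, x - d])).count("1")
                    if cost < best_cost:
                        best_cost, best = cost, d
            disp[y, x] = best
    return disp
```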
UAV (unmanned aerial vehicle) images captured from unstable rotary-wing platforms suffer from large registration and projection errors. In this paper, we propose a new change detection method for UAV images that compensates for these sources of error. Our method combines feature point matching and image segmentation: changed areas are detected by merging the results of unmatched feature points and low-similarity segmented objects. Using the image registration error as the search buffer radius, mutual cross-correlation between corresponding segmented objects is employed to mitigate the impact of inconsistent segmentations on the change detection results. Experimental results illustrate that the proposed method outperforms traditional methods: by integrating the contextual texture and spectral information of segmented objects, it weakens the impact of the image registration and projection errors caused by large rotation angles and improves change detection accuracy to a certain extent.
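As an illustration of the buffer-radius correlation step, here is a minimal sketch, assuming grayscale images already roughly registered: the patch around a segmented object's center in one image is compared against all positions within the registration-error buffer in the other image, and a low best correlation flags a candidate change. Names and window sizes are illustrative.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_ncc_in_buffer(img1, img2, y, x, half=8, radius=5):
    """Slide the img1 patch over img2 within +/-radius pixels (the
    registration error used as buffer radius) and keep the best score."""
    patch = img1[y - half:y + half + 1, x - half:x + half + 1]
    best = -1.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cand = img2[y + dy - half:y + dy + half + 1,
                        x + dx - half:x + dx + half + 1]
            if cand.shape == patch.shape:
                best = max(best, ncc(patch, cand))
    return best  # a low value flags the object as a candidate change
```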
Trajectory processing and analysis is now a research hotspot in database development, spatial information, and other related fields. Because the scenarios between trajectories and directed lines are complex, involving topological semantics such as intersections, touches, overlaps, returns, and stops, a key-point-oriented topological movement process model for trajectories and directed lines is proposed. It depicts the semantic topological relations of trajectories with respect to directed lines over time, including direction relations, location relations, and semantic information. A planar spatial reference framework is established upon buffers, building on the local effectiveness of direction relations. Then, 172 semantic topological relations are classified into 14 basic topological relations by detecting and analyzing the key points of trajectories and directed lines from a topological point of view. The resulting model expresses the topological relations of key points by means of character encodings with explicit semantics, and depicts complex movements of trajectories in relation to directed lines through key point encodings.
In this paper we propose a practical model for representing geographic information semantics with proximity measurement, realizing the technical path from semantic modeling to proximity measurement. Based on an analysis of the content and representation scale of geographic information semantics, the basic structure for describing the semantics of geographic information is presented. We further refine the semantics of geographic information into feature items of different granularities, and construct an LOD representation model for the semantics of geographic information. We calculate the semantic proximity between geographic information items based on the matching relationships among the relevant semantic feature items. In a case study of land-use categories and proximity calculation, comparison of the experimental results against ground truth verifies that the model is highly practical.
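As a sketch of the matching-based proximity calculation (the paper's LOD feature-item structure is not reproduced), one plausible form is a weighted overlap of matched feature items; the land-use items and weights below are hypothetical.

```python
def semantic_proximity(items_a, items_b, weights):
    """Weighted proximity between two concepts described by feature items.
    items_*: dict mapping feature-item name -> value set at some granularity.
    weights: dict mapping feature-item name -> importance weight (sums to 1)."""
    score = 0.0
    for item, w in weights.items():
        va, vb = items_a.get(item, set()), items_b.get(item, set())
        union = va | vb
        score += w * (len(va & vb) / len(union) if union else 0.0)
    return score

# Illustrative land-use example (categories and weights are hypothetical):
paddy = {"cover": {"vegetation"}, "water_regime": {"irrigated"}, "use": {"crop"}}
dry_farmland = {"cover": {"vegetation"}, "water_regime": {"rainfed"}, "use": {"crop"}}
print(semantic_proximity(paddy, dry_farmland,
                         {"cover": 0.4, "water_regime": 0.3, "use": 0.3}))
# -> 0.7: all items match except the water regime
```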
The integration and sharing of marine environment data is one of the important research goals in marine GIS. Several problems have blocked the effective sharing of marine environmental data in the past, such as load imbalance in data services, limited data sharing modes, and poor discoverability of data services. With the advent of cloud computing technologies, data sharing modes have changed greatly. Cloud computing relies on the sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network; its theoretical foundation is therefore the broader concept of converged infrastructure and shared services. This paper presents a data sharing architecture for the marine environment based on cloud computing. In the IaaS (Infrastructure as a Service) mode of the architecture, providers offer computing resources, either physical machines or, more often, virtual machines, along with other resources. The DaaS (Data as a Service) mode is based on the concept that a product, data in this case, can be provided on demand to the user regardless of the geographic or organizational separation of provider and consumer. In the PaaS (Platform as a Service) mode, providers deliver a computing platform, typically including an operating system, a programming language execution environment, a database, and a web server; application developers can develop and run their software solutions on the cloud platform without the cost and complexity of buying and managing the underlying hardware and software layers. Under this architecture, each user acts as both consumer and provider. The architecture provides core functions such as data release, data needs release, data discovery, needs discovery, and feedback. This marine environment data sharing mode can inspire marine researchers to contribute their data, thus ensuring effective and sustainable data resource integration. A prototype marine environment information cloud computing platform was implemented, and the feasibility and practicality of our technical solution were tested.
Space-based satellite sensors are important data sources that must be planned in an emergency mission. Usually, a single space-based satellite sensor cannot fully provide the required observation information, so a plan combining the observations of multiple space-based satellite sensors should be generated according to the dynamic and complex observation needs of the emergency task. In this article, we formulate the problem model for combining space-based satellite observations based on the needs of emergency tasks. By setting the weights of the observation requirement factors on demand, we propose an emergency-mission-oriented evaluation method for combinations of space-based satellite sensor observations that enables a reasonable assessment when selecting sensor combinations. Experimental results show that the proposed method is a feasible solution for evaluating plans combining observations from different space-based sensors.
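The paper's exact evaluation model is not given in the abstract; a minimal sketch of the on-demand weighted scoring idea follows, with hypothetical requirement factors and weights.

```python
def evaluate_combination(factors, weights):
    """Weighted-sum score for one sensor-combination plan.
    factors: dict of normalized requirement-satisfaction scores in [0, 1]
    (e.g. coverage, revisit time, resolution fit); weights: per-task
    importance values set on demand, summing to 1."""
    return sum(weights[k] * factors[k] for k in weights)

# Hypothetical example: a flood task weights coverage and revisit heavily.
weights = {"coverage": 0.4, "revisit": 0.4, "resolution": 0.2}
plan_a = {"coverage": 0.9, "revisit": 0.6, "resolution": 0.8}
plan_b = {"coverage": 0.7, "revisit": 0.9, "resolution": 0.5}
print(evaluate_combination(plan_a, weights),   # 0.76
      evaluate_combination(plan_b, weights))   # 0.74 -> pick plan_a
```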
Harbor detection and recognition is an important SAR (synthetic aperture radar) remote sensing ocean application. By analyzing the structure and microwave scattering characteristics of harbors, a harbor detection and identification algorithm for SAR remote sensing images based on prior constraints is proposed. A closure measure is defined according to the semi-enclosed character of harbor water to extract the water area, and the jetties are detected with a multi-directional scanning method, so the harbor can be initially detected by combining the two results. The entrance direction and the representative points of the jetties are determined from the minimum enclosing rectangle of the harbor. Finally, harbor identification is completed using the closure measure of the coastline between the representative points of the jetties. Experiments show that the proposed algorithm detects harbors and their entrance directions accurately, and extracts jetties more completely.
Aiming to solve the low efficiency and accuracy of traditional forest fire spread simulation models when simulating large-field spreading forest fires, we constructed an improved model coupled with cellular automata to ensure accurate timing of forest fire spread. We evaluated the impact of the time step on simulation accuracy to determine an optimal time step value that improves the accuracy and efficiency of large-field forest fire simulations. A case study simulating a spreading forest fire that occurred on Daxing'an Mountain in May 2006 showed that the optimal time step of the cellular automata simulation algorithm was one-eighth of the time for a cell to combust completely. Comparison with the real fire situation interpreted from TM imagery indicates that the model has high temporal and spatial consistency, with an average Kappa coefficient of 0.6352 and an average accuracy of 87.89%. This algorithm can be used to simulate and predict forest fire spread in practical applications, and the algorithm is reversible.
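A minimal cellular-automaton sketch of the simulation step follows; the spread rule, probability, and neighborhood are illustrative stand-ins for the paper's spread model, with the time step chosen as one-eighth of the cell combustion time per the reported optimum.

```python
import numpy as np

UNBURNED, BURNING, BURNED = 0, 1, 2

def ca_fire_step(state, burn_time, dt, t_combust, p_spread=0.3, rng=None):
    """One cellular-automaton step of length dt (e.g. dt = t_combust / 8).
    A burning cell ignites each unburned 4-neighbor with probability
    p_spread and burns out once its accumulated time reaches t_combust.
    burn_time is updated in place."""
    rng = rng or np.random.default_rng(0)
    new = state.copy()
    for y, x in np.argwhere(state == BURNING):
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < state.shape[0] and 0 <= nx < state.shape[1]
                    and state[ny, nx] == UNBURNED and rng.random() < p_spread):
                new[ny, nx] = BURNING
        burn_time[y, x] += dt
        if burn_time[y, x] >= t_combust:
            new[y, x] = BURNED
    return new
```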
In building height inversion over urban areas, the differing phase centers of scatterers within a resolution cell cause ambiguity in the interferometric phase, compromising the accuracy of building height inversion. To solve this problem, this paper proposes a dual-baseline PolInSAR building height inversion method. We use dual-baseline PolInSAR data to add observations, and then apply optimal criteria to remove the phase ambiguity within a resolution cell. To validate the proposed method, we selected three sets of high-resolution TerraSAR-X data covering Tai'an City, Shandong for experiments and compared the experimental results with ground truth data. The inversion accuracy of the proposed method was improved over the results obtained with the traditional single-baseline method.
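The paper's polarimetric optimization criteria are not reproduced here; the sketch below shows only the generic dual-baseline idea of resolving the 2π ambiguity by requiring both baselines to agree on height. All symbols are illustrative.

```python
import numpy as np

def dual_baseline_height(phi1, phi2, h_amb1, h_amb2, n_max=5):
    """Resolve the 2*pi phase ambiguity by height consistency.
    phi1, phi2: wrapped interferometric phases (rad) of the two baselines;
    h_amb1, h_amb2: height of ambiguity (meters per 2*pi cycle) of each
    baseline. Picks the integer cycle pair on which the baselines agree."""
    best, best_gap = None, np.inf
    for n1 in range(-n_max, n_max + 1):
        for n2 in range(-n_max, n_max + 1):
            h1 = (phi1 / (2 * np.pi) + n1) * h_amb1
            h2 = (phi2 / (2 * np.pi) + n2) * h_amb2
            if abs(h1 - h2) < best_gap:
                best_gap, best = abs(h1 - h2), 0.5 * (h1 + h2)
    return best
```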
In the InSAR co-registration procedure, the transform model between the images of an image pair can be fitted by a rational polynomial whose parameters are inverted from tie points of high accuracy and uniform distribution. However, traditional algorithms consider neither the importance of tie points nor their spatial distribution, which reduces the polynomial fitting accuracy. This paper proposes a Voronoi-diagram-based co-registration algorithm that takes into account both the weight and the distribution of tie points. The algorithm was applied to X-band interferometric images acquired by the COSMO-SkyMed-1/2 satellites over an area in western China. Tests show that, compared to traditional methods, the Voronoi-diagram-based algorithm improves the correct matching ratio and acquires tie points of higher coherence with a more uniform spatial distribution, thus achieving transform parameters of higher precision.
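A minimal sketch of the Voronoi weighting idea, assuming 2D tie-point coordinates: the area of each point's Voronoi cell serves as a distribution weight (in practice it would be combined with coherence, omitted here) in a weighted fit of the transform model; an affine model stands in for the paper's rational polynomial.

```python
import numpy as np
from scipy.spatial import ConvexHull, Voronoi

def voronoi_weights(points):
    """Weight each tie point by the area of its Voronoi cell, so isolated
    points (large cells) count more and clustered points count less.
    Points with unbounded cells get the median finite-cell weight."""
    vor = Voronoi(points)
    areas = np.full(len(points), np.nan)
    for i, reg in enumerate(vor.point_region):
        verts = vor.regions[reg]
        if verts and -1 not in verts:          # -1 marks a vertex at infinity
            areas[i] = ConvexHull(vor.vertices[verts]).volume  # 2D: area
    areas[np.isnan(areas)] = np.nanmedian(areas)
    return areas / areas.sum()

def weighted_affine_fit(src, dst, weights):
    """Weighted LS fit of an affine transform dst ~ [x, y, 1] @ coeffs."""
    X = np.column_stack([src, np.ones(len(src))])
    W = np.sqrt(weights)[:, None]
    coeffs, *_ = np.linalg.lstsq(W * X, W * dst, rcond=None)
    return coeffs
```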
Based on the three-dimensional coordinate transformation model and the properties of the Rodrigues matrix, a linear fitting equation for single-target point cloud orientation was derived. A constraint equation enforces the condition that homonymous laser beams intersect at common targets. The orientation parameters of the entire point cloud in the whole region are then resolved simultaneously. Compared to the traditional non-linear model for point cloud orientation, the algorithm proposed in this paper achieves total orientation of a multi-station point cloud over the whole region without having to compute initial parameter values. The orientation accuracy at each station computed by this method was as high as that of the traditional method.
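The paper's linear Rodrigues-matrix equations are not reproduced here. As a stand-in with the same key property, no initial values and no iteration, the sketch below uses the well-known closed-form SVD solution for a rigid transform between homonymous target coordinates.

```python
import numpy as np

def rigid_fit(src, dst):
    """Closed-form rigid transform (R, t) minimizing ||R @ src_i + t - dst_i||
    over common targets, via SVD (Kabsch); no initial values or iteration.
    src, dst: (n, 3) arrays of homonymous target coordinates."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```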
Online geocoding services are the most common technique for transforming non-spatial information into spatial information. The increasing number of online geocoding services, however, makes it difficult for requestors to choose the best service. This paper has two aims: (1) to provide guidance for service requestors when selecting the most suitable service; and (2) to discover the defects of online geocoding services, thus providing a basis for further improvements. A comparative evaluation of geocoding quality was conducted among the four most popular online geocoding services: Baidu, Amap, Sougou, and Tencent. Four types of address data associated with basic public necessities were used as test data, and three geocoding quality metrics, match rate, positional accuracy, and similarity, were calculated to evaluate the quality of these four services. The following conclusions were drawn: (1) online geocoding service quality mainly depends on the quality of the reference database; (2) Amap produced the highest match rate and lowest positional accuracy; (3) overall, the Tencent map service performs best, producing more complete address data with higher data quality.
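The three metrics can be computed along the following lines (a minimal sketch; difflib stands in for whatever address similarity measure the study used):

```python
import difflib
from math import asin, cos, radians, sin, sqrt

def positional_error_m(lat1, lon1, lat2, lon2):
    """Haversine distance (m) between a geocoded point and ground truth."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def address_similarity(query, returned):
    """String similarity between the query address and the returned one."""
    return difflib.SequenceMatcher(None, query, returned).ratio()

def match_rate(results):
    """Share of queries for which the service returned any candidate."""
    return sum(r is not None for r in results) / len(results)
```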
There are two main methods for 3D modeling of orebodies: the contour-matching method and the isosurface-from-volume method. The contour-matching method builds a model by selecting corresponding relationships between contours through human interaction, but is limited by weak constraints and arbitrariness. In addition, because the points on the contours of an orebody are non-uniformly distributed, the model constructed by this method has many degenerate triangles, so the model quality is poor, especially for complex orebodies. The isosurface-from-volume method requires spatial interpolation over the region of mineralization; a model constructed this way diverges from the actual form of the orebody because the spatial interpolation lacks spatial constraints. Building on these two methods, this paper presents a new method for automatic, high-quality 3D modeling of complex orebodies. The method converts the orebody contours into distance fields via a distance function to build volume data, and the 3D model of the orebody is built automatically by extracting an iso-surface from the volume data. In this way, the 3D model is constructed without having to specify corresponding relationships between contours. We apply a non-Euclidean distance transform and a divide-and-conquer algorithm to reduce the time and space complexity, and use a special unit, the non-coplanar quadrangular prism, as the voxel of the volume data to adapt the method to the particularities of orebodies. Experimental results show that the method can model a complex orebody rapidly and effectively.
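A minimal sketch of the distance-field/iso-surface pipeline, using a Euclidean distance transform and cubic voxels in place of the paper's non-Euclidean transform and non-coplanar quadrangular prisms:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.measure import marching_cubes

def orebody_surface(masks):
    """masks: list of equally sized 2D boolean arrays, one inside/outside
    mask per contour section. Each slice becomes a signed distance field,
    the slices are stacked into a volume, and the zero iso-surface is
    extracted, with no contour correspondences required."""
    fields = []
    for m in masks:
        inside = distance_transform_edt(m)
        outside = distance_transform_edt(~m)
        fields.append(inside - outside)      # > 0 inside the orebody
    volume = np.stack(fields, axis=0)
    verts, faces, _, _ = marching_cubes(volume, level=0.0)
    return verts, faces
```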
We study PPP-AR rapid ambiguity re-convergence in the case of short-term loss or disruption of observation data, offer a reasonable extrapolation time threshold for using augmentation information, demonstrate the reliability improvement this method brings to existing URTK services, and design a corresponding real-time testing scheme to verify its availability. The results show that augmentation information can be extrapolated for at least 30 s, and that the PPP-AR rapid ambiguity re-convergence method can improve the reliability of the URTK service for the control center and rovers even under poor observation conditions. Rovers can independently realize rapid ambiguity re-convergence without the support of the regional URTK augmentation service, which significantly reduces the real-time data communication burden between rovers and the control center.
Ambiguity fixing is the key to GNSS high-precision dynamic positioning. To analyze the performance of a combined GPS/BDS system in terms of ambiguity search efficiency and ambiguity fixing success rate on a short baseline, this paper proposes an ambiguity search region correlation method based on the square roots of the eigenvalues of the ambiguity variance/covariance matrix. With this method, the ambiguity search regions of single-system GPS, single-system BDS, and the combined GPS/BDS system are compared and analyzed. Experimental results show that the GPS and BDS combination changes the variance/covariance matrix of each single system and reduces each system's search region. Furthermore, the statistical results show that the combined GPS/BDS system significantly improves the ambiguity fixing success rate and the ambiguity search efficiency for both single- and dual-frequency observations.
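The connection between eigenvalues and search region size can be made concrete: the search ellipsoid a'Q⁻¹a ≤ χ² has semi-axes proportional to the square roots of the eigenvalues of Q, so their product measures the region's volume. A minimal sketch:

```python
import numpy as np

def search_space_size(Q, chi2=10.0):
    """Relative size of the ambiguity search ellipsoid a' Q^-1 a <= chi2:
    each semi-axis is sqrt(chi2 * eigenvalue of Q), so the volume is
    proportional to the product of the square-rooted eigenvalues."""
    eig = np.linalg.eigvalsh(Q)
    return np.prod(np.sqrt(chi2 * eig))

# Comparing this measure for the single-system and combined GPS/BDS
# Q matrices shows how the combination shrinks each search region.
```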
A single-epoch ambiguity resolution algorithm without a search process is proposed. Four basic satellites are selected, and their integer ambiguities are estimated using the INS position; then the (1,-1) and (-3,4) ambiguity combinations are formed. The ionospheric error of the (-3,4) combination is fitted polynomially. After systematic influences such as the ionospheric error are compensated, the ambiguity combinations can be fixed by simple rounding. The ambiguities on each frequency are then determined from the combinations and used to resolve the receiver position. Based on the receiver position, the ambiguities of all other satellites can be calculated directly and fixed by rounding. The algorithm was validated with field-test GNSS/INS data.
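Why the (1,-1) and (-3,4) combinations suffice: their coefficient matrix has determinant 1, so once both combinations are fixed to integers, the original per-frequency ambiguities are recovered exactly through the integer inverse. A numeric sketch with illustrative float ambiguities (the error compensation that precedes the rounding is omitted):

```python
import numpy as np

# Coefficients of the (1,-1) wide-lane and (-3,4) combinations.
C = np.array([[1, -1],
              [-3, 4]])            # det(C) = 1: integer-invertible
C_inv = np.array([[4, 1],
                  [3, 1]])         # exact integer inverse of C

n_float = np.array([123456.43, 98765.38])   # illustrative float N1, N2
combos = C @ n_float                        # combination ambiguities
combos_fixed = np.round(combos)             # fix by simple rounding
n_fixed = C_inv @ combos_fixed              # recover integer N1, N2
print(combos_fixed, n_fixed)                # -> [24691. 24692.] [123456. 98765.]
```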
Based on the characteristics of large-scale GNSS baseline vector network adjustment and the IGG-III scheme, improved double-factor equivalent weight schemes with different weight-reduction efficiencies were derived. Parallel programming methods and their adaptability are compared, and a calculation procedure based on multi-task classification and processing is proposed to achieve parallel estimation. An experiment with IGS data shows that the proposed method not only takes full advantage of the prior coordinate information and effectively restrains the influence of baseline vector outliers, but also makes full use of the hardware platform, significantly improving computational efficiency.
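For reference, the classical IGG-III weight-reduction factor that the scheme builds on has the form below (k0 and k1 are the usual tuning constants; the paper's double-factor variant applies such factors to both components of a correlated observation pair):

```python
def igg3_factor(v_std, k0=1.5, k1=3.0):
    """IGG-III weight-reduction factor for a standardized residual:
    full weight in the keep zone, smoothly reduced weight in the
    weight-dropping zone, zero weight in the reject zone."""
    v = abs(v_std)
    if v <= k0:
        return 1.0
    if v <= k1:
        return (k0 / v) * ((k1 - v) / (k1 - k0)) ** 2
    return 0.0
```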
Following Okada fault dislocation theory, a model for the spatio-temporal inversion of fault slip is built based on Kalman filtering of GNSS displacement time-space series. To recover a more subtle fault slip distribution, the fault is divided into many subfaults, and prior information together with a Laplacian smoothing constraint is taken into account. Exploiting the high spatial coherence of the surface deformation produced by fault slip, spatially uncorrelated noise is separated effectively by Kalman-filter inversion of the whole GNSS network. Simulation experiments indicate that when the displacement from the fault deformation is comparable to the noise level, and the GNSS station spacing along strike and dip is at least comparable to the subfault length and width, the spatio-temporal distribution of the fault slip can be obtained accurately. Further increasing the distribution density of the observation stations does not improve the inversion noticeably; however, a high station density is very helpful for improving the signal-to-noise ratio (SNR) of the inversion.
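A minimal sketch of one inversion step under stated assumptions: the Okada Green's function matrix G is taken as given, and the Laplacian smoothing enters as zero-valued pseudo-observations appended to the Kalman measurement update. All symbols are illustrative.

```python
import numpy as np

def slip_update(s, P, d, G, sigma_d, lap, sigma_s):
    """One Kalman measurement update for subfault slips s with covariance P.
    d = G @ s + noise are GNSS displacements (G from Okada dislocation
    theory, treated here as given); lap is the Laplacian operator over the
    subfault grid, appended as zero-valued pseudo-observations so the
    estimated slip stays smooth."""
    H = np.vstack([G, lap])
    z = np.concatenate([d, np.zeros(lap.shape[0])])
    R = np.diag(np.concatenate([np.full(len(d), sigma_d ** 2),
                                np.full(lap.shape[0], sigma_s ** 2)]))
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    s_new = s + K @ (z - H @ s)
    P_new = (np.eye(len(s)) - K @ H) @ P
    return s_new, P_new
```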
Research on geomagnetic field models and geomagnetic navigation requires methods for evaluating the precision of geomagnetic field models. The World Magnetic Model (WMM) and INTERMAGNET observation data were used to evaluate the precision of the WMM2010 model globally, and the accuracies for different areas such as Europe, North America, and China were analyzed. Based on this analysis, we propose a gridding method for evaluating a geomagnetic field model according to the relationship between the truncation degree of the model and the wavelength of its spatial resolution. Our WMM precision results at the global scale and for local areas are detailed.
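The degree-to-resolution relationship used for the gridding is the standard spherical-harmonic rule: a model truncated at degree n resolves minimum wavelengths of about 2πR/n, i.e. a half-wavelength resolution of πR/n. A one-line check for WMM's degree 12:

```python
import math

def resolution_km(n_max, R_km=6371.2):
    """Half-wavelength resolution of a spherical-harmonic model truncated
    at degree n_max: lambda_min / 2 = pi * R / n_max."""
    return math.pi * R_km / n_max

print(resolution_km(12))  # ~1668 km: a natural grid size for the evaluation
```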
In precise underwater acoustic positioning, the constant-gradient ray-tracing method is the usual choice. But when the incidence angle is too large, the iteration on the incidence angle diverges. In this paper, we analyze the reason behind this problem and present a new iteration method for cases of large incidence angles. The key element of this method is to change the iteration parameter from the Snell refraction constant p to the radius of curvature R. We use simulation data to test the effectiveness of our method. The results show that when the incidence angle is not too large (e.g., ≤80°), our new method and the original constant-gradient ray-tracing method have the same accuracy, but when the incidence angle is large (e.g., >80°), the accuracy of our method is better than that of the original method.
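A sketch of one layer step using R as the working parameter, based on the standard constant-gradient arc formulas (the paper's full iteration scheme is not reproduced):

```python
import math

def trace_layer(theta, c_top, g, dz):
    """Trace one layer with linear sound speed c(z) = c_top + g*z.
    The ray is a circular arc of radius R = 1/(p*g), where
    p = sin(theta)/c_top is the Snell constant; working with R rather
    than p stays well-behaved for near-horizontal rays."""
    p = math.sin(theta) / c_top
    c_bot = c_top + g * dz
    sin_bot = p * c_bot
    if abs(g) < 1e-12 or sin_bot >= 1.0:     # straight ray or turning point
        return None
    R = 1.0 / (p * g)                        # radius of curvature
    theta_bot = math.asin(sin_bot)
    dx = R * (math.cos(theta) - math.cos(theta_bot))        # horizontal offset
    dt = abs(math.log((c_bot / c_top) *
                      (1 + math.cos(theta)) / (1 + math.cos(theta_bot))) / g)
    return dx, dt, theta_bot
```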
The mutual transformation of coordinates between different coordinate systems is a very common geoprocessing task. High-accuracy transformation parameters, obtained by similarity transformation, are the foundation of coordinate transformations. The traditional least squares similarity transformation only considers the errors of the common point coordinates in one coordinate system, which does not conform to many practical situations: because the common points have errors in both coordinate systems, the Gauss-Markov (G-M) model is not strictly correct. The coordinate-parameterization least squares method overcomes this problem. Based on the principle of similarity transformation, the traditional least squares method is used to calculate plane datum transformation parameters for comparison.
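For concreteness, the traditional G-M baseline looks like the sketch below: only the target-system coordinates are treated as observations, which is exactly the simplification the coordinate-parameterization method removes. Names are illustrative.

```python
import numpy as np

def plane_similarity_params(src, dst):
    """Classical (Gauss-Markov) LS for the 4-parameter plane similarity
    x' = a*x - b*y + tx, y' = b*x + a*y + ty, with a = s*cos(k), b = s*sin(k).
    src, dst: (n, 2) arrays of common point coordinates; only dst is
    treated as observed."""
    n = len(src)
    A = np.zeros((2 * n, 4))
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
    a, b, tx, ty = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)[0]
    scale, rot = np.hypot(a, b), np.arctan2(b, a)
    return scale, rot, tx, ty
```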
The very low resolution (VLR) problem arises in many face recognition application systems given the increasing demand for camera-based surveillance. Existing face recognition algorithms cannot deliver satisfactory performance on VLR face images. While face super-resolution (SR) methods can be employed to enhance image resolution, existing dictionary-learning-based face SR methods are inadequate for VLR face images. To overcome this problem, we propose a novel SR face reconstruction method based on non-local similarities and multi-scale linear combinations and, subsequently, a new approach for VLR face recognition based on resolution-scale-invariant features. Experimental results show that the proposed dictionary-learning-based approach outperforms existing algorithms on public face databases, producing visual quality suitable for face recognition applications subject to the VLR problem.
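The paper's non-local, multi-scale reconstruction is not reproduced here; as a minimal related sketch, classic neighbor-embedding SR reconstructs an HR patch by reusing the linear-combination weights of the k nearest LR dictionary patches. All arrays are illustrative.

```python
import numpy as np

def neighbor_embedding_sr(lr_patch, lr_dict, hr_dict, k=5):
    """Reconstruct an HR patch as a linear combination of the k nearest LR
    dictionary patches, reusing the LR weights on the HR counterparts
    (classic neighbor embedding; the paper adds non-local and multi-scale
    terms not reproduced here). lr_dict: (N, p); hr_dict: (N, P)."""
    d = np.linalg.norm(lr_dict - lr_patch, axis=1)
    idx = np.argsort(d)[:k]                     # k nearest LR atoms
    w, *_ = np.linalg.lstsq(lr_dict[idx].T, lr_patch, rcond=None)
    w = w / w.sum()                             # normalized combination weights
    return hr_dict[idx].T @ w
```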