2017 Vol. 42, No. 10
Gullies are a form of soil erosion caused by multiple external forces. Given the heavy labor required by traditional gully monitoring methods and the strong influence of complex terrain, this paper presents a terrestrial laser scanning (TLS) method for gully monitoring and erosion estimation in sparsely vegetated areas. A typical gully on the east bank of Guanting Reservoir in Hebei Province was taken as a case study, using three phases of TLS data collected over a two-year period. Through TLS point cloud acquisition, registration, filtering, resampling, and surface fitting, surface models of the gully were reconstructed at different spatial resolutions, and terrain information was extracted from these models. Erosion was estimated with the Yang Chizhong filter at different spatial resolutions using the resampled TLS point clouds, and the estimates were compared. The results show that: 1) the estimated erosion was more stable and accurate when the spatial resolution of the resampled TLS point cloud was equivalent to the size of the concavo-convex structures (2~6 cm) of the gully; 2) the general elevation of the gully decreased by 2~20 cm during the monitoring period; and 3) the gully surface with the largest curvature showed the most significant erosion.
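The erosion estimate above rests on differencing gridded surfaces built from resampled point clouds at a chosen cell size. The Yang Chizhong filter itself is not reproduced here; the following is a minimal sketch of the grid-differencing step only, in which the mean-z gridding, the cell size, and the sample points are illustrative assumptions.

```python
from collections import defaultdict

def grid_dem(points, cell):
    """Resample a point cloud of (x, y, z) tuples to a grid DEM:
    mean elevation per cell (an assumed, simplistic resampling rule)."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell))].append(z)
    return {k: sum(v) / len(v) for k, v in cells.items()}

def erosion_volume(pts_t0, pts_t1, cell):
    """Net volume lost between two epochs, summed over cells
    present in both DEMs (positive = surface lowering)."""
    d0, d1 = grid_dem(pts_t0, cell), grid_dem(pts_t1, cell)
    common = d0.keys() & d1.keys()
    return sum((d0[k] - d1[k]) * cell * cell for k in common)
```

At equal resolutions this reduces multi-epoch comparison to a per-cell subtraction, which is why the choice of resampling resolution drives the stability of the estimate.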
This paper describes an automatic method for building extraction from high-resolution images. It consists of automatic classification and automatic post-processing using shadow information. Shadows and vegetation are detected automatically, while bare-land samples are selected manually. In addition, by analyzing the distribution of categories in shifted shadow regions, building sample regions are acquired automatically. Based on these four categories of samples, classification is implemented with an SVM classifier, from which the initial building results are extracted. The results are optimized in post-processing, which includes mathematical morphology to enhance the completeness of detected regions, region growing to recover undetected building regions, and removal of non-buildings using shadow rates on the intersection boundaries. This optimization yields the final results. Experimental results indicate that the shadow-analysis approach extracts building sample regions accurately, completely, and automatically, guaranteeing classification precision. The post-processing strategy effectively improves the completeness of building region detection and removes most non-buildings without shadows, so the accuracy of the final result increases greatly. Overall, the method is highly automated and applies to buildings in suburban areas.
Zebra crossings play an important role in public traffic safety, so their reconstruction is very helpful for reducing traffic accidents. An automatic approach for zebra crossing extraction and reconstruction from high-resolution aerial images is proposed in this paper. In the approach, zebra crossings are extracted by a JointBoost classifier based on GLCM (gray-level co-occurrence matrix) features and 2D Gabor features. A geometric parameter model based on spatial repeatability relationships is globally fitted to reconstruct the geometric shapes of the zebra crossings. Representative experiments under interference such as zebra crossings covered by pedestrians, shadows, and color fading were conducted to verify the validity of the proposed method in both extraction and reconstruction.
Due to the scale effects of changing window sizes in grid-based digital terrain analysis, it is important to determine an appropriate window size for calculating local topographic attributes in practice. A reasonable appropriate window size should be spatially variant rather than constant, such as the often-used 3×3 window. Currently, a potentially available approach for determining an appropriate window size characterizes a scale-effect curve derived from a specific local topographic attribute calculated at different window sizes, as proposed by Schmidt and Andrew (2005). However, there has been little evaluation of this approach, such as research on the effects of different topographic attributes, or on its applicability in areas with different terrain conditions under different grid sizes. In this paper we conduct an experiment to evaluate this approach. We test two local topographic attributes (slope gradient and profile curvature) in three study areas with different terrain conditions under different grid sizes. Experimental results show that the appropriate window sizes obtained from different topographic attributes were markedly different. Furthermore, for slope calculation in real applications, the spatially-variant appropriate window sizes determined by the tested approach perform almost the same as the constant 3×3 window, which is the traditional option. Thus the current approach to determining a spatially-variant appropriate window size might not be effective. Further study is needed to design a new and effective approach for determining the spatially-variant appropriate window size for calculating local topographic attributes.
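To make the window-size dependence concrete, slope gradient over an n×n window can be sketched as a central difference whose baseline grows with the half-window w, so that w=1 reproduces the conventional 3×3 case. The difference scheme and DEM layout below are illustrative assumptions, not the formulas used in the paper.

```python
import math

def slope_deg(dem, i, j, cellsize, w=1):
    """Slope (degrees) at cell (i, j) of a row-major DEM, from central
    differences over a (2*w+1) x (2*w+1) window; w=1 is the 3x3 case."""
    dz_dx = (dem[i][j + w] - dem[i][j - w]) / (2.0 * w * cellsize)
    dz_dy = (dem[i + w][j] - dem[i - w][j]) / (2.0 * w * cellsize)
    return math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))
```

On a smooth plane any w gives the same slope; on rough terrain the result varies with w, which is exactly the scale effect the tested approach tries to exploit.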
Distance metrics defined in a network context, such as road network distance and travel time, have been widely applied in spatial analysis and spatial statistics. In practice, however, it can be difficult to calculate these metrics properly due to limits of data accessibility and accuracy. The Minkowski distance function is a generalized distance metric in Euclidean space that yields various metrics as different values of its power parameter p are specified. In this article, we use the Minkowski distance function to approximate road network distance, taking advantage of its generality and flexibility. We also explore the relationships between the optimal values of p and a set of quantitative characteristics, including road network density and curvature, across road networks with distinct features. The results show that network distance is approximated better by the Minkowski distance with an optimized power parameter p than by Euclidean (straight-line) distance. In addition, the optimal value of p is affected largely by the curvature of a road network, which might provide an important clue for selecting the Minkowski distance for approximation. Taking geographically weighted regression (GWR) as an example, we calibrate a GWR model with Euclidean distance, with the Minkowski distance at the optimal power value p, and with travel time, respectively. The results show that the optimal Minkowski distance yielded coefficient estimates closer to those calibrated with travel time than the calibration using Euclidean distance did.
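A minimal sketch of optimizing the power parameter p, assuming a set of point pairs with known network distances and a simple grid search over candidate p values (the paper's actual calibration procedure is not specified in the abstract):

```python
def minkowski(a, b, p):
    """Minkowski distance between two 2D points; p=1 is Manhattan,
    p=2 is Euclidean."""
    return (abs(a[0] - b[0]) ** p + abs(a[1] - b[1]) ** p) ** (1.0 / p)

def fit_p(pairs, net_dists, p_grid):
    """Grid-search the p that minimizes squared error against the
    observed network distances."""
    def sse(p):
        return sum((minkowski(a, b, p) - d) ** 2
                   for (a, b), d in zip(pairs, net_dists))
    return min(p_grid, key=sse)
```

For a grid-like (Manhattan-style) street network the fitted p tends toward 1, while a sparse radial network pushes it toward 2, which is consistent with the curvature dependence reported above.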
Conventional road change detection methods have disadvantages in data acquisition period, data cost, algorithmic complexity, calculation difficulty, and periodic updating. In this paper, by making full use of the distribution and timeliness of taxi GPS trajectory data, a new road network topology change detection method based on vehicle GPS spatio-temporal trajectories is proposed. In this method, the similarity between a GPS trajectory vector and a partial topological vector is measured using a vector similarity measure model, and topological changes in the road network are detected by comparing the path changes of newly built, abandoned, and reconstructed roads. Experimental results show that this method can not only detect newly built, abandoned, and reconstructed parts of the road network, but also realize real-time change detection in urban road networks.
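The abstract does not fully specify the vector similarity measure model; one common choice for comparing a trajectory direction vector with a road-segment vector is cosine similarity, sketched here purely as an assumed stand-in:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two 2D vectors: 1.0 means same
    heading, 0.0 means perpendicular, -1.0 means opposite."""
    dot = u[0] * v[0] + u[1] * v[1]
    return dot / (math.hypot(u[0], u[1]) * math.hypot(v[0], v[1]))
```

Under such a measure, trajectories whose headings persistently disagree with every stored segment vector flag a candidate topology change.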
By analyzing the performance of view-independent and view-dependent terrain generation algorithms in real-time 3D rendering, we find that the view-independent algorithm cannot achieve a high resolution when the terrain model is established, causing a lack of scene fidelity. If a high resolution is required, the view-independent terrain generation algorithm must store a large amount of data and take a long time to complete model rendering. Therefore, to improve the fidelity of 3D scenes, we propose an optimized view-dependent terrain generation algorithm. The algorithm focuses on terrain splitting, view accuracy calculation, observation view distance calculation, and terrain visibility trimming. In terrain splitting, a binary triangle tree is used to split the terrain nodes and represent the relationships between adjacent nodes. In the view accuracy calculation, each terrain block calculates the variance of the node error in each frame. In the observation view distance calculation, we calculate the distance between the observation point and each block in each frame. In terrain visibility trimming, terrain blocks outside the field of view are identified and excluded from rendering to reduce the amount of computation. Using rendering accuracy as the measurement, we compare the performance of the optimized view-dependent terrain generation algorithm and the view-independent algorithm in experiments. In addition, using rendering efficiency as the measurement, we compare the optimized view-dependent algorithm with the original algorithm, and generate a three-dimensional terrain model using the proposed optimization algorithm.
The PSInSAR technique separates PS phase components by extracting time-dimensional PS points based on various temporal and spatial statistical properties, yielding high-precision surface deformation monitoring results. As the main error source in InSAR, atmospheric signals can be isolated from the other components of the residual phase by classical filters in the spatial and temporal domains. Optionally, after 3D unwrapping, high-pass filtering can be applied to the unwrapped data in time, followed by low-pass filtering in space, to remove the remaining spatially correlated errors (atmosphere and orbit errors). However, when the deformation rate is large, the spectra of the various contributing factors overlap and classical filters fail. This paper proposes a methodology that automatically chooses a smoothing parameter, based on a fast, robust version of the discrete smoothing spline, in place of classical filtering, to separate the phase components effectively.
This paper proposes a vector polygon pattern recognition method based on the wavelet descriptor. To minimize the amount of calculation, we compute the wavelet coefficient matrices of the target and template polygons and use them to calculate the similarity between the two polygons; this similarity determines whether the polygons match. Due to the nature of the selected wavelet, the comparison uses coefficients that reflect the characteristics of the polygon, making recognition more effective. Experimental results show that the method is efficient and insensitive to translation, rotation, and scaling.
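The specific wavelet and descriptor are not given in the abstract. As an illustration of the general idea — comparing shapes through a few low-frequency wavelet coefficients of a boundary signature rather than the full boundary — here is a sketch using a one-level Haar transform; the boundary signature and the Euclidean comparison rule are assumptions:

```python
def haar(signal):
    """One-level Haar transform of an even-length sequence:
    returns (approximation, detail) coefficient lists."""
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2.0
              for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2.0
              for i in range(len(signal) // 2)]
    return approx, detail

def similarity(sig_a, sig_b):
    """Euclidean distance between approximation coefficients of two
    equal-length boundary signatures; lower means more alike."""
    ca, _ = haar(sig_a)
    cb, _ = haar(sig_b)
    return sum((x - y) ** 2 for x, y in zip(ca, cb)) ** 0.5
```

Comparing only the approximation coefficients halves the work per match, which is the calculation saving the method aims for.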
This paper proposes a new cycle slip detection and repair method for single-frequency GNSS receivers based on epoch differencing. Taking the previous and current epochs as the base and rover stations respectively, a posteriori standard errors and residuals of the observations are derived from relative positioning and robust estimation, and single-frequency cycle slip detection and repair are carried out. Experiments with measured data suggest that the success rate of detecting epochs with cycle slips is 100%, and that more than 95% of abnormal satellites can be detected when more than four observed satellites are free of cycle slips and the proportion of satellites with cycle slips is less than 30%. With an excessive number of abnormal satellites, the detection success rate descends correspondingly. The success rate of cycle slip repair can reach 100% when detection succeeds.
We propose a non-recursive minimum discontinuity phase unwrapping algorithm for improved efficiency. After analyzing the principles of the minimum discontinuity phase unwrapping algorithm, a stack is used to store intermediate results during the edge spreading and circle canceling processes, and a non-recursive phase unwrapping algorithm is implemented. The new algorithm is combined with the quantized quality-guided phase unwrapping algorithm, which enhances unwrapping efficiency by restricting the optimization area. Unwrapping tests on InSAR and InSAS interferograms show that the proposed method maintains high-quality unwrapping results and greatly improves efficiency.
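The stack device described above — pushing intermediate state explicitly instead of recursing — can be illustrated with a quality-guided region growth, used here only as a stand-in for the edge-spreading step (the quality map, seed, and threshold are assumptions, not the paper's algorithm):

```python
def flood_region(quality, seed, threshold):
    """Iterative (stack-based) region growth over a 2D quality map:
    collects all 4-connected cells reachable from seed whose quality
    meets the threshold, with no recursion and no call-depth limit."""
    rows, cols = len(quality), len(quality[0])
    visited, stack = set(), [seed]
    while stack:
        i, j = stack.pop()
        if (i, j) in visited or not (0 <= i < rows and 0 <= j < cols):
            continue
        if quality[i][j] < threshold:
            continue
        visited.add((i, j))
        stack.extend([(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)])
    return visited
```

Replacing recursion with an explicit stack is what makes large interferograms tractable: the spread can cover millions of pixels without exhausting the call stack.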
The Larsemann Hills, located on the Ingrid Christensen Coast of Princess Elizabeth Land in East Antarctica, are an ideal area for Antarctic ice sheet and oceanographic studies. Digital elevation models (DEMs) are important to many geoscientific and environmental studies in Antarctica, and due to the relatively poor coverage of ground-based surveys, the main data source for developing Antarctic DEMs is satellite altimetry. The newest operating satellite-borne altimeter for ice applications flies on the ESA satellite CryoSat-2, launched in April 2010. Based on CryoSat-2 data collected during the austral winters of 2013 and 2014 and ground-based elevation points from China, India, and Australia, a new 200 m DEM for the Larsemann Hills, termed LA-DEM, was derived by ordinary Kriging. The accuracy of LA-DEM was assessed with residual elevation points. The results show that the accuracy of LA-DEM is about 19.7 m, better than four commonly used Antarctic DEMs.
Navigation users benefit significantly from BDS/GPS positioning fusion in terms of availability, accuracy, and reliability. However, for single point positioning, systematic biases between multi-GNSS systems cannot be eliminated completely, so the accuracy of positioning and navigation is not always improved by undifferenced multi-GNSS measurements. In this paper, an integrated BDS/GPS positioning model with unknown systematic parameters that compensate for systematic bias is proposed. Furthermore, a Bayesian estimation of the fusion positioning model is investigated, in which a priori information on the additional parameters is taken into account. Real data collected from different areas with different types of receivers are used to verify the new algorithms. The results show that: (a) receiver-dependent inter-system biases are quite evident, and the size of the system bias varies with receiver type; (b) the precision of fusion positioning is improved significantly by introducing the additional parameters into the functional model; and (c) the Bayesian fusion positioning model can still obtain an ideal position solution when the number of visible satellites is insufficient.
A co-localization method for lunar rover positioning based on very long baseline interferometry (VLBI) and celestial navigation systems (CNS) is presented in this paper. A federated Kalman filter is utilized to implement optimal estimation of position information in order to enhance the reliability and fault tolerance of the system. Experimental results calculated with measurement data from the Chang'E-3 (CE-3) mission demonstrate that this method improves the positioning accuracy of the lunar rover compared with VLBI alone or with joint calculation by the least squares method. Furthermore, this method also guarantees the reliability and stability of lunar rover positioning.
Precise undifferenced ambiguity resolution is the key to obtaining high-precision ionospheric delay from phase observations. Generally, extra-wide-lane (EWL), wide-lane (WL), and narrow-lane ambiguities are needed in this process under triple-frequency conditions. The MW-combination wide-lane ambiguity may be fixed to a wrong integer because of code hardware delay and observation noise. In this paper, BDS triple-frequency observations and GIM products are applied to resolve the wide-lane ambiguity with a fixed EWL ambiguity and a phase geometry-free (GF) combination. In addition, the high-precision ionospheric delay is reconstructed and the code hardware delay is separated. Test results show that the wide-lane ambiguity fixing success rate rises to 100% when assisted with GIM information, and the ambiguity is free of systematic bias. There is a difference of about 1.0 m between the reconstructed ionospheric delay and the GIM corrections, corresponding to a precision of 6 TECU. The standard deviation of the separated code hardware delay is less than 0.3 m.
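The MW (Melbourne-Wübbena) combination mentioned above is a standard construction: wide-lane phase minus narrow-lane code, scaled to cycles. A sketch of the float wide-lane ambiguity it yields, using BDS B1/B2 frequencies and a phase convention of range plus N·λ as illustrative assumptions:

```python
C = 299792458.0  # speed of light, m/s

def mw_widelane(L1, L2, P1, P2, f1, f2):
    """Melbourne-Wubbena combination: float wide-lane ambiguity in
    cycles. L1, L2 are carrier phases in metres (assumed convention:
    range + N * wavelength); P1, P2 are code ranges in metres."""
    lam_wl = C / (f1 - f2)                        # wide-lane wavelength
    phase_wl = (f1 * L1 - f2 * L2) / (f1 - f2)    # wide-lane phase, m
    code_nl = (f1 * P1 + f2 * P2) / (f1 + f2)     # narrow-lane code, m
    return (phase_wl - code_nl) / lam_wl
```

Because the geometry, clock, and first-order ionosphere cancel, what remains is the wide-lane ambiguity plus code hardware delay and noise — which is exactly why, as the abstract notes, that delay can push the rounded value to the wrong integer.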
The Position and Orientation System (POS) is a critical element of mobile mapping systems. Because of the complexity of system integration, the center of the POS is hard to observe directly. Thus, a cooperative target can be used to track the motion state of a POS if the relative position between the target and the POS is determined. Based on the positioning equation of mobile mapping systems, we analyze calibration techniques for cooperative-target lever-arm parameters. An error model was designed to predict the accuracy of the mounting parameters of cooperative targets, considering the impact of errors such as laser ranging error, angular measurement error, position error, orientation error, and scale factor error. Outdoor experiments verified the feasibility of this calibration method and showed that it can be used for testing the dynamic position accuracy of a POS.
The weighted total least squares (WTLS) method based on the partial errors-in-variables (PEIV) model is used to solve the inversion parameters of a crustal strain model. It considers not only the error of the observations (displacement or velocity field), but also the error effects in the coefficient matrix, which is generally composed of monitoring point coordinates. Taking the special structure of the coefficient matrix in the geodetic inversion model into account, we ensure that repeated coordinates receive the same residual and that constants are not allocated any correction. The method used in this paper meets these requirements by separating the random elements from the constant elements using the partial errors-in-variables model. All calculation formulae for crustal strain (rate) parameter inversion based on partial errors-in-variables, using monitoring point displacement or velocity fields, are deduced. In addition, the correction derived from weighted least squares (WLS) is used to analyze the effect of the random coefficient matrix, and the discrepancy between the WLS and WTLS solutions is investigated. Because of the complexity of the WTLS solution, we propose a formulation relating the WLS and WTLS solutions based on Xu (J Geod 86:661-675, 2012). A simulation using data from the Sichuan-Yunnan region permits a comparison and analysis of the effect of the random design matrix. The experimental results reveal that the effect of the random coefficient matrix on the inversion of the crustal strain (rate) parameter model depends mainly on the magnitude of the GPS coordinates and of the crustal strain parameters themselves.
Code-phase divergences, which are absent for GPS, GLONASS, and Galileo, are commonly found in BDS geostationary (GEO), inclined geosynchronous orbit (IGSO), and medium Earth orbit (MEO) satellites. Several precise applications that use code observations are severely affected by these code biases; therefore, it is necessary to correct the biases in BDS code observations. Since the BeiDou satellite-induced code bias is confirmed to be orbit-type-, frequency-, and elevation-dependent, an improved code bias correction model for IGSO and MEO satellites based on a large amount of data was developed. To obtain the best fitting results, we analyzed the effects of the number and distribution of stations and of observation time on model estimation, and also considered the differing influence of multipath at different elevations. A robust estimation method controlled the observation quality. A dataset from 18 stations over a one-year period in 2015 was employed to estimate the correction model for the MEO satellites, and from four stations for the IGSO satellites. To validate the improved correction model, the effect of the code bias on precise point positioning (PPP) before and after correction was analyzed and compared. Results show that systematic variations were eliminated more completely after applying the improved correction model than with the traditional model. After correction, the positioning accuracy of the PPP solution improved and the convergence time decreased, showing better performance than results using the traditional model proposed by Wanninger and Beer.
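Elevation-dependent corrections of this kind are commonly tabulated at a few elevation nodes and interpolated linearly in between. The sketch below shows only that application step; the node set and bias values are illustrative assumptions, not the estimates of either the traditional or the improved model:

```python
def code_bias_correction(elev_deg, nodes, values):
    """Piecewise-linear interpolation of a satellite-induced code bias
    (metres) tabulated at ascending elevation nodes (degrees)."""
    if elev_deg <= nodes[0]:
        return values[0]
    for (e0, v0), (e1, v1) in zip(zip(nodes, values),
                                  zip(nodes[1:], values[1:])):
        if elev_deg <= e1:
            t = (elev_deg - e0) / (e1 - e0)
            return v0 + t * (v1 - v0)
    return values[-1]  # clamp above the last node
```

The correction is subtracted from the raw code observation per satellite and frequency before the observation enters the PPP filter.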
GPT2w is used to estimate slant tropospheric delay and is considered the best empirical model based on its nominal accuracy. Besides the model value of ZHD, it delivers blind values of meteorological parameters. We used the USNO database to validate the stated accuracy of the model, and IGRA data to assess the accuracy of the blind meteorological elements. A systematic bias in Tm was detected; after correcting this bias, the bias of the model ZTD against the USNO ZTD changed from -1.38 mm to -0.3 mm. This paper presents a fusion of the blind model and in situ data. The input parameters of the new method are in situ P, t, and hr (pressure, temperature, and relative humidity), the corrected blind Tm, and λ. This method performs better than the modified GPT2w model because of the in situ data, and better than the Saastamoinen model because it profits from an improved ZWD model. Without in situ data, the modified GPT2w is a good choice; when in situ data are available, the fusion method is recommended.
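The hydrostatic part of the delays compared above is conventionally computed with the Saastamoinen formula from surface pressure, latitude, and height. The formula itself is standard; the sample inputs in the test are assumptions:

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_rad, height_m):
    """Saastamoinen zenith hydrostatic delay (metres) from surface
    pressure (hPa), geodetic latitude (rad), and height (m)."""
    return 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.28e-6 * height_m)
```

With an in situ pressure in place of the blind GPT2w value, this term is essentially exact, so the fusion method's remaining error budget sits in the ZWD model, as the abstract argues.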
The influence of horizontal disturbing gravity (HDG) on the positional error of high-precision inertial navigation systems (INSs) was studied under different conditions of movement. A state-space equation expressing INS error with HDG was deduced, and analytic expressions for the INS position error caused by HDG were derived under three conditions of movement. Additionally, a method of computing this influence based on inertial navigation calculation was designed. Under the condition of uniform motion, the INS position error caused by the same HDG was calculated through both the analytic expressions and the inertial navigation calculation. The analytic expressions show that disturbing gravity varying between ±80 mGal (1 mGal = 10^-5 m/s^2) causes a maximum INS positional error of about 3 000 m. A comparison of the results from the two methods shows that the level and change of the INS positional error are basically consistent, and the validity of each method was verified by these results.
In the field of time-frequency decomposition, the local mean decomposition (LMD) method is applied in settlement monitoring, but mode mixing can appear during application, resulting in inaccurate deformation signal extraction. The ensemble local mean decomposition (ELMD) method mitigates the mode mixing of LMD by adding auxiliary noise to the original signal and using the statistical characteristics of that noise to remove the mode mixing. This paper uses simulated data to analyze the model error of the ELMD method and presents a parallel combination prediction method based on ELMD. Applied to high-speed railway bridge monitoring data, it decomposes a discrete, nonlinear, non-stationary signal into three product function (PF) components and one residual component. The method uses support vector machine and Kalman filter algorithms to predict these components, and empirically analyzes the superiority of ELMD against mode mixing as well as its overall feasibility. The results indicate that the parallel combination model based on ELMD eliminates the mode mixing problem of the LMD method very well and extracts the deformation signal accurately. In terms of prediction precision, the mean relative error can reach 8.3%, which may provide a reference for the prediction of deformation monitoring.
The 2015 Nepal Mw 7.9 earthquake occurred in the central segment of the Himalayan collision zone, where the rigid Indian plate thrusts beneath the Tibetan Plateau. Published focal mechanism solutions show the earthquake was dominated by thrust slip with minor right-lateral strike slip, so the event produced significant vertical deformation at the surface. Accurate coseismic vertical displacements in this region provide a rare chance to understand the long-term uplift of the Himalaya and southern Tibet. By processing resurveyed in situ GPS data, we obtained a high-precision coseismic GPS horizontal displacement field. Combining the coseismic GPS displacements with L-band InSAR line-of-sight (LOS) observations, we extracted the coseismic vertical deformation field of the Nepal earthquake with a mean uncertainty of 1~2 cm and a spatial resolution of 1 km×1 km. The result shows that Kathmandu was uplifted ~0.95 m by the main shock, while Mount Everest and Shishapangma subsided ~2-3 cm and ~20 cm, respectively. A two-dimensional elastic half-space dislocation model suggests that the mean rupture width of the Nepal earthquake was ~60 km and the average coseismic slip reached 4 m. Our results indicate that the slip deficit of this event was equivalent to a moment magnitude of Mw 7.89, assuming a rupture length of 120 km and a rigidity of 30 GPa, which is consistent with seismological estimates. The 2015 Nepal earthquake broke the trend of long-term uplift in the central segment of the higher Himalaya during the interseismic period. Whether this segment will continue to subside or will uplift after the event can be discriminated by continuous post-seismic geodetic measurements.
The impounding of the Three Gorges Reservoir has continued for 10 years since it began in 2003. In this paper, tilt data from the Xiannvshan fault zone from 2002 to 2009 are analyzed against the water level data of the Three Gorges Reservoir. The results reveal that the trend of the tilt data curve changes around impounding, showing that the impounding of the Three Gorges Reservoir affects nearby fault activity; the response to the short-term changing load during impounding is significant but does not fundamentally change the nature of the fault activity. The rate of water-level rise is found to match the rate of rise in the tilt observations during the three large impoundments, which further shows that the tiltmeter is able to record the load changes caused by the impounding of the Three Gorges Reservoir.