2016 Vol. 41, No. 11
For observational data containing both systematic and gross errors, this paper presents a method for locating and valuing gross errors by introducing the mixed Cook distance into the semiparametric model. First, by constructing a penalized least squares function, applying a Taylor expansion, and using the equivalence between the mean-shift model and the data-deletion model, the penalized least squares estimates of the parametric and non-parametric components are obtained for the data set with the i-th observation deleted, which is useful for locating gross errors. Second, with the mixed Cook distance serving as a diagnostic statistic, the corresponding formulas for the parametric and non-parametric components are derived in order to improve the accuracy of gross error location. Common forms of the parameters Q and C, which directly influence the mixed Cook distance, are given, as different choices of these parameters yield different results. By selecting appropriate parameters and calculating the Cook distances of the parametric and non-parametric components, the positions of gross errors are determined, so that systematic errors and gross errors can be separated from the observed data. Simulated computations and a real example show that the method can effectively determine the positions and values of gross errors, illustrating the effectiveness of the proposed approach.
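As a simplified illustration of the data-deletion diagnostic, the sketch below computes the classical Cook distance for an ordinary linear model with one simulated gross error; the paper's mixed Cook distance for the semiparametric model extends this idea to the parametric and non-parametric components. All data here are synthetic.

```python
import numpy as np

# Synthetic data: y = X @ beta + noise, with a gross error injected at index 7.
rng = np.random.default_rng(0)
n, p = 30, 2
X = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0.0, 0.05, n)
y[7] += 1.0                                   # the gross error

# Hat matrix, OLS residuals, and residual variance.
H = X @ np.linalg.inv(X.T @ X) @ X.T
e = y - H @ y
h = np.diag(H)
s2 = e @ e / (n - p)

# Classical Cook distance; a large D_i flags the i-th observation as influential.
D = (e**2 / (p * s2)) * (h / (1.0 - h) ** 2)
flagged = int(np.argmax(D))
```

The contaminated observation stands out by more than an order of magnitude over the next-largest distance, which is the behavior the diagnostic relies on.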
Using the latest lunar gravity field model GL0660b together with LP150Q and SGM150, we analyze the influence of different lunar gravity field models on the orbit determination of CE-3. First, the power spectra and error spectra of the three models are discussed; the GL0660b model significantly improves the spatial resolution and the precision of the spherical harmonic coefficients. Then the free-air gravity anomalies of the three models, truncated to different degrees and orders, are compared at different heights. Finally, the three gravity field models are applied in precise orbit determination. The results show that the position overlap differences of the 100 km×100 km orbit are about 22 m with the GL0660b model truncated to degree and order 150, the same level as those using the LP150Q and SGM150 models at their full degrees and orders. For the 100 km×15 km orbit, the position overlap differences with the GL0660b model truncated to degree and order 360 are at a level of 21 m, better than those obtained with LP150Q and SGM150.
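The degree power spectrum used to compare models is simple to compute from spherical harmonic coefficients; the sketch below uses synthetic coefficients with an assumed Kaula-style decay, not the actual GL0660b values.

```python
import random

# Synthetic degree power spectrum P_l = sum_m (C_lm^2 + S_lm^2); the
# coefficients follow an assumed Kaula-style decay, not a real model's values.
random.seed(0)
lmax = 50
power = []
for l in range(2, lmax + 1):
    sigma = 2.5e-5 / l**2            # assumed per-coefficient amplitude
    p = 0.0
    for m in range(l + 1):
        c = random.gauss(0.0, sigma)
        s = random.gauss(0.0, sigma) if m > 0 else 0.0
        p += c * c + s * s
    power.append(p)
# Comparing such spectra degree by degree is how the power and error
# spectra of different gravity field models are contrasted.
```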
Single-epoch ambiguity resolution remains a challenging issue in satellite navigation and positioning. With the development of the BeiDou navigation satellite system, triple-frequency carrier phase observations can form combinations with more favorable properties, which can effectively improve the accuracy and reliability of single-epoch ambiguity resolution. Building on the traditional TCAR method and least squares collocation, this paper introduces a priori information on ionospheric delays and treats the delays, together with the position parameters and ambiguities, as unknowns to be solved. Meanwhile, the ambiguity search space is constricted by an ambiguity correlation method that accounts for ionospheric delays. The results show that adding a priori ionospheric delay information to the unknown parameters greatly improves the single-epoch ambiguity success rate, and that the ambiguity correlation method with ionospheric delay correction reduces the ambiguity search space and effectively improves the success rate and reliability of single-epoch ambiguity resolution.
This paper proposes a re-calibration method for the roll bias of a multi-beam sounding system to remedy deficiencies in the calibration of the multi-beam sounding transducer. Based on sounding data over smooth terrain, the roll bias angle of the transducer, together with the dip of the terrain, can be calculated using weighted least squares. The obtained bias angle can then be used to correct the transducer roll bias in the vessel model. The results show that the corrected bathymetric data in the overlapping area of two adjacent swaths are more consistent than before, and the accuracy of the multi-beam bathymetric data is significantly improved.
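A minimal sketch of the weighted least-squares step, assuming a flat seabed so that the fitted across-track depth slope is attributable to the roll bias (the paper additionally separates the terrain dip). The bias value, swath geometry, and down-weighting of outer beams are all hypothetical.

```python
import math

# Hypothetical flat-bottom swath: a roll bias delta tilts the apparent seabed,
# so measured depth varies linearly with across-track distance.
delta_true = math.radians(0.8)              # assumed roll bias
y = [-100 + 10 * i for i in range(21)]      # across-track distances (m)
d = [50.0 + math.tan(delta_true) * yi for yi in y]

# Weighted least-squares line fit d = a + b*y, outer beams down-weighted.
w = [1.0 / (1.0 + (yi / 100.0) ** 2) for yi in y]
sw = sum(w)
sy = sum(wi * yi for wi, yi in zip(w, y))
sd = sum(wi * di for wi, di in zip(w, d))
syy = sum(wi * yi * yi for wi, yi in zip(w, y))
syd = sum(wi * yi * di for wi, yi, di in zip(w, y, d))

b = (sw * syd - sy * sd) / (sw * syy - sy * sy)
delta_est = math.atan(b)                    # recovered roll bias (rad)
```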
Satellite radar altimetry has been widely used to investigate elevation change and mass balance in the polar regions. Compared with traditional radar altimeters, CryoSat provides not only a denser network of ground tracks but also higher measuring accuracy. However, the long orbit cycle of CryoSat (369 days) is less suited to the conventional cross-over technique, which has been applied to longer time series. In this paper, a new method called the near repeat track method is proposed, and the related data preprocessing steps of gross error elimination and backscatter correction are also introduced, based on the data characteristics and a collaborative analysis of CryoSat and EnviSat. Using these methods, the elevation change along the PANDA transect of Antarctica from 2012 to 2014 was extracted and analysed. The result shows an ascending trend of 0.017±0.009 m/a that agrees well with the results of other research along the inspection route of the PANDA transect; however, the spatial distribution of the accumulation is not uniform. To check the results, the elevation change presented in this paper was compared with field measurement data, and the same trend was obtained.
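Per location, the trend extraction over near repeat tracks reduces to fitting a linear rate to elevation anomalies over time. A toy sketch with hypothetical anomalies sampled around the reported rate:

```python
# Toy near-repeat-track series: elevation anomalies dh (m) at decimal-year
# epochs t, drawn around an assumed trend of +0.017 m/a (values hypothetical).
t = [2012.0, 2012.5, 2013.0, 2013.5, 2014.0]
dh = [0.000, 0.010, 0.016, 0.027, 0.034]

# Ordinary least-squares slope = elevation change rate dh/dt (m/a).
n = len(t)
tbar = sum(t) / n
hbar = sum(dh) / n
slope = sum((ti - tbar) * (hi - hbar) for ti, hi in zip(t, dh)) \
    / sum((ti - tbar) ** 2 for ti in t)
```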
Time-variable series of Earth gravity models obtained from GRACE can continuously monitor medium- and large-scale mass variations in and on the surface of the Antarctic ice sheet. GRACE RL05 gravity solutions from January 2003 to July 2013, released by UTCSR, are used in this paper. Several filters, including the isotropic Gaussian, non-isotropic, Fan, Wiener, and de-correlation filters, are compared in the inversion of Antarctic ice sheet mass change rates. Through analysis and comparison, the following conclusions are obtained: (1) The signal and noise functions of the Wiener filter were calculated from 121 months of gravity models, and the result was very close to that of Gaussian smoothing with a 300 km radius; that is, the mass change can be fully extracted with a 300 km filter radius. (2) Increasing the filter radius is an effective way to improve the accuracy of the results, so a filter radius of 500 km is recommended for the Antarctic area; with the same filter radius, the mass change rates from different methods are basically consistent. (3) The de-correlation filter is more effective in reducing the systematic errors of the spherical harmonic coefficients. These conclusions help reconcile GRACE ice mass estimates obtained with different filter strategies.
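The isotropic Gaussian filter multiplies each degree-l coefficient by a smoothing weight; a common way to compute the weights is Jekeli's recursion, sketched here for the 300 km radius discussed above:

```python
import math

# Jekeli's recursion for Gaussian smoothing weights w_l; each spherical
# harmonic coefficient of degree l is multiplied by w_l.
R = 6371.0e3               # mean Earth radius (m)
r = 300.0e3                # Gaussian averaging radius (m)
b = math.log(2.0) / (1.0 - math.cos(r / R))

lmax = 60
w = [0.0] * (lmax + 1)
w[0] = 1.0
w[1] = (1.0 + math.exp(-2.0 * b)) / (1.0 - math.exp(-2.0 * b)) - 1.0 / b
for l in range(1, lmax):
    w[l + 1] = -(2 * l + 1) / b * w[l] + w[l - 1]
```

The weights decay with degree, damping the short-wavelength noise; a larger radius r gives a larger b denominator and hence faster attenuation, which is why increasing the radius smooths more aggressively.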
A grounding line is the boundary between the inland grounded ice sheet and a floating ice shelf, and an important parameter in glacier dynamics. Precise extraction of the grounding line has a great impact on Antarctic ice sheet mass balance studies and on the mathematical modeling of glacier dynamics. In this paper, the basic principles of grounding line extraction using DInSAR are introduced. The interference of ice flow in grounding line extraction is removed by double differential SAR interferometry (DDInSAR), and the grounding line is detected by interpreting the inner edge of the dense fringes induced by the ocean tide in the double differential interferograms. The mapping result is verified by comparison with the Antarctic grounding line product, demonstrating that DDInSAR is an effective technique suitable for large-scale, continuous, and high-precision grounding line extraction, thus laying a foundation for grounding line extraction and for understanding dynamic changes in the polar regions.
Based on the SMMR and SSM/I sea ice index datasets, we analyzed a consistent 36-year record of the Antarctic sea ice edge length from November 1978 to December 2014. Over this period, the edge length of Antarctic sea ice expanded at a rate of 19.54±16.31 km/a (p < 0.05). The analysis shows that the sea ice extent grows from March and declines from September every year. The sea ice edge length grows slowly from March to August, followed by a slow decline; it then rapidly increases in November, reaches a peak in December, and quickly falls. Generally, the maximum sea ice edge length occurs in December, and the minimum occurs in March. Sea ice edge length is related to both the sea ice extent and the fractal dimension. The fractal dimension shows the same trend as the sea ice edge length in March, November, and December, when the periodic ice edge length displays a trend opposite to the ice extent. In the other months, the ice edge length exhibits the same trend as the ice extent and the opposite trend to the fractal dimension. In conclusion, the Indian Ocean and Ross Sea sectors show positive yearly trends; the Bellingshausen Sea sector shows a negative yearly trend; the other two sectors show no obvious trend. All five sectors show no obvious trend most of the time.
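The fractal dimension of an edge can be estimated by box counting; the sketch below applies it to a synthetic straight edge (expected dimension close to 1) rather than real ice-edge data:

```python
import math

# Box-counting dimension of a synthetic straight edge in the unit square.
pts = [(i / 1000.0, i / 1000.0) for i in range(1001)]   # diagonal line

def box_count(points, eps):
    # Number of eps-sized grid cells that contain at least one point.
    return len({(int(x / eps), int(y / eps)) for x, y in points})

scales = [1 / 4, 1 / 8, 1 / 16, 1 / 32]
counts = [box_count(pts, e) for e in scales]

# The dimension is the slope of log(count) versus log(1/eps).
xs = [math.log(1 / e) for e in scales]
ys = [math.log(c) for c in counts]
xb = sum(xs) / len(xs)
yb = sum(ys) / len(ys)
D = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) \
    / sum((x - xb) ** 2 for x in xs)
```

A convoluted ice edge would yield a dimension noticeably above 1, which is what links edge length to the fractal dimension in the analysis above.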
Currently, the analysis centers (ACs) of the international GNSS continuous Monitoring and Assessment System (iGMAS) provide precise GNSS orbit products. To improve the stability and reliability of orbit products for high-precision users, the orbit products from the ACs are usually combined. In this paper, we present a robust least squares method for combining satellite orbits from different solutions. We first verify the method by combining the GPS and GLONASS final orbit products provided by the IGS analysis centers. Results show that the RMS differences between our combined GPS orbits and those from IGS are about 4 mm, and the RMS differences between our combined GLONASS orbits and those from IGS are about 5 mm. As the orbits of the different IGS ACs are computed in different reference frames, we first transform them to the reference frame of the IGS combined Solution INdependent EXchange format (SINEX) solutions before orbit combination. Results show that this transformation can improve the quality of the combined orbits by up to 2 mm. We then apply the method to analyze and combine the GNSS satellite orbit products provided by ten iGMAS ACs. For most iGMAS ACs, the results show that the precision of the GPS final and rapid orbits is better than 2.5 cm, and the precision of the GPS ultra-rapid orbits is better than 6 cm and 15 cm for the observed and predicted parts, respectively. The precision of the iGMAS combined final orbits is 2 cm, 2~3 cm, and 6 cm for GPS, GLONASS, and Galileo satellites, respectively. For BDS, the precision of the iGMAS combined final orbits is 1.5 m for GEO and 20 cm for MEO/IGSO satellites. The relatively lower precision of the combined GEO orbits is due to the different satellite antenna phase center offset (PCO) models adopted by different iGMAS ACs.
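A minimal sketch of robust combination for a single orbit coordinate, using a Huber-type iteratively reweighted mean to down-weight an outlying AC solution; the coordinate values and the tuning constant are hypothetical, not the iGMAS combination settings.

```python
# One coordinate (m) of the same satellite epoch as reported by five
# hypothetical ACs; the last solution is an outlier.
x = [25.3012, 25.3009, 25.3011, 25.3010, 25.4500]

c = 0.01                       # assumed Huber threshold (m)
w = [1.0] * len(x)
for _ in range(20):            # iteratively reweighted mean
    m = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
    w = [1.0 if abs(xi - m) <= c else c / abs(xi - m) for xi in x]
combined = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
```

The robust estimate stays close to the consistent cluster, whereas a simple mean would be dragged about 3 cm toward the outlier.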
The large number of bolts and screws attached to subway shield ring plates, along with the many metal stents and electrical equipment accessories mounted on tunnel walls, means that tunnel laser point clouds include many points off the tunnel section proper, referred to as non-points, which degrade the accuracy of modeling and deformation monitoring. This paper proposes a point cloud filtering method based on an elliptic cylindrical model. The original laser point cloud is projected onto a horizontal plane, a search algorithm is used to extract the edge points on both sides, and the tunnel central axis is fitted from them. The point cloud is then segmented into regions along the axis, and each region is iteratively fitted to a smooth elliptic cylindrical surface. This processing enables automatic filtering of the inner-wall non-points. Experiments on two groups of data gave consistent results, showing that the elliptic cylindrical model-based method effectively filters out non-points and thus provides high-quality point clouds for subway deformation monitoring.
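The per-section fitting-and-filtering step can be sketched as a least-squares conic fit with residual thresholding; the tunnel dimensions, noise level, and tolerance below are assumed, and the paper's full method iterates this over segments along the fitted axis.

```python
import numpy as np

# One cross-section: tunnel-wall points on an ellipse (axes assumed), plus
# five protruding "non-points"; fit A x^2 + B xy + C y^2 + D x + E y = 1 and
# drop points with a large algebraic residual.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
x = 2.75 * np.cos(t) + rng.normal(0.0, 0.002, t.size)
y = 2.60 * np.sin(t) + rng.normal(0.0, 0.002, t.size)
x[:5] *= 0.9                           # bolts etc. protruding from the wall
y[:5] *= 0.9

M = np.column_stack([x**2, x * y, y**2, x, y])
coef, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)
res = np.abs(M @ coef - 1.0)

TOL = 0.05                             # assumed residual tolerance
keep = res < TOL                       # wall points True, non-points False
```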
The paper presents a method for estimating the aboveground biomass of forest stands using vegetation indices and principal component analysis, and, in a case study in the Mt. Gongga region, combines remote sensing data (HJ-1B CCD2 and SPOT4 HRVIR) with field measurements to evaluate the method. The accuracy of aboveground biomass estimation was assessed through cross validation, and a comparative analysis was carried out for the HJ-1B CCD2 and SPOT4 HRVIR sensors to evaluate their abilities and differences in estimating the aboveground biomass of forest stands. The results showed that the retrieval model based on the simple ratio vegetation index performed better than those based on other vegetation indices, and that HJ-1B CCD2 was superior to SPOT4 HRVIR in the single linear regression model. For the model using multiple vegetation indices, the differences in aboveground biomass estimation were not apparent, according to the cross validation results for HJ-1B CCD2 (r: 0.5458; RMSE: 27.8114 t·ha^-1) and SPOT4 HRVIR (r: 0.5634; RMSE: 27.1696 t·ha^-1). Moreover, HJ-1B CCD2 performed better than SPOT4 HRVIR in the principal component analysis method. In general, both the HJ-1B CCD2 and SPOT4 HRVIR sensors can satisfy the need for aboveground biomass estimation in the Mt. Gongga region, with HJ-1B CCD2 data found to slightly outperform SPOT4 HRVIR.
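A minimal sketch of the single linear regression on the simple ratio vegetation index SR = NIR/Red; all plot values below are hypothetical, not the Mt. Gongga measurements.

```python
import math

# Hypothetical plots: NIR and Red reflectances with measured biomass (t/ha).
nir = [0.32, 0.36, 0.40, 0.45, 0.50, 0.28, 0.38, 0.43]
red = [0.08, 0.07, 0.06, 0.05, 0.05, 0.09, 0.065, 0.055]
agb = [55.0, 78.0, 105.0, 140.0, 160.0, 42.0, 92.0, 125.0]

sr = [n / r for n, r in zip(nir, red)]       # simple ratio vegetation index
m = len(sr)
xb = sum(sr) / m
yb = sum(agb) / m
sxx = sum((v - xb) ** 2 for v in sr)
syy = sum((v - yb) ** 2 for v in agb)
sxy = sum((u - xb) * (v - yb) for u, v in zip(sr, agb))

b = sxy / sxx                                # regression slope
a = yb - b * xb                              # intercept
r_corr = sxy / math.sqrt(sxx * syy)          # correlation coefficient
```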
This paper presents a SAR image segmentation method that combines regular tessellation with the Metropolis-Hastings (M-H) algorithm. First, the image domain is partitioned into a set of rectangular sub-blocks by regular tessellation, and the image is modeled on the assumption that the pixel intensities in each homogeneous region follow independent and identically distributed Gamma distributions. A region-based SAR image segmentation model is built within the Bayesian paradigm. An M-H scheme is then used to simulate the segmentation model, which segments the SAR image and estimates the model parameters simultaneously. In the M-H algorithm, three move types are designed: updating the parameter vector, updating the label field, and splitting or merging sub-blocks. Results on both real and simulated SAR images show that the proposed algorithm works effectively and efficiently.
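A stripped-down version of the label-field move type: a single-site M-H update under Gamma likelihoods with two regions of known mean (the paper additionally updates parameters and splits or merges sub-blocks). All data are synthetic.

```python
import math
import random

# Two-region Gamma model: intensity x ~ Gamma(L, mu_k / L) in region k,
# where L is the look number; region means are assumed known here.
random.seed(3)
L = 4
mu = [1.0, 5.0]
data = [random.gammavariate(L, m / L) for m in (mu[0],) * 30 + (mu[1],) * 30]

def loglik(x, m):
    # Log Gamma(L, m/L) density at x, dropping terms that do not depend on m.
    return -L * math.log(m) - L * x / m

labels = [random.randint(0, 1) for _ in data]
for _ in range(5000):                 # M-H sweeps: propose flipping one label
    i = random.randrange(len(data))
    cur, prop = labels[i], 1 - labels[i]
    a = loglik(data[i], mu[prop]) - loglik(data[i], mu[cur])
    if math.log(random.random()) < a:       # accept with prob min(1, e^a)
        labels[i] = prop
```

After a few thousand proposals the label field settles so that most pixels carry the label of the region whose mean best explains them.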
The complex K-Wishart distribution describes the multilook covariance and coherency matrices of polarimetric SAR (PolSAR) data statistically, capturing the non-Gaussian characteristics of heterogeneous scenes. Given the attractiveness of this approach for accurate description, it was applied to PolSAR classification. Two comparative unsupervised classification experiments using the Wishart and K-Wishart classifiers were designed for domestic airborne full-polarimetric SAR data from the Yigen test site in Inner Mongolia and the Zunhua test site in Hebei province. Preliminary results suggest that the K-Wishart classifier is more suitable for extracting forests and buildings. The performance of the K-Wishart classifier was also evaluated in terms of overall accuracy and stability.
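The baseline Wishart classifier's assignment rule can be sketched directly; the K-Wishart classifier extends this distance with a texture parameter, which is omitted here. The matrices below are toy diagonal examples.

```python
import numpy as np

# Wishart distance d_m(C) = ln|Sigma_m| + Tr(Sigma_m^{-1} C): a pixel
# covariance C is assigned to the class center with the smallest distance.
centers = [np.diag([1.0, 1.0, 1.0]), np.diag([4.0, 2.0, 1.0])]
C = np.diag([3.8, 2.1, 0.9])          # pixel close to the second class

def wishart_distance(C, sigma):
    sign, logdet = np.linalg.slogdet(sigma)
    return logdet + np.trace(np.linalg.solve(sigma, C))

label = int(np.argmin([wishart_distance(C, s) for s in centers]))
```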
In this study, we retrieved the land surface temperature (LST) of Guangzhou on January 14, 2013, based on the characteristics of the HJ-1B thermal infrared band and using a revised QK & B algorithm. The derived partial differential equations showed that an emissivity error of 0.01 results in an LST error of around 0.6 K. The LST error is negatively correlated with the atmospheric transmittance and positively correlated with the atmospheric transmittance error; a transmittance error of 0.1 results in an LST error of around 1 K. The atmospheric water vapor error and the LST error exhibit a linear relation; a water vapor error of 0.1 g/cm2 results in an LST error of around 0.2 K. The LST retrieval error is positively correlated with both the near-surface air temperature error and the mean atmospheric temperature error; a near-surface air temperature error of 1 K leads to an LST retrieval error of around 1 K. Overall, the LST retrieval error and interval ratio are related to the mean atmospheric temperature error as well as the near-surface air temperature error. The retrieved land surface temperature of Guangzhou was in strong spatial accordance with the MOD11_L2 LST product. The temperature difference curve exhibited a normal distribution concentrated in the range of -0.9℃ to 0.9℃. Six ground measurement spots in Guangzhou were chosen to compare the LST obtained by the revised QK & B algorithm with the ground-measured average surface temperature. The difference between the algorithm's LST and the measured ground temperature was around 0.31 K, whereas the MOD11_L2 product differed from the measured surface temperature by around 0.65 K; both were less than 1 K. By deriving the partial differential equations of the revised QK & B algorithm, a more detailed and precise error analysis of LST retrieval from HJ-1B/IRS was performed.
This study also provides a reference for other similar LST retrieval algorithms based on environmental satellite thermal infrared band, as well as a scientific basis for future improvement of LST retrieval accuracy.
Microwave data have become the leading datasets in polar remote sensing research. In this paper, rock outcrop information is extracted, based on the CFAR algorithm, from RADARSAT-1 synthetic aperture radar data, which provide high-resolution imagery with good contrast between rock outcrops and glacier ice. To choose suitable parameters for extracting rock outcrop information in mountainous areas, the effectiveness of different models was evaluated on representative datasets. In the experiments, the Weibull-distribution-based SO-CFAR method achieved a rock outcrop detection rate above 80% with an overall error rate below 8%, which verifies that the proposed method has great potential for analyzing rock outcrops in the Antarctic.
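A one-dimensional sketch of smallest-of (SO) CFAR: the clutter level at each cell under test is the smaller of the leading and lagging training-window means. In the paper the scale factor would be derived from the fitted Weibull parameters; here alpha is a fixed assumption.

```python
# Synthetic 1-D profile: uniform clutter with one bright return, standing in
# for a rock outcrop against glacier ice.
signal = [1.0] * 40
signal[20] = 12.0

train, guard, alpha = 8, 2, 4.0       # window sizes and scale are assumed
detections = []
for i in range(train + guard, len(signal) - train - guard):
    lead = signal[i - guard - train:i - guard]        # leading window
    lag = signal[i + guard + 1:i + guard + train + 1]  # lagging window
    level = min(sum(lead) / train, sum(lag) / train)   # smallest-of estimate
    if signal[i] > alpha * level:
        detections.append(i)
```

Taking the smaller of the two window means keeps the threshold low at clutter edges, which is why the SO variant suits scenes with abrupt rock/ice transitions.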
In polar exploration, it is of great importance to unify the representation of geospatial data and to integrate the output seamlessly. In this paper, a service-pool-based geographic data sharing model is proposed, based on the characteristics of polar geospatial data: multi-source, heterogeneous, and decentralized. Furthermore, a service-oriented geographic information sharing platform was developed specifically for the polar environment. The platform is built on the standard base maps from comprehensive environmental surveys and assessments, integrated with technologies such as resource-pool-based control, smooth multi-service linking, and thematic map rendering services. To validate the system, the authors standardized over 130 sheets of polar maps and shared the spatial information online in map form using the service-resource-pool-based data sharing model. The back-end GIS server also supports spatial analysis, which makes it possible to meet the requirements of multi-dimensional data sharing applications. The system can efficiently achieve data sharing for polar expeditions, provide comprehensive technical support for data integration, and offer a solid platform for subsequent data representation and sharing.
Intelligent scheduling of geographic information resources based on load balancing, to improve concurrent access to a service system, is a focus of current research. This paper presents a dynamic load balancing method: different types of geographic information services are matched with corresponding server groups, and the RED algorithm is combined with a double-threshold method to effectively judge the load state of the server nodes. Services are then scheduled according to weighted probabilities for a given period. Finally, an experimental system was built on a server cluster; the experiment illustrates the effectiveness of the proposed method.
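The dispatch step can be sketched as double-threshold state classification followed by weighted-probability selection; the thresholds and load figures below are assumed, and the RED-based load estimation itself is not reproduced.

```python
import random

# Classify each node by a double threshold, then dispatch requests with
# probabilities proportional to remaining capacity (thresholds assumed).
random.seed(42)
loads = {"node1": 0.20, "node2": 0.55, "node3": 0.92}   # current load ratios
T_LOW, T_HIGH = 0.4, 0.8

capacity = {}
for node, load in loads.items():
    if load >= T_HIGH:
        capacity[node] = 0.0          # overloaded: excluded this period
    elif load < T_LOW:
        capacity[node] = 1.0          # light load: fully schedulable
    else:
        capacity[node] = 1.0 - load   # normal: weight by free capacity

total = sum(capacity.values())
probs = {n: c / total for n, c in capacity.items()}
assignments = random.choices(list(probs), weights=list(probs.values()), k=1000)
```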
Volunteered geographic information (VGI) is a new kind of geospatial data collected by non-professional volunteers, which leads to uncertainty about its credibility. At present, little research assesses VGI quality according to its contributors' reputations. This paper proposes integrating user reputation with VGI trustworthiness. A user reputation model is first established, which aggregates a contributor's initial reputation and experience reputation. Then a computing model of VGI trustworthiness is developed, which takes into account factors such as the editing process of a geographic feature and the reputations of its contributors. Finally, historical polyline data from OpenStreetMap are used in experiments. The experimental results demonstrate that the quality of polyline features is highly positively correlated with their trustworthiness. Assessing VGI by means of its trustworthiness offers a fresh perspective for evaluating and screening VGI.
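A toy version of the two-stage idea: a contributor's reputation blends an initial value with experience, and a feature's trustworthiness aggregates reputations over its edit history. The weighting scheme below is an assumption for illustration, not the paper's model.

```python
# Hypothetical reputation model: blend initial reputation with experience
# (edit count capped, scaled by acceptance ratio); weights are assumed.
def reputation(initial, n_edits, accepted_ratio, w_exp=0.7):
    experience = min(n_edits / 50.0, 1.0) * accepted_ratio
    return (1 - w_exp) * initial + w_exp * experience

def trustworthiness(edit_reputations):
    # Later edits weigh more: weight i+1 for the i-th edit in history order.
    weights = range(1, len(edit_reputations) + 1)
    return sum(w * r for w, r in zip(weights, edit_reputations)) / sum(weights)

r1 = reputation(0.5, 60, 0.95)   # experienced, reliable contributor
r2 = reputation(0.5, 2, 0.50)    # newcomer
trust = trustworthiness([r2, r1, r1])   # feature last edited by contributor 1
```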
With the acceleration of land urbanization, the acquisition and update speed of urban foundational geographic data cannot keep pace with urban construction. Addressing this situation, the paper analyzes the characteristics of urbanization construction and their representation in geographic data, and proposes a Group-based Incremental Updating Model (GBIUM) built on incremental updating technology. The model generates group-based incremental information from the characteristics of geographic elements that are distributed in groups, and defines spatial operations on this group-based incremental information. Finally, the paper formulates a standardized data transmission format and updating workflow, takes sample regions in Wuhan and Shenzhen as experiments, and gives a detailed discussion of the update process of geographic elements in urbanization construction by combining experimental and statistical results. Analyses of different models of urbanization construction and the corresponding experiments illustrate the feasibility and efficiency of GBIUM in urban data updating.
Pedestrian detection is one of the key technologies for extracting information from large volumes of video data, and an important link in the video data mining process. It is a difficult problem because pedestrians vary from place to place and from time to time: changes in illumination and viewpoint, variability in shape, and non-rigid deformations all cause variation. To achieve fast and robust pedestrian detection, this paper proposes a detection algorithm based on sparse multi-scale image segmentation and a cascade deformable part model. The texture-based sparse multi-scale segmentation eliminates much of the background and extracts the regions of interest. Within the segmented regions of interest, a general method is used to build cascade classifiers from part-based deformable models such as pictorial structures. Pictorial structures describe objects as a collection of parts in a deformable configuration: each part captures the local appearance of a part of the body, while the deformable configuration is represented by spring-like connections between parts. The model focuses primarily on star-structured models and shows how a simple algorithm based on partial hypothesis pruning can speed up object detection. A discriminative procedure called latent SVM is used to train the models. Extensive experiments were conducted on the public datasets TUD-Crossing and TUD-Pedestrian. The results show that our algorithm slightly increases detection accuracy while markedly improving detection speed.
To automatically identify laser-printed documents, a new sparse representation algorithm based on Gabor features is proposed for printed document identification. Considering the toner-accumulation texture characteristics of laser-printed documents, the proposed method first extracts Gabor features of the image at multiple scales and orientations, then uses principal component analysis to reduce the Gabor feature dimension. Finally, different classifiers are applied to identify the laser-printed documents. Experimental results on our database show the method's efficiency and effectiveness, with a correct printer identification rate of 94.74%.
To address the scatter of position fixes from WiFi fingerprint-database localization and the error accumulation of Pedestrian Dead Reckoning (PDR), a loosely coupled fusion algorithm based on an Adaptive Weighted Extended Kalman Filter is presented. The method retains the high precision of WiFi localization while inheriting the continuity of PDR, which not only reduces the jumps between successive position fixes but also weakens the error accumulation, enhances the efficiency of the fusion algorithm, and ultimately improves the precision and stability of indoor localization. The results show that the method works well in indoor environments, improving accuracy by almost 22.9% compared with WiFi-only results.
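A one-dimensional stand-in for the loose coupling: each epoch, the PDR prediction and the WiFi fix are fused with a gain adapted from their assumed variances (the paper's Adaptive Weighted EKF generalizes this to the full navigation state). All positions and variances below are hypothetical.

```python
# 1-D fusion of PDR step predictions with noisy WiFi fixes (true positions
# would be 0, 1, 2, 3, 4 m along a corridor in this toy setup).
wifi = [0.1, 1.2, 1.9, 3.2, 3.9]    # WiFi positions (m), hypothetical
steps = [1.0, 1.0, 1.0, 1.0]        # PDR step displacements (m)

x, p = wifi[0], 1.0                  # state and its variance
q, r = 0.04, 1.0                     # assumed PDR / WiFi variances
track = [x]
for k, step in enumerate(steps):
    x_pred, p_pred = x + step, p + q          # PDR prediction
    gain = p_pred / (p_pred + r)              # adaptive weight on WiFi fix
    x = x_pred + gain * (wifi[k + 1] - x_pred)
    p = (1 - gain) * p_pred
    track.append(x)
```

Because the PDR variance is small, the fused track follows the smooth dead-reckoned motion and uses WiFi only to bound the drift, which is the mechanism behind the reduced jumping described above.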