2016 Vol. 41, No. 7
One of the challenges in building a smart city is to create intelligent, ubiquitous geospatial Web services and to make compositions of these services available to users. These geospatial Web services must be well tuned to a given context. Using AI planning techniques and semantic enhancement, this paper presents a dynamic, context-aware service composition method that transforms the service composition problem into a planning problem described in a standardized fashion using PDDL. The semantic representation of a geospatial Web service is modeled by extending the OWL-S ontology with the GeoContext, GeoContextPrecondition, GeoContextEffect and GeoContextBinding classes, which support geo-context and geo-context adaptation. Semantic information is used to enhance the composition process as well as to approximate the optimal composite service when no exact solution is found. Independence from specific planners is maintained. The generated plan is transformed into a WS-BPEL-compatible representation, which is executable on a business process execution engine. A case study on smart travel demonstrates the functionality, effectiveness and potential of the approach.
To further improve the accuracy of remote sensing image classification, a classification algorithm based on the Log-Gabor wavelet and Krawtchouk moments is proposed in this paper. First, multi-direction, multi-resolution filtering is performed on a remote sensing image with a Log-Gabor filter to extract its texture features. Meanwhile, Krawtchouk moment invariants of the image are calculated to serve as edge shape features. A complete feature vector is then constructed by combining the texture features extracted by the Log-Gabor wavelet with the Krawtchouk moment shape features. Finally, the image is classified according to the extracted feature vectors by a support vector machine, yielding the classification result. Extensive experimental results show that, compared with three recent remote sensing image classification algorithms (based on the Gabor wavelet, the Log-Gabor wavelet and Krawtchouk moments, respectively), the proposed algorithm achieves a significant improvement in both subjective visual effect and objective quantitative evaluation indices such as classification accuracy, making it an effective classification algorithm for remote sensing images.
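As an illustration of the texture-extraction step, the following sketch builds a Log-Gabor filter bank in the frequency domain and collects mean and standard-deviation statistics of the filter responses as a texture feature vector. The parameter values (f0, sigma_ratio, filter counts) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def log_gabor_bank(size, n_scales=4, n_orients=6, f0=0.1, sigma_ratio=0.65):
    """Build a bank of Log-Gabor filters in the frequency domain.
    f0 and sigma_ratio are hypothetical choices for illustration."""
    rows, cols = size, size
    u = np.fft.fftshift(np.fft.fftfreq(cols))
    v = np.fft.fftshift(np.fft.fftfreq(rows))
    U, V = np.meshgrid(u, v)
    radius = np.sqrt(U**2 + V**2)
    radius[rows // 2, cols // 2] = 1.0          # avoid log(0) at DC
    theta = np.arctan2(V, U)

    bank = []
    for s in range(n_scales):
        f_s = f0 * (2 ** s)                     # centre frequency per scale
        radial = np.exp(-(np.log(radius / f_s) ** 2)
                        / (2 * np.log(sigma_ratio) ** 2))
        radial[rows // 2, cols // 2] = 0.0      # Log-Gabor has no DC component
        for o in range(n_orients):
            angle = o * np.pi / n_orients
            d_theta = np.arctan2(np.sin(theta - angle), np.cos(theta - angle))
            angular = np.exp(-(d_theta ** 2) / (2 * (np.pi / n_orients) ** 2))
            bank.append(radial * angular)
    return bank

def texture_features(image, bank):
    """Mean and std of filter-response magnitudes as a texture feature vector."""
    F = np.fft.fftshift(np.fft.fft2(image))
    feats = []
    for filt in bank:
        resp = np.abs(np.fft.ifft2(np.fft.ifftshift(F * filt)))
        feats.extend([resp.mean(), resp.std()])
    return np.array(feats)
```

The statistics-of-responses feature vector is one common convention; the paper may use different response statistics before the SVM stage.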
A fast compression algorithm for hyperspectral images based on dispersion sorting in the transform domain is proposed. Considering the characteristics of hyperspectral data in the Hadamard domain, the proposed algorithm adaptively selects a favourable order and sorts the dimensions of the spectral vectors by dispersion. Consequently, the energy and the differences of the spectral vectors are concentrated in the lower dimensions, and the dimensions with high signal-to-noise ratio are moved into a low-dimensional subspace. Efficient eliminating inequalities are then constructed. Combined with the LBG (Linde-Buzo-Gray) clustering algorithm, the proposed algorithm quickly completes the encoding of hyperspectral images via vector quantization. Experiments were conducted under different compression ratios. The results show that the proposed compression algorithm for hyperspectral images significantly reduces computational complexity while achieving fast compression on the precondition of good recovery quality.
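The dispersion-sorting idea can be sketched as follows: transform each spectral vector into the Hadamard domain, then reorder the dimensions by their dispersion (here, variance across spectra) so that energy concentrates in the leading dimensions. The Sylvester construction of the Hadamard matrix and the variance-based ordering are assumptions about details the abstract does not specify:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def dispersion_sort(spectra):
    """Transform spectra (n_pixels x n_bands) to the Hadamard domain and sort
    dimensions by dispersion (variance), so energy concentrates in the leading
    dimensions. Returns the sorted coefficients and the sort order."""
    n_bands = spectra.shape[1]
    H = hadamard(n_bands) / np.sqrt(n_bands)      # orthonormal transform
    coeffs = spectra @ H
    order = np.argsort(coeffs.var(axis=0))[::-1]  # descending dispersion
    return coeffs[:, order], order
```

After this reordering, partial-distance elimination tests in vector quantization can reject candidate codewords using only the leading dimensions, which is the source of the speed-up.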
In object-oriented change detection, the accuracy of the final result depends directly on the change threshold. To address this problem, this paper presents a novel object-oriented change detection method using fuzzy comprehensive evaluation. First, multi-scale segmentation is used to obtain the initial objects; then, optional features for each object are chosen. Several criteria, such as object change vector analysis, the chi-square transformation, vector similarity and the correlation coefficient, are treated as factors to obtain the “synthetic inter-layer logical values” of the fuzzy comprehensive evaluation model. The fuzzy comprehensive evaluation model is used to decide whether the target object has changed or not. Finally, the result of the fuzzy comprehensive evaluation model is compared with the result of each single “inter-layer logical value” obtained using Otsu threshold segmentation. Experiments were carried out on SPOT5 multi-spectral remote sensing imagery. The experimental results illustrate that the proposed model can integrate spectral and texture features and overcome the defects caused by using a single criterion. The fuzzy comprehensive evaluation model is shown to outperform the other methods.
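A minimal sketch of the fuzzy comprehensive evaluation step, assuming a simple weighted-average aggregation operator and illustrative factor weights (the paper's actual operator and weighting scheme are not given in the abstract):

```python
import numpy as np

def fuzzy_evaluate(memberships, weights, threshold=0.5):
    """Fuzzy comprehensive evaluation by weighted averaging.

    memberships : (n_objects, n_factors) degrees in [0, 1] that each factor
                  (e.g. change vector analysis, chi-square transformation,
                  vector similarity, correlation coefficient) indicates
                  'changed'.
    weights     : factor weights (illustrative; normalized internally).
    Returns the composite membership per object and a changed/unchanged flag.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    composite = memberships @ weights
    return composite, composite > threshold
```

The composite value plays the role of the “synthetic inter-layer logical value”: an object is declared changed only when the weighted evidence from all factors exceeds the threshold, rather than on any single criterion.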
To solve the vegetation filtering problem for high-steep slope point cloud data in complex scenes, the multi-scale dimensionality features of vegetation and rock laser point clouds on high-steep slopes are studied first. Then an SVM (support vector machine) is utilized to build a classifier. Finally, a vegetation filtering algorithm for high-steep slope laser point clouds is proposed, and a three-dimensional laser point cloud filtering software package, LIDARVIEW, is written. The data show that vegetation at different scales in complex scenes is well identified and that the classification accuracy of the filtering algorithm is high. The algorithm is not affected by the density or occlusion of the laser point cloud or by complex topography, and it is also suitable for filtering airborne LiDAR point cloud data. The classification accuracy for rock under high vegetation cover is greater than 93%, while under low vegetation cover it is higher than 97%. The algorithm is of great significance for terrain measurement of hilly high-steep slopes with complex topography.
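The dimensionality features can be illustrated with the common eigenvalue-based formulation (linearity, planarity and scattering of a point neighbourhood); the paper's exact multi-scale definition may differ, but features of this family are what an SVM would separate vegetation from rock with:

```python
import numpy as np

def dimensionality_features(neighborhood):
    """Dimensionality features from the eigenvalues of a point
    neighbourhood's covariance matrix (a common formulation; assumed here,
    not taken from the paper)."""
    pts = neighborhood - neighborhood.mean(axis=0)
    cov = pts.T @ pts / len(pts)
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]    # l1 >= l2 >= l3
    s1, s2, s3 = np.sqrt(np.maximum(evals, 0))
    if s1 == 0:
        return 0.0, 0.0, 0.0
    a1d = (s1 - s2) / s1      # linearity  (stems, edges)
    a2d = (s2 - s3) / s1      # planarity  (rock faces, bare ground)
    a3d = s3 / s1             # scattering (volumetric vegetation canopy)
    return a1d, a2d, a3d
```

Computing these at several neighbourhood radii gives the multi-scale feature vector fed to the classifier: rock faces are dominantly planar across scales, while vegetation scatters.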
Assessing soil organic carbon (SOC) effectively is crucial for understanding the global carbon cycle and achieving sustainable management of agricultural systems. Visible/near-infrared spectroscopy has been widely used to retrieve SOC content. However, retrieval models based on visible/near-infrared spectroscopy generally have regional limitations. The objective of our work was to study the transferability of models between regions, using the spectra and SOC measurements of soil samples collected from Zhongxiang and Honghu (100 and 96 samples, respectively), Hubei province, People's Republic of China. Our results show that a regional model calibrated with the Zhongxiang or Honghu dataset could not be used in the other region. However, the model calibrated with the entire Zhongxiang dataset plus 30 samples from Honghu performed well when estimating the SOC contents in Honghu (R2=0.88, RMSE=2.51 g·kg-1). Although the transferability of a regional model is very limited, this study illustrates that combining a small set of samples from the target region with the regional soil spectral library of another region can improve the performance of SOC estimation in the target region, and thus reduce the cost of sampling, measurement and determination there.
Contour lines are one of the commonly used data sources for DEM generation, and the interpolation method has a significant impact on the accuracy of the contour-derived DEM. However, most existing interpolation methods generate low-accuracy DEMs due to unreasonable methodologies. In this paper, an interpolation method is presented to build a DEM from contour lines by integrating morphological reconstruction and distance transformation with obstacles. Specifically, morphological reconstruction is used to obtain the elevation values of the higher and lower contour lines enclosing any spatial point between two contour lines, and distance transformation with obstacles is used to obtain the geodesic distances from the spatial point to the higher and lower contour lines, respectively. Finally, linear interpolation along the steepest slope is used to obtain the elevation values of the pixels to be interpolated. The experiments demonstrate that, when only contours are employed to interpolate a DEM, the accuracy of the DEM generated by our method is higher than that of the DEM generated by Hu Peng's MADEM method.
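The final step reduces to distance-weighted linear interpolation between the two enclosing contours. A sketch, with z_low and z_high the contour elevations from morphological reconstruction and d_low and d_high the geodesic distances from the distance transformation with obstacles:

```python
def interpolate_between_contours(z_low, z_high, d_low, d_high):
    """Linear interpolation along the steepest-slope path: a pixel at
    geodesic distance d_low from the lower contour (elevation z_low) and
    d_high from the higher contour (z_high) gets the distance-weighted
    elevation between the two."""
    return (z_low * d_high + z_high * d_low) / (d_low + d_high)
```

Using geodesic rather than Euclidean distances is what lets the obstacle-aware distance transform respect ridges and valleys between the contours.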
A digital watermarking algorithm based on the DCT is presented to protect the copyright of DEM data. The aim of the current study is to solve the problem of balancing the robustness requirements of the watermark with the high-precision needs of the DEM; that is, the watermarking algorithm should be not only near-lossless but also robust and adaptive. The watermark embedding strength is calculated by combining the precision of the slope and aspect of the DEM with the Watson perceptual model, so that for a given slope and aspect precision the embedding strength can be calculated automatically. Moreover, the methodology embeds the watermark in terrain lines in order to improve robustness. The experimental results are satisfactory not only in the near-lossless preservation of the DEM's slope and aspect precision, but also in the transparency of the watermark. Besides, the contour lines and the SOS (slope of slope) index derived from the watermarked DEM are near-lossless as well. In addition, the watermark can resist JPEG compression and cropping attacks, which means that the watermark is robust.
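A schematic of additive DCT-domain embedding, with a scalar strength standing in for the adaptive strength the paper derives from slope/aspect precision and the Watson model; the coefficient positions and the terrain-line selection are simplified assumptions for illustration:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (C @ C.T == I)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def embed_watermark(block, bits, strength):
    """Embed watermark bits additively into DCT coefficients of a DEM block.
    'strength' stands in for the adaptive embedding strength; the
    anti-diagonal mid-frequency positions are an illustrative choice."""
    n = block.shape[0]
    C = dct_matrix(n)
    coeffs = C @ block @ C.T
    positions = [(i, n - 1 - i) for i in range(len(bits))]
    for (r, c), bit in zip(positions, bits):
        coeffs[r, c] += strength * (1 if bit else -1)
    return C.T @ coeffs @ C            # inverse DCT -> watermarked block
```

Because the DCT is orthonormal, the RMS elevation change per block is bounded by the embedding strength, which is how a slope/aspect precision budget can be translated into a maximum allowable strength.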
Consistency is a basic indicator of spatial data quality assessment. As the opposite of consistency, the inconsistency of spatial data refers to conflicts or contradictions between spatial objects at the same scale or at different scales. Spatial data inconsistency has been a common concern of the international geographical information community since the 1990s, and it has a direct effect on spatial data integration, cartographic generalization and spatial data updating. Previous studies mainly focus on handling inconsistencies between line objects at the same scale. Less attention has been paid to handling inconsistencies between objects from maps at different scales, or to related issues such as “a building dropped in a river” arising in cartographic generalization when deriving a smaller-scale map from a larger-scale map. Hence, this paper proposes an approach based on data assimilation principles for handling inconsistency between rivers and buildings. The proposed method first extracts the boundaries of the rivers at both scales by means of spatial operations (e.g. split, intersection). Then, morphing transformations are performed from the larger-scale river boundary towards the other one. Finally, with respect to topological and distance constraints between the river boundary and a building, an optimal assimilated river boundary is determined. Experiments demonstrate that the proposed method can effectively handle inconsistency between a generalized river and buildings at a larger scale, providing a promising solution for eliminating topological conflicts while consistently deriving a smaller-scale map from a larger-scale map.
A sub-region of the Arctic coastal plain of Alaska was divided according to latitude and distance from the coastline in preparation for selecting the spatial factors that influence the average lake surface water temperature (LSWT). After analyzing the relationship between each factor and LSWT while isolating the other factors, factors with a nonlinear relationship were recalculated via logarithmic or exponential transformation to satisfy a linear relationship. The most relevant factors, including lake area, compactness index, mean depth, distance to the Chukchi Sea, distance to the Beaufort Sea and latitude, were used to construct the LSWT spatial distribution model. To decrease the spatial non-stationarity of the models, principal component analysis was applied to eliminate the effect of multicollinearity among the variables. The LSWT spatial distribution models were then constructed by ordinary least squares (OLS) regression and geographically weighted regression (GWR), respectively. The validation results show that the accuracy of the GWR model improved compared with the OLS model: the coefficient of determination, R-square, rose from 0.615 to 0.752, while the MAE and RMSE decreased from 0.48 to 0.38 and from 0.65 to 0.44, respectively. This demonstrates that the improved GWR model can reasonably depict the spatial distribution of thawed lake surface water temperature on the Arctic coastal plain of Alaska.
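The GWR step can be sketched as locally weighted least squares with a Gaussian distance kernel; bandwidth selection and the local diagnostics that full GWR implementations (e.g. the mgwr package) provide are omitted here:

```python
import numpy as np

def gwr_predict(X, y, coords, x0, coord0, bandwidth):
    """Predict at one location with geographically weighted regression:
    weight observations by a Gaussian kernel of distance to the prediction
    location, then solve weighted least squares. A minimal sketch; the
    bandwidth is assumed given rather than cross-validated."""
    d = np.linalg.norm(coords - coord0, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)        # Gaussian spatial kernel
    Xb = np.column_stack([np.ones(len(X)), X])     # add intercept column
    W = np.diag(w)
    beta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)   # local coefficients
    return np.r_[1.0, x0] @ beta
```

Because the coefficients are re-estimated at every location, GWR captures the spatial non-stationarity that a single global OLS fit averages away, which matches the accuracy gain reported above.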
As an important aspect of spatial relation similarity, directional similarity is widely used in pattern recognition, spatial query, spatial data matching, quality assessment of cartographic generalization and consistency checking of multi-resolution spatial data; its calculation is therefore necessary. The most influential model for calculating directional similarity was proposed by Goyal: it is based on the direction-relation matrix and derives the similarity value by calculating the least cost of transforming one direction-relation matrix into another. However, that model is complex, and its results are inconsistent with human cognition. To overcome these limitations, an improved model for calculating the similarity of spatial direction relations between areal objects is proposed. The new model improves on Goyal's model in three ways. First, the direction-relation matrix model for calculating spatial direction relations between simple areal objects is extended to accommodate the calculation of spatial direction relations between groups of areal objects. Second, the definition of the direction-relation distance between single-element direction-relation matrices is improved according to cognitive knowledge of direction relation differences. Finally, a new method based on the minimum element is presented for calculating the distance between two multi-element direction-relation matrices, which simplifies the calculation. Two experiments are performed to validate the effectiveness of the proposed model: a cognition experiment comparing the calculated results with human recognition results, and an application experiment of the improved model. The experiments illustrate that the similarity between two spatial direction relations calculated by the proposed model is consistent with human recognition of directional relation differences, and that the model is simple and feasible for applications.
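The direction-relation matrix underlying both Goyal's model and the improved one can be sketched for point-sampled areal objects; the similarity function shown is a simple distance-based stand-in for illustration, not the paper's minimum-element method:

```python
import numpy as np

def direction_relation_matrix(ref_bbox, target_points):
    """3x3 direction-relation matrix (Goyal style): the fraction of the
    target object falling in each of the nine tiles induced by the reference
    object's bounding box. Objects are represented as point samples here for
    simplicity; the full model uses areas."""
    xmin, ymin, xmax, ymax = ref_bbox
    col = np.digitize(target_points[:, 0], [xmin, xmax])  # 0=W, 1=same, 2=E
    row = np.digitize(target_points[:, 1], [ymin, ymax])  # 0=S, 1=same, 2=N
    M = np.zeros((3, 3))
    for r, c in zip(row, col):
        M[2 - r, c] += 1          # matrix row 0 corresponds to north
    return M / len(target_points)

def direction_similarity(M1, M2):
    """Illustrative similarity between two direction-relation matrices:
    1 minus half the total absolute difference (1.0 for identical relations)."""
    return 1.0 - 0.5 * np.abs(M1 - M2).sum()
```

Goyal's original model instead solves a transportation-style least-cost problem between the two matrices; the simpler distance above is only meant to make the data structure concrete.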
This paper presents a new fuzzy projection pursuit clustering (FPPC) algorithm. FPPC combines the fuzzy clustering iteration (FCI) algorithm with the projection pursuit clustering (PPC) algorithm. We adopt a new projection index function formed by the standard deviation of the projection values and the quadratic sum of the Euclidean distances between projection values. This new index function avoids the qualitative selection of the density window width, which is generally determined by experience. After reducing the dimension of the sample data by projection, the FPPC algorithm takes a dual iterative clustering approach with FCI and PPC. In the FPPC solution process, a chaotic cultural differential evolution (CCDE) algorithm, formed from chaos theory, the cultural algorithm and differential evolution, is adopted. Experimental simulations show that the FPPC algorithm achieves higher clustering precision and effectiveness.
Based on Landsat images of the Jiawang mining area from 1983, 1993, 2003 and 2013, a grain size of 60 m was determined as optimal for landscape pattern analysis, using grain-effect analysis of landscape indices and data loss assessment. Landscape pattern change, at both the patch level and the landscape level, was then analyzed at this optimal grain size. The results show that before 2003, driven by coal resource exploitation and urbanization, the regional landscape pattern at the landscape level showed gradual fragmentation, increasing heterogeneity and declining connectivity; since 2003, driven by the implementation of mine land reclamation projects, the balancing of urban and rural development, and regional development, the landscape pattern has trended toward continuity and equalization, with the connectivity index increasing. Farmland, construction land and water landscape patches changed most actively over time. During 1983-2003, the landscape pattern at the patch level showed fragmentation and decreasing connectivity driven by coal mining and other activities, with changes becoming more complex under urbanization; during 2003-2013, construction land and water landscape patches showed high connectivity and patch regularization.
The Kalman filter is one of the most common ways to deal with dynamic data and has been widely used in engineering. However, the accuracy of the Kalman filter for a discrete dynamic system is poor when the observation matrix is ill-conditioned. Therefore, this paper studies how to overcome the harmful effects caused by an ill-conditioned observation matrix in a discrete dynamic system. The causes of an ill-conditioned observation matrix and its effect on the Kalman filter are analyzed. A biased Kalman filter and its algorithm are proposed by combining biased estimation with the Kalman filter in the sense of the mean square error (MSE), and methods for choosing the bias parameter in the new algorithm are proposed. Two simulations are carried out by separately perturbing the observation matrix and the observation vector. The experimental results show that the traditional Kalman filter is inaccurate when the observation matrix is ill-conditioned, and that the biased Kalman filter is more accurate than the traditional one in terms of MSE.
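A sketch of a ridge-type biased measurement update in information form: setting k=0 recovers the ordinary Kalman update, while k>0 trades a small bias for lower variance when the observation matrix is ill-conditioned. The paper's rule for choosing k is not reproduced here; this only illustrates where a bias term can enter the update:

```python
import numpy as np

def biased_kalman_update(x_pred, P_pred, z, H, R, k=0.0):
    """Measurement update in information form with a ridge term k*I added
    to the normal matrix (biased estimation combined with the Kalman
    filter). k=0 gives the standard update."""
    Pi = np.linalg.inv(P_pred)
    Ri = np.linalg.inv(R)
    N = Pi + H.T @ Ri @ H + k * np.eye(len(x_pred))  # regularized normal matrix
    P_upd = np.linalg.inv(N)
    x_upd = P_upd @ (Pi @ x_pred + H.T @ Ri @ z)
    return x_upd, P_upd
```

The ridge term bounds the condition number of the normal matrix, so a perturbed or nearly singular H no longer amplifies observation noise into the state estimate.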
To meet the navigation demands of the Mars exploration cruise phase, an autonomous navigation method based on observing the Sun, Mars, the Earth and stars with different sensors is presented in this paper. According to the characteristics of celestial navigation, the paper analyzes the different observation models. Then, according to the availability of the line of sight between the Earth and Mars, two navigation models are established, using Sun-Earth-star observations and Sun-Mars-star observations, respectively. Combined with information fusion technology, real-time position and velocity estimation of the probe is achieved under either navigation model. Finally, the feasibility of the method is verified by simulations. The simulation results show that the proposed method is feasible and efficient when using multi-source observations, and can provide precise orbit determination information that meets the navigation requirements of the Mars exploration cruise phase.
We analyze accuracy trends in landslide deformation prediction based on the exponential smoothing method and the actual evolutionary stage of the landslide. We establish a connection between the main predisposing factor and the model parameters, introducing monthly cumulative rainfall as the evaluation factor for the dynamic model parameter. We use cumulative displacement data from the Baijiabao landslide for fitting and forecasting. The results show that the absolute error and correlation coefficient of our final model were 11.346 and 0.933, respectively. Compared with the conventional method using static parameters, the proposed method is more in line with the general development laws of rainfall-induced landslides, and is more accurate.
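A minimal sketch of exponential smoothing with a rainfall-driven parameter: the smoothing weight grows with monthly cumulative rainfall so the model reacts faster in wet months. The mapping from rainfall to the parameter (alpha_base, gain, the 0.9 cap) is an illustrative assumption, not the paper's calibration:

```python
def dynamic_exponential_smoothing(displacements, rainfalls,
                                  alpha_base=0.3, gain=0.05):
    """Single exponential smoothing of cumulative displacement with a
    dynamic parameter: alpha increases with monthly cumulative rainfall
    (in mm), capped at 0.9. All parameter values are illustrative."""
    smoothed = [displacements[0]]
    for disp, rain in zip(displacements[1:], rainfalls[1:]):
        alpha = min(0.9, alpha_base + gain * rain / 100.0)
        smoothed.append(alpha * disp + (1 - alpha) * smoothed[-1])
    return smoothed
```

With static parameters, the same alpha applies in dry and wet months; tying alpha to rainfall lets the forecast weight recent displacement more heavily exactly when rainfall-induced acceleration is expected.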
At present, the vertical velocities measured by GPS mobile (campaign) observations have relatively low precision, owing to the significant effect of seasonal motion, which cannot be overcome effectively with insufficient data. This study presents a method to correct the results of mobile observations using information from continuous GPS observations in the same area. First, the annual motion amplitudes at the continuous GPS stations are calculated; these amplitudes are then spatially interpolated to obtain the amplitudes at the mobile stations. With these results, the vertical time series of the mobile GPS stations can be corrected to weaken the annual motion effect, so that vertical velocities are obtained with improved precision. Two tests based on continuous (2010 to 2013) and mobile (2011 to 2013) observations from the tectonic and environmental observation network (hereinafter TEONET) in the Yunnan area are conducted to verify the method's reliability. The results show that, in an area whose spatial motion has a consistent annual period, the method can effectively correct the vertical velocities measured by GPS mobile observations and improve the precision of the velocity results.
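The amplitude estimation and correction steps can be sketched as a least-squares fit of an annual sinusoid plus trend at a continuous station, followed by subtraction of the (interpolated) annual term from a mobile station's series. The single-sinusoid model is an assumption; real GPS series also carry semi-annual terms and offsets:

```python
import numpy as np

def fit_annual_amplitude(t_years, series):
    """Least-squares fit of a*sin + b*cos + trend + offset; returns the
    annual amplitude sqrt(a^2 + b^2) at a continuous station."""
    A = np.column_stack([np.sin(2 * np.pi * t_years),
                         np.cos(2 * np.pi * t_years),
                         t_years, np.ones_like(t_years)])
    coef, *_ = np.linalg.lstsq(A, series, rcond=None)
    return np.hypot(coef[0], coef[1])

def remove_annual_signal(t_years, series, amplitude, phase):
    """Subtract an annual sinusoid of known amplitude and phase (e.g. the
    amplitude spatially interpolated from nearby continuous stations) to
    weaken the seasonal effect before velocity estimation."""
    return series - amplitude * np.sin(2 * np.pi * t_years + phase)
```

Campaign data alone are too sparse to separate the annual signal from the trend, which is why borrowing the amplitude from continuous stations improves the mobile velocity estimate.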
According to general relativity, the geopotential difference between two positions gives rise to a difference in clock running rate (time difference) or oscillation frequency. Conversely, the geopotential difference and height difference between two positions can be determined by measuring the frequency or time difference between two clocks located at those positions. Using TWSTFT (two-way satellite time and frequency transfer) data sets at five time-keeping stations released by the BIPM (Bureau International des Poids et Mesures), we determined the geopotential difference and height difference between any two of the five stations based on the gravity frequency shift method and the TWSTFT technique. Compared with EGM2008 model results, the standard deviations of the geopotential and height differences are 129.2 m2·s-2 and 13.2 m, respectively. Our experimental results are consistent with the current stability level of 10^-15 of the atomic clocks installed at the time-keeping stations. The rapid development of time-frequency science, including highly precise atomic and optical clocks, creates the potential for using the TWSTFT technique to determine geopotential and height differences, and enables its extensive application in various fields.
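The gravity frequency shift relation is simple enough to state directly: a fractional frequency difference df/f between two clocks corresponds to a geopotential difference dW = (df/f)·c², and an approximate height difference dH = dW/g. A sketch (a fractional shift of 10^-15 corresponds to roughly 9 m of height, consistent with the 13.2 m standard deviation reported above):

```python
C = 299792458.0          # speed of light, m/s
G_MEAN = 9.80665         # standard gravity, m/s^2 (mean value for illustration)

def geopotential_from_frequency_shift(df_over_f):
    """Convert a fractional clock frequency difference into a geopotential
    difference (m^2/s^2) and an approximate height difference (m) via the
    relativistic relation df/f = dW/c^2 and dW ~ g*dH."""
    dW = df_over_f * C ** 2
    dH = dW / G_MEAN
    return dW, dH
```

This is why clock stabilities at the 10^-18 level, reached by modern optical clocks, would push the achievable height resolution toward the centimetre range.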
The principle of constructing independent vector sets using the Gram-Schmidt orthogonal transformation is proposed. The method of setting up independent baseline and independent double-difference ambiguity data sets draws on the ideas of the vertex incidence matrix and vector weights. Three ways to build double-difference ambiguity data sets are discussed, and the computing process for selecting independent baselines and independent double-difference ambiguities is provided. An example of orbit determination using a global GPS tracking network is tested. The result shows that the orthogonal transformation algorithm is effective.
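The Gram-Schmidt step can be sketched directly: orthogonalize candidate vectors in turn and drop those that are (near-)linearly dependent on the vectors already kept. In the baseline-selection setting, dropped vectors correspond to redundant baselines or ambiguities; the incidence-matrix and weighting details are not reproduced here:

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-10):
    """Classical Gram-Schmidt: return an orthonormal basis for the span of
    the input vectors, dropping any vector whose residual after projection
    onto the current basis is (near-)zero, i.e. a dependent vector."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for b in basis:
            w -= (w @ b) * b          # remove component along kept basis
        norm = np.linalg.norm(w)
        if norm > tol:
            basis.append(w / norm)
    return basis
```

For numerical robustness on large networks, modified Gram-Schmidt or a QR decomposition with column pivoting would typically replace this classical form.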
In order to eliminate magnetic heading perturbations coming from near-field magnetic anomalies, a real-time compensation method based on magnetic anomaly inversion is proposed. Magnetic gradient tensor data arising from near-field ferromagnetic targets were measured by the designed magnetometer array. Then the position, magnetic moment and magnetic scalar values were combined to calculate the magnetic vectors of the ferromagnetic targets. Real-time compensation was achieved.
Uncombined precise point positioning (PPP) can be used to extract ionospheric delay with high accuracy. However, parameter estimation in PPP requires a long convergence time due to the high correlation between the ionosphere and ambiguity parameters. Furthermore, multipath effects at the tracking station degrade the precision of code and phase measurements, thus impacting the performance of PPP ionospheric delay estimation. For static observation stations, sidereal filtering can be used to eliminate multipath errors by taking advantage of the ground-track repeat period of GPS satellites. After extracting the code and carrier phase residuals of the past few days in post-processing, a multipath error correction model can be established from the historical residual series by sidereal filtering, so as to improve the performance of real-time ionospheric delay estimation. Experiments with IGS observation data showed that with sidereal filtering applied, the real-time ionospheric delay extraction error decreased from 0.185 m to 0.028 m, and the convergence time of the ionospheric parameters for newly rising satellites was reduced from 80 minutes to 35 minutes. Improvements in single-station ionospheric delay estimation can refine the ionosphere model of a local network. Moreover, precise satellite slant ionospheric delay can be obtained at lower elevations, which can reduce the required density of the reference station network.
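A minimal sketch of the sidereal filtering step, assuming the residual series of the past days have already been aligned on the GPS ground-track repeat period (about one sidereal day, roughly 86154 s; precise per-satellite repeat times differ slightly). The correction is the stacked mean of the aligned historical residuals:

```python
import numpy as np

def sidereal_filter(residual_days, current):
    """Build a multipath correction by averaging the residual series of the
    past few days (rows of residual_days), each aligned on the ground-track
    repeat period, and subtract it from the current day's residuals.
    Alignment/resampling onto a common epoch grid is assumed done."""
    correction = np.mean(residual_days, axis=0)   # repeating multipath pattern
    return current - correction
```

Because geometry-dependent multipath repeats with the satellite ground track while ionospheric variation does not, the subtraction removes the multipath signature without biasing the ionospheric estimate.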
Ambiguity resolution is one of the key technologies for high-precision GPS positioning. Accelerating ambiguity fixing can expand the applications and improve the reliability of GPS positioning. Based on the characteristics of single-frequency GPS kinematic positioning, a new method to accelerate ambiguity fixing is proposed in this paper. First, a receiver- and epoch-double-differenced approach is used to obtain epoch-differenced coordinate information between neighbouring epochs. This epoch-differenced coordinate information is then combined with the ambiguity normal equation. Adding the epoch-differenced coordinate information to the ambiguity normal equation decreases the ill-posedness of the normal equation, so the accuracy of the float ambiguity solution is improved and the convergence time of the ambiguities is shortened. Experimental results show that the new method can accelerate ambiguity resolution in GPS relative kinematic positioning, and it has good application value.