2016 Vol. 41, No. 9
This paper describes a novel method for matching aerial images with special textures based on PCA-SIFT. Images are downsampled and PCA-SIFT feature matching is then applied. The matched points are used to calculate the homography matrix, which determines the corresponding areas between stereo image pairs. PCA-SIFT is performed again on the corresponding areas to obtain more matching points, and gross errors are detected. Finally, an improved least squares image matching method is implemented to refine the PCA-SIFT matching results. Examples of special-texture image matching demonstrate that the proposed matching method can achieve subpixel accuracy and works well on images with poor texture and repetitive patterns. This method fully satisfies the requirements of automatic aerotriangulation image measurement.
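A minimal sketch of the homography step (assuming OpenCV and NumPy; the matched coordinate arrays pts_left/pts_right are hypothetical placeholders for PCA-SIFT matches):

```python
# Estimate the homography from initial matches with RANSAC, then predict
# where a left-image point falls in the right image to locate its search area.
import numpy as np
import cv2

# Matched point coordinates (x, y) from PCA-SIFT on the downsampled images.
pts_left = np.array([[10, 12], [200, 40], [35, 180], [220, 210], [120, 100]], dtype=np.float32)
pts_right = np.array([[14, 15], [205, 38], [38, 185], [226, 208], [124, 99]], dtype=np.float32)

# RANSAC rejects gross matching errors while fitting the 3x3 homography.
H, inlier_mask = cv2.findHomography(pts_left, pts_right, cv2.RANSAC, ransacReprojThreshold=3.0)

# Project a left-image point into the right image to locate its corresponding area.
p = np.array([120.0, 100.0, 1.0])
q = H @ p
print("predicted right-image position:", q[:2] / q[2])
```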
Automatic water-body extraction from remote sensing images is a challenging problem. In this paper, a novel automatic water-body extraction technique is proposed for optical visible remote sensing images. It integrates image segmentation, image registration, and change detection with GIS data as a single process. A new iterative segmentation and registration strategy is also proposed. A multi-scale visual attention model is introduced to detect salient areas, and a level-set segmentation algorithm is employed for image segmentation. An improved shape curve similarity (ISCS) method is presented to constrain the matching of image segmentation objects and GIS-identified water-bodies. Furthermore, a buffer-based change detection algorithm was designed to obtain unchanged water-bodies, and non-water objects were eliminated with the aid of GIS data and spectral features. Experiments were carried out on three sets of data. Results show that the proposed method was effective in rapid water-body extraction and change detection.
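The ISCS formula itself is not given here; as a rough illustration of comparing a segmented object's outline against a GIS water-body polygon, one can correlate centroid-distance shape signatures (the polygons and sampling scheme below are assumptions, not the paper's measure):

```python
# Rough illustration (not the paper's exact ISCS): compare two polygon
# outlines via scale-normalized centroid-distance shape signatures.
import numpy as np

def shape_signature(poly, n=64):
    """Sample polygon vertices and return centroid distances, scale-normalized."""
    poly = np.asarray(poly, dtype=float)
    idx = np.linspace(0, len(poly) - 1, n).astype(int)
    pts = poly[idx]
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    return d / d.max()

def curve_similarity(poly_a, poly_b):
    """Correlation of the two signatures; 1.0 means identical shapes."""
    a, b = shape_signature(poly_a), shape_signature(poly_b)
    return float(np.corrcoef(a, b)[0, 1])

segmented = [(0, 0), (4, 0), (5, 2), (3, 4), (0, 3)]      # segmentation object
gis_water = [(0, 0), (4.2, 0.1), (5, 2.1), (2.8, 4), (0, 3.1)]  # GIS water-body
print("similarity:", curve_similarity(segmented, gis_water))
```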
This paper proposes a fast, viewpoint-invariant matching method for oblique images. We preprocess an oblique image to obtain a rectified image that eliminates the geometric distortion, scale, and rotation of the image. First, we calculate the homography matrix between the oblique image and the object space plane by making full use of the interior orientation (IO) elements and the rough exterior orientation (EO) elements of the oblique image, and recover the oblique image to a rectified image through a 2D perspective transformation. Second, we extract Harris corner points from the rectified image and describe them using the SIFT descriptor. Third, in order to distribute matches evenly and improve matching efficiency, we use the fundamental and homography matrices to calculate the potential area containing the correct correspondence of a Harris corner point to be matched, and select all the extracted Harris corner points in this potential area as candidate points. Nearest neighbor distance ratio (NNDR) and normalized cross correlation (NCC) measure constraints are used to obtain matches. Experiments conducted on three pairs of typical oblique images demonstrate that our method takes just a few seconds to match a pair of oblique images, producing plenty of evenly distributed corresponding points with an extremely low mismatching rate.
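A minimal sketch of the NNDR test used to accept candidate matches (NumPy assumed; the descriptor arrays are random placeholders for SIFT descriptors of the two rectified images):

```python
# Accept a match only if the nearest descriptor is markedly closer than the
# second-nearest one (the classic Lowe ratio test behind NNDR).
import numpy as np

def nndr_match(desc_a, desc_b, ratio=0.8):
    """Return (i, j) index pairs whose nearest/second-nearest ratio passes."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, j1))
    return matches

rng = np.random.default_rng(0)
desc_a = rng.random((50, 128)).astype(np.float32)   # descriptors from image A
desc_b = rng.random((60, 128)).astype(np.float32)   # candidate descriptors in image B
print(len(nndr_match(desc_a, desc_b)), "tentative matches")
```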
Noise from clouds is a common problem in optical satellite image processing. The high pass filter (HPF) fusion method is analyzed as a way to estimate the influence of cloud noise during image fusion. An approach combining cloud detection with HPF is introduced that refines the results of fusing images containing clouds. A NIR/R-OTSU cloud detection approach is employed for real-time cloud detection, so that areas covered by clouds can be identified. A local optimization strategy is adopted in HPF image fusion over cloudless blocks to obtain the fused image. Results from merging ZY-3 multispectral and panchromatic satellite images show that the algorithm discussed in this paper performs better than the HPF, IHS transform, and Pansharp methods for merging images with clouds.
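A hedged sketch of the NIR/R ratio plus Otsu threshold idea (scikit-image and NumPy assumed; the band arrays are synthetic stand-ins, and whether cloud pixels fall above or below the threshold depends on the scene):

```python
# Build a NIR/R ratio image and split it with a global Otsu threshold to get
# a candidate cloud mask.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(1)
red = rng.random((256, 256)) * 0.3 + 0.05   # stand-in red band reflectance
nir = rng.random((256, 256)) * 0.3 + 0.05   # stand-in NIR band reflectance

ratio = nir / np.maximum(red, 1e-6)         # NIR/R band ratio
t = threshold_otsu(ratio)                   # global Otsu threshold on the ratio
cloud_mask = ratio > t                      # candidate cloud pixels
print("cloud fraction:", cloud_mask.mean())
```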
Chlorophyll content is an important parameter when assessing rice cultivation and production. In order to estimate chlorophyll content quickly and precisely, experiments at different nitrogen levels were conducted for the rice cultivar Ninggeng 43. Canopy hyperspectral reflectance and the SPAD value at different growth stages were measured. We analyzed the red edge characteristics of hyperspectral reflectance at the canopy level and built SPAD estimation models. Our results revealed that the SPAD value increased with increasing nitrogen level, reached a maximum at the booting stage, and then dropped. With increasing nitrogen levels, spectral reflectance gradually became smaller in the visible wavelengths and larger in the near-infrared wavelengths; the red edge position, amplitude, and area of the canopy spectra showed a 'red shift' from jointing to booting and a 'blue shift' before the filling stage. All three red edge parameters increased with increasing nitrogen level. The model with the red edge area as the independent variable was optimal for estimating the SPAD of the rice canopy at the jointing stage, whereas for the booting and filling stages the model based on red edge position was more reliable for predicting SPAD values. These results differ slightly from those for rice in south China. Hyperspectral technology can quantitatively retrieve the SPAD values of rice at the canopy level, and therefore provides a theoretical basis for rice growth monitoring based on remote sensing.
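A sketch of the three red edge parameters computed from a canopy reflectance spectrum (NumPy assumed; the sigmoid spectrum is a toy stand-in for measured reflectance): position is the wavelength of the maximum first derivative in the 680-760 nm interval, amplitude is that maximum, and area is the integral of the derivative over the interval.

```python
import numpy as np

wl = np.arange(400, 1001, 1.0)                       # wavelengths, nm
refl = 0.5 / (1 + np.exp(-(wl - 720) / 15))          # toy sigmoid-like spectrum

d = np.gradient(refl, wl)                            # first derivative spectrum
band = (wl >= 680) & (wl <= 760)                     # red edge interval

red_edge_position = wl[band][np.argmax(d[band])]     # wavelength of max slope
red_edge_amplitude = d[band].max()                   # value of max slope
red_edge_area = np.trapz(d[band], wl[band])          # integrated derivative
print(red_edge_position, red_edge_amplitude, red_edge_area)
```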
Changes in both the area and thickness of Arctic sea ice are significantly influenced by global climate change; the rapid decline in Arctic sea ice during the summer of 2007 is analyzed empirically in our research. Sea ice thickness can be retrieved from sea ice freeboard heights. Because ICESat/GLAS (Ice, Cloud and land Elevation Satellite / Geoscience Laser Altimeter System) provides high-precision sea ice elevation information, we extracted the freeboard heights of the Arctic sea ice from ICESat/GLAS datasets spanning 2003 to 2008 to analyze seasonal and inter-annual variation in Arctic sea ice. The results show that Arctic sea ice freeboard heights decreased during 2003-2008, especially in the summer of 2007. Systematic deviations in the retrieved results are discussed and analyzed based on ULS (upward looking sonar) field data.
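The freeboard-to-thickness conversion commonly used with laser altimetry follows hydrostatic equilibrium; a sketch with typical literature densities (the snow depth and density values are illustrative assumptions, not the paper's inputs):

```python
# h_i = (rho_w * F - (rho_w - rho_s) * h_s) / (rho_w - rho_i), derived from
# buoyancy balance, with F the total (ice + snow) freeboard seen by the laser.
def ice_thickness(freeboard_m, snow_depth_m,
                  rho_water=1024.0, rho_ice=915.0, rho_snow=320.0):
    """Sea ice thickness from total freeboard under hydrostatic equilibrium."""
    return (rho_water * freeboard_m
            - (rho_water - rho_snow) * snow_depth_m) / (rho_water - rho_ice)

print(ice_thickness(freeboard_m=0.45, snow_depth_m=0.20))  # ~2.9 m
```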
Selectivity estimation for spatial databases is a core scientific problem in query optimization. Existing spatial histograms violate the integrity of spatial objects, so it is difficult to precisely calculate the selectivity of spatial data or to deduce the histograms of query results. In view of these problems, we propose a forward cumulative annular bucket histogram, referred to as the cumulative AB-histogram. This histogram establishes annular buckets to receive all spatial area objects. It therefore maintains the integrity of area objects and achieves better performance in selectivity estimation and histogram deduction for fine spatial topological queries. We discuss the theory of the cumulative AB-histogram in detail and propose selectivity estimation methods for fine topological queries. We take land use data as an example to show the accuracy of selectivity estimation and discuss topics relevant to efficiency and the scope of applications.
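For orientation, a conventional grid histogram baseline (not the paper's AB-histogram; all names and numbers here are illustrative) estimates selectivity from per-cell counts, and assigning each object to a single cell by its center is exactly the kind of integrity compromise the cumulative AB-histogram is designed to avoid:

```python
# Baseline grid histogram: count object MBR centers per cell, then estimate
# the fraction of objects falling in a rectangular query window.
import numpy as np

def grid_selectivity(mbrs, query, n=8, extent=(0.0, 0.0, 1.0, 1.0)):
    """Estimate selectivity of `query` (xmin, ymin, xmax, ymax) from an n-by-n histogram."""
    x0, y0, x1, y1 = extent
    hist = np.zeros((n, n))
    for xmin, ymin, xmax, ymax in mbrs:
        cx, cy = (xmin + xmax) / 2, (ymin + ymax) / 2       # object center
        i = min(int((cx - x0) / (x1 - x0) * n), n - 1)
        j = min(int((cy - y0) / (y1 - y0) * n), n - 1)
        hist[i, j] += 1
    qx0, qy0, qx1, qy1 = query
    i0, i1 = int((qx0 - x0) / (x1 - x0) * n), min(int((qx1 - x0) / (x1 - x0) * n), n - 1)
    j0, j1 = int((qy0 - y0) / (y1 - y0) * n), min(int((qy1 - y0) / (y1 - y0) * n), n - 1)
    return hist[i0:i1 + 1, j0:j1 + 1].sum() / hist.sum()

rng = np.random.default_rng(2)
c = rng.random((500, 2))
mbrs = np.hstack([c - 0.01, c + 0.01])          # small square area objects
print(grid_selectivity(mbrs, query=(0.2, 0.2, 0.5, 0.5)))
```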
Existing studies of big taxi GPS trace data do not consider the characteristics and demands of out-of-service taxi driver activities, such as refueling, dining, and shift changes. This paper studies these short-term out-of-service behaviors, extracts them from taxi trace data, and analyzes their spatiotemporal distribution with kernel density estimation (KDE) for linear features. We also analyze the spatial correlation between short-term taxi out-of-service behaviors and the locations of gas stations using Ripley's K function. Our experimental results show that, by analyzing the spatiotemporal distribution of short-term out-of-service taxi activities, this approach effectively uncovers short-term taxi driver out-of-service demands and exposes the ineffective allocation of urban public resources. Our results could support decision-making concerning the adjustment and optimization of public resources.
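A minimal sketch of Ripley's K function without edge correction (NumPy assumed; the points are synthetic, and correlating stop events with gas stations would use the bivariate cross-K variant of the same idea):

```python
# K(r) counts pairs within distance r, normalized by intensity; under
# complete spatial randomness K(r) is approximately pi * r^2.
import numpy as np

def ripleys_k(points, r, area):
    """Ripley's K at radius r for points observed in a region of given area."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    pairs = (d < r).sum() - n            # exclude zero self-distances
    lam = n / area                       # point intensity
    return pairs / (lam * n)

rng = np.random.default_rng(3)
pts = rng.random((200, 2))               # unit-square study region
for r in (0.05, 0.1, 0.2):
    print(r, ripleys_k(pts, r, area=1.0), np.pi * r**2)
```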
Spatial grid units are usually used to investigate urban human mobility. These units can easily lead to a modifiable areal unit problem (MAUP) stemming from size variations between grid units. Current research on urban human mobility does not consider the MAUP or its influence on the quantitative analysis of human mobility. To address this problem, we used massive mobile phone tracking data to conduct a quantitative analysis of the effects of the modifiable areal unit problem when dividing urban space into grid cells of different sizes. We determined that intra-grid movements increase approximately linearly with grid size and inter-grid human mobility decreases linearly with grid size; thus grid cell size affects spatial conclusions about urban human mobility. Grids of different sizes deliver inconsistent results when extracting important locations from a human mobility network. When combined with land use data, the grid cells containing residential and industrial land use types are the most affected. This paper discusses possible means to mitigate these uncertainty problems.
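A sketch of the measurement behind the MAUP effect: count intra-grid versus inter-grid movements under several grid sizes (NumPy assumed; the origin-destination pairs are synthetic placeholders for mobile phone records):

```python
# Assign each trip's origin and destination to grid cells of varying size and
# report the share of trips that stay inside one cell.
import numpy as np

rng = np.random.default_rng(4)
origins = rng.random((10000, 2)) * 10_000             # positions in meters
dests = origins + rng.normal(0, 800, (10000, 2))      # short displacements

for cell in (250, 500, 1000, 2000):                   # grid cell sizes, meters
    o = np.floor(origins / cell).astype(int)
    d = np.floor(dests / cell).astype(int)
    intra = np.all(o == d, axis=1).mean()
    print(f"cell={cell:5d} m  intra-grid={intra:.2f}  inter-grid={1 - intra:.2f}")
```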
The number of replicas in the control strategy of a traditional distributed file system is calculated based on internal resources, while changes in external demand are ignored. However, this strategy is not suitable for deployment in a service-oriented, resource-rich "smart city" cloud storage center. We propose a control model for the number of replicas that combines data security (the minimum number of copies) with service needs (the optimal number of copies). A predictive algorithm based on double time granularity is included in this model to predict the popularity of a file. In addition, the number of copies in the cloud adjusts itself dynamically according to the popularity of a file and system resources. Simulation experiment results show that the accuracy of double-time-granularity forecasting is greatly increased. Compared to the original static copy mechanism, a dynamic copy mechanism based on double-time-granularity prediction has a significant advantage in responding to sudden large-scale concurrent accesses, and its storage resource occupancy rate is low.
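A hedged sketch of the replica-count idea: predict popularity at two time granularities (hourly and daily here), blend the predictions, and clamp the replica count between a security minimum and a resource maximum. The smoothing form, blend weight, and capacity constant are illustrative assumptions, not the paper's exact predictor:

```python
def smooth(series, alpha):
    """Simple exponential smoothing as a stand-in popularity predictor."""
    s = series[0]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

def replica_count(hourly_hits, daily_hits, min_copies=3, max_copies=12,
                  hits_per_copy=200.0, blend=0.5):
    """Blend fine- and coarse-granularity predictions, then clamp the count."""
    popularity = (blend * smooth(hourly_hits, 0.5) * 24
                  + (1 - blend) * smooth(daily_hits, 0.3))
    need = int(round(popularity / hits_per_copy))
    return max(min_copies, min(max_copies, need))

print(replica_count(hourly_hits=[40, 55, 90, 160], daily_hits=[600, 900, 1500]))
```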
A magnifier glass map is one of the basic and important tools in GIS software for map browsing. In order to balance the information shown in the magnifier window against the underlying data, a piecewise scale-change function is taken as the projection method to construct the relationship between an observed object and its neighbors under the Voronoi k-order adjacency relation. Taking objective equilibrium distribution and deformation readability into account, two parameters are proposed and calculated quantitatively using the Voronoi k-order adjacency relation and the constrained area of a Voronoi region: one controls the target distribution balance and the other the readability of target deformation. We also present the target selection algorithm. Through comparative experiments, we demonstrated the stability of targets under the large deformation of the magnifier glass map representation, which provides practical value.
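A sketch of one possible piecewise radial scale-change function for a magnifier lens (the radii and magnification factor are illustrative, not the paper's parameters): uniform magnification inside an inner radius, a linear blend down to the identity at the lens rim, and no change outside.

```python
import numpy as np

def magnify(p, center, r0=40.0, R=120.0, m=2.5):
    """Piecewise radial mapping: r' = m*r inside r0, linear blend to R at the
    rim (requires m*r0 <= R so the mapping stays monotone), identity outside."""
    v = np.asarray(p, float) - np.asarray(center, float)
    r = float(np.hypot(*v))
    if r == 0 or r >= R:
        return np.asarray(p, float)
    if r <= r0:
        r_new = m * r                                   # full magnification
    else:
        r_new = m * r0 + (R - m * r0) * (r - r0) / (R - r0)  # transition ring
    return np.asarray(center, float) + (r_new / r) * v

print(magnify((30.0, 0.0), center=(0.0, 0.0)))   # -> (75, 0): magnified
print(magnify((80.0, 0.0), center=(0.0, 0.0)))   # -> (110, 0): transition ring
```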
Displacement, deformation, and symbolization of roads in maps usually result in spatial conflicts between roads and the surrounding buildings. The algorithm proposed in this paper resolves spatial conflicts between roads and buildings collaboratively by transforming building displacement into linear feature displacement. The algorithm maintains the pattern of buildings along roads; its feasibility and validity were verified by a test. In this algorithm, spatial conflict areas are detected, and the types of conflicts are identified based on the cartographic features in these areas. Buildings inside the conflict areas are detected, and buildings within a threshold distance of those detected in each conflict area are distinguished from other buildings. Perpendiculars from each building center to a road are calculated and regarded as connections between buildings and roads. A minimum spanning tree (MST) for the centers of the selected buildings is established. The MST, perpendiculars, and roads constitute a linear network, and a Snake model is used to compute the collaborative displacement of this network.
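A sketch of the MST step with SciPy (the building centers are hypothetical): the tree links building centers into the skeleton that, together with the perpendiculars and roads, forms the network the Snake model displaces.

```python
# Build a minimum spanning tree over building centers from their pairwise
# Euclidean distances.
import numpy as np
from scipy.spatial.distance import squareform, pdist
from scipy.sparse.csgraph import minimum_spanning_tree

centers = np.array([[0, 0], [10, 2], [20, 1], [12, 9], [3, 8]], dtype=float)

dist = squareform(pdist(centers))          # full pairwise distance matrix
mst = minimum_spanning_tree(dist)          # sparse matrix holding tree edges

rows, cols = mst.nonzero()
for i, j in zip(rows, cols):
    print(f"edge {i}-{j}: length {mst[i, j]:.2f}")
```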
Road density is a useful index widely applied in the analysis of ecological effects, road network planning, delimitation of urban areas, and road network generalization. Extraction of dense and sparse areas of the road network is a key issue in the field of automated map generalization. This paper proposes a method for road density partitioning. The method creates the Voronoi diagram of road intersections and endpoints, and then uses the Gi* statistic to identify statistically significant spatial clusters of high and low values of Voronoi cell area. Finally, the method aggregates neighboring Voronoi cells from the statistically significant spatial clusters of high and low values at a 95% confidence level. The road network of Hong Kong at zoom levels 14, 13, and 12 of Google Maps is used as experimental data for evaluating road network selection. Experimental results showed that the road density partitions produced by the proposed method generally reflected the density of the road network, while the road density contrasted well before and after road selection. Our method is superior to the grid density approach.
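A sketch of the Getis-Ord Gi* statistic on cell areas (NumPy assumed; the binary weight matrix W, with self-neighbors included for the "star" variant, would come from Voronoi adjacency in practice, and the cell areas here are toy values):

```python
# Gi* z-scores: cells whose neighborhood sum of areas is unusually high or
# low get |z| > 1.96, i.e. significant clusters at the 95% level.
import numpy as np

def gi_star(x, W):
    """Gi* z-scores for values x under binary spatial weights W (W[i,i] = 1)."""
    x = np.asarray(x, float)
    n = len(x)
    xbar, s = x.mean(), x.std()
    wsum = W.sum(axis=1)                         # sum of weights per cell
    num = W @ x - xbar * wsum
    den = s * np.sqrt((n * (W**2).sum(axis=1) - wsum**2) / (n - 1))
    return num / den

areas = np.array([1.0, 1.2, 0.9, 5.0, 4.8, 5.2])     # Voronoi cell areas
W = np.array([[1, 1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0, 0],
              [1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 1, 1],
              [0, 0, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 1]], float)
print(np.round(gi_star(areas, W), 2))
```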
The recognition of microstructures such as road junctions in road networks is important for multi-scale road modeling and pedestrian navigation. To resolve deficiencies in current recognition methods regarding the geometric shape description and shape matching of complex road junctions, this paper presents a road junction recognition method based on the classification of roads, proceeding from recognition to reduction. First, junctions are located by node-cluster density detection. Then, characteristic vectors are built by analyzing and quantifying the sizes, shapes, and attributes of roads; the task is treated as a two-class classification problem differentiating main roads from auxiliary sections, and is solved using a support vector machine. Using OpenStreetMap data for experimental verification, our results show that this method can effectively recognize road junctions.
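A minimal sketch of the two-class step with scikit-learn (the three-element feature vectors and labels are hypothetical placeholders for the quantified size/shape/attribute features):

```python
# Train an SVM to separate main roads from auxiliary sections, then classify
# an unseen segment.
import numpy as np
from sklearn.svm import SVC

# Each row: [length_m, mean_curvature, lane_count] for one road segment.
X = np.array([[500, 0.01, 4], [650, 0.02, 6], [80, 0.30, 1],
              [120, 0.25, 1], [700, 0.01, 4], [60, 0.40, 1]], dtype=float)
y = np.array([1, 1, 0, 0, 1, 0])          # 1 = main road, 0 = auxiliary section

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict([[400, 0.05, 2]]))      # classify an unseen segment
```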
Compared to ambiguity-float precise point positioning (PPP), ambiguity-fixed PPP needs a shorter convergence time and has better positioning accuracy. However, when there are only a few GPS satellites without an optimal geometric distribution, ambiguity-fixed PPP requires a long time to achieve a first fixed solution. The objective of our research was to reduce the time to first fixed solution (TFFS) for GPS-only PPP by adding GLONASS satellites. An observation model for GPS/GLONASS ambiguity-fixed PPP using the integer phase clock method was developed and tested. Forty kinematic tests using static data show that the average TFFS is 50.2 min for GPS-only ambiguity-fixed PPP but only 25.7 min for GPS/GLONASS ambiguity-fixed PPP, a reduction of 48.8% after adding GLONASS observations. A vehicular test demonstrates that GPS-only PPP cannot obtain a first fixed solution in a less than ideal observation environment, while GPS/GLONASS PPP can realize ambiguity fixing.
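As background (the standard ionosphere-free PPP observation equations, not the paper's full GPS/GLONASS model, which additionally applies integer-phase-clock satellite products and handles GLONASS inter-frequency effects):

```latex
\begin{aligned}
P_{IF} &= \rho + c\,(dt_r - dt^s) + T + \varepsilon_P,\\
\Phi_{IF} &= \rho + c\,(dt_r - dt^s) + T + \lambda_{IF}\,N_{IF} + \varepsilon_\Phi,
\end{aligned}
```

where \(\rho\) is the geometric range, \(dt_r\) and \(dt^s\) the receiver and satellite clock offsets, \(T\) the tropospheric delay, and \(N_{IF}\) the carrier-phase ambiguity that the integer phase clock method renders fixable.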
In urban mobile mapping, odometer data are used to complement global navigation satellite system (GNSS) / inertial navigation system (INS) data in positioning and orientation systems (POSs). We analyze two error sources in a POS, misalignment and odometer scale factor error, and their propagation in the ECEF frame. A cascaded extended Kalman filter (EKF) was designed to estimate the errors without changing the GNSS/INS EKF, which is followed by an INS/odometer (ODO) EKF. Navigation errors are modeled as system states in the first EKF; misalignment angles and the scale factor error are modeled in the second. Given continuous GNSS observation and fixed ambiguities, the INS is effectively calibrated by the GNSS/INS EKF, and its position increment is used as the measurement of the INS/ODO EKF. Meanwhile, the calibrated odometer is used as an observation for the INS when the GNSS loses lock. Tests indicate that the algorithm can calibrate the misalignment angles of the POS and the scale factor of the odometer. Consequently, positioning accuracy was significantly improved during GNSS loss of lock: with a smoothing Kalman filter, accuracy remained within half a meter over a two-minute GNSS gap during mobile mapping.
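A minimal sketch of the second filter's core idea, reduced to a one-state Kalman filter: estimate the odometer scale factor error using the GNSS/INS-derived distance increment as the measurement (all noise values and increments are synthetic assumptions):

```python
# State x: odometer scale factor error. Measurement: d_ins - d_odo = d_odo * x.
import numpy as np

rng = np.random.default_rng(5)
true_scale_err = 0.02                     # 2% odometer scale factor error

x, P = 0.0, 1e-2                          # state estimate and its variance
q, r = 1e-10, 0.05**2                     # process / measurement noise

for _ in range(200):
    d_odo = 5.0                           # raw odometer increment per epoch, m
    d_ins = d_odo * (1 + true_scale_err) + rng.normal(0, 0.05)  # GNSS/INS increment
    P += q                                # predict: scale error nearly constant
    H = d_odo                             # measurement sensitivity
    K = P * H / (H * P * H + r)           # Kalman gain
    x += K * (d_ins - d_odo - H * x)      # update with the innovation
    P *= (1 - K * H)
print(f"estimated scale factor error: {x:.4f}")
```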
The purpose of a time scale algorithm is to form a time scale with high frequency stability from an ensemble of clocks. The main component of a time scale algorithm generates weights and predictions for N clocks from the N−1 measured clock differences. Traditional algorithms mostly focus on how to set weights to improve the stability of the time scale. The algorithm we propose instead focuses on how to adjust predictions to improve stability. We use Kalman filters to estimate the states of the measured clock differences and adjust the predictions with every update of the Kalman filter to form a time scale. Theoretical analyses and simulations both show that this algorithm filters out white frequency modulation noise, so the formed time scale mainly contains random walk frequency modulation noise; hence, the time scale formed by our proposed algorithm has high short-term and medium-term stability. It is also a continuous, real-time, and predictable time scale.
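A toy sketch of the flavor of this approach (a one-state random-walk Kalman filter per clock difference, then an equal-weight ensemble; the real algorithm's state model, prediction adjustment, and weighting are richer than this):

```python
# Filter each measured clock difference to suppress white FM noise, then
# average the filtered differences into a simple ensemble.
import numpy as np

rng = np.random.default_rng(6)
steps = 2000
# Simulated differences x_i(t): random walk plus white measurement noise.
walk = np.cumsum(rng.normal(0, 1e-11, (steps, 3)), axis=0)
meas = walk + rng.normal(0, 5e-11, (steps, 3))

q, r = (1e-11)**2, (5e-11)**2             # random-walk / white-noise variances
x_hat = np.zeros(3)
P = np.full(3, 1e-20)
filtered = np.empty_like(meas)
for t in range(steps):
    P = P + q                             # predict the random-walk state
    K = P / (P + r)                       # Kalman gain
    x_hat = x_hat + K * (meas[t] - x_hat)
    P = (1 - K) * P
    filtered[t] = x_hat

ensemble = filtered.mean(axis=1)          # equal weights, for brevity
print("white-noise residual std:", np.std(meas - walk), "->", np.std(filtered - walk))
```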
In current research on the total least squares method for GPS height conversion, the calculation of the conversion parameters and of the elevation anomalies of the check points is generally performed in two steps, and only the error in the coefficient matrix used to calculate the parameters is considered; errors in the coordinates of the check points are ignored. In view of this gap, we put forward a total least squares fitting estimation model of GPS height transformation that combines the calculation of fitting parameters with the calculation of elevation anomalies at check points, and considers the position errors of all points. Experimental results from collocation calculations verify the feasibility of this method and show that it can effectively improve the accuracy of elevation conversion.
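A sketch of a plain total least squares fit of a height-anomaly plane via the SVD of the augmented matrix (errors in the constant column are ignored here for brevity; a rigorous treatment, like the paper's, would use a mixed or weighted TLS model, and the plane coefficients below are synthetic):

```python
# Fit zeta = a0 + a1*x + a2*y by TLS: the solution comes from the right
# singular vector of [A | b] with the smallest singular value.
import numpy as np

rng = np.random.default_rng(7)
x, y = rng.random(30) * 1e4, rng.random(30) * 1e4
zeta = 25.0 + 2e-4 * x - 1e-4 * y + rng.normal(0, 0.01, 30)

A = np.column_stack([np.ones_like(x), x, y])
M = np.column_stack([A, zeta])              # augmented matrix [A | b]
_, _, Vt = np.linalg.svd(M)
v = Vt[-1]                                  # smallest-singular-value direction
params = -v[:3] / v[3]                      # TLS estimate of (a0, a1, a2)
print(params)
```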
Using zero drift correction for single-survey-line data and a scale factor change detection method, we carried out fine processing of CG-5 gravimeter data from two survey campaigns in the Yunnan area during 2014, and obtained the gravity variations in the area. The results show that: (1) the zero drift rate of a CG-5 gravimeter varies significantly over time. In the first survey period, the zero drift rates of gravimeters C1169 and C1170 showed a tendency of linear increase, with variations on the order of 20×10⁻⁸ m·s⁻²·h⁻¹. During the second survey period, the zero drift rates of the gravimeters tended to be stable. After zero drift correction, the observation precision improved substantially. (2) The scale factor of the C1170 gravimeter changed by about −0.000 100. (3) After scale factor correction, the gravity variation showed better agreement with the absolute observation results, and the gravity variation image clearly reflects the difference between the two sides of the Zhaotong-Ludian faults and the seismogenic background of the magnitude 6.5 Ludian earthquake, which supports the view that large earthquakes occur in the positive-negative transition area of gravity variation. Systematic observation error can be effectively eliminated using our proposed method, yielding the real gravity field change.
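A sketch of zero drift correction along a single survey line (NumPy assumed; the readings and times are illustrative): fit a linear drift rate from repeated readings at the base station, then remove the accumulated drift from each line reading by its observation time.

```python
import numpy as np

# Repeat readings at the base station: (hours since start, reading in 1e-8 m/s^2).
base_t = np.array([0.0, 6.0, 12.0])
base_g = np.array([0.0, 11.8, 24.5])

drift_rate = np.polyfit(base_t, base_g, 1)[0]     # linear drift, ~2e-8 m/s^2 per hour
print(f"drift rate: {drift_rate:.2f} x 1e-8 m/s^2/h")

# Correct each survey-line reading for the drift accumulated by its epoch.
line_t = np.array([1.0, 2.5, 4.0])
line_g = np.array([532.0, 618.4, 505.9])
corrected = line_g - drift_rate * line_t
print(corrected)
```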
Analyzing the intrinsic chaotic component in the fitting residuals of displacement monitoring statistics offers advantages over traditional algorithms for mining dam monitoring information. This paper therefore combines the characteristics of conventional optimization algorithms with the shuffled frog leaping algorithm (SFLA), which is used to determine the optimal weights of the sub-models, to establish an SFLA-based combination monitoring model for dam displacement. Taking the chaotic characteristics of the fitting residuals into account through phase space reconstruction and chaos theory, we analyzed the displacement residuals and predicted values, superimposed the forecast residual term on the SFLA model predictions, and developed a frog-leaping combination forecasting method fusing chaotic residuals, together with an implementation process for dam displacement forecasting that considers the chaotic residuals. Examples show that the forecasting ability of this model provides a new, improved approach to the analysis of dam deformation.
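A sketch of the phase space reconstruction step applied to the fitting residuals: Takens delay embedding with delay tau and dimension m, the precursor to chaos-based residual prediction (the residual series and embedding parameters are illustrative):

```python
# Reconstruct the state space of a scalar residual series by stacking
# time-delayed copies of it.
import numpy as np

def delay_embed(series, m=3, tau=2):
    """Return the m-dimensional delay-coordinate vectors of a scalar series."""
    s = np.asarray(series, float)
    n = len(s) - (m - 1) * tau
    return np.column_stack([s[i * tau : i * tau + n] for i in range(m)])

residuals = np.sin(np.linspace(0, 20, 200)) + 0.1 * np.random.default_rng(8).random(200)
X = delay_embed(residuals)
print(X.shape)        # (196, 3) reconstructed state vectors
```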