2004 Vol. 29, No. 5
In modern satellite gravity missions, satellite-borne GPS receivers, accelerometers, K-band ranging instruments, etc., are used to measure the orbit of the gravity satellite, and the Earth's gravity field can be derived from these data. However, all the information provided by GPS and the other satellite-borne instruments is given in three-dimensional Cartesian coordinates. It is therefore meaningful to derive expressions for the Earth's gravity field, gravity, and gravity gradient in the 3D Cartesian coordinate system when determining the Earth's gravity field from the information given by gravity satellites.
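As a minimal illustration of such Cartesian expressions, the central (point-mass) term of the gravity field can be evaluated directly in 3D Cartesian coordinates; higher-degree spherical-harmonic terms, which the paper's derivations would cover, are omitted in this sketch.

```python
import math

GM = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2 (WGS-84 value)

def central_gravity(x, y, z):
    """Gravity acceleration of the central (point-mass) term in 3D
    Cartesian coordinates: g = -GM * r_vec / |r|^3.  Higher-degree
    field terms are omitted in this sketch."""
    r = math.sqrt(x * x + y * y + z * z)
    f = -GM / r**3
    return (f * x, f * y, f * z)

gx, gy, gz = central_gravity(6378137.0, 0.0, 0.0)
print(gx)  # ≈ -9.798 m/s^2 at the equatorial radius
```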
The high-frequency STAR micro-accelerometer measures the non-gravitational perturbing accelerations acting on the CHAMP satellite. The CHAMP Level-2 data products provide the 0.1 Hz accelerations, star-camera data, thruster-firing data, and several corrections. In this paper, Level-2 data from day 001 to day 100 of 2002 were processed. Before the corrections provided by CNES are applied, the daily-mean acceleration curves are very rough; after the corrections are applied, the curves show a clear trend. Finally, the results from the 2002 day 001 to day 100 CHAMP Level-2 data are discussed, and some suggestions for CHAMP data processing are given.
In processing surveying data, the distribution and quality of the measurements can be described approximately by a histogram. In practice, however, unusual histograms, e.g. bimodal ones, are often produced, and such anomalies are usually attributed to the measuring data themselves. By analyzing measurement data that produce such unusual histograms, this paper discusses the fuzzy merging of histogram bins, provides a formula for calculating the fuzzy frequency and a method for drawing the fuzzy histogram, and demonstrates the advantages of the fuzzy histogram.
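The paper's exact fuzzy-frequency formula is not reproduced in the abstract; as one plausible sketch, each observation can spread a triangular membership over neighboring bin centers instead of falling entirely into one crisp bin, so an accidental bin boundary no longer splits a single mode in two.

```python
def fuzzy_histogram(data, centers, width):
    """Fuzzy frequencies: each observation contributes a triangular
    membership of half-width `width` to nearby bin centers, rather
    than a crisp count of 1 to a single bin.  (Illustrative choice of
    membership function, not the paper's formula.)"""
    freq = [0.0] * len(centers)
    for x in data:
        for i, c in enumerate(centers):
            mu = max(0.0, 1.0 - abs(x - c) / width)  # triangular membership
            freq[i] += mu
    return freq

data = [1.1, 1.9, 2.0, 2.1, 3.2, 3.3]
centers = [1.0, 2.0, 3.0, 4.0]
freq = fuzzy_histogram(data, centers, 1.0)
print(freq)
```

With the half-width equal to the bin spacing, each observation still contributes a total weight of one, so the fuzzy frequencies sum to the sample size.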
On the basis of the damped LAMBDA algorithm, this paper proposes two processing schemes for this case. In the first, the constraint equations and the observation equations are solved together by adjustment of elements with conditions. In the second, the coordinate-function constraint is used only as a criterion for selecting among ambiguity candidates. Finally, two examples are used to test and evaluate the algorithms.
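The first scheme (solving observation and constraint equations together) can be sketched as a bordered, Lagrange-multiplier normal-equation system; the numbers below are illustrative and not from the paper.

```python
import numpy as np

# Observation equations l = A x + e with weight P; constraints C x = w.
# Adjustment by elements with conditions solves the bordered system
#   [A'PA  C'] [x]   [A'Pl]
#   [C     0 ] [k] = [w   ]
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
l = np.array([1.02, 2.01, 2.95])
P = np.eye(3)
C = np.array([[1.0, 1.0]])   # illustrative constraint: x1 + x2 = 3
w = np.array([3.0])

N = A.T @ P @ A
u = A.T @ P @ l
K = np.block([[N, C.T], [C, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([u, w]))
x = sol[:2]                  # constrained estimate; sol[2] is the multiplier
print(x, x.sum())
```

The solved estimate satisfies the constraint exactly, which is the point of carrying the condition equations into the adjustment rather than applying them afterwards.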
This paper analyzes the sources of the systematic error; the sound-velocity error and the attitude error are identified as the two main contributors. Although these errors are themselves random in character, their effect on the measured depth is systematic. For a sonar system with the nominal accuracy, the sound-velocity error contributes about 35% of the total systematic depth error and the attitude error about 65%. Based on these proportions and on the characteristics of non-parametric (semi-parametric) methods, the systematic error in depth is estimated and weakened.
Systematic errors contained in observations are usually complicated smooth functions of some variables. This paper describes such systematic errors with a natural cubic spline, the nonparametric component of a semiparametric regression model. The penalized least squares technique implemented in the procedure yields a unique solution. Simulation tests show that the semiparametric regression model with penalized least squares separates systematic errors from observations better than the parametric model with ordinary least squares.
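The idea can be sketched with synthetic data: a parametric regressor plus a smooth systematic error, separated by penalized least squares. Here a discrete second-difference roughness penalty stands in for the natural cubic spline, and all data and parameter values are illustrative assumptions.

```python
import numpy as np

# Semiparametric model  y = beta * u + s(t) + e :  a parametric step
# regressor u plus a smooth systematic error s(t), jointly estimated by
# penalized least squares (discrete roughness penalty standing in for
# the natural cubic spline of the paper).
n = 50
t = np.linspace(0.0, 1.0, n)
u = (t > 0.5).astype(float)                  # parametric regressor (a step)
rng = np.random.default_rng(0)
s_true = 0.5 * np.sin(2 * np.pi * t)         # smooth systematic error
y = 1.0 * u + s_true + 0.01 * rng.standard_normal(n)

# Unknowns z = [beta, s_1..s_n]; minimize ||y - u*beta - s||^2 + a*||D s||^2.
X = np.column_stack([u, np.eye(n)])
D = np.diff(np.eye(n), 2, axis=0)            # second-difference operator
alpha = 1.0
B = np.zeros((n + 1, n + 1))
B[1:, 1:] = D.T @ D                          # penalize only the spline part
z = np.linalg.solve(X.T @ X + alpha * B, X.T @ y)
beta_hat, s_hat = z[0], z[1:]
print(beta_hat)
```

The penalty forces s to stay smooth, so the abrupt step is attributed to the parametric component while the sinusoidal systematic error is captured by the nonparametric one.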
This paper introduces processing techniques for sidescan sonar imagery that are not used in general image processing. Sidescan images typically suffer from geometric distortions and intensity distortions: the former are discrepancies between the imaged locations of features and their true locations, and the latter are deviations from the ideal linear relation between image intensity and backscattering strength. If the raw data include navigation information, the geometric distortions can be corrected from it; otherwise, the attitude of the vehicle can be inferred from the sonar image itself.
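One standard geometric correction of this kind is slant-range correction: the across-track ground range follows from the measured slant range and the towfish altitude. The flat-seafloor assumption below is mine; the abstract does not specify which corrections the paper applies.

```python
import math

def slant_to_ground_range(slant_range, altitude):
    """Flat-seafloor slant-range correction: horizontal ground range
    recovered from the measured slant range and the vehicle altitude."""
    if slant_range < altitude:
        return 0.0          # sample arrives before the first bottom return
    return math.sqrt(slant_range**2 - altitude**2)

g = slant_to_ground_range(50.0, 30.0)
print(g)  # → 40.0
```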
A new concept of ambiguity resolution, the dual-space ambiguity resolution approach (DARA), which searches in two spaces simultaneously, is presented. DARA can dramatically reduce the number of ambiguity candidates even when C/A-code observations are used, and is faster than searching in a single space, because only a few candidates satisfy the conditions of both spaces simultaneously. Vehicle test results show that the new approach performs very well: compared with traditional RTK, millimetre-level accuracy is achieved with the kinematic RTK (KINRTK).
To build a reliable model, the errors of the surveying points used for modeling must first be eliminated. Because of its intelligence and robustness in parameter exploration, the principle of information diffusion is a natural choice for exploring a reliable model. This paper analyzes the possibility that some of the factor weights can be determined by the principle of information diffusion, derives a series of statistics for the scale and rotation angle during modeling, and proposes a method for judging the model reliability of the plane similarity transformation using an approximate t-test. Finally, the procedure for assessing model reliability is analyzed using the example of model optimization in the Xiaodongjiang GPS deformation monitoring network.
Wavelet theory is an effective tool for analyzing single-epoch GPS deformation signals. This paper advances a wavelet analysis technique for gross error detection and recovery. Criteria for choosing the wavelet function and for deciding the number of Mallat decomposition levels are discussed. An effective method for extracting the deformation signal is proposed: wavelet noise reduction that takes gross error recovery into account, combined with the results of wavelet multi-resolution gross error detection. The time positions of gross errors are identified and the errors are repaired. In the experiments, a compactly supported orthogonal wavelet with a short support is more efficient than a longer one for discerning gross errors, giving a finer analysis; the shape of a gross error discerned with a short-support wavelet is simpler than with a longer one, and its time position is easier to identify.
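The detection idea can be illustrated with the shortest compactly supported orthogonal wavelet, the Haar wavelet, and a single decomposition level (the paper's wavelet and level choices are not specified in the abstract): a gross error shows up as an isolated spike in the detail coefficients, which locates it in time.

```python
def haar_details(x):
    # One-level Haar detail coefficients: scaled pairwise differences.
    return [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]

def detect_gross_error(x, thresh):
    """Flag the sample pairs whose Haar detail coefficient exceeds the
    threshold; a gross error appears as an isolated large detail."""
    d = haar_details(x)
    return [(2 * i, 2 * i + 1) for i, v in enumerate(d) if abs(v) > thresh]

signal = [0.1, 0.2, 0.1, 0.3, 0.2, 5.0, 0.1, 0.2]   # gross error at sample 5
hits = detect_gross_error(signal, 1.0)
print(hits)  # → [(4, 5)]
```

Once localized, the flagged sample can be repaired, e.g. by replacing it with the average of its neighbors, before the remaining noise is reduced by wavelet thresholding, matching the recovery-then-denoise order described above.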
Discussing the dynamic balance between cultivated-land demand and supply and comprehensively analyzing the driving factors behind changes in this balance, this paper presents a multi-scale conceptual framework for the dynamic demand-supply balance system of cultivated land. Following such main processes as change monitoring, the paper puts forward a multi-scale early-warning system for the dynamic balance between cultivated-land demand and supply, aimed at the driving factors. It also discusses the multi-dimensional warning indexes in detail and gives quantitative analyses of disequilibria in quantity, quality, per-capita capacity, and time scale.
To resolve the graphic conflicts between streets and buildings caused by the exaggeration of street symbols during map generalization, this paper puts forward concrete methods that let the computer simulate human cartographers, giving it the functions of "visual perception" and analysis. For this purpose, a special hybrid raster-vector data structure is designed, and the graphic conflicts are solved by cartographic displacement and constrained reshaping of all building symbols within the displacement zones.
Traditionally, evaluation knowledge is integrated with land evaluation software in either a tight or a loose mode. This cannot meet the requirement that land evaluation systems be organized freely in different cities, and that entities such as factors and samples be analyzed by different methods or processes. The classification, representation, storage, and integration mode of evaluation knowledge, and its relations with data entities, are discussed, and an evaluation repository is designed. Three types of evaluation knowledge (descriptive knowledge, rational knowledge, and process knowledge) are divided into subsections such as entities, database structure, field contents, formula parameters, and calculation methods; they are expressed as strings and saved in the repository file.
This paper proposes a novel framework for formalizing the topological relations of contour lines and elaborates applications based on it. The idea focuses on the continuity of spatial proximity and direction among contour lines. A constrained Delaunay TIN built on the contour lines is employed to acquire the proximity relations of neighboring contours. The benefits of the proposed idea are demonstrated experimentally on two time-consuming and error-prone tasks: assigning elevation values to contour lines and connecting broken contour lines that result from vectorizing scanned maps.
After reviewing the traditional vector (parallel-line-based) and raster-based methods for extracting the skeleton and center of a polygon, this paper presents an approach to automatic skeleton and center extraction based on the constrained Delaunay triangulation. The triangle at a local junction acts as the root of the extraction tree; the areas of the parts into which this triangle splits the polygon are computed, and extraction continues in the two directions whose areas are larger, repeating until every extension ends at a vertex of the polygon. Connecting the midpoints of the triangle edges in sequence forms the skeleton of the polygon. The algorithm is developed in detail and its properties are analyzed experimentally.
Two methods are given for calculating the similarity of spatial directions between areal objects in raster data. One calculates the similarity from the features of the raster data and the direction matrix; the other calculates it from the variation of the angle between each raster cell and a reference object. The two methods simplify Goyal's direction-matrix-based similarity calculation and overcome its limitation for small changes of direction, and so have broader applicability to calculating the similarity of spatial directions.
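The direction-matrix idea can be sketched as follows: each raster cell of the target object is assigned to one of the nine direction tiles around the reference object's bounding box, and two matrices are compared. The similarity measure below (the complement of half the L1 distance) is an illustrative choice, not the paper's formula.

```python
def direction_matrix(ref_box, cells):
    """3x3 direction-relation matrix: fraction of the target object's
    raster cells in each direction tile around the reference object's
    bounding box (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = ref_box
    m = [[0.0] * 3 for _ in range(3)]
    for x, y in cells:
        col = 0 if x < xmin else (2 if x > xmax else 1)
        row = 0 if y > ymax else (2 if y < ymin else 1)   # row 0 = north
        m[row][col] += 1.0 / len(cells)
    return m

def similarity(m1, m2):
    # Complement of half the L1 distance between the two matrices
    # (illustrative; both matrices have entries summing to 1).
    diff = sum(abs(a - b) for r1, r2 in zip(m1, m2) for a, b in zip(r1, r2))
    return 1.0 - diff / 2.0

ref = (0, 0, 10, 10)
obj_a = [(12, 5), (13, 6), (14, 5), (12, 4)]      # entirely east of ref
obj_b = [(12, 5), (13, 6), (14, 12), (12, 4)]     # mostly east, one cell NE
sim = similarity(direction_matrix(ref, obj_a), direction_matrix(ref, obj_b))
print(sim)  # → 0.75
```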
CNSDTF is the nationally recommended standard for the transfer of geo-spatial vector data. Some difficulties arise when transferring digital maps with CNSDTF. After studying the standard carefully, this paper expounds a new concept, namely that there are two kinds of attribute classes in geo-spatial vector data, and briefly discusses extending the functions of CNSDTF.
An approach based on canonical correlation analysis from multivariate statistics is introduced for change detection in multi-temporal, multi-channel remote sensing imagery. The basic idea is to treat the multichannel images acquired at different times as groups of random variables and to construct linear combinations that explore the correlations between them, thus revealing the largest differences over the time span. In our approach, the MAD (multivariate alteration detection) transformation is first applied to the original images to produce a difference image; the MNF (minimum noise fraction) transformation is then used as a post-processing step to separate noise from signal in the difference image, so that the change information is effectively concentrated in a few components of the final result. A change detection method based on PCA is also described briefly for comparison. Experimental results of a case study using Landsat-5 TM imagery demonstrate the effectiveness of the method, and the correlations between the results and the original images are discussed in detail.
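The MAD step can be sketched compactly: the canonical variates of the two dates are paired, and their differences, ordered from the least correlated pair (most change) upward, form the MAD components. The sketch below simplifies the Y-side weight scaling and omits the MNF post-processing; it is an illustration of the technique, not the paper's implementation.

```python
import numpy as np

def mad_variates(X, Y):
    """Multivariate Alteration Detection sketch: canonical variates
    a'X - b'Y, ordered so the least-correlated pair (most change)
    comes first.  X, Y are (n_pixels, n_bands) arrays.  The Y-side
    weights omit the usual 1/rho scaling, so this is a sketch rather
    than a reference implementation."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx, Syy, Sxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n
    # Canonical correlations from the X-side eigenproblem.
    M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    rho2, A = np.linalg.eig(M)
    order = np.argsort(rho2.real)            # ascending: most change first
    rho2, A = rho2[order].real, A[:, order].real
    B = np.linalg.solve(Syy, Sxy.T) @ A      # matching Y-side weights
    return X @ A - Y @ B, np.sqrt(np.clip(rho2, 0.0, 1.0))

# Synthetic two-date scene: date 2 is date 1 plus a little noise.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
Y = X + 0.1 * rng.standard_normal((200, 3))
mad, rho = mad_variates(X, Y)
print(rho)
```

When the two dates are identical, every canonical correlation is 1 and all MAD components vanish, which is the sanity check for the construction.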
For productive applications of satellite data to land-cover classification, the higher the resolution and the more bands used, the more accurate the results that can be produced. A clustering method based on 3D self-organizing neural nodes is used. ASTER is a new kind of sensor, providing 3 bands at 15 m resolution and 6 bands at 30 m resolution. The Dagang region in Tianjin is selected as the case study area. Wavelet fusion is applied to fuse bands of different resolutions, and the self-organizing neural network classification of land use is then performed. Finally, the classification results are compared with those of maximum likelihood classification using the same training samples; the accuracy of the validated result is over 94%.
A fast algorithm (PFS) applicable to H.264 is proposed; it uses one-dimensional projection to eliminate unpromising search positions. The algorithm allows the seven block modes of motion estimation to be performed almost simultaneously. Simulation results show that the proposed algorithm is up to 6 times faster than the exhaustive search algorithm, with almost identical coding performance.
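The projection-based elimination rests on a simple bound: the L1 distance between the row projections (row sums) of two blocks never exceeds their SAD, so candidate positions whose projection distance already exceeds the best SAD found so far can be skipped without computing the full SAD. The sketch below illustrates this pruning on one block mode; the paper's handling of all seven H.264 modes is not reproduced.

```python
def sad(a, b):
    # Sum of absolute differences between two equally sized blocks.
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def row_projection(block):
    return [sum(row) for row in block]

def projection_search(ref, cur_block, top, left, radius):
    """Full-search motion estimation with projection pruning: by the
    triangle inequality the L1 distance between row projections is a
    lower bound on the SAD, so candidates whose bound already exceeds
    the best SAD are skipped."""
    n = len(cur_block)
    proj_cur = row_projection(cur_block)
    best = None, float('inf')
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > len(ref) or x + n > len(ref[0]):
                continue
            cand = [row[x:x + n] for row in ref[y:y + n]]
            bound = sum(abs(p - q)
                        for p, q in zip(row_projection(cand), proj_cur))
            if bound >= best[1]:
                continue                    # pruned by the projection bound
            d = sad(cand, cur_block)
            if d < best[1]:
                best = (dy, dx), d
    return best

ref = [[i * 8 + j for j in range(8)] for i in range(8)]  # synthetic frame
cur = [row[3:7] for row in ref[2:6]]        # block truly located at (2, 3)
mv, cost = projection_search(ref, cur, top=1, left=2, radius=2)
print(mv, cost)  # → (1, 1) 0
```

Because the bound is a true lower bound on the SAD, the pruning never discards the optimum; it only saves the full SAD computation at hopeless positions.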