2019 Vol. 44, No. 4
A multi-GNSS precise point positioning (PPP) algorithm considering the inter-system bias (ISB) is adopted to process multi-GNSS data from 7 stations of the multi-GNSS experiment (MGEX). Using the proposed algorithm, the ISB values between Galileo, GLONASS, BDS (BeiDou Navigation Satellite System) and GPS (Global Positioning System) can be estimated. The static multi-GNSS PPP solutions show RMS (root mean square) values of 8.9 mm, 5.3 mm and 10.9 mm for the east, north and up directions, respectively. The one-day stabilities of the ISBs, described by STD (standard deviation) values, are better than 0.12 ns for all systems, with Galileo being the best. In the multi-day ISB series, significant irregular ISB jumps can be found, with a change range of nearly 20 ns. ISB values differ between receiver types but are similar for receivers of the same type. On the whole, the ISB for Galileo is the most stable, while the results for BDS and GLONASS are almost equivalent.
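The quantities reported above are simple statistics of an epoch-wise ISB series: each epoch's ISB is the difference between the receiver clock estimated with one constellation and the GPS receiver clock, the one-day stability is the STD of that series, and the multi-day jumps are differences between consecutive daily values. A minimal numpy sketch of that bookkeeping (the function names and the synthetic values are illustrative, not from the paper):

```python
import numpy as np

def isb_series(clk_sys_ns, clk_gps_ns):
    # Epoch-wise ISB: system receiver clock minus the GPS receiver clock (ns)
    return np.asarray(clk_sys_ns, float) - np.asarray(clk_gps_ns, float)

def daily_stability_ns(isb_ns):
    # One-day stability reported as the STD of the epoch-wise ISB series
    return float(np.std(isb_ns))

def day_jumps_ns(daily_mean_isb_ns):
    # Day-to-day ISB jumps: differences between consecutive daily means
    return np.diff(np.asarray(daily_mean_isb_ns, float))

# Illustrative use: a constant 12.5 ns Galileo-GPS bias has zero STD,
# and a day-boundary jump shows up in the daily differences.
gps_clk = np.array([0.0, 0.25, -0.25, 0.5])
gal_clk = gps_clk + 12.5
series = isb_series(gal_clk, gps_clk)
```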
A static measurement experiment and an elevator-platform test of GT-2A were designed to determine the zero drift rate, resolving power and scale factor of the instrument. We calculated the drift rate from static continuous observations of GT-2A and a relative gravimeter, together with the tidal gravity from a solid tide model. With the solid tide correction applied, the zero drift rate of the GT-2A observations decreases from -6.5 μGal/h to -0.1 μGal/h, while the representative error caused by using the zero drift rate from the calibration measurement decreases from 7.2 μGal/h to 1.1 μGal/h. Thus, the solid tide has a great influence on the determination of the zero drift rate. With these data, the ability of GT-2A to monitor the solid tide can be tested. The frequency-domain analysis shows that tidal components whose amplitude exceeds 30 μGal are visible in the amplitude-frequency diagram, which means the resolving power of GT-2A is about 30 μGal. The GT-2A observations from the elevator-platform test are used to calculate the vertical gradient of gravity, which is then compared with the result measured by the relative gravimeter. The scale factor of the instrument is thereby determined to be -0.003 4±0.011 6.
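The effect of the solid-tide correction on the drift estimate can be illustrated with a least-squares line fit: removing a tide model from the gravity series before fitting changes the recovered slope. A hedged sketch on synthetic data (`drift_rate` is an illustrative helper, not the GT-2A processing code):

```python
import numpy as np

def drift_rate(t_h, g_ugal, tide_ugal=None):
    """Least-squares linear drift rate (uGal/h) of a gravimeter series,
    optionally after subtracting a solid-tide model first."""
    g = np.asarray(g_ugal, float)
    if tide_ugal is not None:
        g = g - np.asarray(tide_ugal, float)   # solid-tide correction
    slope, _ = np.polyfit(np.asarray(t_h, float), g, 1)
    return float(slope)

# Illustrative use: a -0.1 uGal/h drift buried under a semidiurnal tide.
t = np.linspace(0.0, 24.0, 97)                 # 24 h at 15 min sampling
tide = 50.0 * np.sin(2 * np.pi * t / 12.42)    # toy semidiurnal wave
g = -0.1 * t + tide
```

Without the correction the tide leaks into the fitted slope; with it, the true drift is recovered.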
Single-frequency GPS ambiguity resolution on-the-fly suffers from an ill-posed normal equation. Using epoch-differenced coordinate information between neighboring epochs can mitigate the ill-conditioning to some extent; as a result, the accuracy of the float ambiguity solution is improved and the convergence time of the ambiguities is shortened. To further improve the ambiguity fixing efficiency of the epoch-differenced method, the epoch-differenced coordinate pseudo-observation error equation and its variance matrix are reconstructed. The experimental results show that, compared with the previous algorithm using epoch-differenced coordinate information, the new method has higher stability and efficiency.
In traditional multi-GNSS precise point positioning (PPP) using raw observations, receiver code biases are absorbed into the ionospheric delay estimates, which can therefore take negative values. An improved model of Global Positioning System/BeiDou Navigation Satellite System (GPS/BDS) PPP with receiver differential code bias (DCB) parameters for raw observations is proposed, in which receiver code biases on the first frequency of each system are constrained to zero and receiver DCB parameters are estimated. The presented model separates ionospheric delays from receiver code biases and reduces the singularity between receiver clock offsets and ionospheric delays. GPS/BDS data from 4 stations of the Multi-GNSS experiment (MGEX) network are processed in static and kinematic modes. The results show that with the proposed PPP model, the average positioning accuracy and convergence time are improved by 29.3% and 15.7% in static mode, and by 29.8% and 21.6% in kinematic mode, in comparison with the traditional PPP model.
Due to the uncertainty of the Lagrange empirical parameter, selecting empirical parameters for different observed data sets introduces uncertainty into the results, which weakens the applicability of the inversion method. By using the turning point of the L-curve instead of the Lagrange empirical parameter as the regularization parameter, the preconditioned conjugate gradient algorithm is improved. The subsurface model is discretized with unequally spaced cells to alleviate the ill-conditioned problem and weaken the attenuation of the kernel function. To take full advantage of the multiple components of the gravity gradient, five independent measured components of the gravity gradient tensor are inverted jointly to mitigate the non-uniqueness of the inversion results. The effectiveness and reliability of the improved method are validated by statistical analysis of multiple sets of synthetic models. For the field data, the improved method is applied to the inversion of airborne gradiometry data from the Australian Kauring test site, yielding the 3D distribution of underground density anomalies. Compared with previous results of gravity data inversion, the improved algorithm is verified to be effective and reveals more anomaly blocks besides the central ones. Our results show that the improved algorithm can invert the distribution of density anomalies from field measurements, and the inversion results provide more detailed and reliable information on the density anomalies.
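The L-curve criterion used here in place of the Lagrange empirical parameter can be sketched for plain Tikhonov regularization: sweep a grid of candidate parameters, trace the (log residual norm, log solution norm) curve, and take the point of maximum curvature as the corner. This is a generic illustration, not the paper's preconditioned conjugate gradient implementation:

```python
import numpy as np

def tikhonov(A, b, lam):
    # Solve min ||Ax - b||^2 + lam^2 ||x||^2 via the normal equations
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def l_curve_corner(A, b, lambdas):
    """Pick the regularization parameter at the maximum-curvature corner
    of the log-log L-curve (residual norm vs. solution norm)."""
    xs = [tikhonov(A, b, l) for l in lambdas]
    rho = np.log([np.linalg.norm(A @ x - b) for x in xs])
    eta = np.log([np.linalg.norm(x) for x in xs])
    t = np.log(lambdas)
    dr, de = np.gradient(rho, t), np.gradient(eta, t)
    d2r, d2e = np.gradient(dr, t), np.gradient(de, t)
    # Signed curvature of the parametric curve (rho(t), eta(t))
    curv = (dr * d2e - d2r * de) / (dr**2 + de**2 + 1e-300) ** 1.5
    return lambdas[int(np.argmax(curv))]
```

A usage pattern: build an ill-conditioned system with decaying singular values, then select the corner parameter from a logarithmic grid.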
We have developed MERGREAS, the first Mercury precise orbit determination and geoscience parameter solution software system with independent intellectual property rights, considering the great prospect of future Mercury missions in China. The software simulates forecasting ephemerides, observations, and precise orbit determination (POD), and the results are compared with GEODYN-Ⅱ. The difference in the forecasting ephemeris over one day is at the 10⁻⁷-10⁻⁸ m level, and the velocity deviation is at the 10⁻⁹-10⁻¹² m/s level; in addition, the two-way range difference is 10⁻⁴ m and the two-way range-rate difference is 4×10⁻⁶ m/s. In POD, the errors in the X, Y and Z directions are 0.2 m, 0.7 m and 0.5 m, respectively; the simulation results therefore show that the POD precision of the software can reach the level of GEODYN-Ⅱ for MESSENGER. Meanwhile, we analyze a Mercury lander with a simulation of same-beam very long baseline interferometry (VLBI), obtaining position errors of 1 m for the orbiter and 0.88 m for the lander. With the combined errors from Mercury gravity models and Mercury rotation models taken into account, the position error is 13.6 m for the orbiter and 250.3 m for the lander. This software can serve as a reference for future Mercury tracking tasks, and these research results have application value for China's future Mercury exploration missions.
A method for detecting outliers in multibeam soundings with a back propagation (BP) neural network is proposed in this paper to handle the complexity of the bathymetric data of a ping. Based on the input-output mapping function of the BP neural network, a training and learning algorithm is constructed to fit the complex curve of multibeam single-ping data. The preliminary results are then checked across adjacent pings by correlation analysis, and a vertical check to locate and remove outliers is also proposed. The experiment is conducted on real bathymetric data containing a shipwreck in the middle of the surveyed area, and the result is compared with the combined uncertainty and bathymetry estimator (CUBE) algorithm, currently a popular method for detecting outliers in multibeam soundings. The experiment proves that the proposed method can detect outliers more effectively.
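The core idea of fitting the single-ping depth curve with a BP network and flagging soundings that deviate from the fit can be sketched as follows. This is a toy one-hidden-layer network trained by full-batch gradient descent on synthetic data, not the paper's algorithm; the MAD-based robust threshold is an added assumption:

```python
import numpy as np

def fit_ping_curve(x, y, hidden=6, lr=0.05, epochs=1500, seed=1):
    """Fit a single-ping depth curve with a one-hidden-layer BP network
    (tanh hidden units, full-batch gradient descent); return residuals."""
    rng = np.random.default_rng(seed)
    X = ((x - x.mean()) / x.std())[:, None]      # normalize inputs
    ym, ys = y.mean(), y.std()
    Y = ((y - ym) / ys)[:, None]                 # normalize targets
    W1 = rng.normal(0.0, 0.5, (1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    n = len(x)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                 # forward pass
        out = H @ W2 + b2
        d = 2.0 * (out - Y) / n                  # dMSE/dout
        gW2, gb2 = H.T @ d, d.sum(0)
        dH = (d @ W2.T) * (1.0 - H**2)           # backprop through tanh
        gW1, gb1 = X.T @ dH, dH.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    pred = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel() * ys + ym
    return y - pred

def flag_outliers(res, k=3.0):
    # Flag soundings whose residual exceeds k robust standard deviations
    med = np.median(res)
    mad = np.median(np.abs(res - med))
    return np.abs(res - med) > k * 1.4826 * mad
```

On a smooth synthetic ping with one injected spike, the spike carries the largest residual and is flagged.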
To correct the deformed seabed topography caused by sound velocity profile (SVP) errors over a flat seafloor in multibeam depth measurement, a method for SVP inversion and seafloor topography correction based on seabed observations is proposed. Taking the incidence angles and travel times of the beams as input parameters, functional relationships are established between the beam displacement caused by the erroneous sound velocity and the travel time. Then a modified SVP, which approximates the true SVP, is inverted by indirect adjustment and the Levenberg-Marquardt (LM) method, and the distorted terrain is thereby corrected. Validation with field data shows that the inverted SVP is closer to the original SVP than the erroneous SVP is. Meanwhile, the distorted terrain caused by the sound velocity errors can be reduced effectively by the proposed inversion method, and the STD (standard deviation) of the corrected depth is reduced by more than 50%.
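The Levenberg-Marquardt step at the core of such an inversion can be illustrated generically: damped Gauss-Newton iterations whose damping grows when a step fails to reduce the residual. The straight-ray depth model in the usage example (depth = speed × travel time × cos of incidence angle, plus a bias) is a deliberate simplification for illustration, not the paper's ray-traced forward model:

```python
import numpy as np

def levenberg_marquardt(f, jac, p0, y, iters=50, lam=1e-3):
    """Minimal LM loop: damped Gauss-Newton steps; the damping lam is
    halved after a successful step and increased tenfold after a failure."""
    p = np.asarray(p0, float)
    cost = np.sum((f(p) - y) ** 2)
    for _ in range(iters):
        J, r = jac(p), f(p) - y
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), -J.T @ r)
        p_new = p + step
        cost_new = np.sum((f(p_new) - y) ** 2)
        if cost_new < cost:
            p, cost, lam = p_new, cost_new, lam * 0.5
        else:
            lam *= 10.0
    return p
```

Usage with the toy model: observed depths generated with an effective sound speed of 1500 m/s are inverted starting from a deliberately wrong initial guess.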
In sub-bottom profile data processing, manual picking is inefficient and significantly influenced by human factors, while semi-automatic picking requires human intervention and has low accuracy. To solve these problems, this paper proposes a method that uses gray-scale mutation and a horizon-direction constraint to pick sub-bottom horizons automatically. Firstly, a pre-picking method based on gray-scale mutation is applied. Secondly, based on the continuity principle, a horizon tracking and filtering method is proposed, which connects discrete horizon segments and filters out irregular horizons. Then, taking the correlation of the ping sequence and the horizon direction into consideration, a horizon growth method based on the direction constraint is proposed to connect discontinuous horizons. Finally, the complete horizon picking process is given, and the results are verified by experiment.
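The gray-scale-mutation pre-pick of the first step can be sketched as a per-ping scan for the first large vertical intensity jump; this is a simplified stand-in for the paper's method, with the jump threshold as an assumed input:

```python
import numpy as np

def prepick_horizon(img, grad_thresh):
    """Pre-pick a sub-bottom horizon: for each ping (column) return the
    index of the first sample whose vertical gray-scale jump exceeds
    grad_thresh, or -1 when no such jump exists in that ping."""
    grad = np.abs(np.diff(img.astype(float), axis=0))  # vertical jumps
    picks = np.full(img.shape[1], -1, dtype=int)
    for j in range(img.shape[1]):
        hits = np.flatnonzero(grad[:, j] > grad_thresh)
        if hits.size:
            picks[j] = hits[0] + 1        # +1: diff refers to the lower sample
    return picks
```

On a synthetic profile with a bright layer starting at a known row per ping, the pre-pick recovers those rows exactly; the later tracking, filtering, and direction-constrained growth steps would then clean and connect such raw picks.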
The multibeam water column image (WCI) has high application value in underwater target detection and recognition. However, it is often affected by receiver beam side lobes, third-party sonar interference, and complex marine environment noise from marine organisms, suspended solids, microbubbles and so on, which greatly limits its applications. Therefore, this paper proposes a comprehensive method to suppress these interferences in the WCI. Firstly, the origin and mechanism of the interferences are analyzed. In accordance with the spatial distribution characteristics of the interferences, an abnormal "arc" detection and elimination method based on the features of the intensity distribution is proposed to suppress the abnormal "arc" mainly caused by receiver beam side lobes; in this process, a target-preservation rule is applied to retain target fragments lying within the abnormal "arc". Secondly, a background noise reduction method based on the intersection and difference sets of images is proposed to suppress the background noise mainly caused by marine organisms and suspended solids. Combining these two methods yields the comprehensive method. The experimental results show that these methods can suppress the interference in the WCI efficiently; the quality of the WCI is greatly improved and the targets in the WCI are preserved.
Image quality assessment plays an important role in remote sensing image fusion, and research on objective evaluation methods is of great value in this field. Because the fused images are ultimately observed by humans, the best objective evaluation index of image quality should be consistent with subjective evaluation, and introducing human visual characteristics into image quality evaluation yields better results. Therefore, based on the multi-channel decomposition and contrast sensitivity characteristics of the human visual system, this paper proposes a new method to evaluate the fusion quality of panchromatic and multispectral images. Firstly, the color distortion and structural similarity measures are modified to evaluate the spectral information and spatial information, respectively. Secondly, they are integrated into an overall evaluation of the fused image. Experiments evaluating the quality of different fused images of SPOT and GF-2 image data show that the proposed method is in good agreement with subjective evaluation for high-resolution remote sensing images.
Low-spatial-resolution images often have high temporal repetition rates, but they contain a large number of mixed pixels, which seriously limits their capability in change detection. Pixel unmixing can produce the proportional fractions of land covers, from which sub-pixel information can be extracted to alleviate the low-spatial-resolution problem to some extent. Therefore, a sub-pixel change detection method is proposed in this paper to overcome this issue. Firstly, the endmembers of the remote sensing images at different times are extracted; then the pixel unmixing process is implemented and the abundance difference image is produced by comparing the fractional abundances. The endmember variability per pixel is considered during this process to ensure the high accuracy of the derived fractional difference image. Finally, a suitable threshold is determined from the difference image to avoid false changes and noise in the initial fractional difference image. Since the intensity change values in the image follow a Gaussian mixture distribution, the expectation maximization (EM) algorithm and the Bayes discriminant criterion are used together to find the best threshold for each land cover change. Once the threshold is set, fraction changes larger than the threshold are considered real and other changes are set to zero. The proposed method is compared with two traditional methods in both simulated and real experiments. The results demonstrate that our method can extract changed information more precisely and with high stability.
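The EM/Bayes thresholding step can be sketched for a 1-D change-intensity histogram: fit a two-component Gaussian mixture (no-change vs. change) with EM, then place the threshold where the two weighted class densities are equal, which is the Bayes discriminant boundary. A minimal numpy sketch under that two-class assumption (not the paper's per-land-cover implementation):

```python
import numpy as np

def em_two_gaussians(x, iters=100):
    """Fit a two-component 1-D Gaussian mixture with EM."""
    x = np.asarray(x, float)
    mu = np.array([x.min(), x.max()])          # spread initial means apart
    sig = np.array([x.std(), x.std()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) \
              / (sig * np.sqrt(2 * np.pi))
        r = pdf / pdf.sum(1, keepdims=True)
        # M-step: update weights, means, and standard deviations
        nk = r.sum(0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(0) / nk
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(0) / nk) + 1e-6
    return w, mu, sig

def bayes_threshold(w, mu, sig):
    # Bayes discriminant: the point between the means where the two
    # weighted class densities are equal (found by a fine grid scan)
    grid = np.linspace(mu.min(), mu.max(), 10001)
    p = w * np.exp(-0.5 * ((grid[:, None] - mu) / sig) ** 2) \
        / (sig * np.sqrt(2 * np.pi))
    return float(grid[np.argmin(np.abs(p[:, 0] - p[:, 1]))])
```

Fraction changes above the returned threshold would then be kept as real changes and the rest set to zero, as described above.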
In this paper, a method is proposed to locate insulators automatically in aerial images based on binarized normed gradients (BING) and convolutional neural networks (CNNs). Firstly, we extract insulator candidate windows with the BING algorithm. Secondly, we identify the windows containing insulators with convolutional neural networks. Finally, weighted iteration of the window set with high overlap is used to obtain the final insulator positioning results. The proposed method is validated with transmission line aerial images obtained from actual inspections by the large-scale unmanned helicopter of Guangdong Grid Co. Experiments show that the recall for insulators with complex backgrounds is 90.5% and the positioning accuracy is 92%, which means the proposed method can effectively locate insulators in aerial images with complex backgrounds; the method is also highly versatile and can be adapted to visible light images with different backgrounds.
Comprehensively evaluating the accuracy of digital elevation model (DEM) upscaling methods is important for choosing a DEM upscaling method reasonably, mastering the laws of DEM upscaling, and constructing new DEM upscaling models. However, current research on DEM accuracy evaluation is still insufficient for practical applications. In this paper, an accuracy assessment method for DEMs based on the energy factor is proposed and applied to the accuracy assessment of different DEM upscaling methods. A comprehensive evaluation strategy combining qualitative and quantitative accuracy assessment and quality evaluation methods is constructed to verify the rationality and effectiveness of the energy-factor-based assessment strategy. The experimental results show that the accuracy assessment method based on the energy factor is simple in theory and easy to operate in practice. It can reflect how well an upscaling method retains the terrain masking relationship, which the traditional accuracy evaluation strategy cannot reflect directly. This accuracy assessment framework also provides a new and valuable standard for choosing DEM upscaling methods, puts forward new design requirements for them, and has implications for remote sensing modeling and inversion.
Inshore containers in high-resolution optical imagery suffer severe interference from structures, shadows, and the surrounding environment, and ship bodies are very similar to the container structures on nearby land. These conditions make the automatic detection of inshore containers a very challenging task. To address this problem, this paper proposes a detection method for inshore containers based on superpixel-level contextual features. Firstly, the image is segmented into superpixels, and the features of each superpixel and its neighboring superpixels are concatenated into a superpixel-level contextual feature. Then, based on the positive samples and actively selected negative samples, the target and background superpixels are classified via machine learning. Finally, a fully connected conditional random field is employed to refine the classification result and complete the detection. The experimental results verify the applicability of the proposed method.
A positioning method for the footprint of a space-borne laser altimeter based on the vertical line locus (VLL) algorithm and an elevation structure constraint is proposed. In the satellite photogrammetry framework supported by the space-borne laser altimeter, positioning point candidates in the footprint are acquired from the digital surface model (DSM) generated by stereo images, according to the relative elevation structure information of the waveform. Then, the elevations of the positioning point candidates are improved using the VLL algorithm together with the elevation structure information, which can eliminate the position errors of the stereo images. Experiments show that the proposed method is effective for large-footprint space-borne laser altimeter positioning: the elevation positioning accuracy is 0.16 m, and the planimetric positioning accuracy is the same as that of the stereo images.
Lunar surface topography mapping is fundamental to lunar exploration and plays an important role in the three-phase ("orbiting, landing, returning") lunar exploration program. In the orbiting phase, the camera carried by the spacecraft must acquire high-resolution images of the area around the landing site. Due to the low orbit altitude and the limited field of view of the camera, the swath of the imagery obtained by the lunar satellite is relatively small compared with the required landing area, and stitching multiple strips is an effective and practical way to image such large regions. In a multiple-strip stitching task plan, the side-swing angle of each orbit must be optimized within a given mission duration to meet the regional coverage demand. To meet the requirements of regional multiple-strip stitched imaging tasks, a lateral swing angle optimization method is proposed in this paper. Firstly, a fast algorithm based on vector polygon logical operations is proposed: the coverage polygon of each imaging strip is calculated from the lateral swing angle, satellite orbit, and sensor field of view, and the coverage ratio is then computed via a Boolean operation between the target polygon and the strip coverage polygons. This coverage ratio calculation not only ensures accuracy but also dramatically reduces the computational complexity. Secondly, a lateral swing angle optimization model for the multiple-strip stitching imaging task is introduced, which uses the swing angles as decision variables and coverage ratio maximization as the objective function, and an improved adaptive genetic algorithm based on the sigmoid function is proposed to solve the optimization model. Finally, two simulation experiments are carried out to verify the effectiveness of the proposed method.
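The coverage-ratio computation via polygon Boolean operations can be illustrated with axis-aligned rectangles, where the area of the union of the strips clipped to the target can be computed exactly by coordinate compression. This is a simplified stand-in for the general vector-polygon Boolean operation described above:

```python
def coverage_ratio(target, strips):
    """Fraction of the target rectangle covered by the union of strip
    rectangles. Rectangles are (xmin, ymin, xmax, ymax); the union area
    is computed exactly on the compressed coordinate grid."""
    tx0, ty0, tx1, ty1 = target
    clipped = []
    for x0, y0, x1, y1 in strips:            # clip each strip to the target
        x0, y0 = max(x0, tx0), max(y0, ty0)
        x1, y1 = min(x1, tx1), min(y1, ty1)
        if x0 < x1 and y0 < y1:
            clipped.append((x0, y0, x1, y1))
    xs = sorted({v for r in clipped for v in (r[0], r[2])})
    ys = sorted({v for r in clipped for v in (r[1], r[3])})
    covered = 0.0
    for i in range(len(xs) - 1):             # sum cells covered by any strip
        for j in range(len(ys) - 1):
            cx, cy = (xs[i] + xs[i + 1]) / 2, (ys[j] + ys[j + 1]) / 2
            if any(x0 <= cx <= x1 and y0 <= cy <= y1
                   for x0, y0, x1, y1 in clipped):
                covered += (xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j])
    return covered / ((tx1 - tx0) * (ty1 - ty0))
```

A genetic algorithm like the one in the paper would call such a coverage evaluation once per candidate set of swing angles to score each individual.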
Complex image registration is a crucial step in interferometric synthetic aperture sonar (InSAS) signal processing; the registration quality directly influences the generation of the interferogram and the subsequent reconstruction of the digital elevation model (DEM). Utilizing the geometric model of imaging, the rational function relationship between offset and slant range is deduced, and this paper therefore proposes a new InSAS complex image registration method based on second-order rational function surface fitting. Compared with the traditional polynomial surface fitting method, the proposed method achieves higher precision at a smaller computational cost. The evaluation is carried out using the root mean squared error (RMSE), correlation coefficient, number of residues, and computing time as criteria. Results of simulation and real-data experiments indicate the validity of the proposed method.
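Second-order rational function surface fitting of the registration offsets can be linearized and solved by least squares: with the constant term of the denominator Q fixed to 1, offset = P/Q becomes P − offset·(Q − 1) = offset, which is linear in the remaining 11 coefficients. A sketch under that common RPC-style parameterization (an assumption; the paper's exact parameterization may differ):

```python
import numpy as np

def _terms(r, c):
    # Second-order monomials in row/column coordinates
    return np.column_stack([np.ones_like(r), r, c, r * r, r * c, c * c])

def fit_rational_surface(r, c, offset):
    """Fit offset(r,c) = P2(r,c) / Q2(r,c), Q's constant term fixed to 1,
    via the linearized system [T | -offset*T[:,1:]] x = offset."""
    T = _terms(r, c)
    A = np.hstack([T, -offset[:, None] * T[:, 1:]])  # 6 P + 5 Q unknowns
    coef, *_ = np.linalg.lstsq(A, offset, rcond=None)
    return coef[:6], np.concatenate([[1.0], coef[6:]])

def eval_rational_surface(p, q, r, c):
    T = _terms(r, c)
    return (T @ p) / (T @ q)
```

With exact synthetic offsets the fit recovers the generating coefficients, so predictions at new pixels match the generating surface.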
We address the problem of fine-grained parallel optimization for large-scale data. The patch-based multi-view stereo (PMVS) algorithm has been widely applied to digital city modeling and other fields because of its good three-dimensional reconstruction results; however, its large-scale computation has low execution efficiency. To address this limitation, this paper proposes a fine-grained parallel optimization method covering task allocation and load balancing, strategies for main system memory and GPU memory, and communication optimization. We perform CPU multi-threading with the pthreads library to take full advantage of multi-core CPUs, and for GPUs we use the CUDA framework while optimizing thread organization and memory access. In addition, we propose a memory pool model and a pipelining model to improve bandwidth utilization: the memory pool model reduces the impact of data transfers on the CPU-GPU bus while waiting for resources, and the pipelining model hides the time the CPU spends reading data from memory. The Harris-DOG feature extraction step of the PMVS algorithm on image sequences is used as the example to verify our optimization strategies. The experiments demonstrate that the multi-threaded CPU strategy achieves a 4-fold speed-up, the CUDA-based parallel strategy achieves up to a 34-fold speed-up, and our strategy improves performance by a further 30% over the CUDA-based parallel strategy. In the future, our optimization strategy can be applied to fast computing resource scheduling in big data processing in other domains.
To generate and represent property solids with interior and exterior topology, this paper proposes a method that creates 3D models of property solids by extruding existing floor-plans and uses 3-combinatorial maps to represent the interior and exterior topology of the solids. The results are as follows: (1) A delivery method for intervals based on weighted incidence graphs and incidence matrices is proposed. (2) According to the comparison relationship between old and new intervals, a method for generating the darts of 3-combinatorial maps is proposed. (3) An algorithm for adding the β relations of 3-combinatorial maps is proposed. The research shows that property solids can be created by extrusion and represented with 3-combinatorial maps, capturing both interior and exterior topology while improving the efficiency of constructing property solids.
The outline is very important to the cognition of objects and is the key attribute for deciding an object's category. For the metaphor map, whose purpose is to express non-spatial data with a virtual map, the outline offers operational advantages: it can be exploited to enhance map similarity and to establish a connection with the real map. At present, the study of metaphor maps emphasizes map-making technology too much and pays insufficient attention to map cognitive design; poor control of the mapping process leads to strong randomness in the map's outline. In this paper, we filter the map outlines and apply an optimization algorithm to control the filtering process so as to obtain a satisfactory map outline. Experiments on real data show that this method can obtain a map outline similar to the target outline, and the outline framework supports subsequent map design and representation work.