2017 Vol. 42, No. 1
In the era of new urbanization, industrialization, and informatization, China's cities confront complex problems on the path toward sustainable economic, social, and ecological development. City applications that integrate Geomatics, the Internet, and cloud computing now generate urban big data, which has become a strategic resource of the smart city. With the cross-fusion of Geomatics, informatics, and urban science, Urban Informatics is developing as a new direction of surveying, mapping, and geoinformation. Based on a unified spatial-temporal datum, Urban Informatics captures, processes, and analyzes urban big data and supports smart decision-making with the help of information technology, aiming to realize green, low-carbon, and sustainable cities. This paper discusses the relationship between Geomatics and Urban Informatics, as well as five characteristics of Urban Informatics: dynamic evolution, data-driven analysis, crowdsourced learning, collaborative decision-making, and cross-disciplinarity. Finally, it summarizes the key scientific questions and outlines the future of Urban Informatics.
Land Observing Satellite Data Center is the core platform for storing, distributing, processing, and integrating land observing satellite resources. It provides high-quality and effective services for the State Council, relevant government departments, and local authorities. In the era of big data, the data center benefits from big data opportunities while also facing big data challenges. In this paper, the big data challenges in the center are discussed and a big data solution is presented. In particular, five major challenges are identified: the 3V dimensions of big data (i.e., Volume, Variety, and Velocity) and two challenges specific to CRESDA, namely extensibility and the integration of multiple disparate management systems. To tackle these challenges, a distributed architecture is proposed to manage all resources inside the data center, relying on a Hadoop-like framework for storing and processing big remote sensing data. It is hoped that the proposed architecture can lend more support to national decision-making, improve the country's application of spatial information resources, and serve as a new source of economic growth for land observing satellite data applications.
Air pollution has worsened with the development of cities in recent years. Urban air quality is currently monitored mainly by air quality monitoring stations. However, the number of stations is limited and air quality fluctuates across different urban areas, so it is inefficient to characterize the distribution of air quality in a city with monitoring stations alone. Based on Sina Weibo data with location information, we propose an urban air quality trend surface modeling method that analyzes the correlation between microblogs on air pollution related topics and the AQI data of air quality monitoring stations. The study reveals that our method not only qualitatively shows the relative air quality in different regions of the city, but also characterizes urban air quality in a quantitative and fine-grained way. The findings demonstrate the feasibility of using this new type of large-scale data source to estimate air quality at any location in a city, and are of great significance for reflecting the distribution of air quality and for finding areas that are relatively heavily polluted.
Efficient organization and querying of trajectory data is one of the research hotspots in the spatial database field. Taking advantage of the globally unique, one-dimensional, and hierarchically recursive nature of Geohash codes, and oriented to relational spatial databases, we propose a Geohash-based organization method for large trajectory data together with its range query processing technology. First, a trajectory relational schema, which combines Geohash coding with a B+ tree index, is designed for range queries at multiple scales. Then, a corresponding two-stage range query processing algorithm is introduced. Next, we present a Z-merge optimization to further improve the efficiency of range query processing. Finally, experimental results based on Oracle 11g verify that our approach is suitable for organizing large trajectory data and that its range query performance is much better than that of the traditional R-tree.
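The abstract does not reproduce the Geohash construction itself; as background, a minimal sketch of standard Geohash encoding (interleaving longitude and latitude bisection bits and packing them into base-32 characters, which yields the one-dimensional, prefix-recursive codes the method exploits) might look like the following. This illustrates the general technique only, not the paper's implementation:

```python
# Standard Geohash base-32 alphabet (digits, then letters minus a, i, l, o).
_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=8):
    """Encode a lat/lon pair as a Geohash string of `precision` characters.

    Bits alternate between longitude and latitude bisections; every
    5 bits are emitted as one base-32 character.
    """
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits, nbits, even = 0, 0, True  # even steps refine longitude
    code = []
    while len(code) < precision:
        if even:
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                bits = (bits << 1) | 1
                lon_lo = mid
            else:
                bits <<= 1
                lon_hi = mid
        else:
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                bits = (bits << 1) | 1
                lat_lo = mid
            else:
                bits <<= 1
                lat_hi = mid
        even = not even
        nbits += 1
        if nbits == 5:  # flush 5 bits as one base-32 character
            code.append(_BASE32[bits])
            bits, nbits = 0, 0
    return "".join(code)
```

Because a longer code always refines a shorter one (a precision-6 code begins with the precision-4 code of the same point), a spatial range can be translated into a small set of contiguous code intervals, which a B+ tree can scan efficiently — the property the two-stage query processing described above relies on.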
Integrating environment, entities, and humans in a high-fidelity virtual geographic environment (VGE) is a challenge because of the complexity of geo-phenomena, the large volume of geospatial data, and the limitations of computing resources. We address this challenge by utilizing a serious game engine, CryEngine, for small-scale virtual environments. A methodology for CryEngine-based construction of VGEs is discussed, involving several key technologies: efficient terrain reconstruction, vehicle reconstruction and dynamic simulation, human modeling and behavior simulation, and virtual environment integration and rendering. A prototype virtual campus was constructed to test the soundness of the developed methods and technologies. The results reveal that it is reasonable to build a CryEngine-based VGE to integrate environment, entities, and humans. The results also show that a CryEngine-based VGE offers high fidelity compared with most existing virtual environments, especially for detailed visualization and rendering at small scales.
Based on analyses of the unit functions and unit function types of interior space, and considering the demand for high-quality three-dimensional (3D) indoor navigation rendering as well as the limited resources and computing power of personal mobile platforms, an interior-space network-topology model was constructed that contains semantic information, supports path analysis, and fully expresses the topological logic of the various parts of the interior space. In addition, by accurately determining the regions and topological logic of the interior space, strategies for space division and organization and for adjusting geometric models based on the topological relations of the interior space were explored, achieving dynamic culling and elimination of interior models. The applicability, feasibility, and effectiveness of the proposed model were verified through tests on actual personal mobile platforms. The results indicate that the proposed methods can cull and eliminate geometric models in real time, significantly reducing the number of models rendered and enhancing data transmission and rendering efficiency. The method provides reliable data and assurance for the visualization and rendering of high-fidelity 3D models.
The lack of indoor walking networks is an essential bottleneck for indoor navigation. The floorplan of an indoor environment represents the structural information of indoor space and is thus a potential data source for generating indoor walking networks. However, the lack of topological information limits the use of floorplans for indoor route planning and guidance. This study proposes a data model for indoor navigation that captures the indoor walking environment, the visibility of indoor landmarks, and the relation between indoor and outdoor walking space. Based on the data model, this study also develops a multi-objective model for indoor pedestrian route planning. Results show that the time required for constructing indoor walking networks and for route planning based on the proposed method is quite low. Compared to the shortest route, the optimized route has better landmark visibility and coverage.
In recent years, the availability of mobile phone location data has provided both an opportunity and a challenge for studying human stays. Because human stays can only be extracted at the locations of base stations, an estimation step is needed to produce a continuous population distribution. Kernel density estimation (KDE) can generate a continuous surface and has been widely used to estimate population distribution. However, traditional KDE assumes the sample points are homogeneous and applies a fixed bandwidth to all of them, whereas the service area of base stations in a city varies with the distribution of population, so a fixed bandwidth introduces error. To eliminate these errors, this paper introduces a search-bandwidth controlling parameter that makes the bandwidth vary with the spatial distribution of mobile phone towers. Least-squares cross-validation (LSCV) and log-probability methods were used to test the proposed approach, and the experimental results demonstrate that this improvement yields better estimates than a fixed bandwidth. Taking mobile location data of Shenzhen as an example, we extracted urban human stays for five typical time intervals and used the improved KDE to analyze the differences in their distributions, providing a deeper understanding of how different urban areas are used by people and how this usage varies over time.
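To make the idea of letting the bandwidth follow tower density concrete, here is a minimal sketch of a Gaussian KDE whose per-tower bandwidth scales with the distance to the k-th nearest neighboring tower. The scaling rule, the parameters `k` and `alpha`, and the Gaussian kernel are illustrative assumptions, not the paper's exact search-bandwidth controlling parameter:

```python
import math

def adaptive_kde(towers, weights, query, k=2, alpha=1.0):
    """Evaluate an adaptive-bandwidth Gaussian KDE at `query`.

    Each tower's bandwidth is alpha times the distance to its k-th
    nearest neighboring tower, so bandwidths shrink where towers are
    dense (city center) and grow where they are sparse (outskirts).
    """
    # Per-tower bandwidth from local tower spacing.
    bws = []
    for i, (xi, yi) in enumerate(towers):
        d = sorted(math.hypot(xi - xj, yi - yj)
                   for j, (xj, yj) in enumerate(towers) if j != i)
        bws.append(alpha * d[min(k, len(d)) - 1])
    # Weighted sum of 2-D Gaussian kernels.
    qx, qy = query
    dens = 0.0
    for (xi, yi), w, h in zip(towers, weights, bws):
        r2 = ((qx - xi) ** 2 + (qy - yi) ** 2) / (2 * h * h)
        dens += w * math.exp(-r2) / (2 * math.pi * h * h)
    return dens
```

With this design, an isolated tower spreads its observed stays over a wide area, while towers in a dense cluster keep their mass local, which is the behavior a fixed bandwidth cannot provide.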
Although large quantities of floating car GPS data exist, some links lack real data during certain periods of time, so the travel time of a target link cannot be estimated directly. To address this sparse-data problem when estimating link travel time from floating car data, we put forward an inference method based on big floating car data. We designed a three-layer artificial neural network whose inputs are the feature relationships between the target link and its adjacent links and whose output is their travel time ratio. We obtained the spatiotemporal association of traffic from historical floating car big data and then inferred link travel times. The model was verified with historical floating car data of Wuhan from March to July 2014; the MAPE of the estimated link travel time is less than 25%, which demonstrates the effectiveness of the proposed method.
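To make the model structure concrete, a minimal forward pass of a three-layer (input-hidden-output) network of the kind described might be sketched as follows. The tanh activation and the weight layout are illustrative assumptions; in practice the weights would be trained on the historical floating car data:

```python
import math

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a three-layer network: input -> hidden -> output.

    x  : input feature vector (e.g. target/adjacent link features)
    W1 : hidden-layer weight rows, b1: hidden biases
    W2 : output-layer weight rows, b2: output biases
    Returns the output vector (e.g. a travel time ratio).
    """
    # Hidden layer with tanh activation.
    h = [math.tanh(sum(wij * xj for wij, xj in zip(row, x)) + bi)
         for row, bi in zip(W1, b1)]
    # Linear output layer.
    return [sum(wij * hj for wij, hj in zip(row, h)) + bi
            for row, bi in zip(W2, b2)]
```

The inferred ratio would then be multiplied by the observed travel time of an adjacent, data-rich link to estimate the travel time of the sparse target link.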
Human mobility patterns have been intensively investigated by scientists from computational social science, statistical physics, and complexity science. In the last decade, mobile phone data have provided an unprecedented tool for capturing individuals' travel activities in space and time. However, their sparsity in time and imprecision in space impose significant bias on the derived mobility patterns. This research proposes two efficient techniques to cope with this issue. First, we implement an activity-location and travel-OD identification method to reconstruct reliable trajectories from the call detail records of mobile users. Second, we introduce the approximate entropy, which is superior to the conditional entropy, for quantifying the regularity of individuals' consecutively visited locations. With a case study in Harbin, the proposed approaches enable us to uncover meaningful patterns of urban mobility in terms of frequently and consecutively visited locations.
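A minimal sketch of approximate entropy for a symbolic sequence of visited locations is given below. Matching here is exact equality of length-m windows, which is one natural choice for discrete location labels; the paper's exact distance measure and parameter settings may differ:

```python
import math

def approx_entropy(seq, m=2):
    """Approximate entropy ApEn(m) of a symbolic sequence.

    phi(m) averages the log of how often each length-m window recurs;
    ApEn = phi(m) - phi(m+1). Regular sequences give values near 0,
    irregular sequences give larger values.
    """
    def phi(k):
        n = len(seq) - k + 1
        windows = [tuple(seq[i:i + k]) for i in range(n)]
        total = 0.0
        for w in windows:
            c = sum(1 for v in windows if v == w) / n  # match fraction
            total += math.log(c)
        return total / n
    return phi(m) - phi(m + 1)
```

For a perfectly repetitive home-work sequence the value is close to zero, reflecting the high regularity of consecutively visited locations that the study quantifies.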
In existing studies of video synopsis, the lack of geographic-direction and partitioned expression of moving objects' trajectories is a key unsolved problem. A novel approach for surveillance video synopsis is proposed in which the geographic direction of trajectories is considered. A homography model is used to map trajectories from image space to geographic space, and the geographic direction of the trajectories is analyzed by clustering them. On the basis of this mapping and clustering, the video synopsis background is built by selecting a virtual scene viewpoint, the display trajectories of moving objects are determined by fitting centerlines, and the order of expression is determined by building an expression model for each cluster; the video synopsis is then complete. Several experiments validate the effectiveness and flexibility of the approach.
The emergence of big spatio-temporal data brings brand-new perspectives as well as challenges for investigating and understanding urban space. Due to GPS positioning errors, map-matching methods must be adopted to map spatio-temporal trajectories onto geographic space. This research focuses on low-sampling-rate trajectories of floating cars in urban road networks, formalizing the map-matching process and exploring the influence of both geometric and topological constraints on the matching results. To match low-sampling-rate GPS data in the context of complex urban road networks, this paper proposes a topology-constrained incremental matching algorithm (TIM). Using a sample GPS trajectory of a Beijing floating car, the TIM algorithm is verified to be efficient and accurate given various levels of road network complexity. Our study is valuable for the pre-processing of massive spatio-temporal data and has the potential to benefit trajectory data mining and related urban informatics research in the future.
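As a toy illustration of topology-constrained incremental matching (not the TIM algorithm itself), the sketch below snaps each GPS point to the candidate road segment minimizing a cost that combines geometric distance with a penalty for segments not topologically reachable from the previous match. The segment representation, the adjacency encoding, and the penalty `beta` are all assumptions made for the example:

```python
import math

def incremental_match(points, segments, adjacency, beta=1.0):
    """Match GPS points to road segments one point at a time.

    segments  : dict id -> ((x1, y1), (x2, y2)) line segment
    adjacency : dict id -> set of segment ids reachable from it
    A segment unreachable from the previous match incurs penalty beta,
    so topology can override a purely geometric nearest-segment choice.
    """
    def dist_to_seg(p, seg):
        (x1, y1), (x2, y2) = seg
        px, py = p
        dx, dy = x2 - x1, y2 - y1
        # Project p onto the segment, clamped to its endpoints.
        t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) /
                         (dx * dx + dy * dy or 1.0)))
        return math.hypot(px - (x1 + t * dx), py - (y1 + t * dy))

    matched, prev = [], None
    for p in points:
        best, best_cost = None, float("inf")
        for sid, seg in segments.items():
            cost = dist_to_seg(p, seg)
            if prev is not None and sid != prev and sid not in adjacency[prev]:
                cost += beta  # topological penalty
            if cost < best_cost:
                best, best_cost = sid, cost
        matched.append(best)
        prev = best
    return matched
```

With low-sampling-rate data the gaps between consecutive points are large, which is exactly when the topological term prevents geometrically plausible but unreachable candidates from being chosen.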
In a video surveillance system, the same person may look different across cameras, while different people may look similar within one camera, making person re-identification a challenging problem. This paper presents an algorithm based on part-level feature importance. First, features such as color, texture, and shape are extracted. Then, each body part is clustered by classifying its different appearances, and an error-accumulation method is used to compute a weight vector indicating the importance of each feature for that appearance type. Finally, similarity is calculated by using this vector to weight the features of each part, making the features better suited to the appearance. The algorithm shows that some features are more important than others for parts with different appearances. We completed experiments on the public dataset VIPeR and evaluated the results with CMC curves, which indicate that the algorithm achieves a higher re-identification rate and is more robust to viewing-condition changes, illumination variations, background clutter, and occlusion.
The environment of the lunar surface is quite complex, and under such conditions, effectively identifying a reasonable, safe, and scientifically valuable exploration area is extremely important for the lunar exploration program. To solve this problem, we first investigated, based on the available lunar exploration data and existing research on landing site selection, the lunar environmental indicators that might influence exploration work. We then designed and developed an interactive GIS prototype system for exploration area selection, in which these environmental indicators are used to dynamically construct an exploration area selection model, so that suitable exploration areas can be calculated and visualized automatically. The system was successfully applied in the scientific exploration area selection for the Chang'E 3 lunar rover, and the selection results show that it serves lunar exploration well.
To solve the problem of generating large-scale urban true orthophotos, this paper proposes an occlusion detection method based on the overall projection of the digital building model (DBM). Exploiting the facts that the DBM surface is stored as triangular facets and that the raster interior of a projected planar graphic does not self-occlude, we orthographically project an entire building, triangle by triangle, to obtain its roof polygon. We then make a perspective projection of the entire building to obtain its polygon on the image, and obtain the polygon of the whole building on the traditional orthophoto by DEM-based projection iteration. The occlusion area of the building is obtained by subtracting the two polygons. Finally, the true orthophoto is produced after repairing the occlusion areas with the best available image. Experiments show that the method can detect occlusion areas quickly and accurately and provides a basis for generating high-quality true orthophotos.
A space-borne radar altimeter can measure most of the global sea surface height with an accuracy of a few centimeters, but altimetry measurements contain various errors, so absolute calibration activities must be carried out. A ground laser station has a ranging accuracy better than 1 cm, so it can serve for independent calibration in combination with the space-borne laser reflector array (LRA). In this paper, a general model of laser calibration for space-borne altimeters, not presented in the relevant publications, is deduced. Simulations and error analysis are then carried out based on the model, and the calibration precision is estimated quantitatively. In particular, the Grasse laser observations (actual laser echoes from the Jason-2 satellite LRA) in the vicinity of the Corsica calibration site are analyzed, and the results are promising. Finally, the contributions of a laser station at a calibration site are summarized, and the benefits of simultaneously operating a laser station and a GNSS buoy are emphasized. The construction and performance assessment of China's forthcoming satellite altimeter calibration site would benefit from this study.
3D reconstruction from unordered multi-view images is very sensitive to noise; erroneous matching relations affect the accuracy of the reconstruction or even lead to failure. A robust batch reconstruction algorithm is proposed in this paper. First, triplets that may contain mismatches are removed using a closed-cycle constraint. Then, the trifocal tensor constraint within each triplet is used instead of the epipolar constraint of the traditional algorithm, and a linear programming algorithm with the l∞ norm is used instead of second-order cone programming to compute a global optimum of the rotations and locations of all the images. An efficient union-find algorithm is introduced into the reconstruction to extract the multi-view matching points, and the 3D points are computed by iterative linear triangulation. Experimental results show that the proposed method performs satisfactorily in terms of reconstruction efficiency and accuracy.
The spatial resolution of the land surface temperature (LST, 120 m) retrieved from the thermal infrared (TIR) band of Landsat TM is lower than that of its visible/near-infrared (VNIR) bands (30 m). An LST image with a spatial resolution compatible with the VNIR bands is very important for applications such as environmental monitoring. The objective of this study is to decompose the coarse pixels of the LST image to the pixel scale of the auxiliary VNIR data. First, the E-DisTrad method is used to divide the LST of the parent pixels among the sub-pixels, yielding a first decomposition temperature and the theoretical at-sensor radiance of each sub-pixel. Then, a chessboard segmentation is applied to the sub-pixel temperatures on the basis of object-oriented image segmentation to compute a weight for each sub-pixel, which is used to allocate the thermal radiance and generate the decomposed LST image. Finally, a double-step pixel decomposition, which first increases the spatial resolution and then decomposes the pixels, is executed to validate the accuracy and efficiency of the method on a Landsat TM image of Beijing. The results show that the double-step pixel decomposition method is applicable to decomposing LST images to high spatial resolution, reflects the spatial differences among different land use and land cover types, and conserves energy before and after pixel decomposition. We therefore conclude that it is well suited to downscaling thermal infrared remote sensing data over areas with complex surface cover.
To reconstruct a surface from noisy point clouds, a surface reconstruction algorithm based on Delaunay refinement is proposed. First, the local surface is approximated by an algebraic sphere fitted to neighboring point coordinates and normals with a robust least squares algorithm; compared with traditional sphere-fitting methods, the new method is more robust to noise and outliers. Second, to find any segment intersecting the surface during the Delaunay refinement procedure, the surface bounding spheres intersected by a segment are efficiently found with an AABB tree. Then, initialized at the sphere centers, the first approximate segment-surface intersections within the bounding spheres are computed in parallel by iterative segment-sphere intersection. Finally, the surface is meshed by Delaunay refinement, which is unambiguous and can reconstruct surfaces with good aspect ratios compared with the marching cubes algorithm. Experiments show that the new algorithm can efficiently, robustly, and accurately reconstruct surfaces from point clouds with high noise, although its time and memory consumption increase rapidly for precise models.
We propose a parallel quality-guided phase unwrapping algorithm for shared-memory environments. The intrinsic relationship between neighboring phase points in quality-value computation is analyzed first, and row and column arrays are used to store intermediate results in order to eliminate repeated gradient computation. The computing task is allocated by row, which simplifies the computation of the row and column gradient means. Finally, task allocation is realized with OpenMP directives, and the quality map is computed in parallel. The inherent parallelism of the quality-guided process is also analyzed in depth; however, owing to repeated thread startup and exit overhead, its speed advantage is difficult to realize. Unwrapping tests performed on InSAR and InSAS interferograms show that the proposed method greatly improves the efficiency of phase unwrapping and provides a foundation for improving solution precision under real-time conditions.
Based on least squares collocation, an adaptive fusion algorithm based on observation signals is deduced. Airborne gravity data, satellite gravity data, and land gravity data of the Bohai Bay area are combined with this algorithm, and the accuracy of the gravity observation data is improved after fusion. The tectonic stress field in the Bohai area is inverted using the fused gravity observation data and, combined with historical earthquake data, analyzed; the characteristics of the tectonic stress field in the Bohai area are summarized. There is a close relationship between gravity and tectonic stress, and the direction of the tectonic stress field is toward the north. The Tanlu and Zhangjiakou-Penglai fault zones are consistent with the tectonic stress field in the Bohai area; tectonic stress is highly concentrated in these fault zones, and its value is higher than elsewhere. The results show that tectonic movement is active along the Tanlu fault and the Zhangjiakou-Penglai fault from Tangshan to Zhangjiakou, corresponding with the historical earthquake data.