2012 Vol. 37, No. 2
2012, 37(2): 127-131.
Abstract:
This paper focuses on Wikipedia, a collaborative editing platform of Web 2.0. Articles, editors, and the editing relationships between them are the three key components of Wikipedia statistical analysis. We survey various statistical tools, methods, and results, analyze the problems in current statistical research on Wikipedia, and discuss possible solutions.
2012, 37(2): 132-135.
Abstract:
This paper applies probabilistic latent semantic analysis (PLSA) to the segmentation of remote sensing (RS) images, and introduces the principle and the detailed algorithm of PLSA-based image segmentation. Segmentation experiments on three RS images, with comparisons against a biogeography-based optimization segmentation method and an attractor-based algorithm, show that the proposed method has clear advantages and is a promising image segmentation approach.
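As a hedged illustration of the underlying technique (not the paper's exact algorithm, whose visual-word construction and parameters are not given in the abstract), a minimal PLSA fitted by EM might look like this in Python; image blocks play the role of documents and quantized local features the role of words, and each block is finally assigned to its most probable topic:

```python
import numpy as np

def plsa(N, K, iters=50, seed=0):
    """EM for PLSA. N: (docs, words) count matrix; K: number of latent topics.
    Dense (docs, K, words) arrays keep the sketch short; fine for small inputs."""
    rng = np.random.default_rng(seed)
    D, W = N.shape
    p_z_d = rng.random((D, K)); p_z_d /= p_z_d.sum(1, keepdims=True)  # P(z|d)
    p_w_z = rng.random((K, W)); p_w_z /= p_w_z.sum(1, keepdims=True)  # P(w|z)
    for _ in range(iters):
        # E-step: P(z|d,w) proportional to P(z|d) * P(w|z)
        post = p_z_d[:, :, None] * p_w_z[None, :, :]
        post /= post.sum(1, keepdims=True) + 1e-12
        # M-step: re-estimate both distributions from expected counts
        Nz = N[:, None, :] * post
        p_w_z = Nz.sum(0); p_w_z /= p_w_z.sum(1, keepdims=True)
        p_z_d = Nz.sum(2); p_z_d /= p_z_d.sum(1, keepdims=True)
    return p_z_d, p_w_z  # segment label of block d: argmax over z of p_z_d[d]
```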
2012, 37(2): 136-140.
Abstract:
Finding tie points in semi-randomly distributed point clouds is difficult, although the same task is much easier with images. Conventional image-to-image registration algorithms, in which tie points are the main features for calculating the registration parameters, are therefore no longer valid. We take the collinearity equations as a rigorous mathematical model for registration and replace point features with linear ones through the parametric form of a straight line. High-accuracy registration results are achieved in this manner. Meanwhile, a semi-quantitative analysis of the relationship between image resolution and LiDAR point density is conducted, which, to the authors' best knowledge, has not previously been reported in the literature.
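For reference, the collinearity equations in their standard point-based form are shown below; the paper replaces the point features in this model with straight lines in parametric form, a substitution not reproduced here:

$$x - x_0 = -f\,\frac{a_1(X - X_S) + b_1(Y - Y_S) + c_1(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)}, \qquad y - y_0 = -f\,\frac{a_2(X - X_S) + b_2(Y - Y_S) + c_2(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)},$$

where $(x_0, y_0, f)$ are the interior orientation elements, $(X_S, Y_S, Z_S)$ is the camera station, and $a_i$, $b_i$, $c_i$ are the elements of the rotation matrix built from the attitude angles.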
2012, 37(2): 141-144.
Abstract:
Ice flood monitoring is very important for preventing river ice disasters. We use HJ-1A and HJ-1B CCD data to monitor ice floods on the Yellow River and propose a decision tree method to extract river ice. Experimental results show that HJ-1A and HJ-1B data are suitable for river ice identification, and that HJ-1A/1B CCD data have great potential for ice flood monitoring of the Yellow River.
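As a purely illustrative sketch (the paper's actual decision rules, band choices, and thresholds for HJ-1A/1B CCD data are not given in the abstract, so every band and threshold below is hypothetical), a thresholding decision tree separating river ice from open water could be structured like this:

```python
import numpy as np

def classify_river_ice(blue, nir, river_mask):
    """Toy two-split decision tree over reflectance bands.

    blue, nir: reflectance arrays; river_mask: boolean array delimiting the
    river channel. All thresholds are hypothetical placeholders, not values
    from the paper."""
    ratio = nir / np.maximum(blue, 1e-6)
    ice = river_mask & (blue > 0.15) & (ratio < 0.8)  # ice: bright in the visible
    water = river_mask & ~ice                         # open water: dark
    return ice, water
```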
2012, 37(2): 145-148.
Abstract:
An improved combined corner and edge detector is proposed, which introduces a local non-maximum suppression method into the Harris operator. The extracted edges are traced according to their principal orientation and then vectorized for compact storage. Experimental results show that the improved Harris operator performs better in feature extraction than the original algorithm, and that describing edges with an explicit mathematical model benefits subsequent data management and feature matching.
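As a hedged sketch of the base technique (the paper's specific improvements, window sizes, and thresholds are not given in the abstract), the Harris response with local non-maximum suppression can be computed as follows:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def harris_response(img, sigma=1.0, k=0.04):
    """Harris corner/edge response R = det(M) - k * trace(M)^2,
    with M the Gaussian-smoothed structure tensor."""
    Iy, Ix = np.gradient(img.astype(float))
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

def corners_with_nms(img, window=5, rel_thresh=0.01):
    """Keep only responses that are local maxima above a relative threshold."""
    R = harris_response(img)
    local_max = R == maximum_filter(R, size=window)
    ys, xs = np.nonzero(local_max & (R > rel_thresh * R.max()))
    return list(zip(xs, ys))
```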
2012, 37(2): 149-153.
Abstract:
Taking IKONOS imagery as an example, and based on an analysis of the shortcomings of five traditional fusion methods and of the particular influence of vegetation on fusion quality, a fusion model that combines transformation-based and filtering-based fusion methods is proposed to improve fusion quality. Three key factors affecting fusion quality are then identified, and improved methods are introduced. Comparisons among the different improved methods are carried out on real imagery. The results show that methods based on the combined fusion model consistently outperform the traditional ones, and that the most practical method with the best fusion quality is the one adopting the new strategies: fixed-threshold vegetation separation on an improved vegetation index, an improved transformation-based fusion method, and an improved filtering-based fusion method.
2012, 37(2): 154-159.
Abstract:
We present a bundle block adjustment method for aerial images based on unit dual quaternions. Its major characteristic is the use of a dual quaternion to represent the exterior orientation elements (camera station and image attitude) of each image, with the basic mathematical model established on unit dual quaternions. Parameter adjustment with constraints is then used to solve the model. Real aerial images at different scales are used to test this method against conventional bundle block adjustment. The experimental results show that the adjustment accuracy of the new method matches that of the conventional method, with almost the same requirements on image scale and on the number and distribution of ground control points. The method offers a new technique for the photogrammetric processing of aerial images acquired by light and small sensor platforms.
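As background only (this shows the representation the method builds on, not the paper's adjustment model or its constrained solver), a camera pose can be packed into and recovered from a unit dual quaternion like this:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def pose_to_dual_quat(q_rot, t):
    """Pack attitude (rotation quaternion) and camera station t into
    a unit dual quaternion (q_r, q_d) with q_d = 0.5 * t * q_r."""
    q_r = np.asarray(q_rot, float)
    q_r = q_r / np.linalg.norm(q_r)
    q_d = 0.5 * quat_mul(np.array([0.0, *t]), q_r)
    return q_r, q_d

def dual_quat_to_pose(q_r, q_d):
    """Recover the translation: t = vector part of 2 * q_d * conj(q_r)."""
    conj = q_r * np.array([1.0, -1.0, -1.0, -1.0])
    return q_r, (2.0 * quat_mul(q_d, conj))[1:]
```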
2012, 37(2): 160-164.
Abstract:
To avoid the disadvantages of current algorithms, an effective line extraction algorithm is presented. First, the aerial image is smoothed by small-scale Gaussian filtering. Then edge detection produces an edge magnitude image, in which a heuristic search extracts search trajectories fitted to a line model. Finally, lines are selected from the search trajectories. Experimental results show that the algorithm can extract the true lines in aerial images and is an effective line extraction method.
2012, 37(2): 165-169.
Abstract:
The fingerprint orientation field describes the essential texture features of a fingerprint image, including shape, structure, and direction. It behaves similarly within a local region yet varies greatly across the image. We propose to segment a fingerprint image rapidly and coarsely using the orientation field image, and present an example of its application to local matching. Experiments on FVC2004 DB1 show that the proposed segmentation method improves both the identification accuracy and the efficiency of local matching algorithms.
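As a hedged sketch (the paper's segmentation rule on top of the orientation field is not given in the abstract), a standard gradient-based estimate of the blockwise fingerprint orientation field looks like this:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def orientation_field(img, block=16):
    """Blockwise ridge orientation in [0, pi), via averaged gradients.
    The double-angle trick removes the 180-degree ambiguity of orientations."""
    Gy, Gx = np.gradient(img.astype(float))
    Gxx = uniform_filter(Gx * Gx, block)
    Gyy = uniform_filter(Gy * Gy, block)
    Gxy = uniform_filter(Gx * Gy, block)
    theta = 0.5 * np.arctan2(2.0 * Gxy, Gxx - Gyy) + np.pi / 2.0
    return theta % np.pi
```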
2012, 37(2): 170-173.
Abstract:
Because the point positioning accuracy of low-cost single-frequency GPS receivers is relatively low, we propose using corrections provided by a single base station to correct the single-frequency observations of the mobile station. Meanwhile, ionospheric parameters are introduced to estimate the ionospheric delay over time within single-frequency precise point positioning (PPP) computations. Analysis of a numerical example with real data shows that the new method markedly improves the convergence speed and positioning accuracy of single-frequency PPP; the accuracy is better than 2 cm and is little influenced by the distance to the base station.
2012, 37(2): 174-177.
Abstract:
Starting directly from the coefficient matrix of the condition or error equations, the least-squares solution is obtained through a triangular decomposition of the coefficient matrix using a modified Gram-Schmidt orthogonalization procedure. The formulas and computation steps for obtaining the generalized inverse matrix with the modified Gram-Schmidt algorithm are then derived, and the unknown solution vector and the expression of its variance-covariance matrix are given via the generalized inverse. Two examples verify the approach, and the results show that the modified Gram-Schmidt orthogonalization method can handle any matrix, including rank-deficient ones.
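As a minimal sketch of the core procedure (restricted to the full-column-rank case; the paper's generalized-inverse extension to rank-deficient matrices is not reproduced here), modified Gram-Schmidt QR and the resulting least-squares solution are:

```python
import numpy as np

def mgs_qr(A):
    """Modified Gram-Schmidt QR of an m x n matrix with full column rank.
    Remaining columns are re-orthogonalized against each q_k immediately,
    which is numerically more stable than classical Gram-Schmidt."""
    A = A.astype(float).copy()
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ A[:, j]
            A[:, j] -= R[k, j] * Q[:, k]
    return Q, R

def ls_solve(A, b):
    """Least-squares solution of A x ~ b via the triangular system R x = Q^T b."""
    Q, R = mgs_qr(A)
    return np.linalg.solve(R, Q.T @ b)
```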
Iterative Method of Weight Constraint Total Least-Squares for Three-Dimensional Datum Transformation
2012, 37(2): 178-182.
Abstract:
Based on the traditional approach to three-dimensional datum transformation and classical least-squares (LS) theory, weight-constrained total least squares (WCTLS) and mixed LS are introduced. A more reasonable model and the corresponding iterative algorithm are given. Example results show that the new model is indeed more reasonable and that more accurate datum transformation parameters can be obtained when suitable weights are assigned.
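For orientation, the plain, unweighted total least-squares solution via SVD is sketched below; the paper's weight-constrained iterative variant generalizes this by weighting the perturbations, which is not reproduced here:

```python
import numpy as np

def tls_solve(A, b):
    """Plain total least squares: find x so that (A + dA) x = b + db with
    the smallest Frobenius-norm perturbation [dA, db]."""
    n = A.shape[1]
    C = np.hstack([A, b.reshape(-1, 1)])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                  # right singular vector of the smallest singular value
    return -v[:n] / v[n]
```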
2012, 37(2): 183-186.
Abstract:
Adding reasonable constraint conditions to the observation equations can overcome the ill-posed nature of GPS water vapor tomography. To avoid the problems caused by an improperly chosen weight matrix for the constraint equations, we apply the fitting method by selection of the parameter weights (FMSPW) to GPS water vapor tomography. In this method, the parameter weight matrix is constructed from the properties of tropospheric water vapor, and a simulation experiment validates the feasibility of the method. The results show that FMSPW yields a reasonable water vapor distribution.
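In spirit (the paper's exact construction of the parameter weight matrix from tropospheric water vapor properties is specific to the paper), selecting parameter weights amounts to augmenting the normal equations with a parameter weight matrix $P_x$ and prior values $x_0$:

$$\hat{x} = \left(A^{\mathsf T} P A + P_x\right)^{-1}\left(A^{\mathsf T} P\, y + P_x\, x_0\right),$$

so that voxels poorly constrained by the slant observations are pulled toward the prior rather than left undetermined.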
2012, 37(2): 187-190.
Abstract:
The separation of gravity anomalies using wavelet multiscale decomposition and wavelet multiscale edge reconstruction is studied. Vertical and horizontal contributions can be separated simultaneously by this method, which provides a reasonable reference for separating complex regional gravity anomalies. Based on a simulation experiment, the gravity anomalies of the Ryukyu subduction zone are separated. In the vertical separation, the decomposition scales can be determined from the correlation between the gravity anomaly and the topography. Wavelet multiscale edge analysis over an appropriately selected scale range achieves the horizontal separation, and is simpler than multiscale edge reconstruction. The separated data can then be used to invert the structure of the subduction zone.
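As a hedged sketch of the decomposition step only (the wavelet basis, the level count, and the multiscale-edge reconstruction itself are choices of the paper not given in the abstract), per-scale fields can be reconstructed from a 2-D wavelet decomposition with PyWavelets:

```python
import numpy as np
import pywt  # PyWavelets

def scale_components(grid, wavelet="db4", levels=5):
    """Reconstruct one field per decomposition scale from a gravity-anomaly grid,
    plus the large-scale trend (the approximation at the coarsest level)."""
    coeffs = pywt.wavedec2(grid, wavelet, level=levels)

    def zeroed():
        # Coefficient list of the right shapes, filled with zeros.
        return [np.zeros_like(coeffs[0])] + [
            tuple(np.zeros_like(a) for a in det) for det in coeffs[1:]
        ]

    fields = []
    for k in range(1, len(coeffs)):
        c = zeroed()
        c[k] = coeffs[k]            # keep only the level-k details
        fields.append(pywt.waverec2(c, wavelet))
    c = zeroed()
    c[0] = coeffs[0]                # keep only the coarse approximation
    return pywt.waverec2(c, wavelet), fields
```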
2012, 37(2): 191-194.
Abstract:
A massive Ms 9.0 earthquake occurred on 11 March 2011 off the Pacific coast of northeastern Japan. Using in-house PPP software, co-seismic and post-seismic surface movement and precipitable water vapor were extracted from the data of IGS stations located in Japan and the surrounding countries and from the operational GNSS stations of the State Oceanic Administration. First, kinematic PPP provided the co-seismic horizontal motion trajectories, three-dimensional coordinate time series, and the dynamic changes of zenith precipitable water vapor at the GPS sites. Then, static PPP provided pre-seismic and post-seismic three-dimensional coordinates from 24-hour solutions. Based on these results, the paper reveals the co-seismic surface movement process and the post-seismic permanent deformation of the GPS sites, and verifies the snowfall process in the Japanese disaster area. Such measurements are of high value for earthquake monitoring and disaster warning with GPS technology.
2012, 37(2): 195-198.
Abstract:
To visualize the detailed motion among stations within the Shanxi graben, we establish a no-net-rotation reference based on repeated GPS observations (1999-2007) of the Crustal Motion Observation Network of China. The spatial distribution of the strain fields is then analyzed by a graphic element method, and the dynamic mechanism of crustal deformation is examined in depth. The results show that the present tectonic strain field of the Shanxi graben exhibits NNW-SSE tensile strain, in good agreement with earthquake focal mechanisms and the regional long-term tectonic deformation background. Present crustal movement in the Linfen and Datong basins is strong, and these basins are also the areas of high shear strain. Finally, the causes of the anomalous deformation areas are discussed, and we propose a significant tectonic event: the uplift zone between the Linfen and Taiyuan basins is undergoing fierce crustal activity and intense tensile rupture.
2012, 37(2): 199-204.
Abstract:
Three different wind calculation methods are constructed with data from the newly launched COSMIC/FORMOSAT-3 system. The analysis is based on radio occultation observations collected during January, April, July, and October 2007. For validation, the data are compared with monthly average winds from ECMWF's ERA-Interim data sets. The comparisons cover mean wind profiles at various latitudes, longitude-latitude wind variations at particular pressure levels, and latitude-height distributions of zonal mean winds for different seasons, and they reveal excellent agreement between the COSMIC measurements and the model outputs in the stratosphere. The bias is larger in the upper stratosphere than in the lower stratosphere, but the error is less than 10 m/s below 1 hPa. The differences among the three calculated wind fields are very small: the winds derived from refractive index and from temperature data are nearly identical, while the bias of the winds derived from height is relatively larger, though its error differs from the other two fields by less than 2 m/s.
2012, 37(2): 205-209.
Abstract:
We present a novel method to improve the long range contacts (LRC) scheme proposed by Ohnishi. The new method partitions the space by a grid, selects super peers per grid cell, and builds LRC between super nodes in the same row and column; queries are handled mainly by the super nodes. Simulation results show that the method maintains routing efficiency while greatly reducing node degree and enhancing the stability of the system.
2012, 37(2): 210-214.
Abstract:
A semantics-based schema matching approach is put forward. First, the concepts at labels are built and the semantic relationships between labels are computed. The concepts at labels are then extended to the concept at a node, formed as the conjunction of all concepts at labels located above the given node, including the node itself. Concepts at nodes are translated into propositional formulas and then into CNF, which turns the matching problem into a propositional validity problem that can be efficiently decided with a SAT solver. Matching experiments on different versions of the WCS schemas show that the semantic matching achieves an average recall above 82%, an average precision of 91%, and an average overall measure of 67%.
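As a hedged illustration of the final reasoning step (pycosat is assumed here merely as an example SAT binding, and the paper's label-to-formula encoding is not given in the abstract), testing whether one node concept entails another reduces to an unsatisfiability check:

```python
import pycosat  # example SAT binding; any DIMACS-style solver would do

def implies(premise_cnf, conclusion_clause):
    """Check premise -> conclusion by testing premise AND NOT(conclusion) for UNSAT.
    Clauses are lists of signed integers in the DIMACS convention."""
    negated = [[-lit] for lit in conclusion_clause]  # negate the disjunction
    return pycosat.solve(premise_cnf + negated) == "UNSAT"

# Toy example: 1 = "image", 2 = "coverage", 3 = "raster map".
# Node concept: image AND coverage; background axiom: image -> raster map.
premise = [[1], [2], [-1, 3]]
print(implies(premise, [3]))  # True: the node concept entails "raster map"
```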
2012, 37(2): 215-219.
Abstract:
This paper performs global SST clustering analysis using a C-means clustering method based on type-2 fuzzy sets. Typical clustering patterns of the SST anomaly are discovered, and potential ocean climate indices are derived from the clustering patterns.
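For orientation, plain type-1 fuzzy C-means is sketched below; the paper's type-2 variant additionally models uncertainty in the memberships themselves, which this sketch does not capture:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
    """Plain type-1 fuzzy C-means. X: (n, d) samples. Returns (centers, U),
    where U[k, i] is the membership of sample i in cluster k."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                    # memberships sum to 1 per sample
    e = 2.0 / (m - 1.0)
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # Distances from every center to every sample, shape (c, n).
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        inv = d ** (-e)
        U_new = inv / inv.sum(axis=0)     # standard FCM membership update
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U
```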
2012, 37(2): 220-223.
Abstract:
Polygon combination, which includes aggregation and amalgamation, is an important task in the generalization of thematic maps. The spatial visual conflicts of polygon groups below the area threshold are divided into four kinds and resolved with different strategies. A progressive combination method for adjacent polygon groups is then proposed, which simplifies the overall computation, improves efficiency, and minimizes the changes between the map before and after generalization.
2012, 37(2): 224-228.
Abstract:
An algorithm based on similarity measures for matching features between network data at different map scales is presented. The overall strategy is a pre-matching of nodes and arcs, followed by accurate matching through the similarity of node-arc topologies and the discrete Fréchet distance. The matching process effectively combines geometric, semantic, topological, node, and arc matches. Finally, the different matching results are displayed to facilitate human-computer interaction. Experimental results show that the method matches corresponding roads effectively under complicated conditions, and improves both the correctness and the speed of feature matching.
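The discrete Fréchet distance used in the accurate matching stage has a standard dynamic-programming definition (Eiter and Mannila); a compact memoized version, suitable for short polylines, is:

```python
import numpy as np
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between polylines P and Q (sequences of points).
    Memoized recursion; prefer an iterative table for very long polylines
    to avoid deep recursion."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)

    @lru_cache(maxsize=None)
    def c(i, j):
        d = float(np.linalg.norm(P[i] - Q[j]))
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(P) - 1, len(Q) - 1)
```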
2012, 37(2): 229-232.
Abstract:
Based on the transparency visual variable in electronic maps, we present a spatio-temporal visualization method that considers a time focus and its context. The method renders the spatio-temporal content at the time focus normally, while simultaneously visualizing the objects within a user-defined time context using graphic elements of different transparencies. It not only conveys more spatio-temporal information within the same map extent, but also depicts the temporal distributions and temporal relationships of phenomena.
2012, 37(2): 233-236.
Abstract:
The current situation of building Web-based POI management systems is discussed. Considering the positive effect of symbolizing data, and drawing on the organizational traits of information chains in hypermedia models, the organization of POI information is explored. Taking full account of tile data organization and the capabilities of mature Web technologies, a POI visualization strategy strictly based on a B/S (browser/server) architecture is proposed. An example demonstrates its decision-making assistance and strong platform portability.
2012, 37(2): 237-241.
Abstract:
An integrated covariance (variogram) model is introduced for the spatio-temporal Kriging interpolation of 37 years of monthly average temperatures in Heilongjiang province. As monthly average temperature displays an obvious seasonal cycle, the seasonal component is removed before interpolation. The spatio-temporal variogram is built from the purely spatial and purely temporal ones. By extending ordinary Kriging into space-time and accounting for the variable's correlation in both space and time, the monthly average temperatures at all stations in 2007 are estimated, and the performance is compared with that of spatial Kriging. The results show that spatio-temporal interpolation is practical and more accurate than spatial Kriging.
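The abstract does not specify which construction combines the purely spatial and purely temporal variograms; one common choice, shown here only as an example, is the product-sum model:

$$\gamma_{st}(h_s, h_t) = \gamma_s(h_s) + \gamma_t(h_t) - k\,\gamma_s(h_s)\,\gamma_t(h_t),$$

where $h_s$ and $h_t$ are the spatial and temporal lags and $k$ controls the strength of the space-time interaction.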
2012, 37(2): 242-246.
Abstract:
By analyzing the topological relations between uncertain regions, and addressing the shortcoming of the egg-yolk model that the logical relations among its 46 topological relations are unclear, this paper obtains 93 topological relations through dynamic changes of the closest topological distance graph. After removing 41 relations that contain tangency and 6 repetitive relations, the same 46 topological relations are obtained, consistent with Cohn's theory; the egg-yolk model is thereby generalized.
2012, 37(2): 247-251.
Abstract:
An approach for analyzing the relationships among botnets is presented. Several botnet communication characteristics are extracted, including the number of data flows within a botnet, the number of packets per data flow, and the payload of communication and data packets at the master hosts. Statistical similarity functions over these characteristics are defined. Based on the cloud model and the defined similarity functions, an analysis model of botnet relationships is built, and the similarities of botnet characteristics are evaluated synthetically. Experiments conducted in a simulated network environment show that the method is valid and efficient, even when botnet communication messages are encrypted, and that its results improve upon those reported in related work.