Error analysis and processing for spatial data is one of the key issues in GIS research. To build an optimal error-processing model, the characteristics and distribution of errors in GIS data must first be studied thoroughly, starting from the methods by which the spatial data are captured. In this paper, the error distributions, error tests, and error processing of GIS spatial data from manual digitization are studied systematically. According to the statistical characteristics of random error, the probability density functions of the normal, Laplace, and p-norm distributions are derived from different axioms. It is shown theoretically that random errors do not always follow the normal distribution but may follow other distributions, such as the Laplace and p-norm distributions. Based on this idea, repeated manual digitization experiments are carried out by several operators under the same conditions. After eliminating the effects of systematic and gross errors, several goodness-of-fit tests for the random error in manual digitization are conducted, including kurtosis and skewness tests, the chi-square test, and the Kolmogorov test. It is found that the random error in manual digitization follows neither the normal nor the Laplace distribution but the p-norm distribution (p ≈ 1.6). On this basis, least p-norm estimation for adjusting digitized data is analyzed and the results are compared with those of least squares estimation. The comparison shows that least p-norm adjustment outperforms least squares adjustment for processing digitized data in GIS.
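The p-norm family mentioned above can be sketched with one common parameterization of the generalized normal (exponential power) density; the exact axiomatic form derived in the paper may differ, so the function below is illustrative. With this parameterization, p = 1 recovers the Laplace density and p = 2 a normal density:

```python
import math

def pnorm_pdf(x, mu=0.0, sigma=1.0, p=2.0):
    """Illustrative density of the p-norm (generalized normal) family:
        f(x) = p / (2*sigma*Gamma(1/p)) * exp(-(|x - mu| / sigma)**p)
    p = 1 gives the Laplace distribution; p = 2 gives a normal
    distribution (with variance sigma**2 / 2)."""
    coeff = p / (2.0 * sigma * math.gamma(1.0 / p))
    return coeff * math.exp(-((abs(x - mu) / sigma) ** p))

# p = 1 reproduces the Laplace density 1/(2*sigma) * exp(-|x|/sigma)
assert abs(pnorm_pdf(0.7, p=1.0) - 0.5 * math.exp(-0.7)) < 1e-12

# Numerically verify the density integrates to ~1 for p = 1.6,
# the shape parameter reported for manual digitization error
dx = 0.001
total = sum(pnorm_pdf(-10 + i * dx, p=1.6) * dx for i in range(20000))
print(round(total, 3))  # ≈ 1.0
```

Because p ≈ 1.6 sits between the Laplace (p = 1) and normal (p = 2) cases, the fitted density has heavier tails than a normal distribution but lighter tails than a Laplace distribution.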
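The Kolmogorov test used for the distribution fitting can be sketched as a one-sample comparison of the empirical CDF against a hypothesized CDF; the helper names below are hypothetical, and the paper's actual test procedure and critical values may differ:

```python
import math
import random

def normal_cdf(x, mu, sigma):
    """CDF of N(mu, sigma**2) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov statistic D = sup_x |F_n(x) - F(x)|."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        fx = cdf(x)
        d = max(d, abs((i + 1) / n - fx), abs(i / n - fx))
    return d

# Simulated stand-in for repeated digitization residuals (illustrative only)
random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(500)]
mu = sum(sample) / len(sample)
sigma = (sum((x - mu) ** 2 for x in sample) / (len(sample) - 1)) ** 0.5
D = ks_statistic(sample, lambda x: normal_cdf(x, mu, sigma))
print(round(D, 4))  # large D relative to ~1.36/sqrt(n) would reject normality
```

In the paper's experiments this kind of test, applied to real digitization residuals, rejects the normal and Laplace hypotheses but not the p-norm hypothesis with p ≈ 1.6.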
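The contrast between least p-norm and least squares adjustment can be illustrated on the simplest case, estimating one coordinate from repeated digitizations. A standard way to compute a least p-norm estimate is iteratively reweighted least squares (IRLS); this is a sketch of the adjustment idea under that assumption, not the paper's exact algorithm:

```python
def lp_location(obs, p=1.6, iters=50, eps=1e-8):
    """Least p-norm location estimate: minimize sum |x_i - m|**p
    by IRLS with weights w_i = |x_i - m|**(p - 2)."""
    m = sum(obs) / len(obs)  # start from the least squares estimate
    for _ in range(iters):
        w = [max(abs(x - m), eps) ** (p - 2.0) for x in obs]
        m = sum(wi * xi for wi, xi in zip(w, obs)) / sum(w)
    return m

# Hypothetical repeated digitizations of a coordinate near 10.0,
# with one gross-looking reading at 10.35
coords = [10.02, 9.98, 10.01, 9.99, 10.00, 10.35]
print(round(sum(coords) / len(coords), 3))   # least squares: pulled toward the outlier
print(round(lp_location(coords, p=1.6), 3))  # least p-norm (p < 2): less affected
```

For 1 < p < 2 the p-norm objective downweights large residuals relative to least squares, which is consistent with the paper's finding that least p-norm adjustment handles digitized data better when the errors are not normally distributed.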