Image super-resolution reconstruction recovers a high-resolution image from one or several low-resolution images. Sparse representation has been widely used in single-image super-resolution reconstruction. However, content can vary significantly across patches within a single image, and the fixed dictionaries commonly used by sparse-representation-based super-resolution algorithms are not suitable for every patch. This paper presents a novel single-image super-resolution approach based on sparse representation that trains the dictionary on both an external database and the input low-resolution image itself. Using nonlocal similar patches extracted from the input image, the dictionary is updated by an online dictionary learning method so that the new dictionary suits every patch in the image. Extensive experiments on natural images and remote sensing images show that the proposed method with online dictionary learning outperforms state-of-the-art algorithms in terms of both objective and visual evaluations.
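The two-stage scheme described above (an initial dictionary trained on external data, then updated online with patches from the input image itself) can be sketched with scikit-learn's `MiniBatchDictionaryLearning`, whose `partial_fit` performs exactly this kind of online update. This is a minimal illustration, not the paper's implementation: the random images, patch size, and dictionary size are placeholder assumptions, and the nonlocal-similarity patch selection is omitted, with all patches of the input image used instead.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.RandomState(0)
# Stand-ins for an external training image and the input low-resolution image.
external_img = rng.rand(64, 64)
input_lr_img = rng.rand(32, 32)

def patch_matrix(img, size=(6, 6)):
    """Extract all patches and flatten each to a row vector, removing the mean."""
    patches = extract_patches_2d(img, size)
    X = patches.reshape(len(patches), -1)
    return X - X.mean(axis=1, keepdims=True)  # remove DC component per patch

# 1) Initial (overcomplete) dictionary learned from the external database.
learner = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                      batch_size=32, random_state=0)
learner.fit(patch_matrix(external_img))

# 2) Online update: adapt the dictionary with patches from the input image
#    itself (the paper would feed nonlocal similar patches here).
learner.partial_fit(patch_matrix(input_lr_img))

# Sparse codes of the input patches under the adapted dictionary.
codes = learner.transform(patch_matrix(input_lr_img))
print(codes.shape)
```

In a full super-resolution pipeline, the sparse codes computed against a low-resolution dictionary would be combined with a coupled high-resolution dictionary to reconstruct the output patches; the sketch only covers the dictionary training and online adaptation step.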