Abstract

Methods for super-resolution can be broadly classified into two families: (i) classical multi-image super-resolution (combining images obtained at subpixel misalignments), and (ii) example-based super-resolution (learning the correspondence between low- and high-resolution image patches from a database). In this paper we propose a unified framework for combining these two families of methods. We further show how this combined approach can be applied to obtain super-resolution from as little as a single image (with no database or prior examples). Our approach is based on the observation that patches in a natural image tend to redundantly recur many times inside the image, both within the same scale and across different scales. Recurrence of patches within the same image scale (at subpixel misalignments) gives rise to classical super-resolution, whereas recurrence of patches across different scales of the same image gives rise to example-based super-resolution. Our approach attempts to recover at each pixel its best possible resolution increase based on its patch redundancy within and across scales.
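
To make the abstract's core observation concrete, below is a minimal illustrative sketch (Python with NumPy and SciPy; not the authors' implementation) of the cross-scale patch-recurrence idea: a patch taken from the input image is matched against patches of a downscaled copy of the same image, and the best match points back to a larger "parent" region of the input, which can then serve as a high-resolution example for that patch. The function names, patch size, scale factor, and brute-force nearest-neighbour search are all illustrative assumptions.

import numpy as np
from scipy.ndimage import zoom


def extract_patches(img, size, stride=1):
    """Yield ((y, x), patch) pairs of size-by-size patches from a 2-D image."""
    h, w = img.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield (y, x), img[y:y + size, x:x + size]


def find_cross_scale_example(img, query_pos, patch_size=5, scale=2.0):
    """For the patch at query_pos in img, find its most similar patch in a
    downscaled copy of img and return the corresponding larger 'parent'
    region of img, which acts as a high-resolution example patch."""
    y, x = query_pos
    query = img[y:y + patch_size, x:x + patch_size]

    # Coarse-scale copy of the same image (cubic interpolation).
    small = zoom(img, 1.0 / scale, order=3)

    # Brute-force nearest-neighbour search over coarse-scale patches.
    best_pos, best_dist = None, np.inf
    for pos, cand in extract_patches(small, patch_size):
        d = np.sum((cand - query) ** 2)
        if d < best_dist:
            best_pos, best_dist = pos, d

    # The matched coarse patch corresponds to a region of the original
    # image that is `scale` times larger: that region is the HR example.
    hr_size = int(round(patch_size * scale))
    py = min(int(best_pos[0] * scale), img.shape[0] - hr_size)
    px = min(int(best_pos[1] * scale), img.shape[1] - hr_size)
    return img[py:py + hr_size, px:px + hr_size], best_dist


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((64, 64))
    hr_example, dist = find_cross_scale_example(image, query_pos=(10, 10))
    print(hr_example.shape, dist)

In the paper's framework, within-scale matches found at subpixel misalignments supply classical (multi-image) super-resolution constraints, while cross-scale matches such as the one sketched above supply example-based low-resolution/high-resolution patch pairs; both kinds of constraints are combined to recover, at each pixel, its best possible resolution increase.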

Keywords

Subpixel rendering, Artificial intelligence, Computer science, Redundancy (engineering), Image (mathematics), Image resolution, Computer vision, Resolution (logic), Pixel, Sub-pixel resolution, Scale (ratio), Pattern recognition (psychology), Image processing, Digital image processing, Geography, Cartography

Publication Info

Year: 2009
Type: article
Citations: 1872
Access: Closed

Citation Metrics

Citations (OpenAlex): 1872

Cite This

Daniel Glasner, Shai Bagon, Michal Irani (2009). Super-resolution from a single image. IEEE International Conference on Computer Vision (ICCV). https://doi.org/10.1109/iccv.2009.5459271

Identifiers

DOI: 10.1109/iccv.2009.5459271