Abstract

Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. To do so, we show that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash/no-flash input pairs. Apart from its diverse applications, our approach highlights the inductive bias captured by standard generator network architectures. It also bridges the gap between two very popular families of image restoration methods: learning-based methods using deep convolutional networks and learning-free methods based on handcrafted image priors such as self-similarity.
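The core recipe the abstract describes can be sketched in a few lines: fit a small randomly-initialized convolutional generator to a single degraded image by minimizing a reconstruction loss over the network weights, with a fixed random input code. This is a minimal PyTorch sketch, not the authors' architecture; the network sizes, learning rate, and step count are illustrative assumptions, and early stopping plays the role of the regularizer.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

H = W = 32
x0 = torch.rand(1, 1, H, W)             # stand-in for a noisy observation

# Hypothetical tiny generator f_theta; the real method uses a much deeper
# encoder-decoder, but the fitting scheme is the same.
net = nn.Sequential(
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
z = torch.randn(1, 8, H, W)             # fixed random input code

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
losses = []
for step in range(200):                 # stopping early regularizes the fit
    opt.zero_grad()
    loss = ((net(z) - x0) ** 2).mean()  # data term on the degraded image
    loss.backward()
    opt.step()
    losses.append(loss.item())

restored = net(z).detach()              # the network output is the restoration
```

Note that no training data is involved: only the weights of this one network are optimized against this one image, so whatever restoration quality emerges comes from the convolutional structure itself.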

Keywords

Inpainting, Prior probability, Artificial intelligence, Computer science, Convolutional neural network, Deep learning, Generator (circuit theory), Image (mathematics), Pattern recognition (psychology), Similarity (geometry), Image restoration, Computer vision, Machine learning, Image processing, Bayesian probability


Publication Info

Year: 2018
Type: article
Pages: 9446-9454
Citations: 1810
Access: Closed

Citation Metrics

OpenAlex: 1810
CrossRef: 1029

Cite This

Dmitry Ulyanov, Andrea Vedaldi, Victor Lempitsky (2018). Deep Image Prior. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9446-9454. https://doi.org/10.1109/cvpr.2018.00984

Identifiers

DOI
10.1109/cvpr.2018.00984
