Abstract

The feed-forward architectures of recently proposed deep super-resolution networks learn representations of low-resolution inputs, and the non-linear mapping from those to high-resolution output. However, this approach does not fully address the mutual dependencies of low- and high-resolution images. We propose Deep Back-Projection Networks (DBPN), which exploit iterative up- and down-sampling layers, providing an error feedback mechanism for projection errors at each stage. We construct mutually connected up- and down-sampling stages, each of which represents different types of image degradation and high-resolution components. We show that extending this idea to allow concatenation of features across up- and down-sampling stages (Dense DBPN) allows us to further improve super-resolution, yielding superior results and in particular establishing new state-of-the-art results for large scaling factors such as 8× across multiple data sets.
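The back-projection idea in the abstract can be sketched in plain NumPy. This is a minimal illustration, not the paper's method: DBPN's up- and down-sampling operators are learned (de)convolutions, whereas here they are replaced by fixed nearest-neighbor upsampling and average pooling so the structure of one up-projection unit is visible.

```python
import numpy as np

def upsample(x, s):
    # Nearest-neighbor upsampling; stand-in for DBPN's learned deconvolution.
    return np.kron(x, np.ones((s, s)))

def downsample(x, s):
    # Average pooling over s×s blocks; stand-in for a learned strided convolution.
    h, w = x.shape
    return x.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def up_projection(lr, s=2):
    """One up-projection unit in the spirit of DBPN (fixed operators)."""
    h0 = upsample(lr, s)    # initial high-resolution estimate
    l0 = downsample(h0, s)  # project the estimate back to low-resolution space
    e = lr - l0             # back-projection (residual) error in LR space
    h1 = upsample(e, s)     # map the error back to HR space
    return h0 + h1          # error-corrected HR estimate

lr = np.arange(16, dtype=float).reshape(4, 4)
sr = up_projection(lr, s=2)
print(sr.shape)  # (8, 8)
```

Note that with these particular fixed operators the residual `e` is exactly zero (average pooling inverts nearest-neighbor upsampling); in DBPN both operators are learned, so the projection error carries real signal, and stacking alternating up- and down-projection units gives the iterative feedback the abstract describes.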

Keywords

Upsampling, Concatenation, Computer science, Projection, Artificial intelligence, Image resolution, Algorithm, Computer vision, Pattern recognition, Mathematics

Publication Info

Year: 2018
Type: article
Citations: 1575
Access: Closed

Citation Metrics

OpenAlex: 1575
Influential: 183
CrossRef: 1182

Cite This

Muhammad Haris, Greg Shakhnarovich, Norimichi Ukita (2018). Deep Back-Projection Networks for Super-Resolution. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. https://doi.org/10.1109/cvpr.2018.00179

Identifiers

DOI
10.1109/cvpr.2018.00179
arXiv
1803.02735