Abstract

Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The low-resolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structure to form a very deep network, which consists of several residual groups with long skip connections. Each residual group contains some residual blocks with short skip connections. Meanwhile, RIR allows abundant low-frequency information to be bypassed through multiple skip connections, making the main network focus on learning high-frequency information. Furthermore, we propose a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels. Extensive experiments show that our RCAN achieves better accuracy and visual quality than state-of-the-art methods.

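As a rough illustration of the architecture described in the abstract, the following is a minimal PyTorch sketch (not the authors' implementation) of the three building blocks: a channel attention (CA) module, a residual channel attention block (RCAB) with a short skip connection, and a residual group with a long skip connection. The channel width, reduction ratio, and block count below are illustrative assumptions rather than the paper's exact configuration.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Squeeze global spatial information with average pooling, then rescale
    # each channel with a learned sigmoid gate (channel-wise attention).
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # (B, C, H, W) -> (B, C, 1, 1)
            nn.Conv2d(channels, channels // reduction, 1),  # channel squeeze
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # channel excite
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)                             # adaptive channel rescaling

class RCAB(nn.Module):
    # Residual channel attention block: conv-ReLU-conv + CA, with a short skip.
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            ChannelAttention(channels, reduction),
        )

    def forward(self, x):
        return x + self.body(x)

class ResidualGroup(nn.Module):
    # Several RCABs followed by a conv, wrapped in a long skip connection,
    # so low-frequency information can bypass the group (residual in residual).
    def __init__(self, channels=64, n_blocks=20, reduction=16):
        super().__init__()
        layers = [RCAB(channels, reduction) for _ in range(n_blocks)]
        layers.append(nn.Conv2d(channels, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)

# Example: a residual group preserves the feature map shape.
# x = torch.randn(1, 64, 48, 48); y = ResidualGroup()(x)  # y.shape == x.shape
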
Keywords

Residual, Computer science, Convolutional neural network, Channel (broadcasting), Focus (optics), Artificial intelligence, Deep learning, Image (mathematics), Pattern recognition (psychology), Algorithm, Telecommunications, Optics

Publication Info

Year: 2018
Type: book-chapter
Pages: 294-310
Citations: 5131
Access: Closed

Citation Metrics

OpenAlex citations: 5131
Influential citations: 905

Cite This

Yulun Zhang, Kunpeng Li, Kai Li et al. (2018). Image Super-Resolution Using Very Deep Residual Channel Attention Networks. Lecture Notes in Computer Science, 294-310. https://doi.org/10.1007/978-3-030-01234-2_18

Identifiers

DOI: 10.1007/978-3-030-01234-2_18
PMID: 40875063
PMCID: PMC12394273
arXiv: 1807.02758

Data Quality

Data completeness: 79%