Abstract

This study proposes a novel unified and unsupervised end-to-end image fusion network, termed U2Fusion, which is capable of solving different fusion problems, including multi-modal, multi-exposure, and multi-focus cases. Using feature extraction and information measurement, U2Fusion automatically estimates the importance of the corresponding source images and derives adaptive information preservation degrees; hence, different fusion tasks are unified in the same framework. Based on these adaptive degrees, a network is trained to preserve an adaptive similarity between the fusion result and the source images. As a result, the stumbling blocks in applying deep learning to image fusion, e.g., the requirement of ground truth and specifically designed metrics, are greatly mitigated. By avoiding the loss of previously acquired fusion capabilities when the single model is trained on different tasks sequentially, we obtain a unified model applicable to multiple fusion tasks. Moreover, a new aligned infrared and visible image dataset, RoadScene (available at https://github.com/hanna-xu/RoadScene), is released to provide a new option for benchmark evaluation. Qualitative and quantitative experimental results on three typical image fusion tasks validate the effectiveness and universality of U2Fusion. Our code is publicly available at https://github.com/hanna-xu/U2Fusion.
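To make the weighting idea in the abstract concrete, the following is a minimal PyTorch sketch (not the authors' implementation) of an adaptive-similarity loss: a per-source information measure is mapped to preservation degrees, which then weight the per-source similarity terms. The Laplacian-based measure and plain MSE similarity used here are illustrative stand-ins; per the abstract, U2Fusion measures information on extracted features, and the function names below are hypothetical.

```python
# Minimal sketch of an adaptive-similarity fusion loss (illustrative assumptions:
# pixel-level Laplacian magnitude as the information measure, plain MSE as the
# similarity). Input images are grayscale tensors of shape (B, 1, H, W).
import torch
import torch.nn.functional as F


def information_measure(img: torch.Tensor) -> torch.Tensor:
    """Crude per-image information estimate: mean absolute Laplacian response."""
    kernel = torch.tensor([[0.0, 1.0, 0.0],
                           [1.0, -4.0, 1.0],
                           [0.0, 1.0, 0.0]], device=img.device).view(1, 1, 3, 3)
    response = F.conv2d(img, kernel, padding=1)
    return response.abs().mean(dim=(1, 2, 3))  # one scalar per batch element


def preservation_degrees(src1: torch.Tensor, src2: torch.Tensor,
                         c: float = 10.0) -> torch.Tensor:
    """Map the two information measures to weights that sum to one (softmax)."""
    g = torch.stack([information_measure(src1), information_measure(src2)], dim=1)
    return torch.softmax(c * g, dim=1)  # shape (B, 2)


def adaptive_fusion_loss(fused: torch.Tensor, src1: torch.Tensor,
                         src2: torch.Tensor) -> torch.Tensor:
    """Similarity between the fused image and each source, weighted adaptively."""
    # The weights depend only on the sources, so no gradient flows through them
    # to the fusion network in any case.
    w = preservation_degrees(src1, src2)
    d1 = ((fused - src1) ** 2).mean(dim=(1, 2, 3))
    d2 = ((fused - src2) ** 2).mean(dim=(1, 2, 3))
    return (w[:, 0] * d1 + w[:, 1] * d2).mean()
```

This sketch only mirrors the weighting structure; in the paper the measurement is performed on features extracted from the source images rather than on raw pixels.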

Publication Info

Year: 2020
Type: Article
Volume: 44
Issue: 1
Pages: 502-518
Citations: 1520
Access: Closed

Citation Metrics

OpenAlex: 1520
Influential: 197
CrossRef: 1513

Cite This

Han Xu, Jiayi Ma, Junjun Jiang et al. (2020). U2Fusion: A Unified Unsupervised Image Fusion Network. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(1), 502-518. https://doi.org/10.1109/tpami.2020.3012548

Identifiers

DOI: 10.1109/tpami.2020.3012548
PMID: 32750838

Data Quality

Data completeness: 81%