Abstract

Classification and identification of the materials lying over or beneath the Earth's surface have long been a fundamental but challenging research topic in geoscience and remote sensing (RS), and have garnered growing attention owing to recent advances in deep learning techniques. Although deep networks have been successfully applied in single-modality-dominated classification tasks, their performance inevitably hits a bottleneck in complex scenes that require fine-grained classification, owing to the limited diversity of information. In this work, we provide a baseline solution to this difficulty by developing a general multimodal deep learning (MDL) framework. In particular, we also investigate a special case of multimodality learning (MML): cross-modality learning (CML), which exists widely in RS image classification applications. By focusing on "what", "where", and "how" to fuse, we present different fusion strategies and show how to train deep networks and build the network architecture. Specifically, five fusion architectures are introduced and developed, and further unified within our MDL framework. More significantly, our framework is not limited to pixel-wise classification tasks but is also applicable to spatial information modeling with convolutional neural networks (CNNs). To validate the effectiveness and superiority of the MDL framework, extensive experiments under the MML and CML settings are conducted on two different multimodal RS datasets. Furthermore, the codes and datasets are available at https://github.com/danfenghong/IEEE_TGRS_MDL-RS, contributing to the RS community.
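The abstract organizes its five fusion architectures around "what", "where", and "how" to fuse. As an illustration only (not the paper's actual networks), the NumPy sketch below contrasts two of the most common strategies in multimodal classification: early (feature-level) fusion and late (decision-level) fusion. All function names, shapes, and weights here are hypothetical toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy modalities for a batch of pixels, e.g. hyperspectral (hsi)
# and LiDAR-derived (lidar) features. Shapes: (batch, features).
hsi = rng.standard_normal((4, 8))
lidar = rng.standard_normal((4, 3))

def early_fusion(a, b):
    """Feature-level fusion: concatenate modality features before a
    shared classifier (sketched here as a single linear layer)."""
    fused = np.concatenate([a, b], axis=1)        # (batch, 8 + 3)
    w = rng.standard_normal((fused.shape[1], 2))  # hypothetical weights, 2 classes
    return fused @ w                              # class scores, (batch, 2)

def late_fusion(a, b):
    """Decision-level fusion: each modality has its own classifier;
    the per-modality class scores are averaged afterwards."""
    wa = rng.standard_normal((a.shape[1], 2))
    wb = rng.standard_normal((b.shape[1], 2))
    return 0.5 * (a @ wa + b @ wb)                # (batch, 2)

print(early_fusion(hsi, lidar).shape)  # (4, 2)
print(late_fusion(hsi, lidar).shape)   # (4, 2)
```

Both paths produce one score vector per pixel; the paper's contribution lies in unifying such choices (and intermediate, "middle" fusion variants) in one trainable framework rather than in either toy classifier shown here.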

Keywords

Computer science, Deep learning, Modality (human–computer interaction), Artificial intelligence, Convolutional neural network, Bottleneck, Machine learning, Fuse (electrical), Pattern recognition (psychology)

Publication Info

Year
2020
Type
article
Volume
59
Issue
5
Pages
4340-4354
Citations
1230
Access
Closed

Cite This

Danfeng Hong, Lianru Gao, Naoto Yokoya et al. (2020). More Diverse Means Better: Multimodal Deep Learning Meets Remote-Sensing Imagery Classification. IEEE Transactions on Geoscience and Remote Sensing, 59(5), 4340-4354. https://doi.org/10.1109/tgrs.2020.3016820

Identifiers

DOI
10.1109/tgrs.2020.3016820