Abstract

Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research.
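
The abstract contrasts the survey's five-challenge taxonomy with the traditional early/late fusion split. As a purely illustrative aside (not taken from the paper), the sketch below shows what that traditional distinction looks like in code, assuming PyTorch and hypothetical feature dimensions for an image and an audio modality: early fusion concatenates modality features before a single classifier, while late fusion applies a classifier per modality and merges their decisions.

# Illustrative only: early vs. late fusion for two modalities (hypothetical dims).
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    # Feature-level fusion: concatenate modality features, then classify jointly.
    def __init__(self, dim_image=512, dim_audio=128, num_classes=10):
        super().__init__()
        self.classifier = nn.Linear(dim_image + dim_audio, num_classes)

    def forward(self, image_feat, audio_feat):
        fused = torch.cat([image_feat, audio_feat], dim=-1)
        return self.classifier(fused)

class LateFusion(nn.Module):
    # Decision-level fusion: classify each modality separately, then average logits.
    def __init__(self, dim_image=512, dim_audio=128, num_classes=10):
        super().__init__()
        self.image_head = nn.Linear(dim_image, num_classes)
        self.audio_head = nn.Linear(dim_audio, num_classes)

    def forward(self, image_feat, audio_feat):
        return 0.5 * (self.image_head(image_feat) + self.audio_head(audio_feat))

In the survey's taxonomy, fusion is only one of the five challenges, alongside representation, translation, alignment, and co-learning.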

Keywords

Multimodal learning, Computer science, Artificial intelligence, Modalities, Taxonomy, Categorization, Multimodality, Machine learning, Human–computer interaction

Publication Info

Year: 2018
Type: Article
Volume: 41
Issue: 2
Pages: 423–443
Citations: 3491
Access: Closed

Citation Metrics

3491 citations (OpenAlex)

Cite This

Tadas Baltrušaitis, Chaitanya Ahuja, Louis‐Philippe Morency (2018). Multimodal Machine Learning: A Survey and Taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2), 423–443. https://doi.org/10.1109/tpami.2018.2798607

Identifiers

DOI: 10.1109/tpami.2018.2798607