Abstract

Multi-Task Learning (MTL) is a machine-learning paradigm that aims to leverage useful information contained in multiple related tasks to improve the generalization performance of all of them. In this paper, we give a survey of MTL from the perspectives of algorithmic modeling, applications, and theoretical analysis. For algorithmic modeling, we give a definition of MTL and then classify different MTL algorithms into five categories — the feature learning approach, low-rank approach, task clustering approach, task relation learning approach, and decomposition approach — and discuss the characteristics of each. To further improve the performance of learning tasks, MTL can be combined with other learning paradigms, including semi-supervised learning, active learning, unsupervised learning, reinforcement learning, multi-view learning, and graphical models. When the number of tasks is large or the data dimensionality is high, we review online, parallel, and distributed MTL models as well as dimensionality reduction and feature hashing, highlighting their computational and storage advantages. Many real-world applications use MTL to boost performance, and we review representative works. Finally, we present theoretical analyses and discuss several future directions for MTL.
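To make the paradigm concrete, the sketch below jointly trains two related regression tasks through a shared linear feature extractor with task-specific heads (hard parameter sharing, one common MTL setup). This is an illustrative toy example, not an algorithm from the survey; all variable names and the synthetic data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, h, n = 10, 5, 200          # input dim, shared feature dim, samples per task
U = rng.normal(size=(d, h))   # ground-truth shared structure linking the tasks
w1 = rng.normal(size=h)       # task-1 head (truth)
w2 = rng.normal(size=h)       # task-2 head (truth)

X1, X2 = rng.normal(size=(n, d)), rng.normal(size=(n, d))
y1 = X1 @ U @ w1 + 0.01 * rng.normal(size=n)
y2 = X2 @ U @ w2 + 0.01 * rng.normal(size=n)

# Jointly learn a shared extractor S and per-task heads v1, v2
# by gradient descent on the summed mean-squared error.
S = rng.normal(size=(d, h)) * 0.1
v1, v2 = np.zeros(h), np.zeros(h)
lr = 0.01

def mse(X, y, S, v):
    r = X @ S @ v - y
    return float(np.mean(r ** 2))

loss0 = mse(X1, y1, S, v1) + mse(X2, y2, S, v2)
for _ in range(500):
    r1 = X1 @ S @ v1 - y1
    r2 = X2 @ S @ v2 - y2
    # The shared extractor S receives gradient signal from BOTH tasks;
    # this cross-task signal is what MTL exploits.
    gS = (X1.T @ np.outer(r1, v1) + X2.T @ np.outer(r2, v2)) * (2 / n)
    S -= lr * gS
    v1 -= lr * (S.T @ X1.T @ r1) * (2 / n)
    v2 -= lr * (S.T @ X2.T @ r2) * (2 / n)

loss_final = mse(X1, y1, S, v1) + mse(X2, y2, S, v2)
print(loss_final < loss0)  # joint training reduces the combined loss
```

Because both tasks were generated from the same underlying structure U, updating S with gradients from both of them is what lets each task benefit from the other's data.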

Keywords

Computer science; Artificial intelligence; Machine learning; Multi-task learning; Unsupervised learning; Reinforcement learning; Leverage (statistics); Feature learning; Dimensionality reduction; Instance-based learning; Curse of dimensionality; Semi-supervised learning; Cluster analysis; Task (project management); Generalization

Publication Info

Year
2021
Type
article
Volume
34
Issue
12
Pages
5586-5609
Citations
1864
Access
Closed

Citation Metrics

OpenAlex: 1864
Influential: 120
CrossRef: 1575

Cite This

Yu Zhang, Qiang Yang (2021). A Survey on Multi-Task Learning. IEEE Transactions on Knowledge and Data Engineering, 34(12), 5586-5609. https://doi.org/10.1109/tkde.2021.3070203

Identifiers

DOI
10.1109/tkde.2021.3070203
arXiv
1707.08114