Abstract

Federated learning enables multiple parties to collaboratively train a machine learning model without communicating their local data. A key challenge in federated learning is handling the heterogeneity of local data distributions across parties. Although many approaches have been proposed to address this challenge, we find that they fail to achieve high performance on image datasets with deep learning models. In this paper, we propose MOON: model-contrastive federated learning. MOON is a simple and effective federated learning framework. The key idea of MOON is to utilize the similarity between model representations to correct the local training of individual parties, i.e., conducting contrastive learning at the model level. Our extensive experiments show that MOON significantly outperforms the other state-of-the-art federated learning algorithms on various image classification tasks.
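The model-level contrastive idea in the abstract can be sketched concretely: during local training, each party adds a contrastive term that pulls the representation produced by its current local model toward the representation produced by the global model (the positive pair) and pushes it away from the representation produced by its previous local model (the negative pair). Below is a minimal pure-Python sketch of such a term under those assumptions; the function names, the temperature default, and the use of cosine similarity are illustrative, not the paper's exact implementation.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two representation vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def model_contrastive_loss(z_local, z_global, z_prev, tau=0.5):
    """Sketch of a model-contrastive term for one sample.

    z_local  -- representation from the party's current local model
    z_global -- representation from the global model (positive pair)
    z_prev   -- representation from the previous local model (negative pair)
    tau      -- temperature (illustrative default)
    """
    pos = math.exp(cosine_sim(z_local, z_global) / tau)
    neg = math.exp(cosine_sim(z_local, z_prev) / tau)
    # Lower loss when z_local agrees with the global model's representation.
    return -math.log(pos / (pos + neg))
```

As a sanity check, the loss is small when the local representation already matches the global one and large when it has drifted toward the previous local model, which is the correction the abstract describes.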

Keywords

Computer science, Federated learning, Key (lock), Similarity (geometry), Artificial intelligence, Machine learning, Deep learning, Image (mathematics)

Publication Info

Year
2021
Type
article
Pages
10708-10717
Citations
1109
Access
Closed

Citation Metrics

1109 citations (OpenAlex)

Cite This

Qinbin Li, Bingsheng He, Dawn Song (2021). Model-Contrastive Federated Learning. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10708-10717. https://doi.org/10.1109/cvpr46437.2021.01057

Identifiers

DOI
10.1109/cvpr46437.2021.01057