Abstract

Knowledge distillation aims to transfer knowledge acquired in one model (a teacher) to another model (a student) that is typically smaller. Previous approaches can be expressed as a form of training the student to mimic the output activations of individual data examples represented by the teacher. We introduce a novel approach, dubbed relational knowledge distillation (RKD), that transfers mutual relations of data examples instead. As concrete realizations of RKD, we propose distance-wise and angle-wise distillation losses that penalize structural differences in relations. Experiments conducted on different tasks show that the proposed method improves educated student models by a significant margin. In particular for metric learning, it allows students to outperform their teachers' performance, achieving the state of the art on standard benchmark datasets.
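The abstract describes the distance-wise and angle-wise losses only at a high level. The snippet below is a minimal PyTorch sketch of how such relational losses could be realized, assuming Euclidean pairwise distances normalized by their batch mean and a Huber (smooth-L1) penalty; the names rkd_distance_loss and rkd_angle_loss and the loss weights in the usage comment are illustrative, not taken from the paper or any released code.

import torch
import torch.nn.functional as F


def pairwise_distances(e):
    # Euclidean distance between every pair of embeddings in the batch (N x N).
    prod = e @ e.t()
    sq_norms = prod.diag()
    d2 = sq_norms.unsqueeze(0) + sq_norms.unsqueeze(1) - 2.0 * prod
    return d2.clamp(min=0).sqrt()


def rkd_distance_loss(student, teacher):
    # Distance-wise loss: penalize differences between the pairwise-distance
    # structures of the student and teacher embedding spaces. Distances are
    # normalized by their batch mean so the two spaces are scale-comparable.
    with torch.no_grad():
        t_d = pairwise_distances(teacher)
        t_d = t_d / t_d[t_d > 0].mean()
    s_d = pairwise_distances(student)
    s_d = s_d / s_d[s_d > 0].mean()
    return F.smooth_l1_loss(s_d, t_d)


def rkd_angle_loss(student, teacher):
    # Angle-wise loss: penalize differences between the cosines of angles formed
    # by each triplet of examples in the student and teacher embedding spaces.
    def triplet_cosines(e):
        diff = e.unsqueeze(0) - e.unsqueeze(1)        # (N, N, D) difference vectors
        diff = F.normalize(diff, p=2, dim=2)
        return torch.bmm(diff, diff.transpose(1, 2))  # (N, N, N) cosines
    with torch.no_grad():
        t_a = triplet_cosines(teacher)
    s_a = triplet_cosines(student)
    return F.smooth_l1_loss(s_a, t_a)


# Usage sketch (weights are illustrative):
# student_emb, teacher_emb are (batch_size, embedding_dim) tensors.
# loss = task_loss + 1.0 * rkd_distance_loss(student_emb, teacher_emb) \
#                  + 2.0 * rkd_angle_loss(student_emb, teacher_emb)

Because both losses operate on relations among examples rather than on the embeddings themselves, the student and teacher may have different embedding dimensions.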

Keywords

Margin (machine learning), Distillation, Benchmark (surveying), Metric (unit), Computer science, Statistical relational learning, Artificial intelligence, Machine learning, Relational database, Data mining, Engineering

Related Publications

Deep Mutual Learning

Model distillation is an effective and widely used technique to transfer knowledge from a teacher to a student network. The typical application is to transfer from a powerful la...

2018 · 2018 IEEE/CVF Conference on Computer ... · 1668 citations

Publication Info

Year: 2019
Type: article
Pages: 3962-3971
Citations: 1437
Access: Closed

Citation Metrics

OpenAlex: 1437
Influential: 194
CrossRef: 1183

Cite This

Wonpyo Park, Dongju Kim, Yan Lu et al. (2019). Relational Knowledge Distillation. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 3962-3971. https://doi.org/10.1109/cvpr.2019.00409

Identifiers

DOI: 10.1109/cvpr.2019.00409
arXiv: 1904.05068
