Abstract

Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations and longer training times. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large. The code and the pretrained models are available at https://github.com/google-research/ALBERT.
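
The abstract refers to two parameter-reduction techniques and an inter-sentence coherence loss without naming them; in the paper these are factorized embedding parameterization, cross-layer parameter sharing, and the sentence-order prediction objective. Below is a minimal illustrative sketch (not the authors' code) of the two parameter-reduction ideas, written in PyTorch; the class names and dimensions are assumptions chosen for illustration, not values from the paper.

    import torch
    import torch.nn as nn

    class FactorizedEmbedding(nn.Module):
        """Factorized embedding parameterization: embed the vocabulary into a
        small space of size E, then project up to the hidden size H, giving
        O(V*E + E*H) parameters instead of O(V*H)."""
        def __init__(self, vocab_size=30000, embed_size=128, hidden_size=768):
            super().__init__()
            self.word_embeddings = nn.Embedding(vocab_size, embed_size)
            self.projection = nn.Linear(embed_size, hidden_size)

        def forward(self, token_ids):
            return self.projection(self.word_embeddings(token_ids))

    class SharedLayerEncoder(nn.Module):
        """Cross-layer parameter sharing: a single transformer block whose
        weights are reused at every depth."""
        def __init__(self, hidden_size=768, num_heads=12, num_layers=12):
            super().__init__()
            self.block = nn.TransformerEncoderLayer(
                d_model=hidden_size, nhead=num_heads, batch_first=True)
            self.num_layers = num_layers

        def forward(self, hidden_states):
            for _ in range(self.num_layers):
                hidden_states = self.block(hidden_states)
            return hidden_states

    # Illustrative usage: a batch of 2 sequences of 16 token ids.
    embed = FactorizedEmbedding()
    encoder = SharedLayerEncoder()
    out = encoder(embed(torch.randint(0, 30000, (2, 16))))

With weight sharing, the encoder's parameter count is independent of depth, which together with the factorized embedding is the source of the parameter reduction relative to BERT described in the abstract.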

Keywords

Computer science, Sentence, Language model, Artificial intelligence, Code (set theory), Natural language processing, Point (geometry), Coherence (philosophical gambling strategy), Machine learning, Programming language, Set (abstract data type)

Publication Info

Year: 2019
Type: preprint
Citations: 4051
Access: Closed

Citation Metrics

4051 (OpenAlex)

Cite This

Zhenzhong Lan, Mingda Chen, Sebastian Goodman et al. (2019). ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv (Cornell University). https://doi.org/10.48550/arxiv.1909.11942

Identifiers

DOI: 10.48550/arxiv.1909.11942