Abstract

Large language models can produce powerful contextual representations that lead to improvements across many NLP tasks. Since these models are typically guided by a sequence of learned self-attention mechanisms and may contain undesired inductive biases, it is paramount to be able to explore what the attention has learned. While static analyses of these models lead to targeted insights, interactive tools are more dynamic and can help humans better gain an intuition for the model-internal reasoning process. We present exBERT, an interactive tool named after the popular BERT language model, that provides insights into the meaning of the contextual representations by matching a human-specified input to similar contexts in a large annotated dataset. By aggregating the annotations of the matching similar contexts, exBERT helps intuitively explain what each attention head has learned.
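To make the matching-and-aggregation idea concrete, the sketch below finds the tokens in a small annotated corpus whose BERT embeddings are nearest to a query token and tallies their annotations. This is a minimal illustration assuming the Hugging Face transformers library; the toy corpus, its POS tags, and the explain helper are hypothetical stand-ins, not exBERT's actual implementation.

```python
# Minimal sketch: match an input token to similar corpus contexts by
# embedding similarity, then aggregate the corpus annotations.
# Assumes `transformers` and `torch`; the data below is hypothetical.
import torch
from collections import Counter
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Toy "annotated dataset": sentences with per-word POS tags (illustrative only).
corpus = [
    ("the bank approved the loan", ["DET", "NOUN", "VERB", "DET", "NOUN"]),
    ("she sat on the river bank", ["PRON", "VERB", "ADP", "DET", "NOUN", "NOUN"]),
]

def token_embeddings(sentence):
    """Contextual embedding for each whitespace word (first subword piece)."""
    enc = tokenizer(sentence.split(), is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
    seen, vecs = set(), []
    for pos, wid in enumerate(enc.word_ids(0)):  # None marks [CLS]/[SEP]
        if wid is not None and wid not in seen:
            seen.add(wid)
            vecs.append(hidden[pos])
    return torch.stack(vecs)

# Index every corpus token as an (embedding, annotation) pair.
index_vecs, index_tags = [], []
for sent, tags in corpus:
    for vec, tag in zip(token_embeddings(sent), tags):
        index_vecs.append(vec)
        index_tags.append(tag)
index_vecs = torch.stack(index_vecs)

def explain(sentence, word_idx, k=3):
    """Aggregate annotations of the k nearest corpus tokens to one input token."""
    query = token_embeddings(sentence)[word_idx]
    sims = torch.nn.functional.cosine_similarity(index_vecs, query.unsqueeze(0))
    top = sims.topk(k).indices.tolist()
    return Counter(index_tags[i] for i in top)

print(explain("i deposited cash at the bank", word_idx=5))
# e.g. Counter({'NOUN': 3}) -- matched contexts suggest a noun reading of "bank".
```

exBERT performs this kind of corpus search interactively, per layer and attention head; the sketch only shows the shape of the computation: embed, nearest-neighbor match, aggregate annotations.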

Keywords

Transformer, Computer science, Human–computer interaction, Engineering, Electrical engineering

Publication Info

Year: 2019
Type: preprint
Citations: 47
Access: Closed

Citation Metrics

47 citations (OpenAlex)

Cite This

Benjamin Hoover, Hendrik Strobelt, Sebastian Gehrmann (2019). exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformer Models. arXiv (Cornell University). https://doi.org/10.48550/arxiv.1910.05276

Identifiers

DOI: 10.48550/arxiv.1910.05276