Abstract

This tutorial describes the Expectation Propagation (EP) algorithm for a general exponential family. Our focus is on simplicity of exposition. Although the overhead of translating a specific model into its exponential family representation can be considerable, many apparent complications of EP can simply be sidestepped by working in this canonical representation. Note: This material is extracted from the Appendix of my PhD thesis (see www.kyb.tuebingen.mpg.de/bs/people/seeger/papers/thesis.html).

1 Exponential Families

Definition 1 (Exponential Family) A set $\mathcal{F}$ of distributions with densities
$$
P(x \mid \theta) = \exp\!\left(\theta^T \phi(x) - \Phi(\theta)\right), \quad \theta \in \Theta, \quad
\Phi(\theta) = \log \int \exp\!\left(\theta^T \phi(x)\right) d\mu(x)
$$
w.r.t. a base measure $\mu$ is called an exponential family. Here, $\theta$ are called natural parameters, $\Theta$ the natural parameter space, $\phi(x)$ the sufficient statistics, and $\Phi(\theta)$ the log partition function. Furthermore, $\eta = \mathrm{E}_\theta[\phi(x)]$ are called moment parameters, where $\mathrm{E}_\theta[\cdot]$ denotes expectation under $P(x \mid \theta)$.
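As a concrete illustration of Definition 1 (this example is an addition, not part of the original text), consider the Bernoulli distribution written in exponential family form: sufficient statistic $\phi(x) = x$, counting base measure on $\{0, 1\}$, and $\Phi(\theta) = \log(1 + e^\theta)$. The sketch below also checks numerically that the moment parameter $\eta = \mathrm{E}_\theta[\phi(x)]$ coincides with the derivative of the log partition function, a standard identity for exponential families.

```python
import math

# Bernoulli distribution as an exponential family (illustrative sketch).
# Sufficient statistic phi(x) = x; base measure is counting measure on {0, 1}.

def log_partition(theta):
    # Phi(theta) = log sum_{x in {0,1}} exp(theta * x) = log(1 + e^theta)
    return math.log(1.0 + math.exp(theta))

def density(x, theta):
    # P(x | theta) = exp(theta^T phi(x) - Phi(theta)) with phi(x) = x
    return math.exp(theta * x - log_partition(theta))

def moment_param(theta):
    # eta = E_theta[phi(x)] = 0 * P(0|theta) + 1 * P(1|theta)
    return density(1, theta)

theta = 0.7
# The densities sum to one over the support {0, 1}.
total = density(0, theta) + density(1, theta)
# eta should equal dPhi/dtheta (checked here by central finite differences).
eps = 1e-6
grad = (log_partition(theta + eps) - log_partition(theta - eps)) / (2 * eps)
print(total)                      # ~1.0
print(moment_param(theta), grad)  # these two values agree
```

Here the natural parameter is the log-odds, $\theta = \log(p / (1 - p))$, and the moment parameter recovered as $\Phi'(\theta)$ is the mean $p$.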
Publication Info
- Year: 2005
- Type: article
- Citations: 139
- Access: Closed