Abstract

Real-world learning tasks often involve high-dimensional data sets with complex patterns of missing features. In this paper we review the problem of learning from incomplete data from two statistical perspectives: the likelihood-based and the Bayesian. The goal is two-fold: to place current neural network approaches to missing data within a statistical framework, and to describe a set of algorithms, derived from the likelihood-based framework, that handle clustering, classification, and function approximation from incomplete data in a principled and efficient manner. These algorithms are based on mixture modeling and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster et al., 1977): both for the estimation of mixture components and for coping with the missing data.
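A minimal sketch of this two-fold use of EM for a diagonal-covariance Gaussian mixture, where missing features are marked with NaN. The function name `em_mixture_missing` and its parameters are illustrative, not taken from the paper; the E-step scores each point on its observed coordinates only, and the M-step fills missing coordinates with their conditional expectation under each component.

```python
import numpy as np

def em_mixture_missing(X, K, mu_init=None, n_iter=50, seed=0):
    """EM for a diagonal-covariance Gaussian mixture where X may contain NaNs.

    EM is appealed to twice, as in the mixture-modeling approach the
    abstract describes: once over the latent component labels
    (responsibilities), and once over the missing coordinates, which enter
    the M-step through their conditional moments given the component.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    obs = ~np.isnan(X)                                  # observed-entry mask
    if mu_init is None:
        mu = np.nanmean(X, axis=0) + 0.5 * rng.standard_normal((K, d))
    else:
        mu = np.array(mu_init, dtype=float)
    var = np.tile(np.nanvar(X, axis=0) + 1e-3, (K, 1))
    pi = np.full(K, 1.0 / K)

    for _ in range(n_iter):
        # E-step: responsibilities computed from observed coordinates only
        logr = np.tile(np.log(pi), (n, 1))
        for k in range(K):
            z = (X - mu[k]) ** 2 / var[k] + np.log(2 * np.pi * var[k])
            logr[:, k] -= 0.5 * np.where(obs, z, 0.0).sum(axis=1)
        logr -= logr.max(axis=1, keepdims=True)
        r = np.exp(logr)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: a missing entry is replaced by its expectation under
        # component k (the component mean, since covariances are diagonal)
        Nk = r.sum(axis=0) + 1e-12
        for k in range(K):
            Xk = np.where(obs, X, mu[k])                # completed data
            mu[k] = (r[:, [k]] * Xk).sum(axis=0) / Nk[k]
            # a missing coordinate contributes its conditional variance
            sq = np.where(obs, (Xk - mu[k]) ** 2, var[k])
            var[k] = (r[:, [k]] * sq).sum(axis=0) / Nk[k] + 1e-6
        pi = Nk / n
    return pi, mu, var, r
```

With two well-separated clusters and 20% of entries deleted at random, the fitted component means recover the cluster centers despite the missing features.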

Keywords

Computer science

Publication Info

Year: 1994
Type: report
Citations: 233
Access: Closed

Cite This

Zoubin Ghahramani, Michael I. Jordan (1994). Learning from Incomplete Data. https://doi.org/10.21236/ada295618

Identifiers

DOI
10.21236/ada295618