Abstract
Both in science and in practical affairs we reason by combining facts only inconclusively supported by evidence. Building on an abstract understanding of this process of combination, this book constructs a new theory of epistemic probability. The theory draws on the work of A. P. Dempster but diverges from Dempster's viewpoint by identifying his "lower probabilities" as epistemic probabilities and taking his rule for combining "upper and lower probabilities" as fundamental. The book opens with a critique of the well-known Bayesian theory of epistemic probability. It then proceeds to develop an alternative to the additive set functions and the rule of conditioning of the Bayesian theory: set functions that need only be what Choquet called "monotone of order infinity," and Dempster's rule for combining such set functions. This rule, together with the idea of "weights of evidence," leads to both an extensive new theory and a better understanding of the Bayesian theory. The book concludes with a brief treatment of statistical inference and a discussion of the limitations of epistemic probability. Appendices contain mathematical proofs, which are relatively elementary and seldom depend on mathematics more advanced than the binomial theorem.
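The rule at the heart of this construction, Dempster's rule of combination, admits a compact illustration. The Python sketch below is illustrative only and not taken from the book: the two-element frame, the example mass assignments, and the `combine` helper are assumptions. It pools two basic probability assignments over a finite frame of discernment by multiplying masses on intersecting focal elements and renormalizing away the mass that falls on the empty set (the "conflict").

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping focal elements (frozenset subsets of a finite
    frame of discernment) to basic probability masses summing to 1.
    Returns the combined mass function, renormalized to discard the
    mass assigned to the empty set (the conflict between the bodies
    of evidence).
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence; combination undefined")
    k = 1.0 - conflict  # normalizing constant
    return {s: m / k for s, m in combined.items()}

# Hypothetical example: two pieces of evidence about the frame {rain, sun}.
frame = frozenset({"rain", "sun"})
m1 = {frozenset({"rain"}): 0.6, frame: 0.4}  # evidence favoring rain
m2 = {frozenset({"sun"}): 0.3, frame: 0.7}   # weaker evidence favoring sun
print(combine(m1, m2))
# conflict = 0.6 * 0.3 = 0.18, so mass on {rain} = 0.42 / 0.82 ≈ 0.512
```

Note how mass left on the whole frame, rather than committed to a proper subset, represents ignorance; this is the feature that distinguishes such set functions from the additive probabilities of the Bayesian theory.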
Related Publications
Testing the Conditional Independence and Monotonicity Assumptions of Item Response Theory
When item characteristic curves are nondecreasing functions of a latent variable, the conditional or local independence of item responses given the latent variable implies nonne...
Finite-Dimensional Approximation of Gaussian Processes
Gaussian process (GP) prediction suffers from O(n³) scaling with the data set size n. By using a finite-dimensional basis to approximate the GP predictor, the computational comp...
CODA: convergence diagnosis and output analysis for MCMC
At first sight, Bayesian inference with Markov Chain Monte Carlo (MCMC) appears to be straightforward. The user defines a full probability model, perhaps using o...
From local explanations to global understanding with explainable AI for trees
Tree-based machine learning models such as random forests, decision trees, and gradient boosted trees are popular non-linear predictive models, yet comparatively little attentio...
Training Products of Experts by Minimizing Contrastive Divergence
It is possible to combine multiple latent-variable models of the same data by multiplying their probability distributions together and then renormalizing. This way of combining ...
Publication Info
- Year: 2020
- Type: book
- Citations: 11859
- Access: Closed
Identifiers
- DOI: 10.2307/j.ctv10vm1qb