Imprecise probability

Imprecise probability generalizes probability theory to allow for partial probability specifications, and is applicable when information is scarce, vague, or conflicting, in which case a unique probability distribution may be hard to identify. In this way, the theory aims to represent the available knowledge more accurately. Imprecision is useful for dealing with expert elicitation, because:

  • People have a limited ability to determine their own subjective probabilities and might find that they can only provide an interval.
  • As an interval is compatible with a range of opinions, the analysis ought to be more convincing to a range of different people.

Introduction

Uncertainty is traditionally modelled by a probability distribution, as argued by Kolmogorov,[1] Laplace, de Finetti,[2] Ramsey, Cox, Lindley, and many others. However, this view has not been unanimously accepted by scientists, statisticians, and probabilists: it has been argued that some modification or broadening of probability theory is required, because one may not always be able to provide a probability for every event, particularly when little information or data is available (an early example of such criticism is Boole's critique[3] of Laplace's work), or because we may wish to model the probabilities that a group agrees on rather than those of a single individual.

Perhaps the most straightforward generalization is to replace a single probability specification with an interval specification. Lower and upper probabilities, denoted by $\underline{P}(A)$ and $\overline{P}(A)$, or more generally, lower and upper expectations (previsions),[4][5][6][7] aim to fill this gap:

  • $\underline{P}(A)=\overline{P}(A)=1$ expresses certainty that the event $A$ occurs,
  • $\underline{P}(A)=\overline{P}(A)=0$ expresses certainty that $A$ does not occur,
  • $\underline{P}(A)=0$ and $\overline{P}(A)=1$ express complete ignorance about $A$,

with a flexible continuum in between.
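
One common way such interval-valued assessments arise is as envelopes of a set of candidate probability distributions (a so-called credal set): the lower and upper probabilities of an event are then the smallest and largest probabilities assigned to it by any distribution in the set. The following Python sketch illustrates this construction; the outcome space, the three distributions, the event, and the helper functions are all hypothetical, chosen only for illustration.

    # A minimal sketch: lower and upper probabilities of an event obtained as
    # envelopes over a finite, hypothetical set of probability distributions.
    credal_set = [
        {"a": 0.2, "b": 0.5, "c": 0.3},
        {"a": 0.4, "b": 0.4, "c": 0.2},
        {"a": 0.3, "b": 0.3, "c": 0.4},
    ]

    def probability(distribution, event):
        """Probability of an event (a set of outcomes) under a single distribution."""
        return sum(distribution[outcome] for outcome in event)

    def lower_probability(distributions, event):
        """Lower probability: the smallest probability of the event over the set."""
        return min(probability(p, event) for p in distributions)

    def upper_probability(distributions, event):
        """Upper probability: the largest probability of the event over the set."""
        return max(probability(p, event) for p in distributions)

    event = {"a", "b"}
    print(lower_probability(credal_set, event))  # prints 0.6
    print(upper_probability(credal_set, event))  # prints 0.8

If the set shrinks to a single distribution, the two bounds coincide and a precise probability is recovered; if it contains all distributions, the interval becomes [0, 1], the complete-ignorance case above.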

Some approaches, summarized under the name nonadditive probabilities,[8] directly use one of these set functions, assuming the other one to be naturally defined such that $\overline{P}(A) = 1 - \underline{P}(A^c)$, with $A^c$ the complement of $A$. Other related concepts understand the corresponding intervals $[\underline{P}(A), \overline{P}(A)]$ for all events as the basic entity.[9][10]
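
When the lower and upper probabilities are envelopes over a finite or closed set $\mathcal{M}$ of probability distributions, as in the hypothetical sketch above, this conjugacy relation holds automatically, because every $p$ in $\mathcal{M}$ satisfies $p(A) = 1 - p(A^c)$:

    \overline{P}(A) = \max_{p \in \mathcal{M}} p(A)
                    = \max_{p \in \mathcal{M}} \bigl(1 - p(A^c)\bigr)
                    = 1 - \min_{p \in \mathcal{M}} p(A^c)
                    = 1 - \underline{P}(A^c).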

History

The idea of using imprecise probability has a long history. The first formal treatment dates back at least to the middle of the nineteenth century, to George Boole,[3] who aimed to reconcile the theories of logic (which can express complete ignorance) and probability. In the 1920s, in A Treatise on Probability, Keynes[11] formulated and applied an explicit interval estimate approach to probability.

Since the 1990s, the theory has gathered strong momentum, initiated by comprehensive foundations put forward by Walley,[7] who coined the term imprecise probability, by Kuznetsov,[12] and by Weichselberger,[9][10] who uses the term interval probability. Walley's theory extends the traditional subjective probability theory via buying and selling prices for gambles, whereas Weichselberger's approach generalizes Kolmogorov's axioms without imposing an interpretation.

The consistency conditions that are usually assumed relate imprecise probability assignments to non-empty closed convex sets of probability distributions. Therefore, as a welcome by-product, the theory also provides a formal framework for models used in robust statistics[13] and non-parametric statistics.[14] Also included are concepts based on Choquet integration,[15] and so-called two-monotone and totally monotone capacities,[16] which have become very popular in artificial intelligence under the name (Dempster-Shafer) belief functions.[17][18] Moreover, there is a strong connection[19] to Shafer and Vovk's notion of game-theoretic probability.[20]
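
As a concrete illustration of the belief-function case, probability mass can be assigned to subsets of the outcome space (focal sets) rather than to single outcomes; the belief and plausibility of an event then play the roles of lower and upper probability. The Python sketch below uses a hypothetical mass assignment and is meant only to show the mechanics of this construction.

    # A minimal sketch of a (Dempster-Shafer) belief function: mass is assigned
    # to subsets ("focal sets") of the frame {"a", "b", "c"}; the mass values
    # below are hypothetical and sum to 1.
    masses = {
        frozenset({"a"}): 0.3,
        frozenset({"a", "b"}): 0.5,
        frozenset({"a", "b", "c"}): 0.2,
    }

    def belief(masses, event):
        """Belief (lower probability): total mass of focal sets contained in the event."""
        return sum(m for focal, m in masses.items() if focal <= event)

    def plausibility(masses, event):
        """Plausibility (upper probability): total mass of focal sets meeting the event."""
        return sum(m for focal, m in masses.items() if focal & event)

    event = frozenset({"b"})
    print(belief(masses, event))        # prints 0 (no focal set lies inside {"b"})
    print(plausibility(masses, event))  # prints 0.7

Belief and plausibility computed this way satisfy the conjugacy relation above: the plausibility of an event equals one minus the belief of its complement.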

Mathematical models

The term imprecise probability (an unfortunate misnomer, since the approach actually allows a more accurate quantification of uncertainty than precise probability) appears to have become established in the 1990s, and covers a wide range of extensions of the theory of probability, including:

  • lower and upper probabilities, and more generally lower and upper previsions (expectations),
  • interval probabilities in the sense of Weichselberger,
  • non-empty closed convex sets of probability distributions (credal sets),
  • Choquet capacities, in particular two-monotone and totally monotone capacities,
  • belief functions in the sense of Dempster and Shafer,
  • game-theoretic probability in the sense of Shafer and Vovk.

Interpretation of imprecise probabilities according to Walley

A unification of many of the above-mentioned imprecise probability theories was proposed by Walley,[7] although this was by no means the first attempt to formalize imprecise probabilities. In terms of probability interpretations, Walley's formulation of imprecise probabilities is based on the subjective variant of the Bayesian interpretation of probability. Walley defines upper and lower probabilities as special cases of upper and lower previsions, building on the gambling framework advanced by Bruno de Finetti. In simple terms, a decision maker's lower prevision for a gamble is the highest price at which the decision maker is sure he or she would buy the gamble, and the upper prevision is the lowest price at which the decision maker is sure he or she would sell it (equivalently, the negative of the highest price at which he or she would buy the opposite of the gamble). If the upper and lower previsions are equal, then they jointly represent the decision maker's fair price for the gamble, the price at which the decision maker is willing to take either side of the gamble. The existence of a fair price leads to precise probabilities.
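
One simple way to make these buying and selling prices concrete, assuming beliefs are again represented by a set of probability distributions, is to take the lower prevision of a gamble as the smallest expected payoff over that set and the upper prevision as the largest. The Python sketch below uses a hypothetical two-distribution set and a hypothetical gamble; it illustrates this envelope construction only, not Walley's full theory.

    # A minimal sketch: lower and upper previsions of a gamble as the minimum
    # and maximum expectations over a hypothetical set of distributions.
    credal_set = [
        {"rain": 0.25, "no rain": 0.75},
        {"rain": 0.50, "no rain": 0.50},
    ]

    # The gamble's payoff (in arbitrary units) in each outcome.
    gamble = {"rain": 10.0, "no rain": 2.0}

    def expectation(distribution, gamble):
        """Expected payoff of the gamble under a single distribution."""
        return sum(distribution[outcome] * payoff for outcome, payoff in gamble.items())

    def lower_prevision(distributions, gamble):
        """Supremum acceptable buying price: the smallest expectation over the set."""
        return min(expectation(p, gamble) for p in distributions)

    def upper_prevision(distributions, gamble):
        """Infimum acceptable selling price: the largest expectation over the set."""
        return max(expectation(p, gamble) for p in distributions)

    print(lower_prevision(credal_set, gamble))  # prints 4.0
    print(upper_prevision(credal_set, gamble))  # prints 6.0

In this hypothetical example the decision maker would buy the gamble for any price below 4.0 and sell it for any price above 6.0; prices in between form exactly the gap discussed next, and a fair price exists only when the two bounds coincide.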

The allowance for imprecision, or a gap between a decision maker's upper and lower previsions, is the primary difference between precise and imprecise probability theories. Such gaps arise naturally in betting markets that happen to be financially illiquid due to asymmetric information.


Bibliography

  1. A. N. Kolmogorov, Grundbegriffe der Wahrscheinlichkeitsrechnung, Springer, Berlin, 1933.
  2. B. de Finetti, Theory of Probability, Wiley, 1974.
  3. G. Boole, An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities, 1854.
  4. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  5. {{#invoke:citation/CS1|citation |CitationClass=conference }}
  6. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  7. P. Walley, Statistical Reasoning with Imprecise Probabilities, Chapman and Hall, London, 1991.
  8. {{#invoke:citation/CS1|citation |CitationClass=book }}
  9. K. Weichselberger, "The theory of interval-probability as a unifying concept for uncertainty", International Journal of Approximate Reasoning, vol. 24, 2000.
  10. K. Weichselberger, Elementare Grundbegriffe einer allgemeineren Wahrscheinlichkeitsrechnung I, Physica, Heidelberg, 2001.
  11. J. M. Keynes, A Treatise on Probability, Macmillan, London, 1921.
  12. V. P. Kuznetsov, Interval Statistical Models, Radio i Svyaz, Moscow, 1991 (in Russian).
  13. {{#invoke:citation/CS1|citation |CitationClass=book }}
  14. Template:Cite doi
  15. Template:Cite doi
  16. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
  17. A. P. Dempster, "Upper and lower probabilities induced by a multivalued mapping", Annals of Mathematical Statistics, vol. 38, 1967.
  18. G. Shafer, A Mathematical Theory of Evidence, Princeton University Press, 1976.
  19. Template:Cite doi
  20. G. Shafer and V. Vovk, Probability and Finance: It's Only a Game!, Wiley, New York, 2001.
