# Talk:Probability


## Untitled

I've moved the existing talk page to Talk:Probability/Archive1, so the edit history is now with the archive page. I've copied back the most recent thread. Hope this helps, Wile E. Heresiarch 04:44, 10 Aug 2004 (UTC)

## Game theory and probability

While I am by no means an expert, I have read the book by Von Neumann and Morgenstern, and I have a hard time understanding how game theory is "strictly based on probability." I'd even go so far as to say that that statement is complete rubbish - game theory may apply probability in solution concepts, but it isn't "based on probability." This is even obvious from the article on game theory itself. Zalle 14:02, 2 January 2007 (UTC)

## Laws of Probability

In the section "Formalizing probability" a list of three statements is given, and it is implied that the listed statements are the "laws of probability." While the statements are true, they are not the same as the Kolmogorov axioms that are linked to in the paragraph. I'd say it's fairly misleading to imply they are. Zalle 13:50, 2 January 2007 (UTC)

## Law of Large Numbers

First, good job on the entry to all....

Now, the Law of Large Numbers; though many texts butcher this deliberately to avoid tedious explanations to the average freshman, the law of large numbers is not the limit stated in this entry. Upon reflection, one can even see that the limit stated isn't well-defined. Fortunately, you present Stoppard's scene that so poignantly illustrates the issues involved with using the law of large numbers to interpret probabilities; the exact issue of it not being a guarantee of convergence. I have an edit which I present for discussion:

...
As *N* gets larger and larger, we *expect* that in our example the ratio *N*_{H}/*N* will get closer and closer to the probability of a single coin flip being heads. Most casual observers are even willing to *define* the probability Pr(*H*) of flipping heads as the mathematical limit, as *N* approaches infinity, of this sequence of ratios:

Pr(*H*) = lim_{*N* → ∞} *N*_{H}/*N*

In actual practice, of course, we cannot flip a coin an infinite number of times; so in general, this formula most accurately applies to situations in which we have already assigned an *a priori* probability to a particular outcome (in this case, our *assumption* that the coin was a "fair" coin). Furthermore, mathematically speaking, the above limit is not well-defined; the law of large numbers is a little more convoluted and dependent upon already having some definition of probability. The theorem states that, given Pr(*H*) and any arbitrarily small probability ε and difference δ, there exists some number *n* such that for all *N* > *n*,

Pr( | *N*_{H}/*N* − Pr(*H*) | < δ ) > 1 − ε

In other words, by saying that "the probability of heads is 1/2", this law asserts that if we flip our coin often enough, it becomes more and more likely that the number of heads over the number of total flips will become arbitrarily close to 1/2. Unfortunately, this theorem says that the ratio will probably get close to the stated probability, and provides no guarantees of convergence.

This aspect of the law of large numbers is sometimes troubling when applied to real world situations. ...

--Tlee 03:44, 13 Apr 2004 (UTC)

- Since *N*_{H}/*N* is just the sample mean of a Bernoulli random variable, the strong law of large numbers should guarantee the convergence of *N*_{H}/*N* to the mean, Pr(*H*). That is, convergence will occur almost surely, or equivalently, Pr( lim_{*N* → ∞} *N*_{H}/*N* = Pr(*H*) ) = 1.

- Nonetheless, I agree that the way probability is 'defined' in the current version of the article needs some refinement, and other than the above comment, I like what you've got, Tlee. I'd say just go ahead and make the edit!

- --Ben Cairns 23:47, 15 Apr 2004 (UTC)
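The behaviour discussed above, that the ratio *N*_{H}/*N* tends toward Pr(*H*) while any single finite run can still wander, can be illustrated with a short simulation. This is an illustrative sketch only; the function name and the fixed seed are my own, not from the thread.

```python
import random

def heads_ratio(n_flips, p_heads=0.5, seed=1):
    """Flip a simulated coin n_flips times; return the ratio N_H / N."""
    rng = random.Random(seed)
    heads = sum(1 for _ in range(n_flips) if rng.random() < p_heads)
    return heads / n_flips

# The ratio drifts toward Pr(H) = 0.5 as N grows, but the law of large
# numbers only makes large deviations increasingly unlikely; it is a
# probabilistic statement, not a pointwise guarantee for any one run.
for n in (10, 100, 10_000, 1_000_000):
    print(n, heads_ratio(n))
```

For small *N* the ratio can sit far from 0.5; only for large *N* does it reliably settle near the underlying probability.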

### Probability in mathematics

For the reasons stated above, the "definition" given in this section is wrong and misleading for any serious readers. I would try to rewrite this section soon. Are there any objections? The Infidel 18:20, 21 February 2006 (UTC)

- Actually, I dislike all my tries as soon as I write them down. But there will be a new version real soon now. The Infidel 22:31, 25 February 2006 (UTC)

## Accuracy of Probability

This section is very unclear, and probably (sic) wrong.

The statement that, "One in a thousand probability played a thousand times has only a one in a million chance of failure," is patently false. An event with a .001 probability, repeated one thousand times, independently, has a 36.7% chance of not occurring in those thousand trials. That's a far cry from one in a million. I'm not sure what the author was trying to describe by this section, but it didn't work.
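For the record, the 36.7% figure above is easy to verify directly, assuming the thousand trials are independent (a quick sketch):

```python
# Probability that a 1-in-1000 event *never* occurs in 1000 independent trials.
p_event = 0.001
n_trials = 1000
p_no_occurrence = (1 - p_event) ** n_trials
print(p_no_occurrence)  # about 0.368, close to 1/e, nowhere near one in a million
```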

## Good Job

This is a very nice article. I was very happy to see the distinction between 0 probability events and impossible events. :) --pippo2001 21:47, 5 Jun 2005 (UTC)

## Entropy?

Would it be worthwhile to extend the physical interpretation of probability to the statistical-mechanics definition of entropy, which suggests that, in general, the highest-probability configuration of (insert whatever) is the one generally achieved, and this tends to maximize the disorder in the system? (e.g. air molecules in a room - the highest-probability outcome is the one in which the number of molecules is about equal per unit volume; another way of saying this is that it is entirely possible that all the air molecules could shift at once to one corner of the room, but the length of time required to observe this low-probability outcome is longer than some absurdly high number). --24.80.119.229 05:25, 19 August 2005 (UTC)

- Yes I think it would be a good idea to explain the difference between how the physicist and the mathematician view this. In essence, a physicist is someone who would consider an event with a very very small probability (but larger than zero) as totally impossible, while a mathematician is someone who would consider some events with exactly zero probability as possible anyway. iNic 17:24, 1 March 2007 (UTC)

## Request for help

Could someone here help arbitrate on a discussion regarding probability on the Answers in Genesis talk page. I would like someone who feels they can be independent and fair to both sides. This page includes religious discussions and is somewhat controversial, but I don't think this should affect the specific discussion regarding the probability of an event. You can also leave a message on my talk page if you have any questions. Thanks Christianjb 03:26, 6 December 2005 (UTC)

## Odds

The person who wrote up odds as currently displayed on the page got it wrong. A 2:1 event is not an event with probability 2/3; it is a 1/3 probability event. The formula a/(a+b) needs to be replaced with b/(a+b) and is probably best illustrated with a reference to gambling.
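A minimal sketch of the corrected conversion (the function name is illustrative, not from the article; this reads a:b as odds *against* the event):

```python
from fractions import Fraction

def prob_from_odds_against(a, b):
    """Odds of a:b against an event correspond to probability b / (a + b)."""
    return Fraction(b, a + b)

print(prob_from_odds_against(2, 1))  # 2:1 against -> 1/3
print(prob_from_odds_against(1, 1))  # even odds  -> 1/2
```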

## certainty

certainty redirects here, but I'm gonna steal it for an epistemology article. Will do a disambig. Spencerk 08:01, 26 March 2006 (UTC)

## Theory of errors

I submit that the memoir De erroribus (1823) by C F Gauss should be mentioned. Gauss derives the error function connected with his name, from only three assumptions.

(C Lomnitz, Mexico City)

## A is certain implies P(A) = 1

One of the first sentences on the article is:

"If probability is equal to 1 then that event is certain to happen and if the probability is 0 then that event will never occur."

But, later on:

"An impossible event has a probability of exactly 0, and a certain event has a probability of 1, but the converses are not always true"

The first sentence is not correct (the second one is). Perhaps some clarification is needed.

They are both true, are they not?
Both say:

If P(A) = 0, that event is impossible.

If P(A) = 1, that event is certain.

If I am getting this wrong can you please explain why.

--212.58.233.129

- In a uniform distribution on the interval [0,1], every point has the same probability, which must be 0. Do you think this means that every point is impossible? --Zundark 17:13, 16 January 2007 (UTC)

- It's been a while, but I think the density is *f*(*x*) = 1 on [0,1], so the probability of each *x* in a uniform distribution is 1. I think the answer to 212.58.233.129's question is that an event of probability 1 does not imply that it is inevitable, and having a probability of 0 doesn't imply that it will never happen. Wake 03:09, 24 February 2007 (UTC)

"on the interval [0,1], every point has the same probability, which must be 0" Why must it be 0? Sorry I am not an expert in maths. Also I am failing to understand why this is wrong. "If probability is equal to 1 then that event is certain to happen and if the probability is 0 then that event will never occur."

An individual point in a continuous uniform distribution has a theoretical probability of zero because it is one point in an infinite set of points. The number of points in any segment of the real number line is infinite (see infinity, particularly the sentence, "Cardinal arithmetic can be used to show not only that the number of points in a real number line is equal to the number of points in any segment of that line, but that this is equal to the number of points on a plane and, indeed, in any finite-dimensional space."). So the probability of one particular real number in a continuous uniform distribution on the segment [0,1] would intuitively be one over infinity; expressed as a limit, this tends towards zero as the number of points approaches infinity. This changes when you consider a range (a set of points) rather than an individual point. A range in the uniform distribution could, for example, be the interval from zero to 0.25, which of course would have a probability of 25%. In physical space, lengths are not infinitely divisible; eventually you reach mechanical, molecular, atomic and quantum limits. Jim.Callahan,Orlando (talk) 02:13, 22 April 2013 (UTC)

An analogy would be, the probability of an individual point in a continuous uniform distribution would be the probability of a winning lottery ticket, in a lottery with an infinite number of tickets (vanishingly small)! Jim.Callahan,Orlando (talk) 02:19, 22 April 2013 (UTC)

And what exactly does the last part of this mean?
"An impossible event has a probability of exactly 0, and a certain event has a probability of 1, but the converses are not always true"
What are the converses here? That a possible event doesn't have a probability of 0, and an uncertain event isn't 1? Surely this is true.
Please can someone explain this. I have basic probability knowledge and would like to understand the theories and ideas of probability in more depth.

- Loosely speaking, the probability of an event is the number of favourable outcomes divided by the number of all possible outcomes. Throw a die: the probability of getting a 3 is 1/6. But now consider something where you have an infinite number of possible outcomes: pick a random number between 3 and 4; the probability of getting exactly pi is 1/infinity = 0. However it is not *impossible* to get pi, just very very unlikely. Of course this is very much a theoretical concept, since in pretty much all real world applications you have only a finite number of possible outcomes. 193.109.51.150 23:37, 23 February 2007 (UTC)
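A small sketch of the distinction being discussed: an interval of a uniform distribution on [0,1] carries probability equal to its length, while a single point is the limit of ever-narrower intervals. The function name is my own, for illustration only.

```python
def uniform01_prob(a, b):
    """P(a <= X <= b) for X uniform on [0, 1]: the overlap length of [a, b] with [0, 1]."""
    lo, hi = max(a, 0.0), min(b, 1.0)
    return max(hi - lo, 0.0)

print(uniform01_prob(0.0, 0.25))  # a range has positive probability: 0.25
# A single point is the limit of ever-narrower intervals around it:
for eps in (0.1, 0.001, 1e-9):
    print(uniform01_prob(0.5 - eps / 2, 0.5 + eps / 2))
# The printed values shrink toward 0, yet 0.5 remains a possible outcome.
```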

## Luck

There should be a mention of probability in the luck article, or is there no connection between luck and probability? Isn't luck, after all, probability resulting in our favour?

Is there enough to say on luck? I have tried to look for a specific definition of luck, but cannot find one in the context of probability. All I can think of is that luck is an unlikely favourable outcome (while bad luck is an unlikely unfavourable outcome). I would also mention something about good luck only being described in the short term, as probabilities usually even out in the long run. I mean, if you hit the 35-1 odds on the roulette table, but it was the 38th time you played and you had never hit, that would not be lucky. However, if you hit the same number 38 times in a row on a 35-1 shot, that would be extremely lucky. 16 Jan 2007

## Three types of probability

Please can an expert add some content to this page on the frequentist, propensity and subjectivist (Bayesian) interpretations of probability as this seems pretty crucial but is missing from the article. There is a starting point here. Thanks Andeggs 15:32, 23 December 2006 (UTC)

Is it quite true that subjective probability is the same as Bayesianism? I was under the impression that Bayesianism is rather a mathematical theory which is an (or perhaps the only current) interpretation of subjective probability. Ben Finn 18:43, 12 January 2007 (UTC)

There is no mention made of the most important interpretation of probability. Assuming Laplacian Determinism is correct, which most if not all of science assumes, then probability is really a study of observers and their inaccuracies, not of the world they are observing. Purely mathematical discussions on probability may be interesting to mathematicians, but mathematicians are a small subset of people who are interested in the broader concept of probability. Is there not room for other views besides strictly mathematical ones? Thanks --Krantz2 (talk) 18:15, 23 February 2008 (UTC)

- We have a separate article on probability interpretations, which is a better place for this. --Lambiam 12:10, 24 February 2008 (UTC)

References could include such items as the following and SHOULD INCLUDE Savage, Eisenberg/Gale items:

Landa, S.,"CAAPM: Computer Aided Admissible Probability Measurement on Plato IV," Rand Corp., Santa Monica CA, R-1721-ARPA, '76

Brown, T A, Shuford, E., "Quantifying Uncertainty Into Numerical Probabilities for the Reporting of Intelligence," Rand Corp., Santa Monica CA, R-1185-ARPA, '73.

Savage, L J, "Elicitation of Personal Probabilities & Expectations," J. Amer. Stat. Assoc. '71, 783-801.

Good, I J, "Rational Decisions," J. Royal Statistical Society, Ser. B, 14, '52, pp. 107-114.

McCarthy, J., "Measurement of Value of Information," Proc. Nat. Acad Sci., '56, pp. 654-5.

Eisenberg, E., Gale, G. "Consensus of Subjective Probabilities," Ann. Math. Stat., 30, 3/59, pp. 165-168.

Epstein, E. S., "A Scoring System for Probability Forecasts of Ranked Categories," J. Appl. Meteorology, 8 (6), 12/69, pp. 985-987.

Winkler, R. L., "Scoring Rules and Probability Assessors," J. Amer. Stat. Assoc., 64, '69, pp. 1073-78.

Shuford, E, Albert, A., Massengill, H., "Admissible Probability," Psychometrika, 31, '66, pp. 125-145.

Shuford, E., Brown, T.A., "Elicitation of Personal Probabilities," Instructional Sci. 4, '75, pp. 137-188

Brown, T. A., Probabilistic Forecasts and Reproducing Scoring Systems, RM-6299-ARPA, 6/70.

Brown, T. A., An Experiment in Probabilistic Forecasting, R-944-ARPA, July '73.

Brown, T. A., Shuford, E., Quantifying Uncertainty Into Numerical Probabilities, R-1185-ARPA, '73.

The textbook "Essentials of finite mathematics," Robert F. Brown, Brenda W. Brown has these three distinctive views of this term:

1. Experience: relevant past information.
2. Counting: abstract reasoning.
3. Subjective: instinctive feelings, knowledge, training.

I found the frequentist/Bayesian distinction unclear and misleading. I believe Brown/Brown got it right with these 3 categories. —Preceding unsigned comment added by Aklinger (talk • contribs) 05:03, 29 January 2010 (UTC)

## New Section on Physical Chance?

I published a book on physical probability (chance) recently, and thought I would write a section for this article on chance. At present there is just a very brief section on the propensity interpretation. Or should this be a separate article?

By the way, Bayesianism is an approach to scientific reasoning based on the idea that a well-confirmed theory is one that presently has high subjective probability. The name comes from the fact that Bayes's theorem is a central tool in the calculation of the probability of a theory given the evidence.

Richardajohns 23:12, 6 March 2007 (UTC)richardajohns

- I think there ought to be a separate article on chance (including propensity). There's plenty to say about it. Currently I think the whole treatment of the different varieties of probability in connected articles is fairly poor, and could do with improvement & a clean up. Ben Finn 10:09, 20 March 2007 (UTC)

## Moved

I moved this information from probability theory, because it seems like it would fit a lot better into this article. However, I don't understand this text well enough to merge it. Can anyone help me merge it in? MisterSheik 17:55, 28 February 2007 (UTC)

There are different ways to interpret probability. Frequentists will assign probabilities only to *events* that are *random*, i.e., random variables, that are outcomes of actual or theoretical *experiments*. On the other hand, Bayesians assign probabilities to *propositions* that are *uncertain* according either to subjective degrees of belief in their truth, or to logically justifiable degrees of belief in their truth. Among statisticians and philosophers, many more distinctions are drawn beyond this subjective/objective divide. See the article on interpretations of probability at the Stanford Encyclopedia of Philosophy: [1].

A Bayesian may assign a probability to the proposition that 'there was life on Mars a billion years ago', since that is uncertain, whereas a frequentist would not assign probabilities to *statements* at all. A frequentist is actually unable to technically interpret such uses of the probability concept, even though 'probability' is often used in this way in colloquial speech. Frequentists only assign probabilities to outcomes of well defined *random experiments*, that is, where there is a defined sample space as defined above in the theory section. For another illustration of the differences see the two envelopes problem.

Situations do arise where probability theory is somewhat lacking. One method of attempting to circumvent this indeterminacy is the theory of super-probability, in which situations are given integer values greater than 1. This is an extension of the multi-dimensional space intrinsic to M-theory and modern theoretical physics.

- What is lacking here, I think, is a good common effort to turn the Probability interpretations page into a good article. The paragraphs above should fit nicely into an improved probability interpretations page. That is in fact also true for a large part of the current article. It should be moved to the interpretations page and removed from this page. I'm not sure what would be appropriate to have in this article actually, that wouldn't fit better under some other heading. Maybe the best thing to do with Probability plain and simple is to turn it into a disambiguation page? iNic 16:42, 7 March 2007 (UTC)

## Aliens?

I've removed this seemingly nonsensical statement from the article:

For example, the probability that aliens will come and destroy us is x to 1 because there are so many different possibilities (nice aliens, mean but weaker aliens, non-existing aliens).

Brian Jason Drake 04:32, 11 March 2007 (UTC)

- Come on, that was the only sentence in the whole article that I understood! The rest of it is all dictionary-y and in Latin! --71.116.162.54 (talk) 04:34, 11 February 2009 (UTC)

## Spammed external link

This link: http://www. giacomo.lorenzoni.name/arganprobstat/

has been spammed by multiple IP addresses across articles on several European language Wikipedias. Could regular editors of this article take a look at it to see if it adds high quality, unique information to the external links section in keeping with our guidelines. Thanks -- Siobhan Hansa 14:54, 14 March 2007 (UTC)

## Merge Probability theory to here

Since Probability theory has been reduced to a stub, it may be best to redirect it to here, after merging any valuable information left not already present here (possibly none – didn't check). --Lambiam^{Talk} 10:20, 21 March 2007 (UTC)

- I think we should do nearly the opposite thing: move most of the contents in this article to the Probability theory page and parts of it to the Probability interpretations page. This page, Probability, I think suits best as a disambiguation page, as "probability" can mean so many more things than just the mathematical theory. What do you think? iNic 16:26, 21 March 2007 (UTC)

- That sounds really good to me... MisterSheik 16:40, 21 March 2007 (UTC)

## Probability is the chance that something is likely to happen or be the case

I have concerns about this opening sentence:

1. The definition seems circular (although one might think that there is no way of breaking out of such a circle when it comes to defining probability).
2. Worse, the definition involves a second-order probability: "CHANCE that something is LIKELY". "Probability is the chance that something will happen or be the case" would be better, stripping away one of the probability-laden operators.
3. The word "chance" has to my ear OBJECTIVE connotations. But 'probability' can be a subjective notion: degree of belief. "Chance" seems to fit awkwardly with that notion.

But thanks for taking the trouble to write this article! —The preceding unsigned comment was added by 203.129.46.17 (talk • contribs).

- I agree. I've tried to reword it to fix these problems. I also deleted the reference to the old dictionary definition, since it no longer applies to the current text. -- Avenue 02:41, 20 June 2007 (UTC)

- There's no winning on this issue. Likelihood is circular too. Removing "chance that something is likely" is probably (er, likely?) a good thing. JJL 03:06, 20 June 2007 (UTC)

Hi,

I think the sentence

"This means that "deep inside" nature can only be described using probability theory."

is misleading. John von Neumann wrote an article about "hidden variables" in quantum mechanics in which he proves that a certain quantum mechanical system cannot be described within the usual probability theory framework (measure space, Borel algebra, etc.). The Wikipedia article on von Neumann (quantum mechanics section) says this paper contained a conceptual error, however.

By the way, I haven't added to a Wikipedia discussion before so sorry if I haven't followed the proper protocols. I just thought I'd mention my concern in the hope that some hard-core Wikipedia person would follow it up.

JamesDowty 04:51, 21 June 2007 (UTC)

First, I'd like to reiterate that whoever is running Probability is too focused on the math, which for the majority of readers is a dry side issue. What is really interesting to many (most?) people is what probability means to us living in the world. I think Pierre Laplace said that the universe is in a dynamic equilibrium in which all events lead from prior events in a never-ending chain. Consequently, if you buy his argument, which I do, then the probability of anything occurring is 0 or 1. This leads me to state that probability does not apply to the universe but to the observer who wants to predict the future with limited data. For instance, weather forecasters are typically right 50% of the time. Whatever data they use gives them a 50% PROBABILITY of being correct. If they knew everything there is to know their predictions would always be 100% correct.
Anyway, another interpretation of probability has nothing to do with 'events' in the world, and everything to do with the person attempting to predict future events. Cheers. --Krantz2 (talk) 02:28, 26 February 2008 (UTC)

- In the last year more than 100 editors have edited the article (also counting non-vandal editors whose edits were reverted, like yours), so it is very much the result of a collaborative effort, and it isn't quite clear what you mean by "running Probability". Your edits were reverted because they were mini-essays giving *your* point of view on the issue; you even signed them. That is against our policy of no original research; as editors of an encyclopedia we must instead report what recognized experts have to say on the topic. The issues you raise concern the interpretation of probability and should be dealt with in the article Probability interpretations. If you cannot find your point of view represented there, find published reliable sources that present that point of view, and report on that, while properly citing sources. --Lambiam 13:22, 26 February 2008 (UTC)

## WikiProject class rating

This article was automatically assessed because at least one WikiProject had rated the article as start, and the rating on other projects was brought up to start class. BetacommandBot 04:22, 10 November 2007 (UTC)

## Question on Statement

"between 0 (no chance of occurring) and 1 (will always occur)" :-
Shouldn't this line be changed to '*almost* no chance of occurring' and 'will *almost* always occur' with a link to the article on Almost Surely?
--122.162.135.235 13:13, 11 November 2007 (UTC)

## Mathematical treatment section

I see we have a new section "Mathematical treatment", giving a basic treatment that was missing. There is some overlap between this and a later section "Theory", and I think the two should be merged. Also, now "Mathematical treatment" is the first section after the lead section, but usually that position is reserved for the History section.

In older versions there were two more extensive sections, one titled "Formalizing probability", the other "Probability in mathematics", but these were replaced by one very brief section "Theory" referring the reader to Probability theory.[2][3] While these edits were somewhat extreme, we should guard against the other extreme: a more formal mathematical section growing beyond bounds. Instead we should aim at keeping it confined to an elementary level, referring to other articles for more advanced things. --Lambiam 00:30, 14 November 2007 (UTC)

- Agree, I did pinch some of the text from Prob theory section as it explained some things better. As to position, I've moved it below history. I think there is some text from the edits you mention which could helpfully be reinserted, without the article growing too much. --Salix alba (talk) 07:58, 14 November 2007 (UTC)

## Question on the word and definition of probability

Is probability better understood as PARTIAL EXISTENTIALITY?

Could it also be: Partial-ability? Or Partially existent occurrence?

i.e. we can think of a picture that fades into black, and then back to its original state; as it fades into black, its 'likeliness' of existence decreases?

I'd love if someone can clear this up for me thanks. BeExcellent2every1 (talk) 08:47, 20 November 2007 (UTC)

- I cannot say in truth that this contributes to *my* understanding. --Lambiam 16:23, 20 November 2007 (UTC)
  - Alright, do events exist? We'll use boolean algebra (only yes's and no's). Yes, they exist. OK, now do PARTIAL-EVENTS exist? i.e. when you have an intent to go to the store, before you have begun moving, does the event exist partially in your mind as a probability? Yes or no. Probability = partial existence, or a partial truth (small truth). Now does truth exist all of the time? Yes or no. Yes. Therefore probability exists, of the type that it is a partially-existent truth. BeExcellent2every1 (talk) 06:37, 21 November 2007 (UTC)
    - I do not particularly subscribe to the Law of excluded middle or the Principle of bivalence, I know of no justification for equating probability with "partial existence" or "partial truth", and I am unable to assign a meaning to the sequence of words "truth does exist all of the time", so the above is totally lost on me. But, in any case, this page is for *discussing improvements* to the article Probability, and any material added must conform to the requirement of verifiability, which means it must have been published in reliable sources. This page is not intended for a general discussion on the concept of probability. Thank you for your understanding. --Lambiam 12:08, 21 November 2007 (UTC)
      - Actually it's not the principle of bivalence; obviously you misunderstood. There are only three kinds of truth statements in logic: ALL-Actual-exist-is-true, ALL-Partial-exist-is-true, ALL-Negate-exist-is-true. So no, I wasn't speaking about the law of the excluded middle. If something exists, it can never ALL-Not-exist at any time, otherwise it can never exist (all-not-exist). So that means all potential existence (probabilities even) must exist in some partial form. Otherwise probability calculations would be impossible (to all-not-exist is to never be possible to occur or exist). If you make a claim my statement is unintelligible, you make a claim to know which words and which statements, else you don't have a claim to know; so if I am in error, please point out which words and which statements, so that I can correct myself. BeExcellent2every1 (talk) 13:10, 22 November 2007 (UTC)
        - I make a claim that this page is not intended for this kind of discussion. --Lambiam 14:44, 22 November 2007 (UTC)

## Mean of three observations.

The section "History" contains the following mysterious statement: "He [i.e. Laplace] deduced a formula for the mean of three observations." I assume that this is not the formula (x_{1}+x_{2}+x_{3})/3. Does anyone have an idea what this refers to? --Lambiam 20:11, 21 November 2007 (UTC)

- I'm guessing it might refer to his finding that for three observations with identically distributed errors around some value, the median of the posterior distribution of that value (under a uniform prior) minimises the mean absolute error. (See for example p. 213 in the translator's afterword to this book: Theory of the Combination of Observations Least Subject to Errors: Part One.) -- Avenue (talk) 11:58, 12 May 2008 (UTC)

## Central Limit Theorem

*Moved to Talk:Probability theory#Central Limit Theorem, which is where this appears to belong. --Lambiam 05:21, 24 January 2008 (UTC)*

## Events That Aren't Mutually Exclusive

I don't contribute on here much, let alone for math, but the equation or at least the example labeled "events that aren't mutually exclusive" is clearly wrong. I'm still learning, but I can tell you that the probability of drawing two specific cards out of a deck is not 103/2704; that's close to 1/27. Why would the probability be higher than 1/52?

- Please start new topics at the bottom, not at the top. It's higher because of the "or". It would have been very slightly higher again if the "or" was taken to be an "inclusive or". The example quotes the correct probability if you take "or" to be the "exclusive or". However, the example does not relate to the formula that it purports to represent with respect to mutual exclusivity, so I deleted it. --David from Downunder (talk) 11:19, 20 April 2008 (UTC)

As a question, would it be better to use the term independent rather than mutually exclusive? Furthermore, can you explain what "inclusive or" versus "exclusive or" means?

- Independence and mutual exclusion are completely different things. If you draw a card from a 52-card deck, there is a 1/13 chance that it is a Jack, and a 1/4 chance that it is a Spade. These events are independent: the chance to draw a Jack of Spades is 1/13 × 1/4 = 1/52. They are obviously not mutually exclusive. For "inclusive" versus "exclusive" or, see Inclusive or and Exclusive or. --Lambiam 21:34, 25 April 2008 (UTC)
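Lambiam's Jack-of-Spades point can be sanity-checked by brute-force enumeration. The following Python sketch (my own illustration, not part of the article; names like `is_jack` are made up) enumerates a 52-card deck and confirms that "Jack" and "Spade" are independent but not mutually exclusive:

```python
from fractions import Fraction
from itertools import product

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["Spades", "Hearts", "Diamonds", "Clubs"]
deck = list(product(ranks, suits))  # 52 equally likely cards

def p(event):
    """Exact probability of an event (a predicate on cards) under a uniform draw."""
    return Fraction(sum(1 for card in deck if event(card)), len(deck))

is_jack = lambda card: card[0] == "J"
is_spade = lambda card: card[1] == "Spades"
both = lambda card: is_jack(card) and is_spade(card)

print(p(is_jack))   # 1/13
print(p(is_spade))  # 1/4
print(p(both))      # 1/52

# Independent: P(Jack and Spade) = P(Jack) * P(Spade)
assert p(both) == p(is_jack) * p(is_spade)
# Not mutually exclusive: the joint event has positive probability
assert p(both) > 0
```

Exact `Fraction` arithmetic avoids any floating-point ambiguity in the comparison.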

## Exclusive Or

I know next to nothing about probability, but I came here looking for a formula for getting the probability of the exclusive or of two sets. Is this something that could be added to the page, as it's always included in similar discussions on boolean logic?

The formula is just P(A xor B) = P(A) + P(B) - 2P(A and B), right? Joeldipops (talk) 00:07, 9 May 2008 (UTC)

- That is correct. A xor B is the same event as (A and not B) or (not A and B), in which the two choices for "or" are disjoint events. From that the result follows by applying the given formulas. There are 10 possible combinations involving A and/or B, not counting symmetric combinations twice, of which a formula is given for four. The formulas for the other six can be derived from those. I do not have a clear argument for or against listing these in the article. --Lambiam
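For anyone who wants to check the xor formula numerically, here is a short Python sketch using a fair six-sided die, with illustrative events A = "roll is even" and B = "roll is at most 3" (my own example, not from the thread):

```python
from fractions import Fraction

# Verify P(A xor B) = P(A) + P(B) - 2*P(A and B) by direct enumeration.
outcomes = range(1, 7)                  # a fair six-sided die
A = {n for n in outcomes if n % 2 == 0}  # {2, 4, 6}
B = {n for n in outcomes if n <= 3}      # {1, 2, 3}

def p(event):
    """Probability of an event (a set of outcomes) on a fair die."""
    return Fraction(len(event), 6)

xor_direct = p(A ^ B)                    # ^ is set symmetric difference, i.e. xor
xor_formula = p(A) + p(B) - 2 * p(A & B)
print(xor_direct, xor_formula)           # 2/3 2/3
assert xor_direct == xor_formula
```

The symmetric-difference operator `^` on Python sets computes exactly the "A xor B" event, so the two values agree by construction of the identity.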

## Interpretations

In the above-named section, it says "Frequentists talk about probabilities only when dealing with well defined random experiments." It seems doubtful that frequentists only deal with experiments: surely any repeatable random process will do, even a natural one. So I am picking on the word "experiment", here and in the next bits, as being far too specific. Melcombe (talk) 16:44, 14 May 2008 (UTC)

- I have replaced the word "experiment" by "trial". --Lambiam 09:47, 19 May 2008 (UTC)

- See Talk:Frequency probability#What kind of experiments?. --Lambiam 13:30, 22 May 2008 (UTC)

## Likelihood

I have removed the word "likelihood" from the definition of probability because, while in common use the two are synonyms, likelihood has a technical meaning in statistics that is distinct from probability. 94.193.241.76 (talk) 20:28, 14 February 2009 (UTC)

- Do you hav' a source for that? ~Asarlaí 15:25, 4 January 2013 (UTC)
- Likelihood function. 50.0.136.106 (talk) 04:46, 4 August 2013 (UTC)

## Confusing examples

I find several of the examples on this page and some of the pages linked to it to be confusing. The examples need to be linked to easily understandable, real-world situations rather than abstract mathematical explanations. —Preceding unsigned comment added by 86.166.148.24 (talk) 15:35, 10 May 2009 (UTC)

## Probability trees?

No mention of them, I would like to see an example of them.--Dbjohn (talk) 09:56, 30 June 2009 (UTC) hi! —Preceding unsigned comment added by 99.36.94.205 (talk) 03:55, 19 January 2010 (UTC)

## In Our Time

Template:In Our Time
*Rich Farmbrough*, 03:19, 16 September 2010 (UTC).

## Dubious statement?

*Probability is a way of expressing knowledge or belief that an event will occur or has occurred.*

I was always taught that it's not meaningful to assign probabilities to things which have occurred, except 0 or 1, since they either definitely did or definitely did not occur, and ignorance of which is not a basis for a probabilistic statement. To give such a past event a probability of 0.5 would imply, for example, that if you checked the facts an infinite number of times, you'd get two answers with equal frequency. --173.230.96.116 (talk) 00:06, 2 February 2011 (UTC)

## problem with the Mathematical Treatment Section

This is my first time editing or pointing something out, so I apologize if I'm not using the correct etiquette, but the union and intersection operators are currently flipped for most of the article, rendering the formulas wrong. I don't have time to change it, good luck. Wingda 11:07, 8 February 2012 (UTC)

## Definition

In this proposal, I will discuss why the Wikipedia definition for “Probability” is not a good, valid definition and needs to be changed. Probability is something that is used every day. It is what helps us further understand the outcomes of the events in life that are uncertain. However, in order to further our knowledge of probability, we must truly understand the definition of the word first. According to Wikipedia, there is not a clear, valid definition for probability, and I propose to change this.

User:Nparibello-NJITWILL/Summer 2012 | Probability

The given Wikipedia definition is as follows: Probability is ordinarily used to describe an attitude of mind towards some proposition of whose truth is not certain.[1] The proposition of interest is usually of the form "Will a specific event occur?" The attitude of mind is of the form "How certain are we that the event will occur?" The certainty we adopt can be described in terms of a numerical measure and this number, between 0 and 1, we call probability.[2] The higher the probability of an event, the more certain we are that the event will occur. Thus, probability in an applied sense is a measure of the confidence a person has that a (random)event will occur.

The opening sentence brings great concern as to what probability really is. It begins with, “Probability is ordinarily used to describe an attitude of mind towards some proposition of whose truth is not certain.” The word ordinarily is not only ambiguous, but it makes the opening sentence tell us very little about what we are reading. Aside from the ambiguity, the rest of the sentence is extremely confusing as well. Although I am someone who has studied probability not only for my college degree but also on my own to further my knowledge, I still had to read that sentence over and over again to understand what the writer meant. I would not consider probability an attitude of mind, because this simply does not make sense. The “proposition of interest” and “attitude of mind” are unnecessary and confusing when defining probability. Probability is the analysis of the outcome of an uncertain event.

The given Wikipedia definition gives several examples of what probability is, rather than defining it. It states that, “The proposition of interest is usually of the form…” This is not only an example instead of a definition, but it is an extremely poor example. The word usually is very vague, leading the reader to believe that the given example does not apply to probability all of the time. The definition also ends with probability in an applied sense. But what does that really mean? Because there are no other examples of another sense in which probability applies, this makes the definition unclear.

The definition then goes on to describe probability in mathematical terms using the range between 0 and 1. For anyone who has not dealt with probability in a mathematical sense, this definition most likely makes little sense. The way it is written is very vague, but what it is trying to say is that there is a range of outcomes in probability from 0 (the event will definitely not occur) to 1 (the event will definitely occur).
These ranges are also often written in percent form, 0% to 100%. I believe that by stating that these numbers are percentages, the numbers already start to make more sense and become more valid. The range between these numbers is the percent likelihood of an outcome. While this definition gave many examples that do not help the reader to further understand probability, I would perhaps include one common example when dealing with the mathematical sense, even though definitions should not include examples. For example, the probability of a fair coin landing heads after it is tossed is .5, or 50%. This is because there are two possible outcomes, heads or tails, and both are equally likely. If the coin had been unfair with heads on both sides, the probability of landing heads would be 1, or 100%, since there are no other possible outcomes. As stated in the original definition, “The higher the probability of an event, the more certain we are that the event will occur.”

Two good-quality sources come from Richard L. Sheaffer in his book, Introduction to Probability and Its Applications, and Glenn Shaffer in his book Probability and Finance. Both Sheaffer and Shaffer are well-known mathematicians whose textbooks are used in a vast majority of colleges, including NJIT. Both of their main mathematical focuses aim at probability, making them two of the best sources for this proposal. From a mathematical standpoint, Richard Sheaffer states that “Probability is a numerically valued function that assigns a number P(A) to every event A so that the following axioms hold: P(A) >= 0, and P(S) = 1.” Glenn Shaffer explains these axioms in wordier, less mathematical terms for the average reader to understand. First, the probability of an event must be greater than or equal to zero. If it is less than zero, then the event is not possible. Lastly, Shaffer states that the probability that the event will produce any of the available outcomes is 100%. Going back to the coin example, the probability that a fair coin will land either a head or a tail is 100%, because those are all of the available outcomes.

Sheaffer, R. (2010). Introduction to Probability and Its Applications (3rd ed.). Shaffer, G. (2010, June 18). Probability and Finance. Retrieved June 22, 2012, from http://www.probabilityandfinance.com/ — Preceding unsigned comment added by 68.44.139.107 (talk) 22:01, 26 June 2012 (UTC)

- I've changed it again. Let's hope for discussion. Bhny (talk) 07:55, 16 August 2012 (UTC)
- To start the discussion: it seems like there are two things battling it out in this article, 1) certainty + epistemology and 2) probability + mathematics. This article is nearly all about mathematics, but that weird first paragraph was trying to take it in another direction. Bhny (talk) 08:02, 16 August 2012 (UTC)

## Cultural uses

Hello Everyone, I was just wondering if someone could create a sub-page of how probability has been used culturally and in fiction, thanks. Person 06:06, 10 April 2013 — Preceding unsigned comment added by 110.175.213.190 (talk)

## Removing massive technical section

I'm moving the huge mass of (rather sloppy and unclear) material on mathematical properties of probabilities out of the article and storing it here. I don't believe this stuff has a place in a top-level, non-mathematical summary article; I've replaced it with a brief summary of the basic concepts and links to the more thorough, technical articles (e.g. Probability theory, Probability space, Probability axioms) as per WP:SUMMARY. Here is the text removed: Template:Hidden begin

The *opposite* or *complement* of an event *A* is the event [not *A*] (that is, the event of *A* not occurring); its probability is given by P(not *A*) = 1 - P(*A*).^{[1]} As an example, the chance of not rolling a six on a six-sided die is 1 − 1/6 = 5/6. See Complementary event for a more complete treatment.
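The complement rule can be verified with exact rational arithmetic. A minimal Python sketch of the die example (variable names are my own):

```python
from fractions import Fraction

# Complement rule: P(not A) = 1 - P(A), for A = "roll a six" on a fair die.
p_six = Fraction(1, 6)
p_not_six = 1 - p_six
print(p_not_six)  # 5/6

# Cross-check by counting the five non-six faces directly.
assert p_not_six == Fraction(5, 6)
```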

If two events *A* and *B* occur on a single performance of an experiment, this is called the intersection or joint probability of *A* and *B*, denoted as P(*A* ∩ *B*).

### Conditional probability

{{#invoke:main|main}}
The conditional probability of an event *A* is the probability that it will occur, given that some other event *B* also occurs or has occurred.
Conditional probability is written P(*A* | *B*), and is read "the probability of *A*, given *B*". It is defined by^{[2]}

P(*A* | *B*) = P(*A* ∩ *B*) / P(*B*).

If P(*B*) = 0, then P(*A* | *B*) is formally undefined by this expression. However, it is possible to define a conditional probability for some zero-probability events using a σ-algebra of such events (such as those arising from a continuous random variable).Template:Cn

For example, in a bag of 2 red balls and 2 blue balls (4 balls in total), the probability of taking a red ball is 1/2; however, when taking a second ball, the probability of it being either a red ball or a blue ball depends on the ball previously taken: if a red ball was taken, the probability of picking a red ball again would be 1/3, since only 1 red and 2 blue balls would have been remaining.
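The conditional probability in this ball example can be confirmed by enumerating all ordered draws without replacement. A short Python sketch (labels like `"R1"` are my own):

```python
from fractions import Fraction
from itertools import permutations

# Bag of 2 red and 2 blue balls; draw two balls without replacement.
balls = ["R1", "R2", "B1", "B2"]
draws = list(permutations(balls, 2))  # 12 equally likely ordered draws

first_red = [d for d in draws if d[0].startswith("R")]
both_red = [d for d in first_red if d[1].startswith("R")]

p_first_red = Fraction(len(first_red), len(draws))                # 1/2
p_second_red_given_first = Fraction(len(both_red), len(first_red))  # 1/3
print(p_first_red, p_second_red_given_first)
```

Conditioning here is just restricting the sample space to the draws where the first ball was red, which is why a plain ratio of counts suffices.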

#### Independent probability

If two events *A* and *B* are independent, then the joint probability is

P(*A* and *B*) = P(*A* ∩ *B*) = P(*A*) P(*B*).

For example, if two coins are flipped, the chance of both being heads is 1/2 × 1/2 = 1/4.^{[3]}

#### Mutually exclusive

If either event *A* or event *B* or both events occur on a single performance of an experiment, this is called the union of the events *A* and *B*, denoted as P(*A* ∪ *B*).
If two events are mutually exclusive, then the probability of either occurring is

P(*A* or *B*) = P(*A* ∪ *B*) = P(*A*) + P(*B*).

For example, the chance of rolling a 1 or 2 on a six-sided Template:Dice is P(1 or 2) = P(1) + P(2) = 1/6 + 1/6 = 1/3.

#### Not mutually exclusive

If the events are not mutually exclusive, then

P(*A* or *B*) = P(*A*) + P(*B*) − P(*A* and *B*).

For example, when drawing a single card at random from a regular deck of cards, the chance of getting a heart or a face card (J, Q, K) (or one that is both) is 13/52 + 12/52 − 3/52 = 11/26, because of the 52 cards of a deck 13 are hearts, 12 are face cards, and 3 are both: here the possibilities included in the "3 that are both" are included in each of the "13 hearts" and the "12 face cards" but should only be counted once.
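As a numeric cross-check of the inclusion-exclusion card example, this Python sketch enumerates the deck directly (card encodings are my own):

```python
from fractions import Fraction
from itertools import product

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "spades", "diamonds", "clubs"]
deck = list(product(ranks, suits))  # 52 equally likely cards

hearts = {c for c in deck if c[1] == "hearts"}        # 13 cards
faces = {c for c in deck if c[0] in ("J", "Q", "K")}  # 12 cards, 3 overlap

# Direct count of the union vs. the inclusion-exclusion formula.
direct = Fraction(len(hearts | faces), 52)
formula = (Fraction(len(hearts), 52) + Fraction(len(faces), 52)
           - Fraction(len(hearts & faces), 52))
print(direct, formula)  # 11/26 11/26
assert direct == formula
```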

### Inverse probability

In probability theory and applications, Bayes' rule relates the odds of event *A*_{1} to event *A*_{2}, before (prior to) and after (posterior to) conditioning on another event *B*. The odds on *A*_{1} to event *A*_{2} is simply the ratio of the probabilities of the two events. When arbitrarily many events *A* are of interest, not just two, the rule can be rephrased as **posterior is proportional to prior times likelihood**, where the proportionality symbol means that the left hand side is proportional to (i.e., equals a constant times) the right hand side as *A* varies, for fixed or given *B* (Lee, 2012; Bertsch McGrayne, 2012). In this form it goes back to Laplace (1774) and to Cournot (1843); see Fienberg (2005).
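A small numeric illustration of the odds form of Bayes' rule (posterior odds = prior odds × likelihood ratio); the probabilities below are made-up values chosen only for illustration:

```python
from fractions import Fraction

# Hypothetical prior probabilities of two competing events A1 and A2,
# and the likelihood of observing evidence B under each.
p_a1, p_a2 = Fraction(1, 100), Fraction(99, 100)
p_b_given_a1 = Fraction(9, 10)
p_b_given_a2 = Fraction(1, 10)

prior_odds = p_a1 / p_a2                        # 1/99
likelihood_ratio = p_b_given_a1 / p_b_given_a2  # 9
posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)                           # 1/11

# Cross-check against direct conditioning via Bayes' theorem.
p_b = p_a1 * p_b_given_a1 + p_a2 * p_b_given_a2
direct = (p_a1 * p_b_given_a1 / p_b) / (p_a2 * p_b_given_a2 / p_b)
assert direct == posterior_odds
```

Note how the normalising constant P(*B*) cancels in the odds ratio, which is exactly why the rule can be stated as "posterior ∝ prior × likelihood".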

### Summary of probabilities

Event | Probability |
---|---|
*A* | P(*A*) ∈ [0, 1] |
not *A* | P(not *A*) = 1 − P(*A*) |
*A* or *B* | P(*A* ∪ *B*) = P(*A*) + P(*B*) − P(*A* ∩ *B*) (= P(*A*) + P(*B*) if *A* and *B* are mutually exclusive) |
*A* and *B* | P(*A* ∩ *B*) = P(*A* ∣ *B*) P(*B*) (= P(*A*) P(*B*) if *A* and *B* are independent) |
*A* given *B* | P(*A* ∣ *B*) = P(*A* ∩ *B*) / P(*B*) |

Template:Hidden end Open to discussion if anyone really feels strongly that all of this stuff needs to be in a non-technical summary article, but I feel pretty strongly that it does not. Bryanrutherford0 (talk) 18:09, 13 December 2013 (UTC)

- I see it has already been reverted. I don't think this is a non-technical article. It should have at least high school math. Bhny (talk) 19:49, 13 December 2013 (UTC)

- Probability Theory is the article about the mathematical theory; the details and axioms and theorems are explored in a number of even more specific articles (Probability axioms, Probability space, etc.). This is the article about the basic concept of probability, including prominently the non-mathematical ideas and interpretations that are referred to by this term. It's not reasonable or helpful for more than a third of the article to be taken up by equations that are already explained more clearly and thoroughly in the articles that actually treat those topics. Let's use summary style here, people! Bryanrutherford0 (talk) 21:35, 13 December 2013 (UTC)