{{Probability distribution|
  pdf_image  =|
  cdf_image  =|
  name      =Multinomial|
  type      =mass|
  parameters =<math>n > 0</math> number of trials ([[integer]])<br /><math>p_1, \ldots, p_k</math> event probabilities (<math>\Sigma p_i = 1</math>)|
  support    =<math>X_i \in \{0,\dots,n\}</math><br><math>\Sigma X_i = n\!</math>|
  pdf        =<math>\frac{n!}{x_1!\cdots x_k!} p_1^{x_1} \cdots p_k^{x_k}</math>|
  cdf        =|
  mean      =<math>E\{X_i\} = np_i</math>|
  median    =|
  mode      =|
  variance  =<math>\textstyle{\mathrm{Var}}(X_i) = n p_i (1-p_i)</math><br><math>\textstyle {\mathrm{Cov}}(X_i,X_j) = - n p_i p_j~~(i\neq j)</math>|
  skewness  =|
  kurtosis  =|
  entropy    =|
  mgf        =<math>\biggl( \sum_{i=1}^k p_i e^{t_i} \biggr)^n</math>|
  char      =<math> \left(\sum_{j=1}^k p_je^{it_j}\right)^n</math> where <math>i^2= -1</math>|
  pgf = <math>\biggl( \sum_{i=1}^k p_i z_i \biggr)^n\text{ for }(z_1,\ldots,z_k)\in\mathbb{C}^k</math>|
  conjugate  =[[Dirichlet distribution|Dirichlet]]: <math>\mathrm{Dir}(\alpha+x)</math>, where <math>x</math> is the vector of observed counts|
}}
In [[probability theory]], the '''multinomial distribution''' is a generalization of the [[binomial distribution]]. For ''n'' [[statistical independence|independent]] trials each of which leads to a success for exactly one of ''k'' categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories.
 
The binomial distribution is the [[probability distribution]] of the number of
successes for one of just two categories in ''n'' independent [[Bernoulli trial]]s, with the same probability of success on each trial. In a multinomial distribution, the analog of the Bernoulli distribution is the [[categorical distribution]], where each trial results in exactly one of some fixed finite number ''k'' possible outcomes, with probabilities ''p''<sub>1</sub>, ..., ''p''<sub>''k''</sub> (so that ''p''<sub>''i''</sub>&nbsp;≥&nbsp;0 for ''i''&nbsp;=&nbsp;1,&nbsp;...,&nbsp;''k'' and <math>\sum_{i=1}^k p_i = 1</math>), and there are ''n'' independent trials.  Then if the random variables ''X''<sub>''i''</sub> indicate the number of times outcome number ''i'' is observed over the ''n'' trials, the vector ''X''&nbsp;=&nbsp;(''X''<sub>1</sub>,&nbsp;...,&nbsp;''X''<sub>''k''</sub>) follows a multinomial distribution with parameters ''n'' and '''p''', where '''p'''&nbsp;=&nbsp;(''p''<sub>1</sub>,&nbsp;...,&nbsp;''p''<sub>''k''</sub>).
 
Note that, in some fields, such as [[natural language processing]], the categorical and multinomial distributions are [[conflate]]d, and it is common to speak of a "multinomial distribution" when a [[categorical distribution]] is actually meant. This stems from the fact that it is sometimes convenient to express the outcome of a categorical distribution as a "1-of-K" vector (a vector with one element containing a 1 and all other elements containing a 0) rather than as an integer in the range <math>1 \dots K</math>; in this form, a categorical distribution is equivalent to a multinomial distribution over a single observation.
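 
As a quick illustration, the following minimal Python sketch (assuming [[SciPy]]'s <code>scipy.stats.multinomial</code>; the probability values are hypothetical) shows that a categorical distribution written in 1-of-K form is a multinomial distribution with ''n''&nbsp;=&nbsp;1:

<syntaxhighlight lang="python">
from scipy.stats import multinomial

p = [0.2, 0.3, 0.5]  # hypothetical category probabilities

for i in range(len(p)):
    one_hot = [1 if j == i else 0 for j in range(len(p))]
    # With n = 1, the multinomial pmf of the one-hot vector for outcome i
    # is simply p[i], i.e., the categorical pmf.
    print(one_hot, multinomial.pmf(one_hot, n=1, p=p))
</syntaxhighlight>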
 
==Specification==
 
===Probability mass function===
Suppose one performs an experiment of drawing ''n'' balls from a bag containing balls of ''k'' different categories, replacing each ball after it is drawn; balls of the same category are indistinguishable. Denote by ''X''<sub>''i''</sub> the number of balls drawn of category ''i'' (''i'' = 1, ..., ''k''), and by ''p''<sub>''i''</sub> the probability that a given draw falls in category ''i''. The [[probability mass function]] of this multinomial distribution is:
 
: <math> \begin{align}
f(x_1,\ldots,x_k;n,p_1,\ldots,p_k) & {} = \Pr(X_1 = x_1\mbox{ and }\dots\mbox{ and }X_k = x_k) \\  \\
& {} = \begin{cases} { \displaystyle {n! \over x_1!\cdots x_k!}p_1^{x_1}\cdots p_k^{x_k}}, \quad &
\mbox{when } \sum_{i=1}^k x_i=n \\  \\
0 & \mbox{otherwise,} \end{cases}
\end{align}
</math>
 
for non-negative integers ''x''<sub>1</sub>, ..., ''x''<sub>''k''</sub>.
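 
As a concrete illustration, the following minimal Python sketch implements this formula using only the standard library (the example numbers are arbitrary):

<syntaxhighlight lang="python">
import math

def multinomial_pmf(x, n, p):
    # Piecewise definition above: zero unless the counts sum to n.
    if sum(x) != n:
        return 0.0
    coef = math.factorial(n)
    for xi in x:
        coef //= math.factorial(xi)  # n! / (x_1! ... x_k!), always an integer
    return coef * math.prod(pi ** xi for pi, xi in zip(p, x))

# With k = 2 this reduces to the binomial pmf:
print(multinomial_pmf([2, 3], 5, [0.4, 0.6]))  # 0.3456 = C(5,2) 0.4^2 0.6^3
</syntaxhighlight>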
 
==Visualization==
 
=== As slices of generalized Pascal's triangle ===
 
Just as one can interpret the [[binomial distribution]] as (normalized) one-dimensional slices of [[Pascal's triangle]], so too can one interpret the multinomial distribution as 2D (triangular) slices of [[Pascal's pyramid]], or 3D/4D/higher (pyramid-shaped) slices of higher-dimensional analogs of Pascal's triangle. This reveals an interpretation of the [[Range (mathematics)|range]] of the distribution: discretized equilateral "pyramids" in arbitrary dimension, i.e. a [[simplex]] with a grid.
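 
For instance, the following Python sketch prints the triangular slice of Pascal's pyramid at ''n''&nbsp;=&nbsp;4; dividing each entry by 3<sup>4</sup> gives the multinomial probabilities for ''k''&nbsp;=&nbsp;3 equiprobable categories:

<syntaxhighlight lang="python">
import math

n = 4  # layer of Pascal's pyramid (number of trials)
for x1 in range(n + 1):
    row = [math.factorial(n) // (math.factorial(x1) * math.factorial(x2) * math.factorial(n - x1 - x2))
           for x2 in range(n - x1 + 1)]
    print(row)  # trinomial coefficients with x1 fixed
</syntaxhighlight>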
 
=== As polynomial coefficients ===
 
Similarly, just as one can interpret the [[binomial distribution]] as the polynomial coefficients of <math>(p x_1 + (1-p) x_2)^n</math> when expanded, one can interpret the multinomial distribution as the coefficients of <math>(p_1 x_1 + p_2 x_2 + \cdots + p_k x_k)^n</math> when expanded. (As with the binomial distribution, the coefficients must sum to 1.) This is the origin of the name "''multinomial'' distribution".
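 
This identity can be checked symbolically; below is a small sketch using [[SymPy]], with hypothetical probability values:

<syntaxhighlight lang="python">
from sympy import Rational, expand, symbols

x1, x2, x3 = symbols('x1 x2 x3')
p1, p2, p3 = Rational(1, 5), Rational(3, 10), Rational(1, 2)  # 0.2, 0.3, 0.5

# The coefficient of x1**a * x2**b * x3**c in the expansion is
# Pr(X1 = a, X2 = b, X3 = c) for n = 2 trials.
poly = expand((p1*x1 + p2*x2 + p3*x3)**2)
print(poly)
print(poly.coeff(x1, 1).coeff(x2, 1))  # 2 * (1/5) * (3/10) = 3/25
</syntaxhighlight>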
 
==Properties==
 
The [[Expected value|expected]] number of times the outcome ''i'' is observed over ''n'' trials is
 
:<math>\operatorname{E}(X_i) = n p_i.\,</math>
 
The [[covariance matrix]] is as follows.  Each diagonal entry is the [[variance]] of a binomially distributed random variable, and is therefore
 
:<math>\operatorname{var}(X_i)=np_i(1-p_i).\,</math>
 
The off-diagonal entries are the [[covariance]]s:
 
:<math>\operatorname{cov}(X_i,X_j)=-np_i p_j\,</math>
 
for ''i'', ''j'' distinct.
 
All covariances are negative because for fixed ''n'', an increase in one component of a multinomial vector requires a decrease in another component.
 
This is a ''k'' &times; ''k'' [[Positive-definite matrix#Negative-definite.2C semidefinite and indefinite matrices|positive-semidefinite]] matrix of rank ''k''&nbsp;&minus;&nbsp;1. In the special case where ''k''&nbsp;=&nbsp;''n'' and where the ''p''<sub>''i''</sub> are all equal, the covariance matrix is the [[centering matrix]].
 
The entries of the corresponding [[Correlation matrix#Correlation matrices|correlation matrix]] are
 
:<math>\rho(X_i,X_i) = 1.</math>
 
:<math>\rho(X_i,X_j) = \frac{\operatorname{cov}(X_i,X_j)}{\sqrt{\operatorname{var}(X_i)\operatorname{var}(X_j)}} = \frac{-p_i  p_j}{\sqrt{p_i(1-p_i) p_j(1-p_j)}} = -\sqrt{\frac{p_i  p_j}{(1-p_i)(1-p_j)}}.</math>
 
Note that the sample size drops out of this expression.
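 
These moments are straightforward to assemble and check numerically; the following is a minimal [[NumPy]] sketch with arbitrary parameter values:

<syntaxhighlight lang="python">
import numpy as np

n, p = 6, np.array([0.2, 0.3, 0.5])

mean = n * p                             # E[X_i] = n p_i
cov = n * (np.diag(p) - np.outer(p, p))  # n p_i (1 - p_i) on the diagonal, -n p_i p_j off it
corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))

# Empirical check against a large simulated sample.
rng = np.random.default_rng(seed=0)
sample = rng.multinomial(n, p, size=200_000)
print(np.allclose(sample.mean(axis=0), mean, atol=0.01))
print(np.allclose(np.cov(sample, rowvar=False), cov, atol=0.02))
</syntaxhighlight>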
 
Each of the ''k'' components separately has a binomial distribution with parameters ''n'' and ''p''<sub>''i''</sub>, for the appropriate value of the subscript ''i''.
 
The [[Support (mathematics)|support]] of the multinomial distribution is the set
 
: <math>\{(n_1,\dots,n_k)\in \mathbb{N}^{k}| n_1+\cdots+n_k=n\}.\,</math>
 
Its number of elements is
 
: <math>{n+k-1 \choose k-1}.</math>
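 
For small cases, the support can be enumerated directly to confirm this count; a brief Python sketch:

<syntaxhighlight lang="python">
import math
from itertools import product

n, k = 6, 3
support = [c for c in product(range(n + 1), repeat=k) if sum(c) == n]
print(len(support), math.comb(n + k - 1, k - 1))  # 28 28
</syntaxhighlight>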
 
==Example==
 
In a recent three-way election for a large country, candidate A received 20% of the votes, candidate B received 30% of the votes, and candidate C received 50% of the votes.  If six voters are selected randomly, what is the probability that there will be exactly one supporter for candidate A, two supporters for candidate B and three supporters for candidate C in the sample?
 
''Note: Since we’re assuming that the voting population is large, it is reasonable and permissible to think of the probabilities as unchanging once a voter is selected for the sample.  Technically speaking this is sampling without replacement, so the correct distribution is the [[Hypergeometric distribution#Multivariate hypergeometric distribution|multivariate hypergeometric distribution]], but the distributions converge as the population grows large.''
 
: <math> \Pr(A=1,B=2,C=3) = \frac{6!}{1! 2! 3!}(0.2^1) (0.3^2) (0.5^3) = 0.135 </math>
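 
The arithmetic can be verified in a few lines of Python:

<syntaxhighlight lang="python">
import math

# 6!/(1! 2! 3!) = 60 orderings, each with probability
# 0.2**1 * 0.3**2 * 0.5**3 = 0.00225.
coef = math.factorial(6) // (math.factorial(1) * math.factorial(2) * math.factorial(3))
print(coef * 0.2**1 * 0.3**2 * 0.5**3)  # 0.135
</syntaxhighlight>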
 
==Sampling from a multinomial distribution==
 
First, reorder the parameters <math>p_1, \ldots, p_k</math> such that they are sorted in descending order (this is only to speed up computation and not strictly necessary). Now, for each trial, draw an auxiliary variable ''X'' from a uniform (0,&nbsp;1) distribution. The resulting outcome is the component
 
: <math>j = \min \left\{ j' \in \{1,\dots,k\} : \sum_{i=1}^{j'} p_i - X \geq 0 \right\}.</math>
 
Then {''X''<sub>''j''</sub> = 1, ''X''<sub>''i''</sub> = 0 for ''i'' ≠ ''j''} is one observation from the multinomial distribution with <math>p_1, \ldots, p_k</math> and ''n''&nbsp;=&nbsp;1.  A sum of independent repetitions of this experiment is an observation from a multinomial distribution with ''n'' equal to the number of such repetitions.
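 
The following Python sketch implements this procedure directly, using only the standard library (the initial sort is the optional speed-up noted above):

<syntaxhighlight lang="python">
import random

def sample_multinomial(n, p):
    order = sorted(range(len(p)), key=lambda i: -p[i])  # descending p, optional
    counts = [0] * len(p)
    for _ in range(n):
        x = random.random()       # auxiliary uniform(0, 1) variable X
        cumulative = 0.0
        for i in order:
            cumulative += p[i]
            if cumulative >= x:   # smallest j with p_1 + ... + p_j - X >= 0
                counts[i] += 1
                break
        else:
            counts[order[-1]] += 1  # guard against floating-point round-off
    return counts

print(sample_multinomial(6, [0.2, 0.3, 0.5]))  # e.g. [1, 2, 3]
</syntaxhighlight>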
 
==Simulating a multinomial distribution==
 
Various methods may be used to simulate a multinomial distribution. A very simple one is to use a random number generator to generate numbers between 0 and 1. First, divide the interval from 0 to 1 into ''k'' subintervals whose lengths equal the probabilities of the ''k'' categories. Then, generate a random number for each of the ''n'' trials and use a logical test to classify each simulated observation into one of the categories.
 
'''Example'''
 
Suppose we have the following probabilities:
 
{| class="wikitable"
|-
| '''Categories''' || 1|| 2|| 3|| 4|| 5|| 6
|-
| '''Probabilities'''|| 0.15|| 0.20 || 0.30|| 0.16|| 0.12|| 0.07
|-
| '''Upper limits of subintervals'''|| 0.15|| 0.35|| 0.65|| 0.81|| 0.93|| 1.00
|}
 
Then, with software such as Excel, one may use the following recipe:
 
{| class="wikitable"
|-
| '''Cells:'''|| Ai|| Bi|| Ci|| ... || Gi
|-
| '''Formulae:''' || =RAND()|| =IF($Ai<0.15,1,0)|| =IF(AND($Ai>=0.15,$Ai<0.35),1,0)|| ... || =IF($Ai>=0.93,1,0)
|}
 
After that, one can use a function such as SUMIF to accumulate the observed results by category and to compute the estimated covariance matrix for each simulated sample.
 
Another way in Excel is to use its discrete random number generator. In that case, the categories must be labeled or relabeled with numeric values.
 
In both cases, the result is a multinomial distribution with ''k'' categories. This is equivalent, with a continuous random distribution, to simulating ''k'' independent standardized normal distributions, or a multinormal distribution ''N''(0,''I'') having ''k'' components identically distributed and statistically independent.
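 
For comparison, the same interval-classification recipe as in the Excel table above can be written as a short [[NumPy]] sketch:

<syntaxhighlight lang="python">
import numpy as np

p = np.array([0.15, 0.20, 0.30, 0.16, 0.12, 0.07])  # the six categories above
upper = np.cumsum(p)        # upper limits of the subintervals: 0.15, 0.35, ..., 1.00

rng = np.random.default_rng()
u = rng.random(1000)        # one uniform number per trial, as in column A
# The logical test: find the first subinterval whose upper limit exceeds u.
category = np.minimum(np.searchsorted(upper, u, side='right'), len(p) - 1)
counts = np.bincount(category, minlength=len(p))
print(counts)               # totals per category, like the SUMIF step
</syntaxhighlight>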
 
==Related distributions==
* When ''k'' = 2, the multinomial distribution is the [[binomial distribution]].
* The continuous analogue is the [[multivariate normal distribution]].
* [[Categorical distribution]], the distribution of each trial; for ''k'' = 2, this is the [[Bernoulli distribution]].
* The [[Dirichlet distribution]] is the [[conjugate prior]] of the multinomial in [[Bayesian statistics]].
* [[Dirichlet-multinomial distribution]].
* [[Beta-binomial model]].
 
== See also ==
* [[Fisher's exact test]]
* [[Multinomial theorem]]
* [[Negative multinomial distribution]]
 
{{No footnotes|date=March 2011}}
 
==References==
 
*{{cite book
| last1 = Evans
| first1 = Merran
|last2= Hastings |first2=Nicholas
|last3= Peacock |first3= Brian
| title = Statistical Distributions
| publisher = Wiley
| year = 2000
| location = New York
| pages = 134–136
| edition = 3rd
| isbn = 0-471-37124-6 }}
 
{{ProbDistributions|multivariate}}
 
{{DEFAULTSORT:Multinomial Distribution}}
[[Category:Discrete distributions]]
[[Category:Multivariate discrete distributions]]
[[Category:Factorial and binomial topics]]
[[Category:Exponential family distributions]]
[[Category:Probability distributions]]
