{{Distinguish2|the use of [[likelihood ratios in diagnostic testing]]}}
{{Multiple issues|
{{Citations missing|date=September 2009}}
{{Expert-subject|Statistics|date=November 2008}}
{{More footnotes|date=November 2010}}
}}
 
In [[statistics]], a '''likelihood ratio test''' is a [[statistical test]] used to compare the fit of two models, one of which (the ''[[null hypothesis|null]] model'') is a special case of the other (the ''[[alternative hypothesis|alternative]] model''). The test is based on the [[likelihood]] ratio, which expresses how many times more likely the data are under one model than the other. This likelihood ratio, or equivalently its [[logarithm]], can then be used to compute a [[p-value]], or compared to a [[critical value#Statistics|critical value]] to decide whether to reject the null model in favour of the alternative model. When the logarithm of the likelihood ratio is used, the statistic is known as a '''log-likelihood ratio statistic''', and the [[probability distribution]] of this test statistic, assuming that the null model is true, can be approximated using '''Wilks' theorem'''.
 
In the case of distinguishing between two models, each of which has no unknown [[statistical parameters|parameters]], use of the likelihood ratio test can be justified by the [[Neyman–Pearson lemma]], which demonstrates that such a test has the highest [[statistical power|power]] among all competitors.<ref name=NP1>{{cite doi|10.1098/rsta.1933.0009}}</ref>
 
==Use==
Each of the two competing models, the null model and the alternative model, is separately fitted to the data and the log-[[likelihood function|likelihood]] recorded. The test statistic (often denoted by ''D'') is twice the difference in these log-likelihoods:
 
: <math>
\begin{align}
D & = -2\ln\left( \frac{\text{likelihood for null model}}{\text{likelihood for alternative model}} \right) \\
&= -2\ln(\text{likelihood for null model}) + 2\ln(\text{likelihood for alternative model}) \\
\end{align}
</math>
 
The model with more parameters will always fit at least as well (have an equal or greater log-likelihood). Whether it fits significantly better and should thus be preferred is determined by deriving the probability or [[p-value]] of the difference&nbsp;''D''. Where the null hypothesis represents a special case of the alternative hypothesis, the [[probability distribution]] of the [[test statistic]] is approximately a [[chi-squared distribution]] with [[degrees of freedom (statistics)|degrees of freedom]] equal to ''df''<sub>2</sub>&nbsp;&minus;&nbsp;''df''<sub>1</sub>.<ref name="Huelsenbeck1997">{{cite jstor|2952500}}</ref> The symbols ''df''<sub>1</sub> and ''df''<sub>2</sub> denote the number of free parameters of models 1 and 2, the null model and the alternative model, respectively.
The test requires nested models, that is, models in which the more complex one can be transformed into the simpler one by imposing constraints on its parameters.<ref>An example using phylogenetic analyses is described at {{cite doi|10.1093/sysbio/45.4.546}}</ref>
 
For example: if the null model has 1 free parameter and a log-likelihood of &minus;8024 and the alternative model has 3 parameters and a log-likelihood of &minus;8012, then the probability of this difference is that of a chi-squared value of 2&middot;(8024&nbsp;&minus;&nbsp;8012)&nbsp;=&nbsp;24 with 3&nbsp;&minus;&nbsp;1&nbsp;=&nbsp;2 degrees of freedom. Certain assumptions must be met for the statistic to follow a chi-squared distribution, and often empirical p-values are computed instead.
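This calculation can be sketched in Python as follows (a minimal illustration only, assuming the SciPy library is available; the log-likelihood values are the made-up numbers from the example above):

<syntaxhighlight lang="python">
# Minimal sketch: p-value for the example above (illustrative numbers, not real data).
from scipy.stats import chi2

loglik_null = -8024.0   # maximised log-likelihood of the null model (1 free parameter)
loglik_alt = -8012.0    # maximised log-likelihood of the alternative model (3 free parameters)

D = 2.0 * (loglik_alt - loglik_null)   # test statistic, here 24
df = 3 - 1                             # difference in the number of free parameters

p_value = chi2.sf(D, df)               # P(chi-squared with df degrees of freedom > D)
print(D, p_value)                      # 24.0 and roughly 6e-06
</syntaxhighlight>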
 
==Background==
{{ref improve section|date=July 2012}}
The '''likelihood ratio''', often denoted by <math>\Lambda</math> (the capital [[Greek alphabet|Greek letter]] [[lambda]]), is the ratio of the maximum values attained by the [[likelihood function]] when its parameters are varied over two different sets, one for the numerator and one for the denominator.
A '''likelihood ratio test''' is a statistical test for making a decision between two hypotheses based on the value of this ratio.
 
It is central to the [[Jerzy Neyman|Neyman]]–[[Egon Pearson|Pearson]] approach to statistical hypothesis testing, and, like statistical hypothesis testing in general, is both widely used and criticized.{{Citation needed|date=July 2012}}
 
==Simple-vs-simple hypotheses==
 
{{Main|Neyman–Pearson lemma}}
 
A statistical model is often a [[parametrized family]] of [[probability density function]]s or [[probability mass function]]s <math>f(x|\theta)</math>. A simple-vs-simple hypothesis test has completely specified models under both the [[Null hypothesis|null]] and [[Alternative hypothesis|alternative]] hypotheses, which for convenience are written in terms of fixed values of a notional parameter <math>\theta</math>:
 
:<math>
\begin{align}
H_0 &:& \theta=\theta_0 ,\\
H_1 &:& \theta=\theta_1 .
\end{align}
</math>
Note that under either hypothesis, the distribution of the data is fully specified; there are no unknown parameters to estimate. The likelihood ratio test statistic can be written as:<ref>Mood, A.M.; Graybill, F.A. (1963)  ''Introduction to the Theory of Statistics'', 2nd edition. McGraw-Hill ISBN 978-0070428638 (page 286)</ref><ref>Kendall, M.G., Stuart, A. (1973) ''The Advanced Theory of Statistics, Volume 2'', Griffin. ISBN 0852642156 (page 234)</ref>
:<math>
\Lambda(x) = \frac{ L(\theta_0|x) }{ L(\theta_1|x) } = \frac{ f(x|\theta_0) }{ f(x|\theta_1) }
</math>
or
:<math>\Lambda(x)=\frac{L(\theta_0\mid x)}{\sup\{\,L(\theta\mid x):\theta\in\{\theta_0,\theta_1\}\}},</math>
 
where <math>L(\theta|x)</math> is the [[likelihood function]]. Note that some references may use the reciprocal as the definition.<ref>Cox, D. R. and Hinkley, D. V. (1974) ''Theoretical Statistics'', Chapman and Hall. (page 92)</ref> In the form stated here, the likelihood ratio is small if the alternative model fits better than the null model, and the likelihood ratio test provides the following decision rule:
 
:If <math>\Lambda > c </math>, do not reject <math>H_0</math>;
 
:If <math>\Lambda < c </math>, reject <math>H_0</math>;
 
:Reject with probability <math>q</math> if <math>\Lambda = c .</math>
The values <math>c, \; q</math> are usually chosen to obtain a specified [[significance level]] <math>\alpha</math>, through the relation <math>q\cdot P(\Lambda=c \;|\; H_0) + P(\Lambda < c \; | \; H_0) = \alpha </math>.{{cn|date=June 2012}} The [[Neyman–Pearson lemma]] states that this likelihood ratio test is the [[Statistical power|most powerful]] among all level-<math>\alpha</math> tests for this problem.<ref name=NP1/>
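For illustration, the following Python sketch applies this decision rule to an assumed simple-vs-simple problem (H<sub>0</sub>: the data are i.i.d. N(0,&nbsp;1) versus H<sub>1</sub>: i.i.d. N(1,&nbsp;1), using NumPy and SciPy). Because <math>\Lambda</math> has a continuous distribution in this example, <math>P(\Lambda = c \mid H_0) = 0</math> and no randomisation is needed:

<syntaxhighlight lang="python">
# Minimal sketch of a simple-vs-simple likelihood ratio test on assumed hypotheses:
# H0: x_i ~ N(0, 1) i.i.d.   versus   H1: x_i ~ N(1, 1) i.i.d.
import numpy as np
from scipy.stats import norm

def likelihood_ratio(x, theta0=0.0, theta1=1.0):
    """Lambda(x) = f(x | theta0) / f(x | theta1) for i.i.d. N(theta, 1) data."""
    log_lambda = norm.logpdf(x, loc=theta0).sum() - norm.logpdf(x, loc=theta1).sum()
    return np.exp(log_lambda)

alpha = 0.05
n = 25
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, size=n)   # data generated under H0 for this sketch

# Here Lambda(x) = exp(n/2 - sum(x)), so "Lambda small" is equivalent to "sample mean large";
# under H0 the sample mean is N(0, 1/n), which gives the level-alpha cutoff below.
cutoff = norm.ppf(1.0 - alpha) / np.sqrt(n)
reject_h0 = x.mean() > cutoff

print(likelihood_ratio(x), reject_h0)
</syntaxhighlight>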
 
== Definition (likelihood ratio test for composite hypotheses) ==
A null hypothesis is often stated by saying the parameter <math>\theta</math> is in a specified subset <math>\Theta_0</math> of the parameter space  <math>\Theta</math>.
 
:<math>
\begin{align}
H_0 &:& \theta \in \Theta_0\\
H_1 &:& \theta \in \Theta_0^{\complement}
\end{align}
</math>
 
The [[likelihood function]] is <math>L(\theta|x) = f(x|\theta)</math> (with <math>f(x|\theta)</math> being the pdf or pmf), regarded as a function of the parameter <math>\theta</math> with <math>x</math> held fixed at the value that was actually observed, ''i.e.'', the data.  The '''likelihood ratio test statistic''' is<ref>Casella, George; Berger, Roger L. (2001) ''Statistical Inference'', Second edition. ISBN 978-0534243128 (page 375)</ref>
 
: <math>\Lambda(x)=\frac{\sup\{\,L(\theta\mid x):\theta\in\Theta_0\,\}}{\sup\{\,L(\theta\mid x):\theta\in\Theta\,\}}.</math>
 
Here, <math>\sup</math> denotes the [[supremum]].
 
A '''likelihood ratio test''' is any test with critical region (or rejection region) of the form <math>\{x|\Lambda \le c\}</math> where <math>c</math> is any number satisfying <math>0\le c\le 1</math>. Many common test statistics such as the [[Z-test]], the [[F-test]], [[Pearson's chi-squared test]] and the [[G-test]] are tests for nested models and can be phrased as log-likelihood ratios or approximations thereof.
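For a concrete sketch (an assumed example, not drawn from the references above), the statistic can be computed in Python for i.i.d. exponential data, testing whether the rate equals a fixed value against the unrestricted alternative. The supremum in the denominator is attained at the unrestricted maximum-likelihood estimate of the rate, the reciprocal of the sample mean:

<syntaxhighlight lang="python">
# Minimal sketch of the likelihood ratio statistic for a composite test on assumed data:
# H0: rate = rate0   versus   H1: rate > 0 unrestricted, with i.i.d. Exponential(rate) data.
import numpy as np

def log_likelihood(rate, x):
    """Log-likelihood of i.i.d. Exponential(rate) data: n*log(rate) - rate*sum(x)."""
    return len(x) * np.log(rate) - rate * np.sum(x)

def lrt_statistic(x, rate0):
    rate_hat = 1.0 / np.mean(x)   # unrestricted MLE; attains the supremum over Theta
    log_lambda = log_likelihood(rate0, x) - log_likelihood(rate_hat, x)
    return np.exp(log_lambda)     # Lambda(x), always between 0 and 1

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=50)   # simulated data with true rate 1

print(lrt_statistic(x, rate0=1.0))   # close to 1: the restricted model fits almost as well
print(lrt_statistic(x, rate0=3.0))   # much smaller: the restricted model fits poorly
</syntaxhighlight>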
 
===Interpretation===
 
Being a function of the data <math>x</math>, the likelihood ratio <math>\Lambda(x)</math> is a [[statistic]].  The '''likelihood ratio test''' rejects the null hypothesis if the value of this statistic is too small.  How small is too small depends on the significance level of the test, ''i.e.'', on what probability of [[Type I error]] is considered tolerable (a "Type I" error is the rejection of a null hypothesis that is true).
 
The [[numerator]] corresponds to the maximum likelihood of the observed outcome under the [[null hypothesis]]. The [[denominator]] corresponds to the maximum likelihood of the observed outcome with the parameters varied over the whole parameter space. Since the numerator of this ratio can never exceed the denominator, the likelihood ratio lies between 0 and 1.  Low values of the likelihood ratio mean that the observed result was much less likely to occur under the null hypothesis than under the alternative.  High values of the statistic mean that the observed outcome was nearly as likely to occur under the null hypothesis as under the alternative, so the null hypothesis cannot be rejected.
 
==={{anchor|Wilks' theorem}} Distribution: Wilks' theorem===
If the distribution of the likelihood ratio corresponding to a particular null and alternative hypothesis can be explicitly determined, then it can directly be used to form decision regions (to accept/reject the null hypothesis).  In most cases, however, the exact distribution of the likelihood ratio corresponding to specific hypotheses is very difficult to determine.  A convenient result, attributed to [[Samuel S. Wilks]], says that as the sample size <math>n</math> approaches [[Infinity|<math>\infty</math>]], the test statistic <math>-2 \log(\Lambda)</math> for a nested model will be asymptotically [[chi-squared distribution|<math>\chi^2</math> distributed]] with [[degrees of freedom (statistics)|degrees of freedom]] equal to the difference in dimensionality of <math>\Theta</math> and <math>\Theta_0</math>.<ref>{{cite doi|10.1214/aoms/1177732360}}</ref> This means that for a great variety of hypotheses, a practitioner can compute the likelihood ratio <math>\Lambda</math> for the data and compare <math>-2\log(\Lambda)</math> to the chi-squared value corresponding to a desired [[statistical significance]] as an approximate statistical test.
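As an informal check of this approximation (a simulation sketch only, reusing the assumed exponential-rate example from the sketch above), one can repeatedly generate data under the null hypothesis, compute <math>-2\log(\Lambda)</math>, and compare the empirical rejection rate with the nominal level of a <math>\chi^2</math> test with one degree of freedom:

<syntaxhighlight lang="python">
# Simulation sketch of Wilks' theorem for the assumed exponential example (H0: rate = 1):
# under H0, -2*log(Lambda) should be approximately chi-squared with 1 degree of freedom.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
n, reps = 100, 10000
stats = np.empty(reps)

for r in range(reps):
    x = rng.exponential(scale=1.0, size=n)   # data drawn under the null hypothesis
    rate_hat = 1.0 / x.mean()                # unrestricted MLE of the rate
    log_lambda = (n * np.log(1.0) - 1.0 * x.sum()) - (n * np.log(rate_hat) - rate_hat * x.sum())
    stats[r] = -2.0 * log_lambda

# Fraction of simulated statistics exceeding the chi-squared(1) critical value at alpha = 0.05;
# by Wilks' theorem this should be close to 0.05 for large n.
print(np.mean(stats > chi2.ppf(0.95, df=1)))
</syntaxhighlight>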
 
== Examples ==
=== Coin tossing ===
As an example, in the case of Pearson's test, we might compare two coins to determine whether they have the same probability of coming up heads.  Our observations can be put into a contingency table with rows corresponding to the coin and columns corresponding to heads or tails.  The elements of the contingency table will be the number of times each coin came up heads or tails.  The contents of this table are our observation <math>X</math>.
<table align=center>
<tr>
  <td></td>
  <td> '''Heads''' </td>
  <td> '''Tails'''</td>
</tr><tr>
  <td> '''Coin 1''' </td>
  <td align=center> <math>k_{1H}</math> </td>
  <td align=center> <math>k_{1T}</math> </td>
</tr><tr>
  <td> '''Coin 2''' </td>
  <td align=center> <math>k_{2H}</math> </td>
  <td align=center> <math>k_{2T}</math> </td>
</tr>
</table>
Here <math>\Theta</math> consists of the parameters <math>p_{1H}</math>, <math>p_{1T}</math>, <math>p_{2H}</math>, and <math>p_{2T}</math>, which are the probabilities that coins 1 and 2 come up heads or tails.  The hypothesis space <math>H</math> is defined by the usual constraints on a distribution, <math>0 \le p_{ij} \le 1</math> and <math> p_{iH} + p_{iT} = 1 </math>.  The null hypothesis <math>H_0</math> is the subspace where <math> p_{1j} = p_{2j}</math>.  In all of these constraints, <math>i = 1,2</math> and <math>j = H,T</math>.
 
Writing <math>n_{ij}</math> for the maximum-likelihood estimates of <math>p_{ij}</math> under the hypothesis <math>H</math>, the maximum likelihood is achieved with
 
:<math>n_{ij} = \frac{k_{ij}}{k_{iH}+k_{iT}}.</math>
 
Writing <math>m_{ij}</math> for the maximum-likelihood estimates of <math>p_{ij}</math> under the null hypothesis <math>H_0</math>, the maximum likelihood is achieved with
 
:<math>m_{ij} = \frac{k_{1j}+k_{2j}}{k_{1H}+k_{2H}+k_{1T}+k_{2T}},</math>
 
which does not depend on the coin <math>i</math>.
 
The hypothesis and null hypothesis can be rewritten slightly so that they satisfy the regularity conditions needed for the logarithm of the likelihood ratio to have the desired asymptotic distribution.  Since the constraint reduces the two-dimensional <math>H</math> to the one-dimensional <math>H_0</math>, the asymptotic distribution for the test will be <math>\chi^2(1)</math>, the <math>\chi^2</math> distribution with one degree of freedom.
 
For the general contingency table, we can write the log-likelihood ratio statistic as
 
:<math>-2 \log \Lambda = 2\sum_{i, j} k_{ij} \log \frac{n_{ij}}{m_{ij}}.</math>
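With assumed counts, this calculation can be sketched in Python as follows (the counts <math>k_{ij}</math> below are made up for illustration):

<syntaxhighlight lang="python">
# Minimal sketch of the coin-tossing statistic above, using made-up counts k_ij:
# -2*log(Lambda) = 2 * sum_ij k_ij * log(n_ij / m_ij), compared with chi-squared(1).
import numpy as np
from scipy.stats import chi2

k = np.array([[43, 57],    # coin 1: heads, tails
              [62, 38]])   # coin 2: heads, tails

n = k / k.sum(axis=1, keepdims=True)   # per-coin MLEs n_ij under the hypothesis H
m = k.sum(axis=0) / k.sum()            # pooled MLEs m_j under the null hypothesis H0

stat = 2.0 * np.sum(k * np.log(n / m)) # log-likelihood ratio (G) statistic
p_value = chi2.sf(stat, df=1)          # one degree of freedom, as argued above
print(stat, p_value)
</syntaxhighlight>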
 
==References==
{{Reflist|2}}
 
==External links==
* [http://www.itl.nist.gov/div898/handbook/apr/section2/apr233.htm Practical application of likelihood ratio test described]
* [http://faculty.vassar.edu/lowry/clin2.html Richard Lowry's Predictive Values and Likelihood Ratios] Online Clinical Calculator
 
{{Statistics|inference}}
 
{{DEFAULTSORT:Likelihood-Ratio Test}}
[[Category:Statistical ratios]]
[[Category:Statistical tests]]
[[Category:Statistical theory]]
