Talk:Likelihood-ratio test - Revision history (last revision by Illia Connell, 2013-02-18: "Stats rating using AWB")
<div>{{Maths rating|frequentlyviewed=yes<br />
|field = probability and statistics<br />
|importance = high<br />
|class = Start<br />
|historical =<br />
}}<br />
{{WikiProject Statistics<br />
|importance = high<br />
|class = start<br />
}}<br />
<br />
{{Technical|date=September 2010}}<br />
== Added General Audience Introduction and Created Examples Contents ==<br />
<br />
The instructions for creating less technical articles suggest starting with a simpler explanation up front and getting into the technical details later. Together with a table of contents, this gives something accessible to those who haven't extensively studied the topic while leaving a meaty article for those interested in something more sophisticated. I don't know if I pulled it off perfectly, but I think it improves the article in a way that whoever placed the "too technical" banner would approve of.<br />
<br />
At the same time I moved the example into its own contents tab to separate it from the theory portion.<br />
<br />
[[User:Jeremiahrounds|Jeremiahrounds]] 18:59, 20 June 2007 (UTC)<br />
<br />
<br />
Would it be possible for anyone to add a proof of why the test statistic follows a chi-squared distribution? <small><span class="autosigned">—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Thedreamshaper|Thedreamshaper]] ([[User talk:Thedreamshaper|talk]] • [[Special:Contributions/Thedreamshaper|contribs]]) 20:52, 17 February 2010 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot--><br />
<br />
I think the introduction might benefit from a rewrite; perhaps this formula would be more appropriate than the asymptotic version: <math>\Lambda(x)=\frac{\sup\{\,L(\theta\mid x):\theta\in\Theta_0\,\}}{\sup\{\,L(\theta\mid x):\theta\in\Theta\,\}}.</math> --[[Special:Contributions/131.111.243.37|131.111.243.37]] ([[User talk:131.111.243.37|talk]]) 10:18, 25 May 2010 (UTC)<br />
<br />
I added a non-technical description about when these tests arise in practice to the first paragraph. Not an expert, but using this page without something like that was not helpful. <span style="font-size: smaller;" class="autosigned">—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/98.143.103.218|98.143.103.218]] ([[User talk:98.143.103.218|talk]]) 04:07, 29 September 2010 (UTC)</span><!-- Template:UnsignedIP --> <!--Autosigned by SineBot--><br />
<br />
== difficult take on the likelihood viewpoint? ==<br />
<br />
I believe this essentially obscures the idea here:<br />
:<math>\Lambda(x)=\frac{\sup\{\,L(\theta\mid x):\theta\in\Theta_0\,\}}{\sup\{\,L(\theta\mid x):\theta\in\Theta\,\}}.</math><br />
<br />
The likelihood-ratio test statistic is the ratio of the probability of the result GIVEN the maximum likelihood estimate over the null parameter set to the probability of the result GIVEN the maximum likelihood estimate over the full parameter set.<br />
<br />
The suprema in that equation fold the maximum-likelihood method into the theory of likelihood ratios.<br />
<br />
I am not making this up. For example, the text Hoel, ''Introduction to Statistical Theory'', uses L(x | θ<sub>0</sub>) / L(x | θ), where each θ is the maximum likelihood estimate applicable to its hypothesis.<br />
<br />
You can state it more simply, as Hoel does, and just note that the thetas are produced by maximum likelihood estimation. So the supremum doesn't need to appear in the theory of likelihood ratios. Then you get a ratio of probabilities that is easier to read and even to think about.<br />
<br />
I actually initially called the offered equation an error, but that is a bridge too far, I think. Still, putting the suprema in a context where you appear to be maximizing something after the data are taken isn't very useful for understanding the actual method.<br />
<br />
[[User:Jeremiahrounds|Jeremiahrounds]] 12:11, 20 June 2007 (UTC)<br />
:I don't think there is any maximum involved in the likelihood-ratio test; you just have to take the ratio of the likelihoods under hypotheses H0 and H1. I'm not an expert in statistics, but I think this equation introduces confusion between the likelihood-ratio test and maximum likelihood estimation. I have never seen it presented this way anyway... [[User:Sylenius|Sylenius]] 14:45, 27 June 2007 (UTC)<br />
<br />
I think Jeremiahrounds is mistaken. In case the MLEs actually exist, the likelihood-ratio test statistic is in fact equal to what Hoel's book says it is, and also it is equal to the expression in [[TeX]] above, which appears in this article. But the likelihood-ratio test statistic can exist even in cases where MLEs don't exist, simply because the sup exists and the max does not, i.e. the sup is not actually attained. Moreover, the problem of non-unique MLEs doesn't matter, since it is only the value of the sup rather than the value of θ where the sup occurs that matters. [[User:Michael Hardy|Michael Hardy]] 19:05, 27 June 2007 (UTC)<br />
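For concreteness, a minimal numerical sketch of the point that the sup form and the MLE plug-in form agree whenever the maximum is attained. The model and data here (a normal mean with known unit variance) are my own illustrative assumptions, not taken from the article:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=50)  # illustrative sample

def log_lik(mu):
    # log-likelihood of N(mu, 1) over the whole sample
    return norm.logpdf(x, loc=mu, scale=1.0).sum()

# Null: Theta0 = {0}. Alternative: Theta = all real mu.
# Over Theta the sup is attained at the MLE, the sample mean,
# so the sup form and the MLE plug-in form give the same statistic.
sup_null = log_lik(0.0)
sup_full = log_lik(x.mean())

lam = np.exp(sup_null - sup_full)
print(0.0 < lam <= 1.0)  # prints True: the ratio never exceeds 1
```

Here the sup over Θ is attained, so substituting the MLE is exactly the Hoel-style form; the sup notation only matters in cases where no maximizer exists.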
<br />
== Untitled ==<br />
<br />
Can someone please replace the awful ascii-art in this article with TeX?<br />
<br />
----<br />
<br />
I may get to that if someone doesn't beat me to it. Hundreds of articles here are in need of TeX to replace what was used here before 2003. [[User:Michael Hardy|Michael Hardy]] 22:57 Feb 2, 2003 (UTC)<br />
<br />
----<br />
<br />
The article uses λ in some places, and Λ in others -- is this intentional, or should they all be one or the other?<br />
<br />
This article needs thorough checking and copyediting.<br />
<br />
----<br />
<br />
(Capital) Λ is the most frequently used notation for the test statistic.<br />
[[User:Michael Hardy|Michael Hardy]] 20:12 Feb 4, 2003 (UTC)<br />
<br />
Can the likelihood-ratio test be used in place of the F-test for a fixed-effects model? Are there any differences from the F-test in this case? What about using the LRT for testing fixed effects in a mixed model?<br />
<br />
::The F-test '''is''' the likelihood ratio test in such models. [[User:Michael Hardy|Michael Hardy]] 22:30, 3 September 2005 (UTC)<br />
<br />
----<br />
<br />
Hi. I may be misguided or mistaken here; I'm hardly an expert. But I think the definition of the ratio given is inconsistent with the test statistic given. The unrestricted numerator will be larger than the restricted denominator, so the ratio will be greater than 1, its log will be positive, and -2 log Λ will be negative, so it can hardly be chi-square distributed. I think that either the ratio should be inverted, or the test statistic multiplied by -1, to keep things consistent. (My apologies again if I'm making a basic mistake, a possibility of which the likelihood is high.) [[User:Stevewaldman|Stevewaldman]] ([[User talk:Stevewaldman|talk]]) 00:58, 20 January 2008 (UTC)<br />
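A minimal numeric check of this orientation point, using an assumed binomial toy example (my own sketch, not from the article): with the restricted likelihood in the numerator, the ratio is at most 1 and -2 log Λ is non-negative, as a chi-square variate must be.

```python
import math
from scipy.stats import binom

# Toy data: 60 heads in 100 tosses.
# Null: p = 0.5. Alternative: p free, with MLE p-hat = 0.6.
k, n = 60, 100
lik_null = binom.pmf(k, n, 0.5)
lik_full = binom.pmf(k, n, k / n)

lam = lik_null / lik_full   # restricted over unrestricted: at most 1
stat = -2 * math.log(lam)   # non-negative
print(lam <= 1, stat >= 0)  # prints: True True
```

Inverting the ratio, as the comment suggests, flips the sign of the log, which is why the restricted model must go in the numerator for the -2 log Λ convention.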
<br />
== "asymptotically" ==<br />
<br />
"If the null hypothesis is true, then −2 log Λ will be asymptotically χ2 distributed"<br />
The validity conditions of this theorem should be given. "Asymptotically" as what tends to what value?<br />
<br />
:I have now answered this question in the article. [[User:Michael Hardy|Michael Hardy]] 02:36, 28 October 2005 (UTC)<br />
<br />
:: There's really no further restriction on the random variables ("n independent identically distributed random variables")? [[User:Dchudz|Dchudz]] 15:22, 13 July 2007 (UTC)<br />
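One way to read "asymptotically" here is as the number n of i.i.d. observations tending to infinity. A small simulation sketch (my own, assuming Bernoulli data and the simple null p = 0.5; seed and sample sizes are arbitrary) shows the rejection rate at the χ²(1) 95% cutoff settling near 5%:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
n, reps = 200, 5000
stats = []
for _ in range(reps):
    k = rng.binomial(n, 0.5)      # data generated under the null
    phat = k / n
    if phat in (0.0, 1.0):        # skip degenerate samples
        continue
    ll_null = n * np.log(0.5)
    ll_full = k * np.log(phat) + (n - k) * np.log(1 - phat)
    stats.append(-2 * (ll_null - ll_full))

# Under H0, -2 log Lambda is asymptotically chi2(1), so the
# empirical rejection rate at the 95% cutoff should be near 0.05.
rate = np.mean(np.array(stats) > chi2.ppf(0.95, df=1))
print(round(rate, 3))
```

The discreteness of the binomial keeps the rate from being exactly 5% at finite n, which is consistent with the theorem being an asymptotic statement only.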
<br />
== References ==<br />
<br />
This article lacks references. For instance, who proved that the likelihood-ratio statistic is distributed as <math>\chi^{2}+O_{p}(n^{-1})</math>?<br />
<br />
I believe the critical paper is WILKS, SS (1938): "The Large Sample Distribution of the Likelihood Ratio for Testing Composite Hypotheses," Annals of Mathematical Statistics, 9, 60-62.<br />
<br />
Freely available online at http://projecteuclid.org/euclid.aoms/1177732360 <small>—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/61.18.170.102|61.18.170.102]] ([[User talk:61.18.170.102|talk]]) 18:15, 6 April 2008 (UTC)</small><!-- Template:UnsignedIP --> <!--Autosigned by SineBot--><br />
<br />
== Coins ==<br />
Hi,<br />
I think your example of the coins is fine but needs elaborating.<br />
<ul><br />
<li>You haven't defined m<sub>ij</sub>, which I assume is the probability of event j when the two coins have the same probability of event j. It might be better to call it m<sub>j</sub> than m<sub>ij</sub>.<br />
<li>I think you should put in the equation for the likelihood ratio lambda, then follow it with the -2 log lambda (-2LL) equation<br />
<li>I'm not sure your -2LL equation is right, though I may be wrong. It looks to me as if your -2LL equation corresponds to lambda ''squared'' equalling the ratio of the maximum likelihoods of the data under the two hypotheses.<br />
</ul><br />
Desmond D.Campbell@iop.kcl.ac.uk<br />
[[User:89.241.126.245|89.241.126.245]] 01:36, 24 March 2007 (UTC)<br />
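To make the -2 log lambda point concrete, here is a sketch of the two-coin test with made-up counts (the numbers and the pooled-vs-separate parameterization are my own assumptions, not the article's example): the statistic is -2 log lambda itself, not lambda squared.

```python
import numpy as np

# Hypothetical counts: coin 1 gives 30 heads in 100 tosses, coin 2 gives 45.
k1, n1, k2, n2 = 30, 100, 45, 100

def bin_ll(k, n, p):
    # binomial log-likelihood, dropping the constant binomial coefficient
    return k * np.log(p) + (n - k) * np.log(1 - p)

# Null: both coins share one probability (pooled MLE).
# Alternative: each coin has its own MLE.
p_pool = (k1 + k2) / (n1 + n2)
ll_null = bin_ll(k1, n1, p_pool) + bin_ll(k2, n2, p_pool)
ll_alt = bin_ll(k1, n1, k1 / n1) + bin_ll(k2, n2, k2 / n2)

stat = -2 * (ll_null - ll_alt)   # -2 log lambda, NOT lambda squared
print(stat > 0)  # prints True: separate MLEs never fit worse than pooled
```

Writing the ratio lambda first and then taking -2 log of it, as the comment suggests, gives exactly this quantity.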
<br />
== This page is FUNDAMENTALLY WRONG. ==<br />
<br />
Where to begin? A likelihood-ratio test is for simple-vs-simple hypotheses. The test statistic given is a generalized, or maximum, likelihood-ratio statistic. It may commonly be referred to in conversation as an LRT, but no competent mathematical statistics text will refer to it as such.<br />
<br />
The distinction is critical. For example, the Neyman-Pearson lemma, mentioned in the article, is directly applicable only to the simple-vs-simple test. It may be extended to some composite alternatives (UMP tests) through, e.g., monotone likelihood ratios. For most practical composite hypotheses, the best available results are more restrictive, e.g. UMPU.<br />
<br />
As for the flag about "too technical for a general audience": blah. There is no choice; one has to understand some mathematical statistics to have a chance of understanding LRTs. Conversely, a "general audience" will have little concern with LRTs.<br />
<br />
Anyway, another page for the "expert needed" flag... --[[User:Zaqrfv|Zaqrfv]] ([[User talk:Zaqrfv|talk]]) 09:37, 25 August 2008 (UTC)<br />
<br />
::: "Zaqrfv", your main point is wrong. It is true that the LR test referred to in the Neyman&ndash;Pearson lemma is for simple-versus-simple. But to say that no respectable text will use this term for the generalized version is wrong. It's quite commonplace. [[User:Michael Hardy|Michael Hardy]] ([[User talk:Michael Hardy|talk]]) 20:51, 17 March 2009 (UTC)<br />
<br />
:Blah? I wholeheartedly disagree that this article can't be directed at a more general audience (e.g. physicians wanting to interpret the diagnostic validity of a test). The more complex stuff is fine towards the end of the article, but let's put the accessible stuff up front. Currently, the Wikipedia article is one of the least accessible articles on LRs on the web. After having read this article, I still have no idea what they are. [[Special:Contributions/164.111.16.221|164.111.16.221]] ([[User talk:164.111.16.221|talk]]) 13:50, 5 November 2008 (UTC)<br />
<br />
Hi, I think that the definition given for the ratio is wrong:<br />
"The numerator corresponds to the maximum probability of an observed result under the null hypothesis. The denominator corresponds to the maximum probability of an observed result under the alternative hypothesis."<br />
I was checking some books (for example, ''Mathematics and Statistics for Science'', page 157) and the definition is the other way around.<br />
The interpretation then needs another review as well. <small><span class="autosigned">—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Isapedraza|Isapedraza]] ([[User talk:Isapedraza|talk]] • [[Special:Contributions/Isapedraza|contribs]]) 12:52, 26 February 2009 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot--><br />
<br />
: It can be done either way; you just need to say that in one case you reject the null if the ratio is too big and in the other case if it's too small; the test is the same either way (in the sense that any dataset will lead to rejection in one case if and only if it leads to rejection in the other case). [[User:Michael Hardy|Michael Hardy]] ([[User talk:Michael Hardy|talk]]) 20:51, 17 March 2009 (UTC)<br />
<br />
:: A problem might be in the '''Criticism > Practical''' paragraph, which states that a disease is present if the likelihood ratio is ''large''. This would be the other way around if we want to be consistent with the definition given. [[User:Sjlver|Jonas Wagner]] ([[User talk:Sjlver|talk]]) 13:16, 10 June 2010 (UTC)<br />
<br />
== What is f(.) ==<br />
Are we talking about the cdf or the pdf? The probability that x is observed exactly as is, or that x or something more extreme than x was observed? [[User:Cancan101|cancan101]] ([[User talk:Cancan101|talk]]) 03:02, 18 February 2009 (UTC)<br />
: In standard usage a lower-case ''&fnof;'' is the pdf, and capital ''F'' is the cdf. [[User:Michael Hardy|Michael Hardy]] ([[User talk:Michael Hardy|talk]]) 22:09, 20 March 2009 (UTC)<br />
<br />
== Dubious ==<br />
<br />
I have revised the section, including the para marked dubious. Is it better/good enough? Otherwise give details of apparent problem points. [[User:Melcombe|Melcombe]] ([[User talk:Melcombe|talk]]) 10:16, 18 February 2009 (UTC)<br />
: I have revised the section, the paragraph does not seem to hold good under the revised definition and hence is omitted. [[User:Kniwor|Kniwor]] ([[User talk:Kniwor|talk]]) 18:58, 23 August 2009 (UTC)<br />
<br />
==Inconsistencies (Revised for improvement)==<br />
<br />
The sections and the definitions(though correct) seem inconsistent to me, and thoroughly confusing for a reader unfamiliar with the topic. I have revised and rewritten the first two sections to avoid any confusion and make things clear and consistent. Please point out any errors. [[User:Kniwor|Kniwor]] ([[User talk:Kniwor|talk]]) 18:57, 23 August 2009 (UTC)<br />
<br />
<br />
==The ratio==<br />
Since the test is for nested models, it is better to state it in the following way:<br />
<math>D=2\left(L\left(\text{unconstrained}\right)-L\left(\text{constrained}\right)\right),</math> where <math>L</math> denotes the maximized log-likelihood,<br />
<br />
rather than the original articulation in the article.<br />
<br />
For the non-logarithmized version, the ratio is <math>\frac{l\left(\text{unconstrained}\right)}{l\left(\text{constrained}\right)}</math>, with <math>l</math> the maximized likelihood.<br />
<br />
[[User:Jackzhp|Jackzhp]] ([[User talk:Jackzhp|talk]]) 22:18, 9 February 2011 (UTC)<br />
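The two statements above are the same test on the log scale and the raw scale; a tiny sketch with hypothetical log-likelihood values (not from any real fit) makes the relation explicit.

```python
import math

# Hypothetical fitted log-likelihoods for two nested models
ll_constrained = -120.7
ll_unconstrained = -118.2

D = 2 * (ll_unconstrained - ll_constrained)        # log form
lam = math.exp(ll_constrained - ll_unconstrained)  # likelihood ratio
print(round(D, 1), math.isclose(D, -2 * math.log(lam)))  # prints: 5.0 True
```

So D is just -2 log of the constrained-over-unconstrained likelihood ratio, which is why the two articulations are interchangeable.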
<br />
<br />
==Wilks' theorem==<br />
Can someone put up a reference so we can see where to look for its precise statement? [[User:Jackzhp|Jackzhp]] ([[User talk:Jackzhp|talk]]) 22:21, 9 February 2011 (UTC)<br />
:I've added the relevant ref: {{cite doi|10.1214/aoms/1177732360}} --[[User:Qwfp|Qwfp]] ([[User talk:Qwfp|talk]]) 19:37, 11 February 2011 (UTC)<br />
<br />
== Background? ==<br />
<br />
Wasn't the likelihood ratio test a result of [[Søren Johansen]]'s work, or am I mistaken? Shouldn't he be mentioned in the Background section? I've only ever heard this referred to as ''Johansen's likelihood ratio'' and ''Johansen's likelihood ratio test''. <span style="font-size: smaller;" class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/64.71.89.15|64.71.89.15]] ([[User talk:64.71.89.15|talk]]) 19:34, 29 November 2011 (UTC)</span><!-- Template:Unsigned IP --> <!--Autosigned by SineBot--><br />
:I think Johansen just developed a specific likelihood ratio test for [[cointegration]]. He was born in 1939 and [[Wilks' theorem]] about the asymptotic distribution of the log-likelihood ratio dates from 1938, so it seems improbable that likelihood ratio tests in general are a result of Johansen's work. [[User:Qwfp|Qwfp]] ([[User talk:Qwfp|talk]]) 20:10, 29 November 2011 (UTC)<br />
<br />
::Thanks! I understand now. Apologies for forgetting to login and sign my previous post - didn't realize I was logged out. [[User:John Shandy`|<span style="font: bold italic 11pt Candara; text-shadow: #8ca5bd 0.1em 0.1em 0.1em;"><span style="color: #2c7c9f">John</span> <span style="color: #1d5575">Shandy`</span></span>]] &bull; [[User_talk:John Shandy`|talk]] 20:57, 29 November 2011 (UTC)<br />
<br />
== Definition of Deviance is wrong ==<br />
<br />
The definition of deviance on the page as of 25 January 2013 is wrong.<br />
<br />
It should be: -2 ln [ likelihood of fitted model / likelihood of saturated model ],<br />
which is the correct definition from Hosmer and Lemeshow's ''Applied Logistic Regression'', p. 13. <span style="font-size: smaller;" class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/62.242.0.66|62.242.0.66]] ([[User talk:62.242.0.66|talk]]) 11:29, 25 January 2013 (UTC)</span><!-- Template:Unsigned IP --> <!--Autosigned by SineBot--><br />
<br />
:Deviance is not mentioned in this article at all. There is a different quantity denoted by ''D''. [[Special:Contributions/81.98.35.149|81.98.35.149]] ([[User talk:81.98.35.149|talk]]) 19:19, 25 January 2013 (UTC)</div>