In [[statistics]], '''''G''-tests''' are [[likelihood ratio test|likelihood-ratio]] or [[maximum likelihood]] [[statistical significance]] tests that are increasingly being used in situations where [[chi-squared test]]s were previously recommended.{{citation needed|date=June 2012}}
The general formula for ''G'' is
:<math> G = 2\sum_{i} {O_{i} \cdot \ln(O_{i}/E_{i}) }, </math>
where O<sub>i</sub> is the observed frequency in a cell, E<sub>i</sub> is the expected frequency under the null hypothesis, ln denotes the [[natural logarithm]] (log to the base ''[[e (mathematical constant)|e]]''), and the sum is taken over all non-empty cells.
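As an illustration, the statistic can be computed directly from the observed and expected counts. The following is a minimal sketch in Python; the cell counts are made up purely for the example and do not come from any real data set.
<syntaxhighlight lang="python">
import math

# Hypothetical goodness-of-fit example with three cells (made-up counts).
observed = [250, 125, 125]   # observed frequencies O_i
expected = [300, 100, 100]   # expected frequencies E_i under the null hypothesis

# G = 2 * sum_i O_i * ln(O_i / E_i), taken over all non-empty cells
G = 2 * sum(o * math.log(o / e) for o, e in zip(observed, expected) if o > 0)
print(G)   # approximately 20.41 for these made-up counts
</syntaxhighlight>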
''G''-tests have come into increasing use, in part because they have been recommended at least since the 1981 edition of the popular statistics textbook by Sokal and Rohlf.<ref>[[Robert R. Sokal|Sokal, R. R.]] and Rohlf, F. J. (1981). ''Biometry: the principles and practice of statistics in biological research''. New York: Freeman. ISBN 0-7167-2411-1.</ref>
==Distribution and usage==
Given the null hypothesis that the observed frequencies result from random sampling from a distribution with the given expected frequencies, the [[probability distribution|distribution]] of ''G'' is approximately a [[chi-squared distribution]], with the same number of [[degrees of freedom (statistics)|degrees of freedom]] as in the corresponding chi-squared test.
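As a sketch of how this approximation is used in practice, the p-value can be obtained from the chi-squared distribution; for a goodness-of-fit test over ''k'' cells the degrees of freedom are ''k'' − 1. The example below uses SciPy and the made-up counts from the previous sketch.
<syntaxhighlight lang="python">
from scipy.stats import chi2

# G statistic from the earlier made-up three-cell example
G = 20.41
df = 3 - 1                  # goodness of fit over k = 3 cells: df = k - 1
p_value = chi2.sf(G, df)    # survival function = 1 - CDF of the chi-squared distribution
print(p_value)              # a very small p-value for these made-up counts
</syntaxhighlight>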
For very small samples the [[multinomial test]] for goodness of fit, [[Fisher's exact test]] for contingency tables, or even Bayesian hypothesis selection are preferable to the ''G''-test.{{citation needed|date=August 2011}}
==Relation to the chi-squared test==
The commonly used [[chi-squared test]]s for goodness of fit to a distribution and for independence in [[contingency table]]s are in fact approximations of the [[log-likelihood ratio]] on which the ''G''-tests are based. The general formula for Pearson's chi-squared test statistic is
:<math> \chi^2 = \sum_{i} {(O_{i} - E_{i})^2 \over E_{i}} .</math>
The approximation of ''G'' by chi-squared is obtained by a second-order Taylor expansion of the natural logarithm around 1. This approximation was developed by [[Karl Pearson]] because at the time it was unduly laborious to calculate log-likelihood ratios. With the advent of electronic calculators and personal computers, this is no longer a problem. A derivation of how the chi-squared test is related to the ''G''-test and likelihood ratios, including a full Bayesian solution, is provided by Hoey (2012).<ref>Hoey, J. (2012). ''[http://arxiv.org/abs/1206.4881v2 The Two-Way Likelihood Ratio (G) Test and Comparison to Two-Way Chi-Squared Test]''.</ref>
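A brief sketch of this expansion, assuming the observed and expected totals agree so that <math>\sum_i (O_i - E_i) = 0</math>: writing <math>O_i = E_i(1+\delta_i)</math> with each <math>\delta_i</math> small and using <math>\ln(1+\delta_i) \approx \delta_i - \tfrac{1}{2}\delta_i^2</math>,
:<math> G = 2\sum_i E_i (1+\delta_i)\ln(1+\delta_i) \approx 2\sum_i E_i \left( \delta_i + \tfrac{1}{2}\delta_i^2 \right) = 2\sum_i (O_i - E_i) + \sum_i \frac{(O_i - E_i)^2}{E_i} = \chi^2 , </math>
since the first sum vanishes when the totals agree.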
For samples of a reasonable size, the ''G''-test and the chi-squared test will lead to the same conclusions. However, the approximation to the theoretical chi-squared distribution for the ''G''-test is better than for the Pearson chi-squared test.<ref>Harremoës, P. and Tusnády, G. (2012). ''[http://arxiv.org/abs/1202.1125 Information Divergence is more chi squared distributed than the chi squared statistic]'', Proceedings ISIT 2012, pp. 538–543.</ref> In cases where <math>O_i > 2 \cdot E_i</math> for some cell, the ''G''-test is always better than the chi-squared test.{{citation needed|date=August 2011}}
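For a concrete comparison of the two statistics, here is a minimal sketch using SciPy's <code>chi2_contingency</code> on a single made-up 2×2 contingency table; the <code>lambda_="log-likelihood"</code> option selects the ''G'' statistic instead of Pearson's chi-squared.
<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import chi2_contingency

# A made-up 2x2 contingency table, purely for illustration.
table = np.array([[30, 10],
                  [15, 25]])

# Pearson's chi-squared statistic (Yates' continuity correction disabled
# so that the raw statistics can be compared directly).
chi2_stat, p_chi2, dof, expected = chi2_contingency(table, correction=False)

# The same test with the G (log-likelihood ratio) statistic.
g_stat, p_g, _, _ = chi2_contingency(table, correction=False, lambda_="log-likelihood")

print(f"Pearson chi-squared: {chi2_stat:.3f}, p = {p_chi2:.4f}")
print(f"G statistic:         {g_stat:.3f}, p = {p_g:.4f}")
</syntaxhighlight>
For a table of this size the two values are close, as the text above suggests.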
For testing goodness of fit the ''G''-test is infinitely more efficient than the chi-squared test in the sense of Bahadur, but the two tests are equally efficient in the sense of Pitman or in the sense of Hodges and Lehmann.<ref>Quine, M. P. and Robinson, J. (1985). Efficiencies of chi-square and likelihood ratio goodness-of-fit tests. Ann. Statist. 13, 727–742.</ref><ref>Harremoës, P. and Vajda, I. (2008). On the Bahadur-efficient testing of uniformity by means of the entropy. IEEE Trans. Inform. Theory, vol. 54, pp. 321–331.</ref>
==Relation to Kullback–Leibler divergence==
The ''G''-test quantity is proportional to the [[Kullback–Leibler divergence]] of the empirical distribution from the theoretical distribution.
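Explicitly, assuming the expected frequencies sum to the same total <math>N = \sum_i O_i</math>, and writing <math>\hat{p}_i = O_i/N</math> for the empirical proportions and <math>p_i = E_i/N</math> for the theoretical proportions,
:<math> G = 2 N \sum_i \hat{p}_i \ln\frac{\hat{p}_i}{p_i} = 2 N \, D_{\mathrm{KL}}(\hat{p} \,\|\, p) . </math>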
==Relation to mutual information==
For analysis of contingency tables the value of ''G'' can also be expressed in terms of [[mutual information]].
Let
:<math>N = \sum_{ij}{O_{ij}} \; </math> , <math> \; \pi_{ij} = {O_{ij} \over N} \;</math> , <math>\; \pi_{i.} = {\sum_j O_{ij} \over N} \; </math> and <math>\; \pi_{. j} = {\sum_i O_{ij} \over N} \;</math> .
Then ''G'' can be expressed in several alternative forms:
:<math> G = 2 \cdot N \cdot \sum_{ij}{\pi_{ij} \left( \ln(\pi_{ij})-\ln(\pi_{i.})-\ln(\pi_{.j}) \right)} ,</math>
:<math> G = 2 \cdot N \cdot \left[ H(row) + H(col) - H(row,col) \right] , </math>
:<math> G = 2 \cdot N \cdot MI(row,col) \, ,</math>
where the [[Entropy (information theory)|entropy]] of a discrete random variable <math>X \,</math> is defined as
:<math> H(X) = - {\sum_x p(x) \log p(x)} \, ,</math>
and where
:<math> MI(row,col)= H(row) + H(col) - H(row,col) \, </math>
is the [[mutual information]] between the row vector and the column vector of the contingency table.
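As an illustrative check on these identities, the following sketch computes ''G'' for a made-up contingency table both from the cell-wise formula (with expected counts <math>E_{ij} = N \pi_{i.} \pi_{.j}</math> under independence) and from the entropy form <math>2 \cdot N \cdot \left[ H(row) + H(col) - H(row,col) \right]</math>; the two values agree up to floating-point rounding.
<syntaxhighlight lang="python">
import numpy as np

# A made-up 2x2 contingency table of observed counts, purely for illustration.
O = np.array([[30.0, 10.0],
              [15.0, 25.0]])
N = O.sum()
pi = O / N                   # joint proportions pi_ij
pi_row = pi.sum(axis=1)      # row marginals pi_i.
pi_col = pi.sum(axis=0)      # column marginals pi_.j

def entropy(p):
    """Entropy of a discrete distribution, ignoring empty cells."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Entropy (mutual information) form: G = 2 N [H(row) + H(col) - H(row, col)]
G_entropy = 2 * N * (entropy(pi_row) + entropy(pi_col) - entropy(pi.ravel()))

# Cell-wise form: G = 2 * sum_ij O_ij * ln(O_ij / E_ij), with E_ij = N * pi_i. * pi_.j
E = N * np.outer(pi_row, pi_col)
G_cells = 2 * np.sum(O * np.log(O / E))

print(G_entropy, G_cells)    # both approximately 11.74 for this table
</syntaxhighlight>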
It can also be shown{{citation needed|date=August 2011}} that the inverse document frequency weighting commonly used for text retrieval is an approximation of ''G'', applicable when the row sum for the query is much smaller than the row sum for the remainder of the corpus. Similarly, Bayesian inference comparing a single multinomial distribution for all rows of the contingency table taken together against the more general alternative of a separate multinomial per row produces results very similar to the ''G'' statistic.{{citation needed|date=August 2011}}
==Application==
* The [[McDonald–Kreitman test]] in [[statistical genetics]] is an application of the ''G''-test.
* Dunning<ref>Dunning, Ted (1993). ''[http://acl.ldc.upenn.edu/J/J93/J93-1003.pdf Accurate Methods for the Statistics of Surprise and Coincidence]'', Computational Linguistics, Volume 19, Issue 1 (March 1993).</ref> introduced the test to the [[computational linguistics]] community, where it is now widely used.
==Statistical software==
* The [[R programming language]] has the [http://rforge.net/doc/packages/Deducer/likelihood.test.html likelihood.test] function in the [http://rforge.net/doc/packages/Deducer/html/00Index.html Deducer] package.
*In [[SAS System|SAS]], one can conduct a ''G''-test by applying the <code>/chisq</code> option in <code>proc freq</code>.<ref>[http://udel.edu/~mcdonald/statgtestind.html G-test of independence], [http://udel.edu/~mcdonald/statgtestgof.html G-test for goodness-of-fit] in Handbook of Biological Statistics, University of Delaware. (pp. 46–51, 64–69 in: McDonald, J. H. (2009) ''Handbook of Biological Statistics'' (2nd ed.). Sparky House Publishing, Baltimore, Maryland.)</ref>
*The <code>fisher.g.test</code> function in the [http://cran.r-project.org/web/packages/GeneCycle/ GeneCycle] package of the [[R programming language]] does not implement the ''G''-test as described in this article, but rather Fisher's exact test of Gaussian white noise in a time series.<ref>Fisher, R. A. (1929). "Tests of significance in harmonic analysis". Proceedings of the Royal Society of London, Series A, Volume 125, Issue 796, pp. 54–59.</ref>
==References==
<references/>
==External links==
* [http://ucrel.lancs.ac.uk/llwizard.html G<sup>2</sup>/Log-likelihood calculator]
{{DEFAULTSORT:G-Test}}
[[Category:Categorical data]]
[[Category:Statistical tests]]