In [[statistics]], the '''score''', '''score function''', '''efficient score'''<ref name=Cox1>Cox & Hinkley (1974), p 107</ref> or '''informant'''<ref>{{SpringerEOM| title=Informant |id=i/i051030 |first=N.N. |last=Chentsov}}</ref> indicates how sensitively a [[likelihood function]] <math>L(\theta; X)</math> depends on its [[parametric model|parameter]] <math>\theta</math>. Explicitly, the score for <math>\theta</math> is the [[gradient]] of the log-likelihood with respect to <math>\theta</math>.
The score plays an important role in several aspects of [[statistical inference|inference]]. For example:
:*in formulating a [[test statistic]] for a locally most powerful test;<ref>Cox & Hinkley (1974), p 113</ref>
:*in approximating the error in a [[maximum likelihood]] estimate;<ref name="Cox & Hinkley 1974, p 295">Cox & Hinkley (1974), p 295</ref>
:*in demonstrating the asymptotic sufficiency of a [[maximum likelihood]] estimate;<ref name="Cox & Hinkley 1974, p 295"/>
:*in the formulation of [[confidence interval]]s;<ref>Cox & Hinkley (1974), p 222–3</ref>
:*in demonstrations of the [[Cramér–Rao bound|Cramér–Rao inequality]].<ref>Cox & Hinkley (1974), p 254</ref>
The score function also plays an important role in [[computational statistics]], as it can play a part in the computation of maximum likelihood estimates.
==Definition==
The score or efficient score<ref name="Cox1"/> is the [[gradient]] (the vector of [[partial derivative]]s), with respect to some parameter <math>\theta</math>, of the [[logarithm]] (commonly the [[natural logarithm]]) of the [[likelihood function]] (the log-likelihood).
If the observation is <math>X</math> and its likelihood is <math>L(\theta;X)</math>, then the score <math>V</math> can be found through the [[chain rule]]:
:<math>
V \equiv V(\theta, X)
=
\frac{\partial}{\partial\theta} \log L(\theta;X)
=
\frac{1}{L(\theta;X)} \frac{\partial L(\theta;X)}{\partial\theta}.
</math>
Thus the score <math>V</math> indicates the [[Sensitivity analysis|sensitivity]] of <math>L(\theta;X)</math> (its derivative normalized by its value). Note that <math>V</math> is a function of <math>\theta</math> and the observation <math>X</math>, so that, in general, it is not a [[statistic]]. However, in certain applications, such as the [[score test]], the score is evaluated at a specific value of <math>\theta</math> (such as a null-hypothesis value, or at the maximum likelihood estimate of <math>\theta</math>), in which case the result is a statistic.
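For instance, for a sample of independent observations from a normal distribution with unknown mean <math>\theta</math> and unit variance, the score is <math>\textstyle\sum_i (x_i - \theta)</math>. The following minimal sketch (the normal model and all names are illustrative assumptions, not a standard interface) checks this analytic score against a finite-difference derivative of the log-likelihood:
<syntaxhighlight lang="python">
import numpy as np

def log_likelihood(theta, x):
    # Log-likelihood of i.i.d. observations x under N(theta, 1).
    return np.sum(-0.5 * (x - theta) ** 2 - 0.5 * np.log(2 * np.pi))

def score(theta, x):
    # Analytic score: d/dtheta of log L, which is sum_i (x_i - theta).
    return np.sum(x - theta)

rng = np.random.default_rng(0)
x = rng.normal(loc=1.5, scale=1.0, size=100)

theta, h = 1.0, 1e-6
numeric = (log_likelihood(theta + h, x) - log_likelihood(theta - h, x)) / (2 * h)
print(score(theta, x), numeric)  # the two values agree closely
</syntaxhighlight>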
In older literature, the term "linear score" may be used to refer to the score with respect to an infinitesimal translation of a given density. This convention arises from a time when the primary parameter of interest was the mean or median of a distribution. In this case, the likelihood of an observation is given by a density of the form <math>L(\theta;X)=f(X+\theta)</math>. The "linear score" is then defined as
:<math>
V_{\rm linear}
=
\frac{\partial}{\partial X} \log f(X)
</math>
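For example, for the standard normal density <math>f(x) = (2\pi)^{-1/2}e^{-x^2/2}</math>, the linear score is <math>V_{\rm linear} = -X</math>: observations far from the centre of the density receive a proportionally large score.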
==Properties==
===Mean===
Under some regularity conditions, the [[expected value]] of <math>V</math> with respect to the observation <math>x</math>, given <math>\theta</math>, written <math>\mathbb{E}(V\mid\theta)</math>, is zero.
To see this, rewrite the likelihood function ''L'' as a [[probability density function]] <math>L(\theta; x) = f(x; \theta)</math>. Then:
:<math>
\mathbb{E}(V\mid\theta)
=\int_{-\infty}^{+\infty}
f(x; \theta) \frac{\partial}{\partial\theta} \log L(\theta;x)
\,dx
=\int_{-\infty}^{+\infty}
\frac{\partial}{\partial\theta} \log L(\theta;x) \, f(x; \theta) \, dx
</math>
:<math>
=\int_{-\infty}^{+\infty}
\frac{1}{f(x; \theta)}\frac{\partial f(x; \theta)}{\partial \theta}f(x; \theta)\, dx
=\int_{-\infty}^{+\infty} \frac{\partial f(x; \theta)}{\partial \theta} \, dx
</math>
If certain differentiability conditions are met (see [[Leibniz integral rule]]), the integral may be rewritten as
:<math>
\frac{\partial}{\partial\theta} \int_{-\infty}^{+\infty}
f(x; \theta) \, dx
=
\frac{\partial}{\partial\theta} 1 = 0.
</math>
It is worth restating the above result in words: the expected value of the score is zero.
Thus, if one were to repeatedly sample from some distribution, and repeatedly calculate the score, then the mean value of the scores would tend to zero as the number of repeat samples approached infinity.
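A simple simulation sketch (reusing the illustrative normal model with unit variance from above, where the per-observation score at the true parameter is <math>x - \theta</math>) makes this concrete:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
theta = 1.5  # true parameter value

# Per-observation score for N(theta, 1), evaluated at the true theta.
samples = rng.normal(loc=theta, scale=1.0, size=1_000_000)
scores = samples - theta

print(scores.mean())  # close to 0; tends to 0 as the sample size grows
</syntaxhighlight>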
===Variance===
{{Main|Fisher information}}
The variance of the score is known as the [[Fisher information]] and is written <math>\mathcal{I}(\theta)</math>. Because the expectation of the score is zero, this may be written as
:<math>
\mathcal{I}(\theta)
=
\mathbb{E}
\left\{\left.
\left[
\frac{\partial}{\partial\theta} \log L(\theta;X)
\right]^2
\right|\theta\right\}.
</math>
Note that the Fisher information, as defined above, is not a function of any particular observation, as the random variable <math>X</math> has been averaged out.
This concept of information is useful when comparing two methods of observation of some [[random process]].
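Continuing the same illustrative normal-model sketch as above (an assumption for demonstration only), the empirical variance of the per-observation score <math>x - \theta</math> approaches the Fisher information of a single <math>N(\theta, 1)</math> observation, which is 1:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
theta = 1.5

# The variance of the per-observation score equals the Fisher information,
# which is 1 for a single observation from N(theta, 1).
samples = rng.normal(loc=theta, scale=1.0, size=1_000_000)
scores = samples - theta

print(scores.var())  # close to 1
</syntaxhighlight>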
==Examples==
===Bernoulli process===
Consider a [[Bernoulli process]], with ''A'' successes and ''B'' failures out of ''n'' = ''A'' + ''B'' trials; the probability of success is ''θ''.
Then the likelihood ''L'' is
:<math>
L(\theta;A,B)=\frac{(A+B)!}{A!B!}\theta^A(1-\theta)^B,</math>
so the score ''V'' is
:<math>
V=\frac{1}{L}\frac{\partial L}{\partial\theta} = \frac{A}{\theta}-\frac{B}{1-\theta}.
</math>
We can now verify that the expectation of the score is zero. Noting that the expectation of ''A'' is ''n''θ and the expectation of ''B'' is ''n''(1 − θ) [recall that ''A'' and ''B'' are random variables], we can see that the expectation of ''V'' is
:<math>
E(V)
= \frac{n\theta}{\theta} - \frac{n(1-\theta)}{1-\theta}
= n - n
= 0.
</math>
We can also check the variance of <math>V</math>. We know that ''A'' + ''B'' = ''n'' (so ''B'' = ''n'' − ''A'') and the variance of ''A'' is ''n''θ(1 − θ), so the variance of ''V'' is
:<math>
\begin{align}
\operatorname{var}(V) & =\operatorname{var}\left(\frac{A}{\theta}-\frac{n-A}{1-\theta}\right)
=\operatorname{var}\left(A\left(\frac{1}{\theta}+\frac{1}{1-\theta}\right)\right) \\
& =\left(\frac{1}{\theta}+\frac{1}{1-\theta}\right)^2\operatorname{var}(A)
=\frac{n}{\theta(1-\theta)}.
\end{align}
</math>
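As a numerical aside (a sketch with made-up counts, not data from any source), the score vanishes at the maximum likelihood estimate <math>\hat\theta = A/n</math>, since <math>A/\hat\theta = B/(1-\hat\theta) = n</math>:
<syntaxhighlight lang="python">
def score(theta, A, B):
    # Score of the binomial likelihood: V = A/theta - B/(1 - theta).
    return A / theta - B / (1 - theta)

A, B = 7, 3              # illustrative success/failure counts
theta_hat = A / (A + B)  # maximum likelihood estimate

print(score(theta_hat, A, B))  # ~0 (zero up to floating-point error)
</syntaxhighlight>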
===Binary outcome model===
For models with binary outcomes (''Y'' = 1 or 0), the model can be scored with the logarithm of predictions
:<math> S = Y \log( p ) + ( 1 - Y ) \log( 1 - p ) </math>
where ''p'' is the probability in the model to be estimated and ''S'' is the score.<ref name=Steyerberg2010>Steyerberg EW, Vickers AJ, Cook NR, Gerds T, Gonen M, Obuchowski N, Pencina MJ and Kattan MW (2010) Assessing the performance of prediction models. A framework for traditional and novel measures. ''Epidemiology'' 21 (1) 128–138. {{doi|10.1097/EDE.0b013e3181c30fb2}}</ref>
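A minimal sketch of this scoring rule (the outcomes and predicted probabilities below are invented purely for illustration):
<syntaxhighlight lang="python">
import numpy as np

def log_score(y, p):
    # S = y*log(p) + (1 - y)*log(1 - p) for binary outcomes y in {0, 1}.
    return y * np.log(p) + (1 - y) * np.log(1 - p)

y = np.array([1, 0, 1, 1, 0])            # observed outcomes
p = np.array([0.9, 0.2, 0.7, 0.6, 0.1])  # model's predicted probabilities

print(log_score(y, p).sum())  # total log score; higher (less negative) is better
</syntaxhighlight>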
==Applications==
===Scoring algorithm===
{{Main|Scoring algorithm}}
The scoring algorithm is an iterative method for numerically determining the [[maximum likelihood]] [[estimator]]: each step moves the current estimate in the direction of the score, scaled by the inverse of the Fisher information, <math>\theta_{k+1} = \theta_k + \mathcal{I}(\theta_k)^{-1} V(\theta_k)</math>.
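A sketch of this update applied to the Bernoulli example above (the starting value is arbitrary; for this model the first step already lands exactly on the maximum likelihood estimate ''A''/''n''):
<syntaxhighlight lang="python">
# Fisher scoring for the binomial parameter theta, using the score
# V(theta) = A/theta - B/(1 - theta) and the Fisher information
# I(theta) = n / (theta * (1 - theta)) derived above.
A, B = 7, 3   # illustrative success/failure counts
n = A + B

theta = 0.5   # arbitrary starting value
for _ in range(3):
    V = A / theta - B / (1 - theta)
    I = n / (theta * (1 - theta))
    theta += V / I   # scoring update: theta <- theta + V / I

print(theta)  # ~0.7 = A/n, the maximum likelihood estimate
</syntaxhighlight>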
===Score test===
{{Main|Score test}}
{{Expand section|date=December 2009}}
==See also==
*[[Fisher information]]
*[[Information theory]]
*[[Score test]]
*[[Scoring algorithm]]
*[[Support curve]]
==Notes==
{{Reflist}}
==References==
*Cox, D.R., Hinkley, D.V. (1974) ''Theoretical Statistics'', Chapman & Hall. ISBN 0-412-12420-3
*{{cite book
 | last = Schervish
 | first = Mark J.
 | title = Theory of Statistics
 | publisher = Springer
 | date = 1995
 | location = New York
 | pages = Section 2.3.1
 | isbn = 0-387-94546-6
 | nopp = true}}
[[Category:Estimation theory]]