In [[statistics]], the '''score''', '''score function''', '''efficient score'''<ref name=Cox1>Cox & Hinkley (1974), p 107</ref> or '''informant'''<ref>{{SpringerEOM| title=Informant |id=i/i051030 |first=N.N. |last=Chentsov}}</ref> indicates how sensitively a [[likelihood function]] <math>L(\theta; X)</math> depends on its [[parametric model|parameter]] <math>\theta</math>. Explicitly, the score for <math>\theta</math> is the [[gradient]] of the log-likelihood with respect to <math>\theta</math>.
 
The score plays an important role in several aspects of [[statistical inference|inference]]. For example:
:*in formulating a [[test statistic]] for a locally most powerful test;<ref>Cox & Hinkley (1974), p 113</ref>
:*in approximating the error in a [[maximum likelihood]] estimate;<ref name="Cox & Hinkley 1974, p 295">Cox & Hinkley (1974), p 295</ref>
:*in demonstrating the asymptotic sufficiency of a [[maximum likelihood]] estimate;<ref name="Cox & Hinkley 1974, p 295"/>
:*in the formulation of [[confidence interval]]s;<ref>Cox & Hinkley (1974), p 222–3</ref>
:*in demonstrations of the [[Cramér–Rao bound|Cramér–Rao inequality]].<ref>Cox & Hinkley (1974), p 254</ref>
 
The score function also plays an important role in [[computational statistics]], since it is used in the numerical computation of maximum likelihood estimates.
 
==Definition==
 
The score or efficient score<ref name="Cox1"/> is the [[gradient]] (the vector of [[partial derivative]]s), with respect to some parameter <math>\theta</math>, of the [[logarithm]] (commonly the [[natural logarithm]]) of the [[likelihood function]] (the log-likelihood).
If the observation is <math>X</math> and its likelihood is <math>L(\theta;X)</math>, then the score <math>V</math> can be found through the [[chain rule]]:
 
:<math>
V \equiv V(\theta, X)
=
\frac{\partial}{\partial\theta} \log L(\theta;X)
=
\frac{1}{L(\theta;X)} \frac{\partial L(\theta;X)}{\partial\theta}.
</math>
 
Thus the score <math>V</math> indicates the [[Sensitivity analysis|sensitivity]] of <math>L(\theta;X)</math> (its derivative normalized by its value). Note that <math>V</math> is a function of <math>\theta</math> and the observation <math>X</math>, so that, in general, it is not a [[statistic]]. However in certain applications, such as the [[score test]], the score is evaluated at a specific value of <math>\theta</math> (such as a null-hypothesis value, or at the maximum likelihood estimate of  <math>\theta</math>), in which case the result is a statistic.
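As a purely illustrative numerical sketch (the model, values, and function names are chosen for this example and are not from the article's sources), the score can be approximated by a finite difference of the log-likelihood and compared with its analytic form; for a single observation <math>x</math> from a normal distribution with mean <math>\theta</math> and unit variance, the analytic score is <math>x - \theta</math>.

<syntaxhighlight lang="python">
# Illustrative sketch: the score as the derivative of the log-likelihood,
# for one observation x ~ N(theta, 1); the analytic score is x - theta.
import math

def log_likelihood(theta, x):
    # log of the N(theta, 1) density at x
    return -0.5 * math.log(2 * math.pi) - 0.5 * (x - theta) ** 2

def numerical_score(theta, x, h=1e-6):
    # central finite difference of the log-likelihood in theta
    return (log_likelihood(theta + h, x) - log_likelihood(theta - h, x)) / (2 * h)

x, theta = 1.3, 0.5
print(numerical_score(theta, x))  # approximately 0.8
print(x - theta)                  # analytic score: 0.8
</syntaxhighlight>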
 
==Properties==
===Mean===
Under some regularity conditions, the [[expected value]] of <math>V</math> with respect to the observation <math>x</math>, given <math>\theta</math>, written <math>\mathbb{E}(V\mid\theta)</math>, is zero.
To see this, rewrite the likelihood function <math>L</math> as a [[probability density function]] <math>L(\theta; x) = f(x; \theta)</math>. Then:
 
:<math>
\mathbb{E}(V\mid\theta)
=\int_{-\infty}^{+\infty}
\frac{\partial}{\partial\theta} \log L(\theta;x) \, f(x; \theta) \, dx
=\int_{-\infty}^{+\infty}
\frac{1}{f(x; \theta)}\frac{\partial f(x; \theta)}{\partial \theta} \, f(x; \theta)\, dx
= \int_{-\infty}^{+\infty} \frac{\partial f(x; \theta)}{\partial \theta} \, dx.
</math>
 
If certain differentiability conditions are met (see [[Leibniz integral rule]]), the integral may be rewritten as
 
:<math>
\frac{\partial}{\partial\theta} \int_{-\infty}^{+\infty}
f(x; \theta) \, dx
=
\frac{\partial}{\partial\theta}1 = 0.
</math>
 
Restated in words: the expected value of the score, evaluated at the true parameter value, is zero. Thus, if one were to repeatedly sample from some distribution and repeatedly calculate the score at the true parameter value, the mean of those scores would tend to zero as the number of samples approached infinity.
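A minimal simulation sketch of this property (assuming, for illustration only, a single Bernoulli trial with success probability <math>\theta</math>, for which the score is <math>y/\theta - (1-y)/(1-\theta)</math>):

<syntaxhighlight lang="python">
# Illustrative sketch: the empirical mean of the score, evaluated at the true
# parameter, tends to zero. Single Bernoulli trial y with P(y = 1) = theta;
# the score is y/theta - (1 - y)/(1 - theta).
import random

random.seed(0)
theta = 0.3
reps = 200_000
scores = [
    1 / theta if random.random() < theta else -1 / (1 - theta)
    for _ in range(reps)
]
print(sum(scores) / reps)  # close to 0, and tends to 0 as reps grows
</syntaxhighlight>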
 
===Variance===
{{Main|Fisher information}}
The variance of the score is known as the [[Fisher information]] and is written <math>\mathcal{I}(\theta)</math>. Because the expectation of the score is zero, this may be written as
 
:<math>
\mathcal{I}(\theta)
=
\mathbb{E}
\left\{\left.
\left[
  \frac{\partial}{\partial\theta} \log L(\theta;X)
\right]^2
\right|\theta\right\}.
</math>
 
Note that the Fisher information, as defined above, is not a function of any particular observation, as the random variable <math>X</math> has been averaged out.
This concept of information is useful when comparing two methods of observation of some [[random process]].
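As an illustrative check (the model and numbers are assumptions of this sketch, not from the sources), for a single observation from a normal distribution with known variance <math>\sigma^2</math> the score is <math>(x-\theta)/\sigma^2</math> and the Fisher information is <math>1/\sigma^2</math>; the empirical variance of simulated scores should approach this value.

<syntaxhighlight lang="python">
# Illustrative sketch: the variance of the score equals the Fisher information.
# One observation x ~ N(theta, sigma^2): score = (x - theta)/sigma^2,
# Fisher information = 1/sigma^2.
import random

random.seed(0)
theta, sigma = 1.5, 2.0
reps = 200_000
scores = [(random.gauss(theta, sigma) - theta) / sigma ** 2 for _ in range(reps)]
mean = sum(scores) / reps
var = sum((v - mean) ** 2 for v in scores) / reps
print(var)                 # close to 1 / sigma^2 = 0.25
</syntaxhighlight>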
 
==Examples==
 
===Bernoulli process===
 
Consider a [[Bernoulli process]] of ''n'' trials, with ''A'' successes and ''B'' failures (so that ''A''&nbsp;+&nbsp;''B''&nbsp;=&nbsp;''n''); the probability of success is&nbsp;''θ''.
 
Then the likelihood ''L'' is
 
:<math>
L(\theta;A,B)=\frac{(A+B)!}{A!B!}\theta^A(1-\theta)^B,</math>
 
so the score ''V'', obtained by differentiating the log-likelihood (the factorial term does not depend on ''θ''), is
 
:<math>
V=\frac{1}{L}\frac{\partial L}{\partial\theta} = \frac{A}{\theta}-\frac{B}{1-\theta}.
</math>
 
We can now verify that the expectation of the score is zero.  Noting that the expectation of ''A'' is ''n''θ and the expectation of ''B'' is ''n''(1&nbsp;&minus;&nbsp;θ) [recall that ''A'' and ''B'' are random variables], we can see that the expectation of ''V'' is
 
:<math>
E(V)
= \frac{n\theta}{\theta} - \frac{n(1-\theta)}{1-\theta}
= n - n
= 0.
</math>
 
We can also check the variance of <math>V</math>. Since ''A''&nbsp;+&nbsp;''B''&nbsp;=&nbsp;''n'' (so ''B''&nbsp;=&nbsp;''n''&nbsp;&minus;&nbsp;''A'') and the variance of ''A'' is ''n''θ(1&nbsp;&minus;&nbsp;θ), the variance of ''V'' is
 
:<math>
\begin{align}
\operatorname{var}(V) & =\operatorname{var}\left(\frac{A}{\theta}-\frac{n-A}{1-\theta}\right)
=\operatorname{var}\left(A\left(\frac{1}{\theta}+\frac{1}{1-\theta}\right)-\frac{n}{1-\theta}\right) \\
& =\left(\frac{1}{\theta}+\frac{1}{1-\theta}\right)^2\operatorname{var}(A)
=\frac{n}{\theta(1-\theta)},
\end{align}
</math>

since the constant term <math>n/(1-\theta)</math> does not change the variance.
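The two results above can also be checked by simulation. The following sketch (sample sizes and seed are arbitrary choices of this example) draws ''A'' from a binomial distribution and compares the empirical mean and variance of ''V'' with 0 and <math>n/(\theta(1-\theta))</math>.

<syntaxhighlight lang="python">
# Illustrative sketch: Monte Carlo check of the Bernoulli-process example.
# A ~ Binomial(n, theta), V = A/theta - (n - A)/(1 - theta);
# E(V) should be near 0 and var(V) near n / (theta * (1 - theta)).
import random

random.seed(0)
n, theta, reps = 20, 0.4, 50_000

def draw_A():
    # number of successes in n Bernoulli trials
    return sum(1 for _ in range(n) if random.random() < theta)

vs = []
for _ in range(reps):
    A = draw_A()
    vs.append(A / theta - (n - A) / (1 - theta))

mean = sum(vs) / reps
var = sum((v - mean) ** 2 for v in vs) / reps
print(mean)   # near 0
print(var)    # near 20 / (0.4 * 0.6) = 83.33...
</syntaxhighlight>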
 
===Binary outcome model===
 
For models with a binary outcome (''Y''&nbsp;=&nbsp;1 or 0), the model can be scored with the logarithm of the predicted probabilities:

:<math> S = Y \log( p ) + ( 1 - Y ) \log( 1 - p ), </math>

where ''p'' is the predicted probability that ''Y''&nbsp;=&nbsp;1 and ''S'' is the score.<ref name=Steyerberg2010>Steyerberg EW, Vickers AJ, Cook NR, Gerds T, Gonen M, Obuchowski N, Pencina MJ, Kattan MW (2010). Assessing the performance of prediction models: a framework for traditional and novel measures. ''Epidemiology'' 21 (1): 128–138. {{doi|10.1097/EDE.0b013e3181c30fb2}}</ref>
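For illustration only (the variable names and data are invented for this sketch), the logarithmic score can be averaged over a set of predicted probabilities and observed outcomes; values closer to zero indicate better predictions.

<syntaxhighlight lang="python">
# Illustrative sketch: logarithmic score S = Y*log(p) + (1 - Y)*log(1 - p)
# for binary outcomes Y and predicted probabilities p, averaged over cases.
import math

def log_score(y, p):
    return y * math.log(p) + (1 - y) * math.log(1 - p)

outcomes    = [1, 0, 1, 1, 0]           # invented data for illustration
predictions = [0.9, 0.2, 0.6, 0.8, 0.3]
scores = [log_score(y, p) for y, p in zip(outcomes, predictions)]
print(sum(scores) / len(scores))        # mean log score; closer to 0 is better
</syntaxhighlight>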
 
==Applications==
===Scoring algorithm===
{{Main|Scoring algorithm}}
The scoring algorithm is an iterative method for numerically determining the [[maximum likelihood]] [[estimator]].
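A minimal sketch of the idea, using the Bernoulli-process example above (the step count and starting value are arbitrary): each iteration updates <math>\theta</math> by the score divided by the Fisher information, <math>\theta \leftarrow \theta + V(\theta)/\mathcal{I}(\theta)</math>; in this one-parameter case a single step already lands on the maximum likelihood estimate <math>A/n</math>.

<syntaxhighlight lang="python">
# Illustrative sketch: Fisher scoring for the Bernoulli-process example.
# Score V(theta) = A/theta - (n - A)/(1 - theta); Fisher information
# I(theta) = n / (theta * (1 - theta)); update theta <- theta + V/I.
def fisher_scoring(A, n, theta0=0.5, steps=5):
    theta = theta0
    for _ in range(steps):
        V = A / theta - (n - A) / (1 - theta)   # score at current theta
        I = n / (theta * (1 - theta))           # Fisher information at current theta
        theta = theta + V / I                   # scoring update
    return theta

print(fisher_scoring(A=7, n=10))  # 0.7, the maximum likelihood estimate A/n
</syntaxhighlight>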
 
===Score test===
{{Main|Score test}}
{{Expand section|date=December 2009}}
The score test evaluates the score at a hypothesized parameter value <math>\theta_0</math>. Because the score has mean zero and variance <math>\mathcal{I}(\theta_0)</math> when <math>\theta_0</math> is the true value, the statistic <math>V(\theta_0)^2/\mathcal{I}(\theta_0)</math> is asymptotically [[chi-squared distribution|chi-squared]] with one degree of freedom under the null hypothesis, and the test does not require computing the maximum likelihood estimate.
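A minimal sketch of such a test for the Bernoulli-process example (the data values and significance threshold are illustrative assumptions):

<syntaxhighlight lang="python">
# Illustrative sketch: score test of H0: theta = theta0 for a Bernoulli process
# with A successes in n trials. Under H0, V(theta0)^2 / I(theta0) is
# approximately chi-squared with 1 degree of freedom.
def score_test_statistic(A, n, theta0):
    V = A / theta0 - (n - A) / (1 - theta0)   # score at the null value
    I = n / (theta0 * (1 - theta0))           # Fisher information at the null value
    return V ** 2 / I

stat = score_test_statistic(A=30, n=100, theta0=0.5)
print(stat)         # 16.0 for these invented numbers
print(stat > 3.84)  # True: exceeds the 5% critical value of chi-squared with 1 d.f.
</syntaxhighlight>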
 
==See also==
*[[Fisher information]]
*[[Information theory]]
*[[Score test]]
*[[Scoring algorithm]]
*[[Support curve]]
 
==Notes==
{{Reflist}}
==References==
*Cox, D.R., Hinkley, D.V. (1974) ''Theoretical Statistics'', Chapman & Hall. ISBN 0-412-12420-3
*{{cite book
| last = Schervish
| first = Mark J.
| title = Theory of Statistics
| publisher =Springer
| date =1995
| location =New York
| pages = Section 2.3.1
| isbn = 0-387-94546-6
| nopp = true}}
 
[[Category:Estimation theory]]
