'''[[Charles Stein (statistician)|Stein]]'s example''' (or '''phenomenon''' or '''paradox'''), in [[decision theory]] and [[estimation theory]], is the phenomenon that when three or more parameters are estimated simultaneously, there exist combined [[estimator]]s more accurate on average (that is, having lower expected mean-squared error) than any method that handles the parameters separately.  

[[#An intuitive explanation|An intuitive explanation]] is that optimizing for the mean-squared error of a ''combined'' estimator is not the same as optimizing for the errors of separate estimators of the individual parameters. In practical terms, if the combined error is in fact of interest, then a combined estimator should be used, even if the underlying parameters are independent; this occurs in [[channel estimation]] in telecommunications, for instance (different factors affect overall channel performance). On the other hand, if one is instead interested in estimating an individual parameter, then using a combined estimator does not help and is in fact worse.
 
== Formal statement ==
 
The following is perhaps the simplest form of the paradox. Let '''''θ''''' be a vector consisting of ''n''&nbsp;≥&nbsp;3 unknown parameters. To estimate these parameters, a single measurement ''X''<sub>''i''</sub> is performed for each parameter ''&theta;''<sub>''i''</sub>, resulting in a vector '''X''' of length&nbsp;''n''.  Suppose the measurements are [[statistical independence|independent]], [[Normal distribution|Gaussian]] [[random variable]]s, with mean '''''θ''''' and variance 1, i.e.,
 
:<math>{\mathbf X} \sim N({\boldsymbol \theta}, I). \, </math>
 
Thus, each parameter is estimated using a single noisy measurement, and each measurement is equally inaccurate.
 
Under such conditions, it is most intuitive (and most common) to use each measurement as an estimate of its corresponding parameter. This so-called "ordinary" decision rule can be written as
:<math>\hat {\boldsymbol \theta} = {\mathbf X}. \,</math>
 
The quality of such an estimator is measured by its [[risk function]]. A commonly used risk function is the [[mean squared error]], defined as
:<math>\operatorname{E} \left[ \| {\boldsymbol \theta} - \hat {\boldsymbol \theta} \|^2 \right].</math>
Surprisingly, the "ordinary" estimator proposed above turns out to be suboptimal in terms of mean squared error: in the setting discussed here, there exist alternative estimators which ''always'' achieve lower '''''mean''''' squared error, no matter what the value of <math>{\boldsymbol \theta}</math> is.
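
As a concrete check, here is a minimal Monte Carlo sketch (in Python with NumPy; the script and its variable names are illustrative and not drawn from the references) that estimates the risk of the ordinary rule. For <math>{\mathbf X} \sim N({\boldsymbol \theta}, I)</math>, the mean squared error of <math>\hat {\boldsymbol \theta} = {\mathbf X}</math> is exactly ''n'', whatever the value of <math>{\boldsymbol \theta}</math>:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n, trials = 3, 100_000
theta = np.array([1.0, -2.0, 0.5])  # an arbitrary true parameter vector

# One unit-variance Gaussian measurement per parameter, per trial.
X = rng.normal(theta, 1.0, size=(trials, n))

# Empirical risk of the ordinary rule theta_hat = X.
risk_ordinary = np.mean(np.sum((X - theta) ** 2, axis=1))
print(risk_ordinary)  # ~= n = 3, independently of theta
</syntaxhighlight>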
 
For a given θ one could obviously define a perfect "estimator" which is always just θ, but this estimator would be bad for other values of θ. The estimators of Stein's paradox are, for a given θ, better than ''X'' for some values of ''X'' but necessarily worse for others (except perhaps for one particular θ vector, for which the new estimate is always better than ''X''). It is only on average that they are better.
 
More accurately, an estimator <math>\hat {\boldsymbol \theta}_1</math> is said to [[dominating decision rule|dominate]] another estimator <math>\hat {\boldsymbol \theta}_2</math> if, for all values of <math>{\boldsymbol \theta}</math>, the risk of <math>\hat {\boldsymbol \theta}_1</math> is lower than, or equal to, the risk of <math>\hat {\boldsymbol \theta}_2</math>, ''and'' if the inequality is [[strict]] for some <math>{\boldsymbol \theta}</math>. An estimator is said to be [[admissible decision rule|admissible]] if no other estimator dominates it; otherwise it is ''inadmissible''. Thus, Stein's example can be stated simply as follows: ''The ordinary decision rule for estimating the mean of a multivariate Gaussian distribution is inadmissible under mean squared error risk.''
 
Many simple, practical estimators achieve better performance than the ordinary estimator. The best-known example is the [[James–Stein estimator]], which works by starting at ''X'' and moving towards a particular point (such as the origin) by an amount inversely proportional to the distance of ''X'' from that point.
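
To see this domination numerically, the following is a minimal Monte Carlo sketch (Python with NumPy; the code and its variable names are illustrative rather than taken from the references) of the James–Stein estimator in its standard form shrinking towards the origin, <math>\hat{\boldsymbol \theta}_{JS} = \left(1 - \frac{n-2}{\|{\mathbf X}\|^2}\right){\mathbf X}</math>. For each value of <math>{\boldsymbol \theta}</math> tried, the empirical risk stays below ''n''&nbsp;=&nbsp;3, the risk of the ordinary rule:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n, trials = 3, 200_000

def james_stein(x):
    # Shrink each row of x towards the origin by the data-dependent
    # factor 1 - (n - 2) / ||x||^2 (the standard James-Stein form).
    return (1.0 - (n - 2) / np.sum(x ** 2, axis=1, keepdims=True)) * x

for theta in (np.zeros(n), np.array([1.0, -2.0, 0.5]), np.full(n, 5.0)):
    X = rng.normal(theta, 1.0, size=(trials, n))  # X ~ N(theta, I)
    risk_ord = np.mean(np.sum((X - theta) ** 2, axis=1))
    risk_js = np.mean(np.sum((james_stein(X) - theta) ** 2, axis=1))
    print(theta, risk_ord, risk_js)  # risk_js < risk_ord ~= 3 in every case
</syntaxhighlight>
The improvement is largest when <math>{\boldsymbol \theta}</math> is near the shrinkage target and fades as <math>\|{\boldsymbol \theta}\|</math> grows, but for ''n''&nbsp;≥&nbsp;3 it never becomes negative.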
 
For a sketch of the proof of this result, see [[Proof of Stein's example]].
 
== Implications ==
 
Stein's example is surprising, since the "ordinary" decision rule is intuitive and commonly used. In fact, numerous methods for estimator construction, including [[maximum likelihood estimation]], [[BLUE|best linear unbiased estimation]], [[least squares]] estimation and optimal [[equivariant estimation]], all result in the "ordinary" estimator. Yet, as discussed above, this estimator is suboptimal.
 
To demonstrate the unintuitive nature of Stein's example, consider the following real-world example. Suppose we are to estimate three unrelated parameters, such as the US wheat yield for 1993, the number of spectators at the Wimbledon tennis tournament in 2001, and the weight of a randomly chosen candy bar from the supermarket. Suppose we have independent Gaussian measurements of each of these quantities. Stein's example now tells us that we can get a better estimate (on average) for the vector of three parameters by simultaneously using the three unrelated measurements.
 
At first sight it appears that somehow we get a better estimator for US wheat yield by measuring some other unrelated statistics such as the number of spectators at Wimbledon and the weight of a candy bar. This is of course absurd; we have not obtained a better estimator for US wheat yield by itself, but we have produced an estimator for the vector of the means of all three random variables, which has a reduced ''total'' risk.  This occurs because the cost of a bad estimate in one component of the vector is compensated by a better estimate in another component. Also, a specific set of the three estimated mean values obtained with the new estimator will not necessarily be better than the ordinary set (the measured values). It is only on average that the new estimator is better.
 
==An intuitive explanation==
For any particular value of '''θ''' the new estimator will improve at least one of the individual mean square errors <math>\operatorname{E} \left[ ( {\theta_i} - {\hat \theta_i})^2 \right].</math> This is not hard to achieve: for instance, if <math>\theta_1</math> is between −1 and 1, and σ = 1, then an estimator that moves <math>X_1</math> towards 0 by 0.5 (or sets it to zero if its absolute value is less than 0.5) will have a lower mean square error than <math>X_1</math> itself. But there are other values of <math>\theta_1</math> for which this estimator is worse than <math>X_1</math> itself. The trick of the Stein estimator, and others that yield the Stein paradox, is that they adjust the shift in such a way that there is always (for any '''θ''' vector) at least one <math>X_i</math> whose mean square error is improved, and whose improvement more than compensates for any degradation in mean square error that might occur for another <math>\hat \theta_i</math>. The trouble is that, without knowing '''θ''', you do not know which of the ''n'' mean square errors are improved, so you cannot use the Stein estimator only for those parameters.
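
The one-dimensional shrinkage rule described above can also be checked numerically. The following is a minimal sketch (Python with NumPy; illustrative code, not from the references) of the estimator that moves <math>X_1</math> towards 0 by 0.5 and sets it to zero when <math>|X_1| < 0.5</math>, i.e. soft thresholding at 0.5. It beats <math>X_1</math> when <math>\theta_1</math> is near zero and loses when <math>\theta_1</math> is far from zero:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
trials = 200_000

def soft_threshold(x, t=0.5):
    # Move x towards 0 by t; values with |x| < t are set to exactly 0.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

for theta1 in (0.0, 1.0, 3.0):
    x1 = rng.normal(theta1, 1.0, size=trials)
    mse_plain = np.mean((x1 - theta1) ** 2)  # always ~= 1
    mse_shrunk = np.mean((soft_threshold(x1) - theta1) ** 2)
    print(theta1, mse_plain, mse_shrunk)
# Shrinkage wins for theta1 = 0.0 and 1.0, but loses for theta1 = 3.0.
</syntaxhighlight>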
 
== See also ==
* [[James–Stein estimator]]
 
== References ==
* {{cite journal
  | last1 = Efron
  | first1 = B.
  | authorlink1 = Bradley Efron
  | last2 = Morris
  | first2 = C.
  | title = Stein's paradox in statistics
  | journal = Scientific American
  | volume = 236
  | issue = 5
  | pages = 119–127
  | year = 1977
  | url = http://www-stat.stanford.edu/~ckirby/brad/other/Article1977.pdf
  | doi = 10.1038/scientificamerican0577-119 }}
* {{cite book
  | last1 = Lehmann
  | first1 = E. L.
  | last2 = Casella
  | first2 = G.
  | title = Theory of Point Estimation
  | year = 1998
  | edition = 2nd
  | chapter = 5
  | isbn = 0-471-05849-1 }}
* {{cite conference
  | last = Stein
  | first = C.
  | authorlink = Charles Stein (statistician)
  | title = Inadmissibility of the usual estimator for the mean of a multivariate distribution
  | url = http://projecteuclid.org/euclid.bsmsp/1200501656
  | booktitle = Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability
  | volume = 1
  | pages = 197–206
  | date = 1956
  | mr = 0084922 }}
 
[[Category:Decision theory]]
[[Category:Mathematical examples]]
[[Category:Statistical paradoxes]]
