The '''Gram–Charlier A series''' (named in honor of [[Jørgen Pedersen Gram]] and [[Carl Charlier]]), and the '''Edgeworth series''' (named in honor of [[Francis Ysidro Edgeworth]]) are [[series (mathematics)|series]] that approximate a [[probability distribution]] in terms of its [[cumulant]]s. The series are the same, but the arrangement of terms (and thus the accuracy of truncating the series) differs.
 
==Gram–Charlier A series==
 
The key idea of these expansions is to write the [[Characteristic function (probability theory)|characteristic function]] of the distribution whose [[probability density function]] is <math>\mathit{F}</math> to be approximated in terms of the characteristic function of a distribution with known and suitable properties, and to recover <math>\mathit{F}</math>  through the inverse [[Fourier transform]].
 
Let <math>\mathit{f}</math> be the characteristic function of the distribution whose density function is <math>\mathit{F}</math>, and <math>\kappa_r</math> its [[cumulant]]s. We expand in terms of a known distribution with probability density function <math>\Psi</math>, characteristic function <math>\psi</math>, and cumulants <math>\gamma_r</math>. The density <math>\Psi</math> is generally chosen to be that of the [[normal distribution]], but other choices are possible as well. By the definition of the cumulants, we have the following formal identity:
 
:<math>f(t)=\exp\left[\sum_{r=1}^\infty(\kappa_r-\gamma_r)\frac{(it)^r}{r!}\right]\psi(t)\,.</math>
 
By the properties of the Fourier transform, (''it'')<sup>''r''</sup>ψ(''t'') is the Fourier transform of (−1)<sup>''r''</sup> ''D''<sup>''r''</sup> <math>\Psi</math>(''x''), where ''D'' is the differential operator with respect to ''x''. Thus, we find for ''F'' the formal expansion
 
:<math>F(x) = \exp\left[\sum_{r=1}^\infty(\kappa_r - \gamma_r)\frac{(-D)^r}{r!}\right]\Psi(x)\,.</math>
 
If <math>\Psi</math> is chosen as the normal density with mean and variance as given by ''F'', that is, mean μ = κ<sub>1</sub> and variance  σ<sup>2</sup> = κ<sub>2</sub>, then the expansion becomes
 
:<math>
F(x) = \exp\left[\sum_{r=3}^\infty\kappa_r\frac{(-D)^r}{r!}\right]\frac{1}{\sqrt{2\pi}\sigma}\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]\,.</math>
 
By expanding the exponential and collecting terms according to the order of the derivatives, we arrive at the Gram–Charlier A series. If we include only the first two correction terms to the normal distribution, we obtain
 
:<math> F(x) \approx \frac{1}{\sqrt{2\pi}\sigma}\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]\left[1+\frac{\kappa_3}{3!\sigma^3}H_3\left(\frac{x-\mu}{\sigma}\right)+\frac{\kappa_4}{4!\sigma^4}H_4\left(\frac{x-\mu}{\sigma}\right)\right]\,,</math>
 
with ''H''<sub>3</sub>(''x'') = ''x''<sup>3</sup> − 3''x'' and
''H''<sub>4</sub>(''x'') = ''x''<sup>4</sup> − 6''x''<sup>2</sup> + 3 (these are [[Hermite polynomials]]).
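As an illustrative sketch (not part of the original derivation), the truncated series above can be evaluated directly. The function name and signature below are hypothetical:

```python
import math

def gram_charlier_density(x, mu, sigma, kappa3, kappa4):
    """Gram-Charlier A density truncated after the first two correction
    terms, using H3(z) = z^3 - 3z and H4(z) = z^4 - 6z^2 + 3."""
    z = (x - mu) / sigma
    # Normal density with mean mu and standard deviation sigma
    normal = math.exp(-0.5 * z * z) / (math.sqrt(2 * math.pi) * sigma)
    h3 = z**3 - 3 * z
    h4 = z**4 - 6 * z**2 + 3
    # 3! = 6 and 4! = 24 in the denominators
    correction = 1 + kappa3 / (6 * sigma**3) * h3 + kappa4 / (24 * sigma**4) * h4
    return normal * correction
```

With κ<sub>3</sub> = κ<sub>4</sub> = 0 this reduces to the plain normal density, as expected.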
 
Note that this expression is not guaranteed to be positive, and is therefore not a valid probability distribution. The Gram–Charlier A series diverges in many cases of interest—it converges only if ''F''(''x'') falls off faster than exp(−''x''<sup>2</sup>/4) at infinity (Cramér 1957). When it does not converge, the series is also not a true [[asymptotic expansion]], because it is not possible to estimate the error of the expansion. For this reason, the Edgeworth series (see next section) is generally preferred over the Gram–Charlier A series.
 
==Edgeworth series==
 
Edgeworth developed a similar expansion as an improvement to the [[central limit theorem]]. The advantage of the Edgeworth series is that the error is controlled, so that it is a true [[asymptotic expansion]].
 
Let {''X''<sub>''i''</sub>} be a sequence of [[independent and identically distributed]] random variables with mean ''μ'' and variance ''σ''<sup>2</sup>, and let ''Y''<sub>''n''</sub> be their standardized sums:
: <math>
    Y_n = \frac{1}{\sqrt{n}} \sum_{i=1}^n \frac{X_i - \mu}{\sigma}.
  </math>
 
Let ''F''<sub>''n''</sub> denote the [[cumulative distribution function]]s of the variables ''Y''<sub>''n''</sub>. Then by the central limit theorem,
: <math>
    \lim_{n\to\infty} F_n(x) = \Phi(x) \equiv \int_{-\infty}^x \tfrac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}q^2}dq
  </math>
 
for every ''x'', as long as the mean and variance are finite.
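This convergence can be checked numerically. The Monte-Carlo sketch below is illustrative only; the choice of Exponential(1) variables for the ''X''<sub>''i''</sub> (so that μ = σ = 1) and the function name are assumptions:

```python
import math
import random

random.seed(0)

def standardized_mean_cdf_at(x, n, trials=10000):
    """Monte-Carlo estimate of P(Y_n <= x), where Y_n is the
    standardized sum of n i.i.d. Exponential(1) variables
    (mean 1, variance 1)."""
    hits = 0
    for _ in range(trials):
        s = sum(random.expovariate(1.0) - 1.0 for _ in range(n))
        if s / math.sqrt(n) <= x:
            hits += 1
    return hits / trials

# Phi(0) = 0.5; at finite n the estimate sits slightly above it,
# reflecting the positive skewness of the exponential distribution.
print(standardized_mean_cdf_at(0.0, n=50))
```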
 
Now assume that the random variables ''X''<sub>''i''</sub> have mean μ, variance σ<sup>2</sup>, and higher cumulants κ<sub>''r''</sub>=σ<sup>''r''</sup>λ<sub>''r''</sub>. If we expand in terms of the standard normal distribution, that is, if we set
 
:<math>\Psi(x)=\frac{1}{\sqrt{2\pi}}\exp(-\tfrac{1}{2}x^2)</math>
 
then the cumulant differences in the formal expression of the characteristic function ''f''<sub>''n''</sub>(''t'') of ''F''<sub>''n''</sub> are
 
:<math> \kappa^{F(n)}_1-\gamma_1 = 0\,,</math>
 
:<math> \kappa^{F(n)}_2-\gamma_2 = 0\,,</math>
 
:<math> \kappa^{F(n)}_r-\gamma_r = \frac{\kappa_r}{\sigma^rn^{r/2-1}} = \frac{\lambda_r}{n^{r/2-1}}; \quad r\geq 3\,.</math>
 
The Edgeworth series is developed similarly to the Gram–Charlier A series, only that now terms are collected according to powers of ''n''. Thus, we have
 
:<math> f_n(t)=\left[1+\sum_{j=1}^\infty \frac{P_j(it)}{n^{j/2}}\right] \exp(-t^2/2)\,,</math>
 
where ''P''<sub>''j''</sub>(''x'') is a [[polynomial]] of degree 3''j''. Again, after an inverse Fourier transform, the distribution function ''F''<sub>''n''</sub> follows as
 
:<math> F_n(x) = \Phi(x) + \sum_{j=1}^\infty \frac{P_j(-D)}{n^{j/2}} \Phi(x)\,.</math>
 
The first five terms of the expansion are<ref>{{MathWorld |title=Edgeworth Series|urlname=EdgeworthSeries}}</ref>
: <math>\begin{align}
    F_n(x) =
      & \Phi(x) \\
      & - \frac{1}{n^{1/2}}\bigg( \tfrac{1}{6}\lambda_3\,\Phi^{(3)}(x) \bigg) \\
      & + \frac{1}{n}\bigg( \tfrac{1}{24}\lambda_4\,\Phi^{(4)}(x) + \tfrac{1}{72}\lambda_3^2\,\Phi^{(6)}(x) \bigg) \\
      & - \frac{1}{n^{3/2}}\bigg( \tfrac{1}{120}\lambda_5\,\Phi^{(5)}(x) + \tfrac{1}{144}\lambda_3\lambda_4\,\Phi^{(7)}(x) + \tfrac{1}{1296}\lambda_3^3\,\Phi^{(9)}(x)\bigg) \\
      & + \frac{1}{n^2}\bigg( \tfrac{1}{720}\lambda_6\,\Phi^{(6)}(x) + \big(\tfrac{1}{1152}\lambda_4^2 + \tfrac{1}{720}\lambda_3\lambda_5\big)\Phi^{(8)}(x) \\
      &\qquad\quad + \tfrac{1}{1728}\lambda_3^2\lambda_4\,\Phi^{(10)}(x) + \tfrac{1}{31104}\lambda_3^4\,\Phi^{(12)}(x) \bigg) \\
      & + O(n^{-5/2})\,.
  \end{align}
</math>
 
Here, {{math|<VAR>&Phi;</VAR><sup>(''j'')</sup>(''x'')}} is the ''j''-th derivative of {{math|<VAR>&Phi;</VAR>(·)}} at the point ''x''. Since the [[Normal_distribution#Symmetries_and_derivatives|derivatives of the density of the normal distribution]] satisfy ''ϕ''<sup>(''n'')</sup>(''x'') = (−1)<sup>''n''</sup>''H<sub>n</sub>''(''x'')''ϕ''(''x''), where ''H<sub>n</sub>'' is the [[Hermite polynomial]] of order ''n'', the expansion can equivalently be written in terms of the density function. Blinnikov and Moessner (1998) have given a simple algorithm to calculate higher-order terms of the expansion.
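Keeping only the first correction term, and using Φ<sup>(3)</sup>(''x'') = (''x''² − 1)''ϕ''(''x''), the one-term Edgeworth approximation can be sketched as follows (the function name is hypothetical):

```python
import math

def edgeworth_cdf_first_order(x, lam3, n):
    """One-term Edgeworth approximation to the CDF of the standardized
    sum: Phi(x) - lam3 / (6 sqrt(n)) * Phi'''(x), where
    Phi'''(x) = (x^2 - 1) * phi(x)."""
    phi = math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return Phi - lam3 / (6 * math.sqrt(n)) * (x * x - 1) * phi
```

With λ<sub>3</sub> = 0 this is just the normal CDF; a positive λ<sub>3</sub> pushes the approximate CDF above Φ at ''x'' = 0, as the skewness term predicts.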
 
 
==Illustration: density of the sample mean of three χ² variables==
 
[[File:Edgeworth expansion of the density of the sample mean of three Chi2 variables.png|thumb|Density of the sample mean of three χ² variables. The chart compares the true density, the normal approximation, and two Edgeworth expansions.]]
 
Take  <math> X_i \sim \chi^2(k=2) \qquad i=1, 2, 3 </math> and the sample mean <math> \bar X = \frac{1}{3} \sum_{i=1}^{3} X_i </math>.
 
We can use several distributions for <math> \bar X </math>:
* The exact distribution, which follows a [[gamma distribution]]: <math> \bar X \sim \mathrm{Gamma}\left(\alpha=n\cdot k /2, \theta= 2/n \right)</math> = <math>\mathrm{Gamma}\left(\alpha=3, \theta= 2/3 \right)</math>
* The asymptotic normal approximation: <math> \bar X  \xrightarrow{n \to \infty} N(k, 2\cdot k /n ) = N(2, 4/3 )</math>
* Two Edgeworth expansions, of degree 2 and 3
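As a rough numeric sketch of this comparison (the chart itself is in the figure above), the exact Gamma(3, 2/3) density and the N(2, 4/3) approximation can be evaluated at a few points; the function names are assumptions:

```python
import math

def gamma_density(x, alpha, theta):
    """Gamma(alpha, theta) density in the shape/scale parametrization."""
    return x**(alpha - 1) * math.exp(-x / theta) / (math.gamma(alpha) * theta**alpha)

def normal_density(x, mu, var):
    """Normal density with mean mu and variance var."""
    return math.exp(-0.5 * (x - mu)**2 / var) / math.sqrt(2 * math.pi * var)

# Sample mean of three chi^2(2) variables: exact law Gamma(3, 2/3),
# asymptotic approximation N(2, 4/3).
for x in (0.5, 1.0, 2.0, 4.0):
    exact = gamma_density(x, 3, 2 / 3)
    approx = normal_density(x, 2, 4 / 3)
    print(f"x={x}: exact={exact:.4f}, normal={approx:.4f}")
```

The discrepancy is largest in the tails, which is where the Edgeworth corrections help most.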
 
 
 
==Disadvantages of the Edgeworth expansion==
 
Edgeworth expansions can suffer from a few issues:
* They are not guaranteed to yield a proper [[probability distribution]], because:
** The density need not integrate to 1
** The density can be negative at some points
* They can be inaccurate, especially in the tails, mainly for two reasons:
** They are obtained from an expansion around the mean
** They guarantee (asymptotically) an [[Approximation error|absolute error]], not a relative one. This is an issue when one wants to approximate very small quantities, for which the absolute error may be small but the relative error large.
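The negativity issue is easy to exhibit. With μ = 0, σ = 1, and only the skewness term kept, the correction factor 1 + (κ<sub>3</sub>/6)(''x''³ − 3''x'') becomes negative near ''x'' = 1 once κ<sub>3</sub> > 3 (an illustrative sketch, not from the original text):

```python
import math

def gc_density(x, kappa3):
    """Gram-Charlier density with only the skewness term (mu=0, sigma=1)."""
    phi = math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
    return phi * (1 + kappa3 / 6 * (x**3 - 3 * x))

# x^3 - 3x attains its minimum value -2 at x = 1, so the factor
# 1 - kappa3/3 is negative whenever kappa3 > 3:
print(gc_density(1.0, 4.0))  # a negative "density" value
```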
 
 
 
==See also==
 
* [[Cornish–Fisher expansion]]
 
==References==
{{Reflist}}
 
==Further reading==
* [[Harald Cramér|H. Cramér]]. (1957). ''Mathematical Methods of Statistics''. Princeton University Press, Princeton.
* D. L. Wallace. (1958). "Asymptotic approximations to distributions". ''Annals of Mathematical Statistics'', ''29'': 635–654.
* M. Kendall & A. Stuart. (1977), ''The advanced theory of statistics'', Vol 1: Distribution theory, 4th Edition, Macmillan, New York.
* [[Peter McCullagh|P. McCullagh]] (1987). ''Tensor Methods in Statistics''.  Chapman and Hall, London.
* [[David Cox (statistician)|D. R. Cox]] and [[Ole Barndorff-Nielsen|O. E. Barndorff-Nielsen]] (1989). ''Asymptotic Techniques for Use in Statistics''.  Chapman and Hall, London.
* P. Hall (1992). ''The Bootstrap and Edgeworth Expansion''. Springer, New York.
* {{springer|title=Edgeworth series|id=p/e035060}}
* S. Blinnikov and R. Moessner (1998). [http://aas.aanda.org/articles/aas/pdf/1998/10/h0596.pdf ''Expansions for nearly Gaussian distributions'']. ''Astronomy and astrophysics Supplement series'', ''130'': 193–205.
* J. E. Kolassa (2006). ''Series Approximation Methods in Statistics'' (3rd ed.). (Lecture Notes in Statistics #88). Springer, New York.
 
{{DEFAULTSORT:Edgeworth Series}}
[[Category:Mathematical series]]
[[Category:Statistical theory]]
[[Category:Statistical approximations]]

