{{Probability distribution
| name       = Inverse Gaussian
| type       = density
| pdf_image  = [[Image:PDF invGauss.svg|325px]]
| cdf_image  =
| parameters = <math>\lambda > 0</math><br /><math>\mu > 0</math>
| support    = <math>x \in (0,\infty)</math>
| pdf        = <math>\left[\frac{\lambda}{2 \pi x^3}\right]^{1/2} \exp\left(\frac{-\lambda (x-\mu)^2}{2 \mu^2 x}\right)</math>
| cdf        = <math>\Phi\left(\sqrt{\frac{\lambda}{x}} \left(\frac{x}{\mu}-1 \right)\right) +\exp\left(\frac{2 \lambda}{\mu}\right) \Phi\left(-\sqrt{\frac{\lambda}{x}}\left(\frac{x}{\mu}+1 \right)\right)</math>, where <math>\Phi(\cdot)</math> is the [[normal distribution|standard normal (standard Gaussian) distribution]] c.d.f.
| mean       = <math>\mu</math>
| median     =
| mode       = <math>\mu\left[\left(1+\frac{9 \mu^2}{4 \lambda^2}\right)^\frac{1}{2}-\frac{3 \mu}{2 \lambda}\right]</math>
| variance   = <math>\frac{\mu^3}{\lambda}</math>
| skewness   = <math>3\left(\frac{\mu}{\lambda}\right)^{1/2}</math>
| kurtosis   = <math>\frac{15 \mu}{\lambda}</math>
| entropy    =
| mgf        = <math>e^{\left(\frac{\lambda}{\mu}\right)\left[1-\sqrt{1-\frac{2\mu^2 t}{\lambda}}\right]}</math>
| char       = <math>e^{\left(\frac{\lambda}{\mu}\right)\left[1-\sqrt{1-\frac{2\mu^2 \mathrm{i}t}{\lambda}}\right]}</math>
}}
In [[probability theory]], the '''inverse Gaussian distribution''' (also known as the '''Wald distribution''') is a two-parameter family of [[continuous probability distribution]]s with [[support (mathematics)|support]] on (0,∞). Its [[probability density function]] is given by

: <math> f(x;\mu,\lambda) = \left[\frac{\lambda}{2 \pi x^3}\right]^{1/2} \exp\left(\frac{-\lambda (x-\mu)^2}{2 \mu^2 x}\right)</math>

for ''x'' > 0, where <math>\mu > 0</math> is the mean and <math>\lambda > 0</math> is the shape parameter.

As λ tends to infinity, the inverse Gaussian distribution approaches a [[normal distribution|normal (Gaussian) distribution]], and it shares several properties with the Gaussian. The name can be misleading: it is an "inverse" only in that, while the Gaussian describes the level of a [[Wiener process|Brownian motion]] at a fixed time, the inverse Gaussian describes the distribution of the time a Brownian motion with positive drift takes to reach a fixed positive level. Its cumulant generating function (the logarithm of the characteristic function) is the inverse of the cumulant generating function of a Gaussian random variable.

To indicate that a [[random variable]] ''X'' is inverse Gaussian-distributed with mean μ and shape parameter λ we write

:<math>X \sim IG(\mu, \lambda).</math>
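As a quick numerical sanity check (not part of the original article; the parameter values are illustrative), the density above can be coded directly and its total mass and mean verified by a Riemann sum:

```python
import numpy as np

def invgauss_pdf(x, mu, lam):
    """Inverse Gaussian density f(x; mu, lambda) for x > 0."""
    return np.sqrt(lam / (2 * np.pi * x**3)) * np.exp(-lam * (x - mu)**2 / (2 * mu**2 * x))

# Riemann-sum check: the density should integrate to ~1 and have mean ~mu.
x = np.linspace(1e-6, 60.0, 600000)
dx = x[1] - x[0]
pdf = invgauss_pdf(x, mu=1.0, lam=3.0)
mass = np.sum(pdf) * dx       # should be close to 1
mean = np.sum(x * pdf) * dx   # should be close to mu = 1.0
```

The truncation at 60 is harmless here because the density of IG(1, 3) decays exponentially in both tails.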
==Properties==

===Summation===
If ''X''<sub>''i''</sub> has an IG(''μ''<sub>0</sub>''w''<sub>''i''</sub>, ''λ''<sub>0</sub>''w''<sub>''i''</sub><sup>2</sup>) distribution for ''i'' = 1, 2, ..., ''n'' and all ''X''<sub>''i''</sub> are [[statistical independence|independent]], then

:<math>
S=\sum_{i=1}^n X_i \sim IG \left( \mu_0 \sum w_i, \lambda_0 \left(\sum w_i \right)^2 \right).
</math>

Note that

:<math>
\frac{\textrm{Var}(X_i)}{\textrm{E}(X_i)}= \frac{\mu_0^2 w_i^2 }{\lambda_0 w_i^2 }=\frac{\mu_0^2}{\lambda_0}
</math>

is constant for all ''i''. This is a [[Necessary and sufficient conditions|necessary condition]] for the sum to be inverse Gaussian; without it, ''S'' would not be inverse Gaussian.
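The summation property is easy to check by simulation, since NumPy's <code>wald(mean, scale)</code> draws from IG(mean, scale). A sketch with illustrative parameter values (not from the article), comparing the simulated sum's moments with the claimed IG law:

```python
import numpy as np

rng = np.random.default_rng(0)
mu0, lam0 = 2.0, 5.0
w = np.array([1.0, 2.0, 3.0])
n = 200000

# Draw X_i ~ IG(mu0*w_i, lam0*w_i**2) independently and sum over i.
S = sum(rng.wald(mu0 * wi, lam0 * wi**2, n) for wi in w)

# Claimed law of the sum: IG(mu0*sum(w), lam0*sum(w)**2).
mu_s = mu0 * w.sum()                    # mean of the sum, here 12.0
var_s = mu_s**3 / (lam0 * w.sum()**2)   # variance mu^3/lambda, here 9.6
```

Matching the first two moments is of course only a spot check, not a proof of the distributional identity.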
===Scaling===
For any ''t'' > 0 it holds that

:<math>
X \sim IG(\mu,\lambda) \quad \Rightarrow \quad tX \sim IG(t\mu,t\lambda).
</math>

===Exponential family===
The inverse Gaussian distribution is a two-parameter [[exponential family]] with [[natural parameters]] −λ/(2μ²) and −λ/2, and [[natural statistics]] ''X'' and 1/''X''.
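Expanding the square in the exponent of the density makes this explicit:

:<math>
-\frac{\lambda (x-\mu)^2}{2 \mu^2 x} = -\frac{\lambda}{2\mu^2}\,x + \frac{\lambda}{\mu} - \frac{\lambda}{2}\,\frac{1}{x},
</math>

so the density factors as

:<math>
f(x;\mu,\lambda) = \sqrt{\frac{\lambda}{2\pi}}\, x^{-3/2}\, e^{\lambda/\mu} \exp\left( -\frac{\lambda}{2\mu^2}\, x - \frac{\lambda}{2}\,\frac{1}{x} \right),
</math>

with −λ/(2μ²) and −λ/2 multiplying the statistics ''x'' and 1/''x'' respectively.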
==Relationship with Brownian motion==
The [[stochastic process]] ''X''<sub>''t''</sub> given by

:<math>X_0 = 0,</math>
:<math>X_t = \nu t + \sigma W_t</math>

(where ''W''<sub>''t''</sub> is a standard [[Wiener process|Brownian motion]] and <math>\nu > 0</math>) is a Brownian motion with drift ''ν''. The [[first passage time]] of ''X''<sub>''t''</sub> to a fixed level <math>\alpha > 0</math> is then inverse Gaussian-distributed:

:<math>T_\alpha = \inf\{ t > 0 \mid X_t=\alpha \} \sim IG(\tfrac\alpha\nu, \tfrac {\alpha^2} {\sigma^2}).</math>
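A discretized simulation agrees with this first-passage law. This is a sketch with illustrative values; note that monitoring the path only on a time grid introduces a small positive bias in the hitting times, so the tolerances below are loose:

```python
import numpy as np

rng = np.random.default_rng(1)
nu, sigma, alpha = 1.0, 0.5, 1.0          # drift, volatility, barrier (illustrative)
dt, n_steps, n_paths = 0.004, 5000, 1000  # time horizon n_steps*dt = 20

# Euler paths of X_t = nu*t + sigma*W_t; record the first grid time with X_t >= alpha.
steps = nu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = np.cumsum(steps, axis=1)
crossed = paths.max(axis=1) >= alpha      # with this drift, essentially every path crosses
T = (np.argmax(paths >= alpha, axis=1)[crossed] + 1) * dt

# Theory: T_alpha ~ IG(alpha/nu, alpha**2/sigma**2).
mean_T = alpha / nu                               # = 1.0
var_T = (alpha / nu)**3 / (alpha**2 / sigma**2)   # mu^3/lambda = 0.25
```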
===When drift is zero===
A common special case of the above arises when the Brownian motion has no drift. In that case, the parameter μ tends to infinity, and the first passage time for a fixed level α has probability density function

: <math> f \left( x; 0, \left(\frac{\alpha}{\sigma}\right)^2 \right) = \frac{\alpha}{\sigma \sqrt{2 \pi x^3}} \exp\left(-\frac{\alpha^2 }{2 x \sigma^2}\right).</math>

This is a [[Lévy distribution]] with parameters <math>c=\frac{\alpha^2}{\sigma^2}</math> and <math>\mu=0</math>.
==Maximum likelihood==
The model in which

:<math>
X_i \sim IG(\mu,\lambda w_i), \quad i=1,2,\ldots,n
</math>

with all ''w''<sub>''i''</sub> known, (''μ'', ''λ'') unknown and all ''X''<sub>''i''</sub> [[statistical independence|independent]] has the likelihood function

:<math>
L(\mu, \lambda)=
\left( \frac{\lambda}{2\pi} \right)^\frac n 2
\left( \prod^n_{i=1} \frac{w_i}{X_i^3} \right)^{\frac{1}{2}}
\exp\left(\frac{\lambda}{\mu}\sum_{i=1}^n w_i -\frac{\lambda}{2\mu^2}\sum_{i=1}^n w_i X_i - \frac\lambda 2 \sum_{i=1}^n w_i \frac1{X_i} \right).
</math>

Solving the likelihood equations yields the maximum likelihood estimates

:<math>
\hat{\mu}= \frac{\sum_{i=1}^n w_i X_i}{\sum_{i=1}^n w_i}, \qquad \frac{1}{\hat{\lambda}}= \frac{1}{n} \sum_{i=1}^n w_i \left( \frac{1}{X_i}-\frac{1}{\hat{\mu}} \right).
</math>

The estimators <math>\hat{\mu}</math> and <math>\hat{\lambda}</math> are independent, and

:<math>
\hat{\mu} \sim IG \left(\mu, \lambda \sum_{i=1}^n w_i \right), \qquad \frac{n}{\hat{\lambda}} \sim \frac{1}{\lambda} \chi^2_{n-1}.
</math>
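The estimators above are easy to exercise on simulated data. A sketch, taking equal weights and illustrative parameter values (neither is from the article):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, lam = 3.0, 2.0
w = np.ones(50000)                 # all weights known; take w_i = 1 for simplicity

# With w_i = 1, X_i ~ IG(mu, lam), which is numpy's wald(mu, lam).
X = rng.wald(mu, lam, size=w.size)

# Plug into the closed-form MLEs.
mu_hat = np.sum(w * X) / np.sum(w)
lam_hat = 1.0 / np.mean(w * (1.0 / X - 1.0 / mu_hat))
```

Both estimates should land close to the true (μ, λ) = (3, 2) at this sample size.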
==Generating random variates from an inverse-Gaussian distribution==
The following algorithm may be used.<ref>''Generating Random Variates Using Transformations with Multiple Roots'' by John R. Michael, William R. Schucany and Roy W. Haas, ''[[American Statistician]]'', Vol. 30, No. 2 (May, 1976), pp. 88–90</ref>

Generate a random variate from a normal distribution with mean 0 and standard deviation 1,

:<math>
\nu \sim N(0,1).
</math>

Square the value,

:<math>
y = \nu^2,
</math>

and apply the transformation

:<math>
x = \mu + \frac{\mu^2 y}{2\lambda} - \frac{\mu}{2\lambda}\sqrt{4\mu \lambda y + \mu^2 y^2}.
</math>

Generate another random variate, this time from a uniform distribution between 0 and 1,

:<math>
z \sim U(0,1).
</math>

If

:<math>
z \le \frac{\mu}{\mu+x},
</math>

return <math>x</math>; otherwise return <math>\frac{\mu^2}{x}</math>.
Sample code in [[Java (programming language)|Java]]:
<source lang="java">
public double inverseGaussian(double mu, double lambda) {
    Random rand = new Random();
    double v = rand.nextGaussian();   // sample from a normal distribution with mean 0, standard deviation 1
    double y = v * v;
    double x = mu + (mu * mu * y) / (2 * lambda)
             - (mu / (2 * lambda)) * Math.sqrt(4 * mu * lambda * y + mu * mu * y * y);
    double test = rand.nextDouble();  // sample from a uniform distribution between 0 and 1
    if (test <= mu / (mu + x)) {
        return x;
    } else {
        return (mu * mu) / x;
    }
}
</source>
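The same transformation is easy to sanity-check in Python (a sketch with illustrative parameters) by comparing sample moments against the distribution's mean μ and variance μ³/λ:

```python
import numpy as np

def inverse_gaussian(mu, lam, rng):
    """One IG(mu, lam) variate via the transformation-with-multiple-roots method."""
    y = rng.standard_normal() ** 2
    x = mu + (mu * mu * y) / (2 * lam) \
        - (mu / (2 * lam)) * np.sqrt(4 * mu * lam * y + mu * mu * y * y)
    if rng.random() <= mu / (mu + x):
        return x
    return mu * mu / x

rng = np.random.default_rng(4)
samples = np.array([inverse_gaussian(2.0, 5.0, rng) for _ in range(100000)])
# For IG(2, 5): mean = 2.0, variance = 2**3 / 5 = 1.6
```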
[[File:Wald Distribution matplotlib.jpg|thumb|right|Wald Distribution using Python with aid of matplotlib and NumPy]]
To plot the Wald distribution in [[Python (programming language)|Python]] using [[matplotlib]] and [[Numpy|NumPy]]:
<source lang="python">
import matplotlib.pyplot as plt
import numpy as np

# density=True normalizes the histogram (replacing the old "normed" keyword,
# which has been removed from matplotlib)
h = plt.hist(np.random.wald(3, 2, 100000), bins=200, density=True)
plt.show()
</source>
==Related distributions==
* If <math> X \sim \textrm{IG}(\mu,\lambda)</math>, then <math> k X \sim \textrm{IG}(k \mu,k \lambda)</math> for any ''k'' > 0.
* If <math> X_i \sim \textrm{IG}(\mu,\lambda)</math>, then <math> \sum_{i=1}^{n} X_i \sim \textrm{IG}(n \mu,n^2 \lambda)</math>.
* If <math> X_i \sim \textrm{IG}(\mu,\lambda)</math> for <math>i=1,\ldots,n</math>, then <math> \bar{X} \sim \textrm{IG}(\mu,n \lambda)</math>.
* If <math> X_i \sim \textrm{IG}(\mu_i,2 \mu^2_i)</math>, then <math> \sum_{i=1}^{n} X_i \sim \textrm{IG}\left(\sum_{i=1}^n \mu_i, 2 {\left( \sum_{i=1}^{n} \mu_i \right)}^2\right)</math>.
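The sample-mean relation in the list above can be spot-checked numerically (a sketch with illustrative parameter values, not from the article):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, lam, n = 2.0, 4.0, 10

# Many replications of the mean of n i.i.d. IG(mu, lam) draws.
Xbar = rng.wald(mu, lam, size=(200000, n)).mean(axis=1)

# Claim: Xbar ~ IG(mu, n*lam), so mean mu and variance mu**3/(n*lam).
var_theory = mu**3 / (n * lam)   # = 0.2
```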
The convolution of a Wald distribution and an exponential (the ex-Wald distribution) is used as a model for response times in psychology.<ref name=Schwarz2001>Schwarz W (2001) The ex-Wald distribution as a descriptive model of response times. Behav Res Methods Instrum Comput 33(4):457–469</ref>
==History==
This distribution appears to have been first derived by Schrödinger in 1915 as the time to first passage of a Brownian motion.<ref name=Schrodinger1915>Schrödinger E (1915) Zur Theorie der Fall- und Steigversuche an Teilchen mit Brownscher Bewegung. Physikalische Zeitschrift 16, 289–295</ref> The name "inverse Gaussian" was proposed by Tweedie in 1945.<ref name=Folks1978>Folks JL & Chhikara RS (1978) The inverse Gaussian and its statistical application: a review. J Roy Stat Soc 40(3) 263–289</ref> Wald re-derived this distribution in 1947 as the limiting form of a sample in a sequential probability ratio test. Tweedie investigated it further in 1957 and established some of its statistical properties.
==Software==
The R programming language has software for this distribution (the SuppDists package). [http://www.stat.ucl.ac.be/ISdidactique/Rhelp/library/SuppDists/html/invGauss.html]

==See also==
* [[Generalized inverse Gaussian distribution]]
* [[Tweedie distributions]]: the inverse Gaussian distribution is a member of the family of Tweedie [[exponential dispersion model]]s
* [[Stopping time]]

==Notes==
{{Reflist}}

==References==
* ''The Inverse Gaussian Distribution: Theory, Methodology, and Applications'' by Raj Chhikara and Leroy Folks, 1989, ISBN 0-8247-7997-5
* ''System Reliability Theory'' by Marvin Rausand and [[Arnljot Høyland]]
* ''The Inverse Gaussian Distribution'' by V. Seshadri, Oxford University Press, 1993

==External links==
* [http://mathworld.wolfram.com/InverseGaussianDistribution.html Inverse Gaussian Distribution] at Wolfram MathWorld.

{{ProbDistributions|continuous-semi-infinite}}

{{DEFAULTSORT:Inverse Gaussian Distribution}}
[[Category:Continuous distributions]]
[[Category:Exponential family distributions]]
[[Category:Probability distributions]]
Revision as of 04:02, 2 September 2013