Half-normal distribution
{{Probability distribution
| name = Half-normal distribution
| type = density
| pdf_image =
| cdf_image =
| notation =
| parameters = {{nowrap|''σ''<sup>2</sup> > 0}} ([[scale parameter|scale]])
| support = {{nowrap|''x'' ∈ [0,<math>\infty</math>)}}
| pdf = <math>f(x; \sigma) = \frac{\sqrt{2}}{\sigma\sqrt{\pi}}\exp \left( -\frac{x^2}{2\sigma^2} \right) \quad x>0</math>
| cdf = <math>F(x; \sigma) = \mbox{erf}\left(\frac{x}{\sqrt{2}\sigma}\right)</math>
| mean = <math>\frac{\sigma\sqrt{2}}{\sqrt{\pi}}</math>
| median = <math>\sigma\sqrt{2}\,\mbox{erf}^{-1}(1/2)</math>
| mode = <math>0</math>
| variance = <math>\sigma^2\left(1 - \frac{2}{\pi}\right)</math>
| skewness =
| kurtosis = <!-- DO NOT REPLACE THIS WITH THE OLD-STYLE KURTOSIS -->
| entropy = <math> \frac{1}{2} \log \left( \frac{ \pi \sigma^2 }{2} \right) + \frac{1}{2} </math>
| mgf =
| char =
| fisher =
}}
The '''half-normal distribution''' is a special case of the [[folded normal distribution]].

Let <math>X</math> follow an ordinary [[normal distribution]] <math>N(0,\sigma^2)</math>; then <math>Y=|X|</math> follows a half-normal distribution. Thus, the half-normal distribution is a fold at the mean of an ordinary normal distribution with mean zero.

Using the <math>\sigma</math> parametrization of the normal distribution, the [[probability density function]] (PDF) of the half-normal is given by
: <math>f_Y(y; \sigma) = \frac{\sqrt{2}}{\sigma\sqrt{\pi}}\exp \left( -\frac{y^2}{2\sigma^2} \right) \quad y>0,</math>
where <math>E[Y] = \mu = \frac{\sigma\sqrt{2}}{\sqrt{\pi}}</math>.
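As a minimal numerical sketch of the formulas above (Python standard library only; the helper names are illustrative, not from any particular package):

```python
import math

def halfnormal_pdf(y, sigma):
    """PDF of the half-normal distribution with scale sigma (support y >= 0)."""
    if y < 0:
        return 0.0
    return (math.sqrt(2.0) / (sigma * math.sqrt(math.pi))) * math.exp(-y * y / (2.0 * sigma * sigma))

def halfnormal_mean(sigma):
    """E[Y] = sigma * sqrt(2 / pi)."""
    return sigma * math.sqrt(2.0 / math.pi)
```

At ''y'' = 0 the density attains its maximum √2/(σ√π), exactly twice the peak of the underlying normal density, since the two halves of the normal are folded onto one.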
Alternatively, using a scaled precision (inverse of the variance) parametrization (to avoid issues if <math>\sigma</math> is near zero), obtained by setting <math>\theta=\frac{\sqrt{\pi}}{\sigma\sqrt{2}}</math>, the [[probability density function]] is given by
: <math>f_Y(y; \theta) = \frac{2\theta}{\pi}\exp \left( -\frac{y^2\theta^2}{\pi} \right) \quad y>0,</math>
where <math>E[Y] = \mu = \frac{1}{\theta}</math>.
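One can verify that the two parametrizations describe the same density; the following sketch (illustrative names, standard library only) evaluates both forms:

```python
import math

def pdf_sigma(y, sigma):
    # sigma-parametrized half-normal density
    return math.sqrt(2.0) / (sigma * math.sqrt(math.pi)) * math.exp(-y * y / (2.0 * sigma * sigma))

def pdf_theta(y, theta):
    # scaled-precision parametrization, with theta = sqrt(pi) / (sigma * sqrt(2))
    return (2.0 * theta / math.pi) * math.exp(-y * y * theta * theta / math.pi)

sigma = 1.7
theta = math.sqrt(math.pi) / (sigma * math.sqrt(2.0))
# pdf_sigma(y, sigma) and pdf_theta(y, theta) agree for every y > 0,
# and 1/theta equals the mean sigma * sqrt(2/pi)
```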
The [[cumulative distribution function]] (CDF) is given by
: <math>F_Y(y; \sigma) = \int_0^y \frac{1}{\sigma}\sqrt{\frac{2}{\pi}} \, \exp \left( -\frac{x^2}{2\sigma^2} \right)\, dx.</math>
Using the change of variables <math>z = x/(\sqrt{2}\sigma)</math>, the CDF can be written as
: <math>F_Y(y; \sigma) = \frac{2}{\sqrt{\pi}} \int_0^{y/(\sqrt{2}\sigma)}\exp \left(-z^2\right)\,dz = \mbox{erf}\left(\frac{y}{\sqrt{2}\sigma}\right),</math>
where erf(''x'') is the [[error function]], a standard function in many mathematical software packages.
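In Python, for example, the error function is available as <code>math.erf</code>, so the CDF reduces to a one-liner (helper name is ours):

```python
import math

def halfnormal_cdf(y, sigma):
    """CDF of the half-normal distribution: erf(y / (sqrt(2) * sigma))."""
    if y <= 0.0:
        return 0.0
    return math.erf(y / (math.sqrt(2.0) * sigma))
```

Consistent with the median formula σ√2 erf<sup>−1</sup>(1/2), the CDF equals 1/2 at ''y'' ≈ 0.6745σ.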
The expectation is then given by
: <math>E(Y) = \sigma \sqrt{2/\pi}.</math>
The variance is given by
: <math>\operatorname{Var}(Y) = \sigma^2\left(1 - \frac{2}{\pi}\right).</math>
Since this is proportional to the variance σ<sup>2</sup> of ''X'', σ can be seen as a [[scale parameter]] of the new distribution.
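Both moments are easy to check by Monte Carlo, using the defining construction ''Y'' = |''X''| with ''X'' ~ N(0, σ²) (a sketch with the standard library; sample size and seed are arbitrary choices):

```python
import math
import random

random.seed(0)
sigma = 2.0
# |X| with X ~ N(0, sigma^2) is half-normal by definition
samples = [abs(random.gauss(0.0, sigma)) for _ in range(200_000)]

sample_mean = sum(samples) / len(samples)
sample_var = sum((s - sample_mean) ** 2 for s in samples) / len(samples)

theory_mean = sigma * math.sqrt(2.0 / math.pi)       # sigma * sqrt(2/pi)
theory_var = sigma * sigma * (1.0 - 2.0 / math.pi)   # sigma^2 * (1 - 2/pi)
```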
The entropy of the half-normal distribution is exactly one bit less than the entropy of a zero-mean normal distribution with the same second moment about 0. This can be understood intuitively, since the magnitude operator reduces information by one bit (if the probability distribution at its input is even). Alternatively, since a half-normal distribution is always positive, the one bit it would take to record whether a standard normal random variable were positive (say, a 1) or negative (say, a 0) is no longer necessary. Thus,
: <math> H(Y) = \frac{1}{2} \log \left( \frac{ \pi \sigma^2 }{2} \right) + \frac{1}{2}. </math>
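The one-bit gap can be confirmed directly: subtracting the half-normal entropy above from the differential entropy of N(0, σ²), which is ½ log(2πeσ²), leaves log 2 nats for every σ (a quick check, helper names ours):

```python
import math

def halfnormal_entropy(sigma):
    # H(Y) = 0.5 * log(pi * sigma^2 / 2) + 0.5, in nats
    return 0.5 * math.log(math.pi * sigma * sigma / 2.0) + 0.5

def normal_entropy(sigma):
    # differential entropy of N(0, sigma^2): 0.5 * log(2 * pi * e * sigma^2)
    return 0.5 * math.log(2.0 * math.pi * math.e * sigma * sigma)

# the gap is log(2) nats, i.e. exactly one bit, independent of sigma
```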
== Parameter estimation ==
Given numbers <math>\{x_i\}_{i=1}^n</math> drawn from a half-normal distribution, the unknown parameter <math>\sigma</math> of that distribution can be estimated by the method of [[maximum likelihood]], giving
: <math> \hat \sigma = \sqrt{\frac 1 n \sum_{i=1}^n x_i^2}.</math>
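The estimator is just the root mean square of the data, so it is a one-line computation (illustrative function name):

```python
import math

def halfnormal_mle_sigma(xs):
    """Maximum-likelihood estimate of sigma: the root mean square of the data."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))
```

For example, the data {3, 4} give the estimate √(25/2) ≈ 3.536.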
== Related distributions ==
* The distribution is a special case of the [[folded normal distribution]] with μ = 0.
* (''Y''/σ)<sup>2</sup> has a [[chi square distribution]] with 1 degree of freedom.
* ''Y''/σ has a [[chi distribution]] with 1 degree of freedom: if <math>Y\sim HN(\sigma)</math>, then <math>Y/\sigma \sim \chi_1</math>.
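The chi-squared relation can be illustrated empirically: (''Y''/σ)² should reproduce the mean (1) and variance (2) of a χ² distribution with 1 degree of freedom (a Monte Carlo sketch; seed and sample size are arbitrary):

```python
import random

random.seed(1)
sigma = 1.3
# (Y / sigma)^2 with Y half-normal should match chi-squared with 1 dof
zs = [(abs(random.gauss(0.0, sigma)) / sigma) ** 2 for _ in range(100_000)]
mean_z = sum(zs) / len(zs)
var_z = sum((z - mean_z) ** 2 for z in zs) / len(zs)
# chi-squared_1 has mean 1 and variance 2
```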
== External links ==
* [http://mathworld.wolfram.com/Half-NormalDistribution.html Half-Normal Distribution] at [[MathWorld]]

{{ProbDistributions|continuous-semi-infinite}}

[[Category:Continuous distributions]]
[[Category:Normal distribution]]
[[Category:Probability distributions]]