[[File:Sinc simple.svg|frame|200px|right|The characteristic function of a uniform ''U''(–1,1) random variable. This function is real-valued because it corresponds to a random variable that is symmetric around the origin; however, in the general case, characteristic functions may be complex-valued.]]
 
In [[probability theory]] and [[statistics]], the '''characteristic function''' of any [[real-valued]] [[random variable]] completely defines its [[probability distribution]]. If a random variable admits a [[probability density function]], then the characteristic function is the [[inverse Fourier transform]] of the probability density function. Thus it provides the basis of an alternative route to analytical results compared with working directly with [[probability density function]]s or [[cumulative distribution function]]s. There are particularly simple results for the characteristic functions of distributions defined by the weighted sums of random variables.
 
In addition to univariate distributions, characteristic functions can be defined for vector- or matrix-valued random variables, and can even be extended to more generic cases.
 
The characteristic function always exists when treated as a function of a real-valued argument, unlike the [[moment-generating function]]. There are relations between the behavior of the characteristic function of a distribution and properties of the distribution, such as the existence of moments and the existence of a density function.
 
==Introduction==
The characteristic function provides an alternative way of describing a [[random variable]]. Similarly to the [[cumulative distribution function]]
 
:<math>F_X(x) = \operatorname{E} \left [\mathbf{1}_{\{X\leq x\}} \right],</math>
 
(where '''1'''<sub>{''X ≤ x''}</sub> is the [[indicator function]] — it is equal to 1 when {{nowrap|''X ≤ x''}}, and zero otherwise), which completely determines the behavior and properties of the probability distribution of the random variable ''X'', the '''characteristic function'''
 
: <math>    \varphi_X(t) = \operatorname{E} \left [ e^{itX} \right ]</math>
 
also completely determines the behavior and properties of the probability distribution of the random variable ''X''. The two approaches are equivalent in the sense that each of the two functions can always be recovered from the other, yet they provide different insights into the features of the random variable. Moreover, in particular cases there can be differences in whether these functions can be represented as expressions involving simple standard functions.
 
If a random variable admits a [[probability density function|density function]], then the characteristic function is its [[Duality (mathematics)|dual]], in the sense that each of them is a [[Fourier transform]] of the other. If a random variable has a [[moment-generating function]], then the domain of the characteristic function can be extended to the complex plane, and
 
: <math>    \varphi_X(-it) = M_X(t). </math><ref>Lukacs (1970) p. 196</ref>
 
Note however that the characteristic function of a distribution always exists, even when the [[probability density function]] or [[moment-generating function]] do not.
 
The characteristic function approach is particularly useful in analysis of linear combinations of independent random variables: a classical proof of the [[Central Limit Theorem]] uses characteristic functions and [[Lévy's continuity theorem]]. Another important application is to the theory of the [[Indecomposable distribution|decomposability]] of random variables.
 
==Definition==
For a scalar random variable ''X'' the '''characteristic function''' is defined as the [[expected value]] of ''e<sup>itX</sup>'', where ''i'' is the [[imaginary unit]], and {{nowrap|''t'' ∈ '''R'''}} is the argument of the characteristic function:
 
:<math>\begin{cases} \varphi_X\!:\mathbf{R}\to\mathbf{C} \\ \varphi_X(t) = \operatorname{E}\left[e^{itX}\right] = \int_{\mathbf{R}} e^{itx}\,dF_X(x) = \int_{\mathbf{R}} e^{itx} f_X(x)\,dx = \int_0^1 e^{it Q_X(p)}\,dp \end{cases}</math>
 
Here ''F<sub>X</sub>'' is the [[cumulative distribution function]] of ''X'', and the integral is of the [[Riemann–Stieltjes integral|Riemann–Stieltjes]] kind. If the random variable ''X'' has a [[probability density function]] ''f<sub>X</sub>'', then the characteristic function is its [[Fourier transform]],<ref>{{harvtxt|Billingsley|1995}}</ref> and the expression above involving the density ''f<sub>X</sub>'' is valid. ''Q<sub>X</sub>''(''p'') is the inverse cumulative distribution function of ''X'', also called the [[quantile function]] of ''X''.<ref>{{Citation |last1=Shaw |first1=W. T. |last2=McCabe |first2=J. |year=2009 |title=Monte Carlo sampling given a Characteristic Function: Quantile Mechanics in Momentum Space |journal=Eprint-arXiv:0903.1592 }}</ref>
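
The defining expectation can be approximated directly by simulation. The following sketch (an illustration only; the choice of the standard normal distribution, the sample size and the grid of ''t'' values are arbitrary) compares the empirical average of ''e<sup>itX</sup>'' over simulated draws with the known characteristic function ''e''<sup>−''t''²/2</sup> of the standard normal distribution.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)           # draws from N(0, 1)
t = np.linspace(-3, 3, 7)

# empirical characteristic function: average of exp(i t X) over the sample
ecf = np.exp(1j * np.outer(t, x)).mean(axis=1)
exact = np.exp(-t**2 / 2)                  # known CF of the standard normal

print(np.max(np.abs(ecf - exact)))         # small, of order 1/sqrt(sample size)
</syntaxhighlight>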
 
Note, however, that this convention for the constants appearing in the definition of the characteristic function differs from the usual convention for the Fourier transform.<ref>{{harvtxt|Pinsky|2002}}</ref> For example, some authors<ref>{{harvtxt|Bochner|1955}}</ref> define {{nowrap|''φ<sub>X</sub>''(''t'') {{=}} E''e''<sup>−2''πitX''</sup>}}, which is essentially a change of parameter. Other notation may be encountered in the literature: <math style="vertical-align:-.3em">\scriptstyle\hat p</math> as the characteristic function for a probability measure ''p'', or <math style="vertical-align:-.3em">\scriptstyle\hat f</math> as the characteristic function corresponding to a density ''f''.
 
==Generalizations==
The notion of characteristic functions generalizes to multivariate random variables and more complicated [[random element]]s. The argument of the characteristic function always belongs to the [[continuous dual]] of the space in which the random variable ''X'' takes its values. Such definitions for common cases are listed below:
 
*If ''X'' is a ''k''-dimensional [[random vector]], then for {{nowrap|''t'' ∈ '''R'''<sup>''k''</sup>}}
::<math>    \varphi_X(t) = \operatorname{E}\left[\exp({i\,t^T\!X})\right],  </math>
 
*If ''X'' is a ''k×p''-dimensional [[random matrix]], then for {{nowrap|''t'' ∈ '''R'''<sup>''k×p''</sup>}}
::<math>    \varphi_X(t) = \operatorname{E}\left[\exp \left({i\,\operatorname{tr}(t^T\!X)} \right )\right], </math>
 
*If ''X'' is a [[complex number|complex]] [[random variable]], then for {{nowrap|''t'' ∈ '''C'''}} <ref>{{harvtxt|Andersen|Højbjerre|Sørensen|Eriksen|1995|loc=Definition 1.10}}</ref>
::<math>\varphi_X(t) = \operatorname{E}\left[\exp({i\,\operatorname{Re}(\overline{t}X)})\right], </math>
 
*If ''X'' is a ''k''-dimensional [[complex number|complex]] [[random vector]], then for {{nowrap|''t'' ∈ '''C'''<sup>''k''</sup>}}<ref>{{harvtxt|Andersen|Højbjerre|Sørensen|Eriksen|1995|loc=Definition 1.20}}</ref>
:: <math>    \varphi_X(t) = \operatorname{E}\left[\exp({i\,\operatorname{Re}(t^*\!X)})\right], </math>
 
*If ''X''(''s'') is a [[stochastic process]], then for all functions ''t''(''s'') such that the integral ∫<sub>'''R'''</sub>''t''(''s'')''X''(''s'')d''s'' converges for almost all realizations of ''X'' <ref>{{harvtxt|Sobczyk|2001|page=20}}</ref>
::<math>\varphi_X(t) = \operatorname{E}\left[\exp \left ({i\int_\mathbf{R} t(s)X(s)ds} \right ) \right].  </math>
 
Here <math>{}^T</math> denotes matrix [[transpose]], tr(·) — the matrix [[trace (linear algebra)|trace]] operator, Re(·) is the [[real part]] of a complex number, <i style="text-decoration:overline">z</i> denotes [[complex conjugate]], and * is [[conjugate transpose]] (that is {{nowrap|''z* {{=}} <span style{{=}}"text-decoration:overline">z</span><sup>T</sup>''}} ).
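
As a numerical illustration of the vector case E[exp(''i&thinsp;t<sup>T</sup>X'')], the sketch below estimates the characteristic function of a two-dimensional Gaussian random vector by simulation and compares it with the closed form exp(''i&thinsp;t<sup>T</sup>μ'' − ½&thinsp;''t<sup>T</sup>Σt'') listed in the table below; all parameter values are arbitrary illustrative choices.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, -0.5])
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
X = rng.multivariate_normal(mu, Sigma, size=200_000)   # shape (n, 2)

t = np.array([0.4, -0.7])
ecf = np.mean(np.exp(1j * X @ t))                      # E[exp(i t^T X)]
exact = np.exp(1j * t @ mu - 0.5 * t @ Sigma @ t)
print(ecf, exact)                                      # close for large n
</syntaxhighlight>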
 
== Examples ==
{|class="wikitable"
|-
! Distribution
! Characteristic function ''φ(t)''
|-
| [[Degenerate distribution|Degenerate]] δ<sub>''a''</sub>
| &nbsp; <math>\! e^{ita}</math>
|-
| [[Bernoulli distribution|Bernoulli]] Bern(''p'')
| &nbsp; <math>\! 1-p+pe^{it}</math>
|-
| [[Binomial distribution|Binomial]] B(''n, p'')
| &nbsp; <math>\! (1-p+pe^{it})^n</math>
|-
| [[Negative binomial distribution|Negative binomial]] NB(''r, p'')
| &nbsp; <math>\biggl(\frac{1-p}{1 - p e^{i\,t}}\biggr)^{\!r} </math>
|-
| [[Poisson distribution|Poisson]] Pois(λ)
| &nbsp; <math>\! e^{\lambda(e^{it}-1)}</math>
|-
| [[Uniform distribution (continuous)|Uniform]] U(''a, b'')
| &nbsp; <math>\! \frac{e^{itb} - e^{ita}}{it(b-a)}</math>
|-
| [[Laplace distribution|Laplace]] L(''μ, b'')
| &nbsp; <math>\! \frac{e^{it\mu}}{1 + b^2t^2}</math>
|-
| [[Normal distribution|Normal]] ''N''(''μ, σ<sup>2</sup>'')
| &nbsp; <math>\! e^{it\mu - \frac{1}{2}\sigma^2t^2}</math>
|-
| [[Chi-squared distribution|Chi-squared]] χ<sup>2</sup><sub style="position:relative;left:-5pt;top:2pt">k</sub>
| &nbsp; <math>\! (1 - 2it)^{-k/2}</math>
|-
| [[Cauchy distribution|Cauchy]] C(''μ, θ'')
| &nbsp; <math>\! e^{it\mu -\theta|t|}</math>
|-
| [[Gamma distribution|Gamma]] Γ(''k, θ'')
| &nbsp; <math>\! (1 - it\theta)^{-k}</math>
|-
| [[Exponential distribution|Exponential]] Exp(''λ'')
| &nbsp; <math>\! (1 - it\lambda^{-1})^{-1}</math>
|-
| [[Multivariate normal distribution|Multivariate normal]] ''N''(''μ'', ''Σ'')
| &nbsp; <math>\! e^{it^T\mu - \frac{1}{2}t^T\Sigma t}</math>
|-
| [[Multivariate Cauchy distribution|Multivariate Cauchy]] ''MultiCauchy''(''μ'', ''Σ'') <ref>Kotz et al. p. 37, using 1 as the number of degrees of freedom to recover the Cauchy distribution</ref>
| &nbsp; <math>\! e^{it^T\mu - \sqrt{t^T\Sigma t}}</math>
|-
|}
Oberhettinger (1973) provides extensive tables of characteristic functions.
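
The tabulated expressions are easy to sanity-check by simulation. The sketch below (illustrative only; the Poisson parameter and the grid of ''t'' values are arbitrary) compares the empirical characteristic function of Poisson samples with the listed form exp(λ(''e<sup>it</sup>'' − 1)).

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
lam = 3.5
x = rng.poisson(lam, size=200_000)

t = np.linspace(-2, 2, 5)
ecf = np.exp(1j * np.outer(t, x)).mean(axis=1)   # empirical characteristic function
exact = np.exp(lam * (np.exp(1j * t) - 1))       # tabulated Poisson CF
print(np.max(np.abs(ecf - exact)))               # small sampling error
</syntaxhighlight>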
 
==Properties==
* The characteristic function of a real-valued random variable always exists, since it is an integral of a bounded continuous function over a space whose [[measure (mathematics)|measure]] is finite.
* A characteristic function is [[Uniform continuity|uniformly continuous]] on the entire space.
* It is non-vanishing in a region around zero: φ(0) = 1.
* It is bounded: |φ(''t'')| ≤ 1.
* It is [[Hermitian function|Hermitian]]: {{nowrap|φ(''−t'') {{=}} <span style{{=}}"text-decoration:overline">φ(''t'')</span>}}. In particular, the characteristic function of a symmetric (around the origin) random variable is real-valued and even.
* There is a [[bijection]] between [[Cumulative distribution function|distribution functions]] and characteristic functions. That is, for any two random variables ''X''<sub>1</sub>, ''X''<sub>2</sub>
:: <math>F_{X_1}=F_{X_2}\ \Leftrightarrow\ \varphi_{X_1}=\varphi_{X_2}</math>
* If a random variable ''X'' has [[Moment (mathematics)|moments]] up to ''k''-th order, then the characteristic function φ<sub>''X''</sub> is ''k'' times continuously differentiable on the entire real line. In this case
:: <math>\operatorname{E}[X^k] = (-i)^k \varphi_X^{(k)}(0).</math>
* If a characteristic function φ<sub>''X''</sub> has a ''k''-th derivative at zero, then the random variable ''X'' has all moments up to order ''k'' if ''k'' is even, but only up to order {{nowrap|''k'' – 1}} if ''k'' is odd.<ref>Lukacs (1970), Corollary 1 to Theorem 2.3.1</ref>
:: <math> \varphi_X^{(k)}(0) = i^k \operatorname{E}[X^k] </math>
* If ''X''<sub>1</sub>, …, ''X<sub>n</sub>'' are independent random variables, and ''a''<sub>1</sub>, …, ''a<sub>n</sub>'' are some constants, then the characteristic function of the linear combination of ''X''<sub>''i''</sub>'s is
:: <math>\varphi_{a_1X_1+\ldots+a_nX_n}(t) = \varphi_{X_1}(a_1t)\cdot \ldots \cdot \varphi_{X_n}(a_nt).</math>
One specific case is the sum of two independent random variables ''X''<sub>1</sub> and ''X''<sub>2</sub>, in which case one has <math>\varphi_{X_1+X_2}(t)=\varphi_{X_1}(t)\cdot\varphi_{X_2}(t)</math> (see the numerical sketch after this list).
* The tail behavior of the characteristic function determines the [[smoothness (probability theory)|smoothness]] of the corresponding density function.
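
The following sketch illustrates two of the properties above numerically: the moment identity E[''X''] = −''i''φ′<sub>''X''</sub>(0), approximated by a central finite difference, and the product rule for sums of independent variables. The exponential and normal distributions, the sample sizes and the step size are arbitrary illustrative choices.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
lam = 2.0
x = rng.exponential(1 / lam, size=400_000)   # Exp(rate lam), mean 1/lam
y = rng.standard_normal(400_000)             # independent N(0, 1) sample

def ecf(sample, t):
    """Empirical characteristic function of the sample at argument t."""
    return np.mean(np.exp(1j * t * sample))

# moment property: E[X] = -i * phi'(0), estimated by a central difference
h = 1e-3
mean_est = (-1j * (ecf(x, h) - ecf(x, -h)) / (2 * h)).real
print(mean_est, 1 / lam)                      # both close to 0.5

# sum property: phi_{X+Y}(t) = phi_X(t) * phi_Y(t) for independent X, Y
t = 0.7
print(ecf(x + y, t), ecf(x, t) * ecf(y, t))   # approximately equal
</syntaxhighlight>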
 
===Continuity===
The bijection stated above between probability distributions and characteristic functions is ''continuous''. That is, whenever a sequence of distribution functions ''F<sub>j</sub>''(''x'')  converges (weakly) to some distribution ''F''(''x''), the corresponding sequence of characteristic functions φ<sub>''j''</sub>(''t'')  will also converge, and the limit φ(''t'') will correspond to the characteristic function of law ''F''. More formally, this is stated as
 
: '''[[Lévy’s continuity theorem]]:''' A sequence ''X<sub>j</sub>'' of ''n''-variate random variables [[Convergence in distribution|converges in distribution]] to random variable ''X'' if and only if the sequence φ<sub>''X<sub>j</sub>''</sub> converges pointwise to a function φ which is continuous at the origin. Then φ is the characteristic function of ''X''.<ref>{{harvtxt|Cuppens|1975|loc=Theorem 2.6.9}}</ref>
 
This theorem is frequently used to [[Law of large numbers#Proof using convergence of characteristic functions|prove the law of large numbers]], and the [[Central_limit_theorem#Proof|central limit theorem]].
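
The continuity theorem can be observed numerically in the central-limit setting: for a mean-zero, unit-variance distribution, φ<sub>''X''</sub>(''t''/√''n'')<sup>''n''</sup> approaches the standard normal characteristic function ''e''<sup>−''t''²/2</sup> as ''n'' grows. The sketch below uses the uniform distribution on [−√3, √3] (which has variance 1) purely as an illustrative choice.

<syntaxhighlight lang="python">
import numpy as np

a = np.sqrt(3.0)                         # U(-a, a) has mean 0 and variance 1
phi = lambda t: np.sinc(a * t / np.pi)   # sin(a t)/(a t), via numpy's normalized sinc

t = np.linspace(-3, 3, 7)
target = np.exp(-t**2 / 2)               # standard normal characteristic function

for n in (1, 10, 100, 1000):
    approx = phi(t / np.sqrt(n)) ** n    # CF of the standardized sum of n i.i.d. terms
    print(n, np.max(np.abs(approx - target)))   # decreases as n grows
</syntaxhighlight>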
 
===Inversion formulas===
Since there is a [[Bijection|one-to-one correspondence]] between cumulative distribution functions and characteristic functions, it is always possible to find one of these functions if we know the other one. The formula in the definition of the characteristic function allows us to compute φ when we know the distribution function ''F'' (or density ''f''). If, on the other hand, we know the characteristic function φ and want to find the corresponding distribution function, then one of the following '''inversion theorems''' can be used.
 
'''Theorem'''. If characteristic function ''φ<sub>X</sub>'' is [[Integrable function|integrable]], then ''F<sub>X</sub>'' is absolutely continuous, and therefore ''X'' has the [[probability density function]] given by
: <math>    f_X(x) = F_X'(x) = \frac{1}{2\pi}\int_{\mathbf{R}} e^{-itx}\varphi_X(t)dt,</math> &nbsp; when ''X'' is scalar;
in multivariate case the pdf is understood as the [[Radon–Nikodym derivative]] of the distribution ''μ<sub>X</sub>'' with respect to the [[Lebesgue measure]] ''λ'':
: <math>    f_X(x) = \frac{d\mu_X}{d\lambda}(x) = \frac{1}{(2\pi)^n} \int_{\mathbf{R}^n} e^{-i(t\cdot x)}\varphi_X(t)\lambda(dt).</math>
 
'''Theorem (Lévy)'''.<ref>Named after the French mathematician [[Paul Lévy (mathematician)|Paul Lévy]]</ref> If φ<sub>''X''</sub> is the characteristic function of the distribution function ''F<sub>X</sub>'', and two points ''a'' &lt; ''b'' are such that {{nowrap|{''x'' {{!}} ''a'' < ''x'' < ''b''}}} is a [[continuity set]] of μ<sub>''X''</sub> (in the univariate case this condition is equivalent to continuity of ''F<sub>X</sub>'' at the points ''a'' and ''b''), then
* If ''X'' is scalar:
::<math>F_X(b) - F_X(a) = \frac{1} {2\pi} \lim_{T \to \infty} \int_{-T}^{+T} \frac{e^{-ita} - e^{-itb}} {it}\, \varphi_X(t)\, dt,</math>
* If ''X'' is a vector random variable:
::<math>\mu_X\big(\{a<x<b\}\big) = \frac{1}{(2\pi)^n} \lim_{T_1\to\infty}\cdots\lim_{T_n\to\infty} \int\limits_{-T_1\leq t_1\leq T_1} \cdots \int\limits_{-T_n \leq t_n \leq T_n} \prod_{k=1}^n\left(\frac{e^{-it_ka_k}-e^{-it_kb_k}}{it_k}\right)\varphi_X(t)\lambda(dt_1 \times \cdots \times dt_n)</math>
 
'''Theorem'''. If ''a'' is (possibly) an atom of ''X'' (in the univariate case this means a point of discontinuity of ''F<sub>X</sub>'' ) then
* If ''X'' is scalar:
:: <math>F_X(a) - F_X(a-0) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{+T}e^{-ita}\varphi_X(t)dt</math>
* If ''X'' is a vector random variable:
:: <math>\mu_X(\{a\}) = \lim_{T_1\to\infty}\cdots\lim_{T_n\to\infty} \left(\prod_{k=1}^n\frac{1}{2T_k}\right) \int_{-T}^T e^{-i(t\cdot a)}\varphi_X(t)\lambda(dt)</math>
 
'''Theorem (Gil-Pelaez)'''.<ref>Wendel, J.G. (1961)</ref> For a univariate random variable ''X'', if ''x'' is a continuity point of ''F<sub>X</sub>'' then
: <math>F_X(x) = \frac{1}{2} - \frac{1}{\pi}\int_0^\infty \frac{\operatorname{Im}[e^{-itx}\varphi_X(t)]}{t}\,dt.</math>
The integral may not be [[Lebesgue-integrable]]; for example, when ''X'' is the [[discrete random variable]] that is always 0, it becomes the [[Dirichlet integral]].
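
The Gil-Pelaez formula lends itself to direct numerical evaluation. The sketch below (illustrative only; it assumes the standard normal characteristic function ''e''<sup>−''t''²/2</sup>) recovers ''F<sub>X</sub>''(''x'') by one-dimensional quadrature and compares the result with the normal cumulative distribution function.

<syntaxhighlight lang="python">
import numpy as np
from scipy import integrate, stats

phi = lambda t: np.exp(-t**2 / 2)       # characteristic function of N(0, 1)

def cdf_from_cf(x):
    # Gil-Pelaez: F(x) = 1/2 - (1/pi) * integral_0^inf Im[exp(-itx) phi(t)] / t dt
    integrand = lambda t: np.imag(np.exp(-1j * t * x) * phi(t)) / t
    value, _ = integrate.quad(integrand, 0, np.inf)
    return 0.5 - value / np.pi

for x in (-1.0, 0.0, 1.5):
    print(x, cdf_from_cf(x), stats.norm.cdf(x))   # the two columns agree
</syntaxhighlight>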
 
Inversion formulas for multivariate distributions are available.<ref>Shephard (1991a,b)</ref>
 
===Criteria for characteristic functions===
First note that the set of all characteristic functions is closed under certain operations:
*A [[convex combination|convex linear combination]] <math>\scriptstyle \sum_n a_n\varphi_n(t)</math> (with <math>\scriptstyle a_n\geq0,\ \sum_n a_n=1</math>) of a finite or a countable number of characteristic functions is also a characteristic function.
* The product of a finite number of characteristic functions is also a characteristic function. The same holds for an infinite product provided that it converges to a function continuous at the origin.
*If φ is a characteristic function and α is a real number, then <math>\bar{\varphi}</math>, Re(φ), |φ|<sup>2</sup>, and φ(α''t'') are also characteristic functions.
 
It is well known that any non-decreasing [[càdlàg]] function ''F'' with limits ''F''(−∞) = 0, ''F''(+∞) = 1 corresponds to a [[cumulative distribution function]] of some random variable. There is also interest in finding similar simple criteria for when a given function φ could be the characteristic function of some random variable. The central result here is [[Bochner's theorem|Bochner’s theorem]], although its usefulness is limited because the main condition of the theorem, [[positive definite function|non-negative definiteness]], is very hard to verify. Other theorems also exist, such as Khinchine’s, Mathias’s, or Cramér’s, although their application is just as difficult. Pólya’s theorem, on the other hand, provides a very simple convexity condition which is sufficient but not necessary. Characteristic functions which satisfy this condition are called Pólya-type.<ref>Lukacs (1970), p.84</ref>
 
'''[[Bochner's theorem|Bochner’s theorem]]'''. An arbitrary function φ : '''R'''<sup>''n''</sup> → '''C''' is the characteristic function of some random variable if and only if φ is [[positive definite function|positive definite]], continuous at the origin, and φ(0) = 1.
 
'''Khinchine’s criterion'''. A complex-valued, absolutely continuous function φ, with φ(0) = 1, is a characteristic function if and only if it admits the representation
: <math>\varphi(t) = \int_{\mathbf{R}} g(t+\theta)\overline{g(\theta)} d\theta .</math>
 
'''Mathias’ theorem'''. A real-valued, even, continuous, absolutely integrable function φ, with φ(0) = 1, is a characteristic function if and only if
:<math>(-1)^n \left ( \int_{\mathbf{R}} \varphi(pt)e^{-\frac{t^2}{2}}H_{2n}(t)dt \right ) \geq 0</math>
for ''n'' = 0,1,2,…, and all ''p'' > 0. Here ''H''<sub>2''n''</sub> denotes the [[Hermite polynomials|Hermite polynomial]] of degree 2''n''.
 
[[File:2 cfs coincide over a finite interval.svg|thumb|250px|Pólya’s theorem can be used to construct an example of two random variables whose characteristic functions coincide over a finite interval but are different elsewhere.]]
'''Pólya’s theorem'''. If φ is a real-valued, even, continuous function which satisfies the conditions
* φ(0) = 1,
* φ is [[convex function|convex]] for ''t'' > 0,
* φ(∞) = 0,
then φ(''t'') is the characteristic function of an absolutely continuous symmetric distribution.
 
==Uses==
Because of the [[Lévy continuity theorem|continuity theorem]], characteristic functions are used in the most frequently seen proof of the [[central limit theorem]]. The main trick involved in making calculations with  a characteristic function is recognizing the function as the characteristic function of a particular distribution.
 
===Basic manipulations of distributions===
Characteristic functions are particularly useful for dealing with linear functions of [[statistical independence|independent]] random variables. For example, if ''X''<sub>1</sub>, ''X''<sub>2</sub>, ..., ''X<sub>n</sub>'' is a sequence of independent (and not necessarily identically distributed) random variables, and
 
:<math>S_n = \sum_{i=1}^n a_i X_i,\,\!</math>
 
where the ''a''<sub>''i''</sub> are constants, then the characteristic function for ''S''<sub>''n''</sub> is given by
 
:<math>\varphi_{S_n}(t)=\varphi_{X_1}(a_1t)\varphi_{X_2}(a_2t)\cdots \varphi_{X_n}(a_nt) \,\!</math>
 
In particular, {{nowrap|''φ<sub>X+Y</sub>''(''t'') {{=}} ''φ<sub>X</sub>''(''t'')''φ<sub>Y</sub>''(''t'')}}.  To see this, write out the definition of characteristic function:
 
: <math>\varphi_{X+Y}(t)=  \operatorname{E}\left [e^{it(X+Y)}\right]= \operatorname{E}\left [e^{itX}e^{itY}\right] =  \operatorname{E}\left [e^{itX}\right] \operatorname{E}\left [e^{itY}\right] =\varphi_X(t) \varphi_Y(t)</math>
 
Observe that the independence of ''X'' and ''Y'' is required to establish the equality of the third and fourth expressions.
 
Another special case of interest is when {{nowrap|''a<sub>i</sub>'' {{=}} 1/''n''}} and then ''S<sub>n</sub>'' is the sample mean.  In this case, writing <span style="text-decoration:overline;">''X''</span> for the mean,
 
: <math>\varphi_{\overline{X}}(t)= \varphi_X\!\left(\tfrac{t}{n} \right)^n</math>
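
A short simulation (illustrative only; the exponential distribution and the values of ''n'' and ''t'' are arbitrary choices) confirms the sample-mean relation above by comparing the empirical characteristic function of simulated sample means with φ<sub>''X''</sub>(''t''/''n'')<sup>''n''</sup>.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
n, reps, lam = 5, 200_000, 1.0
samples = rng.exponential(1 / lam, size=(reps, n))
means = samples.mean(axis=1)                     # sample mean of n i.i.d. Exp(lam) draws

t = 0.8
phi_X = lambda u: 1 / (1 - 1j * u / lam)         # CF of Exp(lam), cf. the table above
lhs = np.mean(np.exp(1j * t * means))            # empirical CF of the sample mean
rhs = phi_X(t / n) ** n
print(lhs, rhs)                                  # approximately equal
</syntaxhighlight>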
 
===Moments===
Characteristic functions can also be used to find [[moment (mathematics)|moments]] of a random variable. Provided that the ''n''<sup>th</sup> moment exists, the characteristic function can be differentiated ''n'' times, and
 
:<math>\operatorname{E}\left[ X^n\right] = i^{-n}\, \varphi_X^{(n)}(0) = i^{-n}\, \left[\frac{d^n}{dt^n} \varphi_X(t)\right]_{t=0} \,\!</math>
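
The differentiation formula can be verified symbolically. The sketch below (illustrative; it uses the normal characteristic function from the table above) recovers the first two moments of ''N''(''μ'', ''σ''²) by differentiating its characteristic function at zero.

<syntaxhighlight lang="python">
import sympy as sp

t = sp.symbols('t', real=True)
mu = sp.symbols('mu', real=True)
sigma = sp.symbols('sigma', positive=True)
phi = sp.exp(sp.I * mu * t - sigma**2 * t**2 / 2)   # CF of N(mu, sigma^2)

for n in (1, 2):
    # E[X^n] = (-i)^n * d^n phi / dt^n evaluated at t = 0
    moment = sp.simplify((-sp.I)**n * sp.diff(phi, t, n).subs(t, 0))
    print(n, moment)   # prints mu and mu**2 + sigma**2
</syntaxhighlight>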
 
For example, suppose ''X'' has a standard [[Cauchy distribution]].  Then {{nowrap|''φ<sub>X</sub>''(''t'') {{=}} ''e''<sup>−{{!}}''t''{{!}}</sup>}}.  This is not [[differentiable]] at ''t'' = 0, showing that the Cauchy distribution has no [[expected value|expectation]].  Also, by the result of the previous section, the sample mean <span style="text-decoration:overline;">''X''</span> of ''n'' [[Statistical independence|independent]] observations has characteristic function {{nowrap|''φ''<sub><span style{{=}}"text-decoration:overline;">''X''</span></sub>(''t'') {{=}} (''e''<sup>−{{!}}''t''{{!}}/''n''</sup>)<sup>''n''</sup> {{=}} ''e''<sup>−{{!}}''t''{{!}}</sup>}}.  This is the characteristic function of the standard Cauchy distribution: thus, the sample mean has the same distribution as the population itself.
 
The logarithm of a characteristic function is a [[cumulant generating function]], which is useful for finding [[cumulant]]s; note that some instead define the cumulant generating function as the logarithm of the [[moment-generating function]], and call the logarithm of the characteristic function the ''second'' cumulant generating function.
 
===Data analysis===
Characteristic functions can be used as part of procedures for fitting probability distributions to samples of data. Cases where this provides a practicable option compared to other possibilities include fitting the [[stable distribution]], since closed-form expressions for the density are not available, which makes [[maximum likelihood]] estimation difficult. Estimation procedures are available which match the theoretical characteristic function to the empirical characteristic function, calculated from the data. Paulson et al. (1975) and Heathcote (1977) provide some theoretical background for such an estimation procedure. In addition, Yu (2004) describes applications of empirical characteristic functions to fit [[time series]] models where likelihood procedures are impractical.
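
A sketch of the idea follows, using a normal model rather than a stable one so that the recovered parameters can be checked directly; the grid of ''t'' values, the sample size and the optimizer are arbitrary choices. The parameters are chosen to minimize the squared distance between the model characteristic function and the empirical one.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
data = rng.normal(loc=2.0, scale=1.5, size=5_000)
t_grid = np.linspace(0.1, 1.0, 10)

# empirical characteristic function evaluated on the grid
ecf = np.exp(1j * np.outer(t_grid, data)).mean(axis=1)

def objective(params):
    mu, sigma = params
    model = np.exp(1j * mu * t_grid - 0.5 * sigma**2 * t_grid**2)   # normal CF
    return np.sum(np.abs(model - ecf) ** 2)

fit = minimize(objective, x0=[0.0, 1.0])
print(fit.x)        # roughly [2.0, 1.5]
</syntaxhighlight>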
 
===Example===
The [[Gamma distribution]] with scale parameter θ and shape parameter ''k'' has the characteristic function
: <math>(1 - \theta\,i\,t)^{-k}.</math>
Now suppose that we have
: <math> X \sim \Gamma(k_1,\theta) \mbox{ and } Y \sim \Gamma(k_2,\theta) \,</math>
with ''X'' and ''Y'' independent from each other, and we wish to know what the distribution of ''X'' + ''Y'' is. The characteristic functions are
: <math>\varphi_X(t)=(1 - \theta\,i\,t)^{-k_1},\,\qquad \varphi_Y(t)=(1 - \theta\,i\,t)^{-k_2}</math>
which by independence and the basic properties of characteristic functions leads to
: <math>\varphi_{X+Y}(t)=\varphi_X(t)\varphi_Y(t)=(1 - \theta\,i\,t)^{-k_1}(1 - \theta\,i\,t)^{-k_2}=\left(1 - \theta\,i\,t\right)^{-(k_1+k_2)}.</math>
This is the characteristic function of the gamma distribution with scale parameter ''θ'' and shape parameter ''k''<sub>1</sub> + ''k''<sub>2</sub>, and we therefore conclude
: <math>X+Y \sim \Gamma(k_1+k_2,\theta) \,</math>
The result extends to ''n'' independent gamma-distributed random variables with the same scale parameter, giving
: <math>\forall i \in \{1,\ldots, n\} :  X_i \sim \Gamma(k_i,\theta) \qquad \Rightarrow \qquad \sum_{i=1}^n X_i \sim \Gamma\left(\sum_{i=1}^nk_i,\theta\right).</math>
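
The conclusion can be checked by simulation. The sketch below (parameter values arbitrary) draws independent Γ(''k''<sub>1</sub>, θ) and Γ(''k''<sub>2</sub>, θ) variates and compares their sum with the Γ(''k''<sub>1</sub>+''k''<sub>2</sub>, θ) distribution via a Kolmogorov–Smirnov test.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
k1, k2, theta = 2.0, 3.5, 1.2
x = rng.gamma(shape=k1, scale=theta, size=100_000)
y = rng.gamma(shape=k2, scale=theta, size=100_000)

# compare X + Y against Gamma(k1 + k2, theta); a large p-value is consistent with the claim
result = stats.kstest(x + y, 'gamma', args=(k1 + k2, 0, theta))
print(result.pvalue)
</syntaxhighlight>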
 
==Entire characteristic functions==
{{Expand section|date=December 2009}}
As defined above, the argument of the characteristic function is treated as a real number: however, certain aspects of the theory of characteristic functions are advanced by extending the definition into the complex plane by [[Analytic continuation|analytical continuation]], in cases where this is possible.<ref>{{harvtxt|Lukacs|1970|loc=Chapter 7}}</ref>
 
==Related concepts==
Related concepts include the [[moment-generating function]] and the [[probability-generating function]]. The characteristic function exists for all probability distributions. This is not the case for the moment-generating function.
 
The characteristic function is closely related to the [[Fourier transform]]: the characteristic function of a probability density function ''p''(''x'') is the [[complex conjugate]] of the [[continuous Fourier transform]] of ''p''(''x'') (according to the usual convention; see [[Continuous_Fourier_transform#Other_conventions|continuous Fourier transform – other conventions]]).
 
: <math>\varphi_X(t) = \langle e^{itX} \rangle = \int_{\mathbf{R}} e^{itx}p(x)\, dx = \overline{\left( \int_{\mathbf{R}} e^{-itx}p(x)\, dx \right)} = \overline{P(t)},</math>
 
where ''P''(''t'') denotes the [[continuous Fourier transform]] of the probability density function ''p''(''x''). Likewise, ''p''(''x'') may be recovered from ''φ<sub>X</sub>''(''t'') through the inverse Fourier transform:
 
:<math>p(x) = \frac{1}{2\pi} \int_{\mathbf{R}} e^{itx} P(t)\, dt = \frac{1}{2\pi} \int_{\mathbf{R}} e^{itx} \overline{\varphi_X(t)}\, dt.</math>
 
Indeed, even when the random variable does not have a density, the characteristic function may be seen as the Fourier transform of the measure corresponding to the random variable.
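
Under the convention stated above, the relation between the characteristic function and the complex conjugate of the Fourier transform of the density can be verified numerically. The sketch below (a check under the assumption of a ''N''(''μ'', 1) density with ''μ'' = 0.7, an arbitrary value) computes ''P''(''t'') by quadrature and compares its conjugate with the known characteristic function.

<syntaxhighlight lang="python">
import numpy as np
from scipy import integrate

mu = 0.7
p = lambda x: np.exp(-(x - mu)**2 / 2) / np.sqrt(2 * np.pi)   # density of N(mu, 1)

def fourier_transform(t):
    # P(t) = integral of exp(-i t x) p(x) dx, split into real and imaginary parts
    re, _ = integrate.quad(lambda x: np.cos(t * x) * p(x), -np.inf, np.inf)
    im, _ = integrate.quad(lambda x: -np.sin(t * x) * p(x), -np.inf, np.inf)
    return re + 1j * im

t = 1.3
phi = np.exp(1j * t * mu - t**2 / 2)            # characteristic function of N(mu, 1)
print(np.conj(fourier_transform(t)), phi)       # approximately equal
</syntaxhighlight>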
 
Another related concept is the representation of probability distributions as elements of a [[reproducing kernel Hilbert space]] via the [[kernel embedding of distributions]].  This framework may be viewed as a generalization of the characteristic function under specific choices of the [[kernel function]].
 
==See also==
*[[Subindependence]], a weaker condition than independence that is defined in terms of characteristic functions.
 
==References==
{{reflist}}
 
==Notes==
{{More footnotes|date=June 2009}}
{{refbegin}}
* {{cite book
  | title = Linear and graphical models for the multivariate complex normal distribution
  | year = 1995
  | publisher = Springer-Verlag
  | location = New York
  | series = Lecture notes in statistics 101
  | isbn = 0-387-94521-0
  | ref = CITEREFAndersenHøjbjerreSørensenEriksen1995
  | author = Andersen, H.H., M. Højbjerre, D. Sørensen, P.S. Eriksen
  }}
* {{cite book
  | last = Billingsley
  | first = Patrick
  | title = Probability and measure
  | year = 1995
  | edition = 3rd
  | publisher = John Wiley & Sons
  | isbn = 0-471-00710-2
  }}
* {{cite book
  | last = Bisgaard
  | first = T. M.
  | coauthors = Z. Sasvári
  | year = 2000
  | title = Characteristic functions and moment sequences
  | publisher = Nova Science
  }}
* {{cite book
  | last = Bochner
  | first = Salomon
  | title = Harmonic analysis and the theory of probability
  | year = 1955
  | publisher = University of California Press
  }}
* {{cite book
  | last = Cuppens
  | first = R.
  | year = 1975
  | title = Decomposition of multivariate probabilities
  | publisher = Academic Press
  }}
* {{cite journal
  | doi = 10.1093/biomet/64.2.255
  | last = Heathcote
  | first = C.R.
  | year = 1977
  | title = The integrated squared error estimation of parameters
  | journal = [[Biometrika]]
  | volume = 64
  | issue = 2
  | pages = 255–264
  | ref = harv
  }}
* {{cite book
  | last = Lukacs
  | first = E.
  | year = 1970
  | title = Characteristic functions
  | publisher = Griffin
  | location = London
  }}
* {{cite book
  | last1 = Kotz
  | first1 = Samuel
  | last2 = Nadarajah
  | first2 = Saralees
  | year = 2004
  | title = Multivariate T Distributions and Their Applications
  | publisher = Cambridge University Press
  | ref = multitdist
  }}
* {{Cite book
|last=Oberhettinger
|first=Fritz
|title=Fourier Transforms of Distributions and their Inverses: A Collection of Tables
|year=1973
|publisher=Academic Press
|ref=harv
|postscript=<!--None-->}}
* {{cite journal
  | doi = 10.1093/biomet/62.1.163
  | last = Paulson
  | first = A.S.
  | coauthors = E.W. Holcomb, R.A. Leitch
  | year = 1975
  | title = The estimation of the parameters of the stable laws
  | journal = [[Biometrika]]
  | volume = 62
  | issue = 1
  | pages = 163–170
  | ref = harv
  }}
* {{cite book
  | last = Pinsky
  | first = Mark
  | title = Introduction to Fourier analysis and wavelets
  | year = 2002
  | publisher = Brooks/Cole
  | isbn = 0-534-37660-6
  }}
* {{cite book
  | last = Sobczyk
  | first = Kazimierz
  | title = Stochastic differential equations
  | publisher = [[Kluwer Academic Publishers]]
  | year = 2001
  | isbn = 978-1-4020-0345-5
  }}
* {{cite journal
  | doi = 10.1214/aoms/1177705164
  | last = Wendel
  | first = J.G.
  | year = 1961
  | title = The non-absolute convergence of Gil-Pelaez' inversion integral
  | journal = The Annals of Mathematical Statistics
  | volume = 32
  | issue = 1
  | pages = 338–339
  | ref = harv
  }}
* {{cite journal
  | doi = 10.1081/ETC-120039605
  | last = Yu
  | first = J.
  | year = 2004
  | title = Empirical characteristic function estimation and its applications
  | journal = Econometric Reviews
  | volume = 23
  | issue = 2
  | pages = 93–123
  | ref = harv
  }}
* Shephard, N. G. (1991a) From characteristic function to distribution function: A simple framework for the theory. ''Econometric Theory'', 7, 519–529.
* Shephard, N. G. (1991b) Numerical integration rules for multivariate inversions. ''J. Statist. Comput. Simul.'', 39, 37–46.
 
==External links==
* {{springer|title=Characteristic function|id=p/c021650}}
 
{{refend}}
 
{{Theory of probability distributions}}
 
{{DEFAULTSORT:Characteristic Function (Probability Theory)}}
[[Category:Probability theory]]
[[Category:Theory of probability distributions]]
