{{multiple issues|
{{expert-subject|Statistics|date=February 2012}}
{{primary sources|date=February 2012}}
{{refimprove|date=February 2012}}
}}
 
In [[statistics]] and [[machine learning]], the '''hierarchical Dirichlet process''' (HDP) is a [[Non-parametric statistics|nonparametric]] [[Bayesian probability|Bayesian]] approach to clustering grouped data.<ref name="teh2006">{{cite journal
| last1 = Teh | first1 = Y. W.
| last2 = Jordan |first2 = M. I.
| last3 = Beal | first3 = M. J. | last4 = Blei | first4 = D. M. | title = Hierarchical Dirichlet Processes
| journal = [[Journal of the American Statistical Association]]
| year = 2006
| volume = 101
| pages = 1566&ndash;1581
| url = http://www.gatsby.ucl.ac.uk/~ywteh/research/npbayes/jasa2006.pdf
}}</ref><ref name="tehjor2010">{{cite journal
| last1 = Teh | first1 = Y. W.
| last2 = Jordan | first2 = M. I.
| title = Hierarchical Bayesian Nonparametric Models with Applications
| journal = Bayesian Nonparametrics
| year = 2010
| url = http://www.gatsby.ucl.ac.uk/~ywteh/research/npbayes/TehJor2010a.pdf
| publisher = [[Cambridge University Press]]
}}</ref> It uses a [[Dirichlet process]] for each group of data, with the Dirichlet processes for all groups sharing a base distribution which is itself drawn from a Dirichlet process. This method allows groups to share statistical strength via sharing of clusters across groups.  The base distribution being drawn from a Dirichlet process is important, because draws from a Dirichlet process are atomic probability measures, and the atoms will appear in all group-level Dirichlet processes.  Since each atom corresponds to a cluster, clusters are shared across all groups.  It was developed by [[Yee Whye Teh]], [[Michael I. Jordan]], [[David Blei]] and [[Matthew Beal]] and published in the [[Journal of the American Statistical Association]] in 2006.<ref name="teh2006" />
 
==Model==
 
The HDP is a model for grouped data, meaning that the data items come in multiple distinct groups.<ref name="teh2006" />  For example, in a [[topic model]] words are organized into documents, with each document formed by a bag (group) of words (data items).  Indexing groups by <math>j=1,\ldots,J</math>, suppose each group consists of data items <math>x_{j1},\ldots,x_{jn_j}</math>.
 
The HDP is parameterized by a base distribution <math>H</math> which governs the a priori distribution over data items, and a number of concentration parameters which govern the a priori number of clusters and amount of sharing across groups.  The <math>j</math>th group is associated with a random probability measure <math>G_j</math> which has distribution given by a Dirichlet process:
 
<math>
\begin{align}
G_j|G_0 &\sim \operatorname{DP}(\alpha_j,G_0)
\end{align}
</math>
 
where <math>\alpha_j</math> is the concentration parameter associated with the group, and <math>G_0</math> is the base distribution shared across all groups.  In turn, the common base distribution is Dirichlet process distributed:
 
<math>
\begin{align}
G_0 &\sim \operatorname{DP}(\alpha_0,H)
\end{align}
</math>
 
with concentration parameter <math>\alpha_0</math> and base distribution <math>H</math>.  Finally, to relate the Dirichlet processes back with the observed data, each data item <math>x_{ji}</math> is associated with a latent parameter <math>\theta_{ji}</math>:
 
<math>
\begin{align}
\theta_{ji}|G_j &\sim G_j \\
x_{ji}|\theta_{ji} &\sim F(\theta_{ji})
\end{align}
</math>
 
The first line states that each parameter has a prior distribution given by <math>G_j</math>, while the second line states that each data item has a distribution <math>F(\theta_{ji})</math> parameterized by its associated parameter.  The resulting model is called an HDP mixture model, with the HDP referring to the hierarchically linked set of Dirichlet processes, and the mixture model referring to the way the Dirichlet processes are related to the data items.
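
To make these two sampling steps concrete, the following minimal sketch draws a few data items from a toy discrete <math>G_j</math> with just three atoms (an actual Dirichlet process draw has countably many) and a Gaussian observation model <math>F(\theta)=N(\theta,0.5^2)</math>; both choices are illustrative assumptions, not part of the model definition:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete G_j: three atoms with fixed masses (a real DP draw has
# countably many atoms; three suffice to illustrate the two steps).
atoms = np.array([-2.0, 0.0, 3.0])    # atom locations theta*_k
weights = np.array([0.5, 0.3, 0.2])   # group-specific masses pi_jk

n = 5
theta = rng.choice(atoms, size=n, p=weights)  # theta_ji | G_j  ~ G_j
x = rng.normal(theta, 0.5)                    # x_ji | theta_ji ~ F(theta_ji)
</syntaxhighlight>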
 
To understand how the HDP implements a clustering model, and how clusters become shared across groups, recall that draws from a [[Dirichlet process]] are atomic probability measures with probability one.  This means that the common base distribution <math>G_0</math> has a form which can be written as:
 
<math>
\begin{align}
G_0 &= \sum_{k=1}^\infty \pi_{0k}\delta_{\theta^*_k}
\end{align}
</math>
 
where there are infinitely many atoms, <math>\theta^*_k, k=1,2,...</math>, assuming that the overall base distribution <math>H</math> has infinite support.  Each atom is associated with a mass <math>\pi_{0k}</math>.  The masses have to sum to one since <math>G_0</math> is a probability measure. Since <math>G_0</math> is itself the base distribution for the group-specific Dirichlet processes, each <math>G_j</math> will have atoms given by the atoms of <math>G_0</math>, and can itself be written in the form:
 
<math>
\begin{align}
G_j &= \sum_{k=1}^\infty \pi_{jk}\delta_{\theta^*_k}
\end{align}
</math>
 
Thus the set of atoms is shared across all groups, with each group having its own group-specific atom masses.  Relating this representation back to the observed data, we see that each data item is described by a mixture model:
 
<math>
\begin{align}
x_{ji}|G_j &\sim \sum_{k=1}^\infty \pi_{jk} F(\theta^*_k)
\end{align}
</math>
 
where the atoms <math>\theta^*_k</math> play the role of the mixture component parameters, while the masses <math>\pi_{jk}</math> play the role of the mixing proportions.  In conclusion, each group of data is modeled using a mixture model, with mixture components shared across all groups but mixing proportions being group-specific.  In clustering terms, each mixture component models a cluster of data items; the clusters are shared across all groups, but since each group has its own mixing proportions, each group is composed of a different combination of these clusters.
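
A minimal generative sketch of this shared-atom structure, using Sethuraman's stick-breaking construction truncated at finitely many atoms, and assuming (purely for illustration) a Gaussian base distribution <math>H</math> and Gaussian observation model <math>F</math>:

<syntaxhighlight lang="python">
import numpy as np

def stick_breaking(alpha, K, rng):
    """Truncated stick-breaking weights of a DP draw (GEM(alpha)).

    Stick proportions are Beta(1, alpha); the last weight absorbs the
    leftover stick so that the K weights sum to one.
    """
    b = rng.beta(1.0, alpha, size=K)
    b[-1] = 1.0
    return b * np.concatenate(([1.0], np.cumprod(1.0 - b[:-1])))

rng = np.random.default_rng(0)
K, L = 100, 500            # truncation levels (top-level atoms, group sticks)
alpha0, alphaj = 1.0, 1.0  # concentration parameters

# Top level, G_0 ~ DP(alpha_0, H): masses pi_0k and atoms theta*_k.
pi0 = stick_breaking(alpha0, K, rng)
theta_star = rng.normal(0.0, 3.0, size=K)  # atoms from H = N(0, 3^2)

# Group level, G_j ~ DP(alpha_j, G_0): each group stick is attached to
# an atom drawn from G_0 (an index drawn from pi_0), so G_j re-weights
# the shared atoms with group-specific masses pi_jk.
sticks = stick_breaking(alphaj, L, rng)
assigned = rng.choice(K, size=L, p=pi0)
pij = np.bincount(assigned, weights=sticks, minlength=K)

# Data: each item picks a shared cluster with group-specific
# probability pi_jk, then x_ji ~ F(theta*_k) with F = N(theta, 0.5^2).
n = 1000
z = rng.choice(K, size=n, p=pij)
x = rng.normal(theta_star[z], 0.5)
</syntaxhighlight>

Because each group's sticks are attached to atoms drawn according to <math>\pi_0</math>, atoms with large top-level mass tend to receive large mass in every group; this is the mechanism by which clusters, and hence statistical strength, are shared.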
 
==Applications==
 
The HDP mixture model is a natural nonparametric generalization of [[Latent Dirichlet allocation]], where the number of topics can be unbounded and learnt from data.<ref name="teh2006" />  Here each group is a document consisting of a bag of words, each cluster is a topic, and each document is a mixture of topics.  The HDP is also a core component of the [[infinite hidden Markov model]],<ref name="teh2006" /> which is a nonparametric generalization of the [[hidden Markov model]] allowing the number of states to be unbounded and learnt from data.
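
As an illustration of the topic-modeling use, the sketch below uses the <code>HdpModel</code> implementation in the open-source gensim library; the library and the details of its API are assumptions of this example, not part of the HDP itself:

<syntaxhighlight lang="python">
# Sketch of HDP topic modeling with gensim's HdpModel (assumed API).
from gensim.corpora import Dictionary
from gensim.models import HdpModel

docs = [["human", "machine", "interface"],
        ["graph", "minors", "trees"],
        ["human", "trees", "interface", "graph"]]

dictionary = Dictionary(docs)                       # word <-> id mapping
corpus = [dictionary.doc2bow(doc) for doc in docs]  # each document as a bag of words

hdp = HdpModel(corpus, id2word=dictionary)  # number of topics is inferred, not fixed
print(hdp.print_topics(num_topics=5, num_words=4))
</syntaxhighlight>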
 
==Generalizations==
 
The HDP can be generalized in a number of directions.  The Dirichlet processes can be replaced by [[Pitman-Yor process]]es, resulting in the [[Hierarchical Pitman-Yor process]].  The hierarchy can be deeper, with multiple levels of groups arranged in a hierarchy. Such an arrangement has been exploited in the [[sequence memoizer]], a Bayesian nonparametric model for sequences which has a multi-level hierarchy of Pitman-Yor processes.
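
In the Pitman-Yor case, the only change to the stick-breaking construction is the Beta distribution of the stick proportions, which gains a discount parameter <math>0 \le d < 1</math>; a minimal sketch of the resulting weights (setting <math>d=0</math> recovers the Dirichlet process):

<syntaxhighlight lang="python">
import numpy as np

def pitman_yor_sticks(alpha, d, K, rng):
    """Truncated Pitman-Yor stick-breaking weights.

    Stick proportions are V_k ~ Beta(1 - d, alpha + k*d); with d = 0
    this reduces to the Dirichlet-process GEM(alpha) weights.
    """
    k = np.arange(1, K + 1)
    v = rng.beta(1.0 - d, alpha + k * d)
    v[-1] = 1.0  # absorb leftover mass at the truncation level
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))

rng = np.random.default_rng(0)
weights = pitman_yor_sticks(alpha=1.0, d=0.5, K=100, rng=rng)
</syntaxhighlight>

The discount parameter gives the weights a power-law tail, which is one reason Pitman-Yor hierarchies are favored for modeling natural-language data.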
 
==References==
<references/>
 
[[Category:Stochastic processes]]
[[Category:Non-parametric Bayesian methods]]
