In [[probability theory]], a '''continuous-time Markov chain''' ('''CTMC'''<ref>{{cite doi|10.1007/3-540-48320-9_12}}</ref> or '''continuous-time Markov process'''<ref>{{cite doi|10.1287/opre.27.3.616}}</ref>) is a mathematical model which takes values in some finite or countable set and for which the time spent in each state takes non-negative [[real number|real value]]s and has an [[exponential distribution]]. It is a [[continuous-time stochastic process]] with the [[Markov property]], which means that future behaviour of the model (both remaining time in current state and next state) depends only on the current state of the model and not on historical behaviour. The model is a continuous-time version of the [[Markov chain]] model, named because the output from such a process is a sequence (or chain) of states.
==Definitions==

A continuous-time Markov chain (''X''<sub>''t''</sub>)<sub>''t'' ≥ 0</sub> is defined by a finite or countable state space ''S'', a [[transition rate matrix]] ''Q'' with dimensions equal to that of the state space, and an initial probability distribution defined on the state space. For ''i'' ≠ ''j'', the elements ''q''<sub>''ij''</sub> are non-negative and describe the rate at which the process transitions from state ''i'' to state ''j''. The elements ''q''<sub>''ii''</sub> are chosen such that each row of the transition rate matrix sums to zero.
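
For illustration, here is a minimal sketch (assuming the NumPy library; the rates themselves are made-up) of a valid transition rate matrix for a three-state chain:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical transition rate matrix: off-diagonal entries are
# non-negative rates, and each diagonal entry is set so that its
# row sums to zero.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 2.0,  2.0, -4.0]])

off_diagonal = Q - np.diag(np.diag(Q))
assert (off_diagonal >= 0).all()        # q_ij >= 0 for i != j
assert np.allclose(Q.sum(axis=1), 0.0)  # each row sums to zero
</syntaxhighlight>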
There are three equivalent definitions of the process.<ref name="norris1">{{cite doi|10.1017/CBO9780511810633.004}}</ref>
===Infinitesimal definition===

Let ''X''<sub>''t''</sub> be the random variable describing the state of the process at time ''t'', and assume the process is in state ''i'' at time ''t''. Then ''X''<sub>''t'' + ''h''</sub> is independent of previous values (''X''<sub>''s''</sub> : ''s'' ≤ ''t''), and as ''h'' → 0, uniformly in ''t'', for all ''j'',
:<math>\Pr(X(t+h) = j \mid X(t) = i) = \delta_{ij} + q_{ij}h + o(h)</math>

using [[little-o notation]]. The ''q''<sub>''ij''</sub> can be seen as measuring how quickly the transition from ''i'' to ''j'' happens.
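
This characterisation can be checked numerically; a sketch assuming SciPy, with a made-up two-state ''Q'': for small ''h'', the transition probabilities over an interval of length ''h'' should agree with ''I'' + ''hQ'' up to o(''h'').

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

# Hypothetical two-state rate matrix.
Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])

h = 1e-4
P_h = expm(h * Q)                # transition probabilities over an interval h
first_order = np.eye(2) + h * Q  # delta_ij + q_ij * h
# The residual is of order h^2, consistent with the o(h) term.
print(np.abs(P_h - first_order).max())
</syntaxhighlight>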
===Jump chain/holding time definition===

Define a discrete-time Markov chain ''Y''<sub>''n''</sub> to describe the ''n''th jump of the process and variables ''S''<sub>1</sub>, ''S''<sub>2</sub>, ''S''<sub>3</sub>, ... to describe holding times in each of the states, where each holding time ''S''<sub>''i''</sub> follows an [[exponential distribution]] with rate parameter −''q''<sub>''Y''<sub>''i''</sub>''Y''<sub>''i''</sub></sub>.
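
This definition translates directly into a simulation scheme; the following sketch (assuming NumPy; the function name and arguments are illustrative, not from the source) holds each state ''i'' for an exponential time with rate −''q''<sub>''ii''</sub> and then jumps according to the jump chain:

<syntaxhighlight lang="python">
import numpy as np

def simulate_ctmc(Q, i0, t_end, seed=0):
    """Simulate a CTMC path from state i0 up to time t_end."""
    rng = np.random.default_rng(seed)
    t, i = 0.0, i0
    path = [(0.0, i0)]
    while True:
        rate = -Q[i, i]
        if rate == 0.0:                  # absorbing state: stay forever
            break
        t += rng.exponential(1.0 / rate) # holding time ~ Exp(-q_ii)
        if t >= t_end:
            break
        jump_probs = Q[i].copy()
        jump_probs[i] = 0.0
        i = int(rng.choice(len(Q), p=jump_probs / rate))  # jump chain step
        path.append((t, i))
    return path

Q = np.array([[-2.0, 2.0], [1.0, -1.0]])  # hypothetical rates
print(simulate_ctmc(Q, 0, 5.0))
</syntaxhighlight>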
===Transition probability definition===

For any ''n'' = 0, 1, 2, 3, ..., any times 0 ≤ ''t''<sub>0</sub> < ''t''<sub>1</sub> < ... < ''t''<sub>''n'' + 1</sub> and any states ''i''<sub>0</sub>, ''i''<sub>1</sub>, ..., ''i''<sub>''n'' + 1</sub> recorded at these times, it holds that
:<math>\Pr(X_{t_{n+1}} = i_{n+1} \mid X_{t_0} = i_0 , X_{t_1} = i_1 , \ldots, X_{t_n} = i_n ) = p_{i_n i_{n+1}}( t_{n+1} - t_n)</math>

where ''p''<sub>''ij''</sub>(''t'') is the solution of the [[forward equation]] (a [[first-order differential equation]])
:<math>P'(t) = P(t) Q</math>

with initial condition P(0) = ''I'', the [[identity matrix]].
==Properties==

===Irreducibility===

The state space ''S'' can be [[partition of a set|partitioned]] into communicating classes. A state ''i'' is said to lead to a state ''j'' if it is possible to get to ''j'' from ''i'', that is, if
| :<math>\operatorname{Pr}_i(X_t=j \text{ for some } t \geq 0)>0.</math>
Two states ''i'' and ''j'' are said to communicate (and therefore belong to the same communicating class) if each leads to the other. A CTMC is irreducible if there is a single communicating class.<ref>{{cite doi|10.1017/CBO9780511810633.003}}</ref><ref name="norris1" />
===Recurrence and transience===

A state ''i'' is recurrent if, starting in state ''i'', the process returns to state ''i'' at arbitrarily large times with probability 1, that is<ref name="norris2" />
| :<math>\operatorname{Pr}_i(\{t \geq 0 : X_t = i \} \text{ is unbounded}) = 1</math>
and a state ''i'' is transient if this event instead has probability zero,<ref name="norris2" />
| :<math>\operatorname{Pr}_i(\{t \geq 0 : X_t = i \} \text{ is unbounded}) = 0.</math>
A recurrent state is positive recurrent if the expected return time (the expected time, starting in state ''i'', until the next visit to state ''i'') is finite, and null recurrent otherwise.
==Transient behaviour==

Write P(''t'') for the matrix with entries ''p''<sub>''ij''</sub>(''t'') = P(''X''<sub>''t''</sub> = ''j'' | ''X''<sub>0</sub> = ''i''). Then the matrix P(''t'') satisfies the forward equation, a [[first-order differential equation]]
| :<math>P'(t) = P(t) Q</math>
where the prime denotes differentiation with respect to ''t''. The solution to this equation is given by a [[matrix exponential]]
| :<math>P(t) = e^{tQ}</math>
Consider, as a simple case, a CTMC on the state space {1,2}. The general ''Q'' matrix for such a process is the following 2 × 2 matrix with ''α'',''β'' > 0:
| :<math>Q = \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix}.</math>
The forward equation can be solved explicitly in this case to give

:<math>P(t) = \begin{pmatrix}
\frac{\beta}{\alpha+\beta} + \frac{\alpha}{\alpha+\beta}e^{-(\alpha+\beta)t} &
\frac{\alpha}{\alpha+\beta} - \frac{\alpha}{\alpha+\beta}e^{-(\alpha+\beta)t} \\
\frac{\beta}{\alpha+\beta} - \frac{\beta}{\alpha+\beta}e^{-(\alpha+\beta)t} &
\frac{\alpha}{\alpha+\beta} + \frac{\beta}{\alpha+\beta}e^{-(\alpha+\beta)t}
\end{pmatrix}</math>
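
As a sanity check, the closed form can be compared against the matrix exponential numerically; a sketch assuming NumPy and SciPy, with made-up values of ''α'', ''β'' and ''t'':

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

alpha, beta, t = 2.0, 3.0, 0.7  # hypothetical parameter values
s = alpha + beta
Q = np.array([[-alpha, alpha],
              [ beta, -beta]])

# Constant part plus exponentially decaying part, as in the formula above.
closed_form = (np.array([[beta, alpha],
                         [beta, alpha]]) / s
               + np.exp(-s * t) / s * np.array([[ alpha, -alpha],
                                                [-beta,   beta]]))
assert np.allclose(expm(t * Q), closed_form)
</syntaxhighlight>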
However, direct solutions are complicated to compute for larger matrices. Instead, one often exploits the fact that ''Q'' is the generator of a [[semigroup]] of matrices,

:<math>P(t+s) = e^{Q(t+s)} = e^{tQ} e^{sQ} = P(t) P(s).</math>
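
A quick numerical illustration of this identity, as a sketch assuming SciPy with a made-up ''Q'':

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

Q = np.array([[-2.0, 2.0], [1.0, -1.0]])  # hypothetical rates
t, s = 0.4, 1.1
# P(t+s) = P(t) P(s): one fine-grained solve can be reused by compounding.
assert np.allclose(expm((t + s) * Q), expm(t * Q) @ expm(s * Q))
</syntaxhighlight>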
==Stationary distribution==

The stationary distribution of an irreducible positive recurrent CTMC is the probability distribution to which the process converges for large values of ''t''. Observe that for the two-state process considered earlier, with P(''t'') given by
:<math>P(t) = \begin{pmatrix}
\frac{\beta}{\alpha+\beta} + \frac{\alpha}{\alpha+\beta}e^{-(\alpha+\beta)t} &
\frac{\alpha}{\alpha+\beta} - \frac{\alpha}{\alpha+\beta}e^{-(\alpha+\beta)t} \\
\frac{\beta}{\alpha+\beta} - \frac{\beta}{\alpha+\beta}e^{-(\alpha+\beta)t} &
\frac{\alpha}{\alpha+\beta} + \frac{\beta}{\alpha+\beta}e^{-(\alpha+\beta)t}
\end{pmatrix}</math>
as ''t'' → ∞ the distribution tends to
:<math>P_\pi = \begin{pmatrix}
\frac{\beta}{\alpha+\beta} & \frac{\alpha}{\alpha+\beta} \\
\frac{\beta}{\alpha+\beta} & \frac{\alpha}{\alpha+\beta}
\end{pmatrix}</math>
Observe that each row is the same distribution: the limit does not depend on the starting state. The row vector ''π'' may be found by solving<ref name="norris2">{{cite doi|10.1017/CBO9780511810633.005}}</ref>
:<math>\pi Q = 0</math>
with the additional constraint that
:<math>\sum_{i \in S} \pi_i = 1.</math>
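
In practice, ''π'' can be computed as a normalised left null vector of ''Q''. A minimal sketch, assuming SciPy, using the two-state example with ''α'' = 2 and ''β'' = 1:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import null_space

Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])  # alpha = 2, beta = 1

pi = null_space(Q.T).ravel()  # left null vector: pi Q = 0
pi /= pi.sum()                # impose sum(pi) = 1
print(pi)                     # [1/3, 2/3] = [beta, alpha] / (alpha + beta)
</syntaxhighlight>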
===Example===

[[File:Financial Markov process.svg|thumb|250px|right|Directed graph representation of a continuous-time Markov chain describing the state of financial markets (note: numbers are made-up).]]

The image to the right describes a continuous-time Markov chain with state space {Bull market, Bear market, Stagnant market} and [[transition rate matrix]]
:<math>Q=\begin{pmatrix}
-0.025 & 0.02 & 0.005 \\
0.3 & -0.5 & 0.2 \\
0.02 & 0.4 & -0.42
\end{pmatrix}.</math>
The stationary distribution of this chain can be found by solving ''π'' ''Q'' = 0, subject to the constraint that the elements sum to 1, to obtain
:<math>\pi = \begin{pmatrix}0.885 & 0.071 & 0.044 \end{pmatrix}.</math>
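
This calculation can be reproduced numerically; the sketch below (assuming NumPy) appends the normalisation condition to ''π'' ''Q'' = 0 and solves the resulting system by least squares:

<syntaxhighlight lang="python">
import numpy as np

Q = np.array([[-0.025,  0.02,  0.005],
              [ 0.3,   -0.5,   0.2  ],
              [ 0.02,   0.4,  -0.42 ]])

# Stack pi Q = 0 (transposed) with the row enforcing sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi.round(3))  # approximately [0.885, 0.071, 0.044]
</syntaxhighlight>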
==Hitting times==

{{Main|phase-type distribution}}

The hitting time is the time, starting from a given state, until the chain arrives in a given state or set of states. The distribution of such a time period has a phase-type distribution. The simplest such distribution is that of a single exponentially distributed transition.
===Expected hitting times===

For a subset of states ''A'' ⊆ ''S'', the vector ''k''<sup>''A''</sup> of expected hitting times (where element ''k''<sup>''A''</sup><sub>''i''</sub> represents the [[expected value|expected time]], starting in state ''i'', until the chain enters one of the states in the set ''A'') is the minimal non-negative solution to<ref name="norris2" />
:<math>\begin{align}
k_i^A = 0 & \text{ for } i \in A \\
-\sum_{j \in S} q_{ij} k_j^A = 1 & \text{ for } i \notin A.
\end{align}</math>
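
Because ''k''<sup>''A''</sup> vanishes on ''A'', this reduces to a linear solve over the states outside ''A''; a sketch assuming NumPy, with a made-up three-state ''Q'' and ''A'' = {0}:

<syntaxhighlight lang="python">
import numpy as np

Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  2.0, -2.0]])  # hypothetical rates
A = [0]                             # target set of states
rest = [i for i in range(len(Q)) if i not in A]

k = np.zeros(len(Q))                # k_i = 0 for i in A
# For i outside A: -sum_j q_ij k_j = 1, restricted to the remaining states.
k[rest] = np.linalg.solve(-Q[np.ix_(rest, rest)], np.ones(len(rest)))
print(k)                            # [0, 2, 2.5] for this Q
</syntaxhighlight>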
==Time reversal==

For a CTMC ''X''<sub>''t''</sub> and a fixed time ''T'', the time-reversed process is defined to be <math>\scriptstyle \hat X_t = X_{T-t}</math>. By [[Kelly's lemma]] this process has the same stationary distribution as the forward process.
A chain is said to be reversible if the reversed process is the same as the forward process. [[Kolmogorov's criterion]] states that a necessary and sufficient condition for a process to be reversible is that the product of transition rates around any closed loop must be the same in both directions.
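
As an illustration, the criterion can be checked numerically; a sketch assuming NumPy, reusing the made-up three-state market matrix from the example above and the loop 0 → 1 → 2 → 0:

<syntaxhighlight lang="python">
import numpy as np

Q = np.array([[-0.025,  0.02,  0.005],
              [ 0.3,   -0.5,   0.2  ],
              [ 0.02,   0.4,  -0.42 ]])

forward  = Q[0, 1] * Q[1, 2] * Q[2, 0]  # rates around 0 -> 1 -> 2 -> 0
backward = Q[0, 2] * Q[2, 1] * Q[1, 0]  # same loop traversed in reverse
print(forward, backward)  # 8e-05 vs 6e-04: unequal, so not reversible
</syntaxhighlight>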
==Embedded Markov chain==

<!-- Embedded Markov chain redirects here -->
One method of finding the [[stationary probability distribution]], ''π'', of an [[ergodic]] continuous-time Markov chain with transition rate matrix ''Q'' is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a '''[[jump process]]'''. Each element of the one-step transition probability matrix of the EMC, ''S'', is denoted by ''s''<sub>''ij''</sub>, and represents the [[conditional probability]] of transitioning from state ''i'' into state ''j'', given that a transition occurs. These conditional probabilities may be found by
:<math>
s_{ij} = \begin{cases}
\frac{q_{ij}}{\sum_{k \neq i} q_{ik}} & \text{if } i \neq j \\
0 & \text{otherwise}.
\end{cases}
</math>
From this, ''S'' may be written as
| :<math>S = I - \left( \operatorname{diag}(Q) \right)^{-1} Q</math>
where ''I'' is the [[identity matrix]] and diag(''Q'') is the [[diagonal matrix]] formed by selecting the [[main diagonal]] from the matrix ''Q'' and setting all other elements to zero.
To find the stationary probability distribution vector, we must next find <math>\phi</math> such that
| :<math>\phi S = \phi, \, </math>
with <math>\phi</math> a row vector such that all elements of <math>\phi</math> are greater than 0 and [[Norm (mathematics)|<math>\|\phi\|_1</math>]] = 1. From this, ''π'' may be found as
| :<math>\pi = {-\phi (\operatorname{diag}(Q))^{-1} \over \left\| \phi (\operatorname{diag}(Q))^{-1} \right\|_1}. </math>
Note that ''S'' may be periodic, even if ''Q'' is not. Once ''π'' is found, it must be normalized so that its elements sum to 1.
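
Putting these steps together, a sketch of the whole EMC route, assuming NumPy and reusing the market example's ''Q'':

<syntaxhighlight lang="python">
import numpy as np

Q = np.array([[-0.025,  0.02,  0.005],
              [ 0.3,   -0.5,   0.2  ],
              [ 0.02,   0.4,  -0.42 ]])

D_inv = np.diag(1.0 / np.diag(Q))   # (diag(Q))^{-1}
S = np.eye(3) - D_inv @ Q           # EMC transition matrix

# phi S = phi: take the left eigenvector of S with eigenvalue 1.
w, v = np.linalg.eig(S.T)
phi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
phi /= phi.sum()                    # phi > 0 with ||phi||_1 = 1

pi = -phi @ D_inv
pi /= np.abs(pi).sum()              # normalise as in the formula above
print(pi.round(3))                  # approximately [0.885, 0.071, 0.044]
</syntaxhighlight>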
Another discrete-time process that may be derived from a continuous-time Markov chain is a '''δ-skeleton'''—the (discrete-time) Markov chain formed by observing ''X''(''t'') at intervals of δ units of time. The random variables ''X''(0), ''X''(δ), ''X''(2δ), ... give the sequence of states visited by the δ-skeleton.
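
The δ-skeleton's one-step transition matrix is simply P(δ); a sketch assuming SciPy, with a made-up ''Q'' and δ:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

Q = np.array([[-2.0, 2.0], [1.0, -1.0]])  # hypothetical rates
delta = 0.5
P_skeleton = expm(delta * Q)  # one-step matrix of the delta-skeleton
print(P_skeleton)
</syntaxhighlight>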
==Applications==

Markov chains are used to describe physical processes where a system evolves in continuous time. Sometimes, rather than a single system, they are applied to an ensemble of identical, independent systems, and the probabilities are used to find how many members of the ensemble are in a given state. A [[master equation]] treatment is often used to analyse systems that evolve as Markov chains,{{citation needed|date=May 2012}} with [[System size expansion|approximations]] possible for complicated systems.{{citation needed|date=May 2012}}
===Chemical reactions===

Imagine a large number ''n'' of molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. The state of any single enzyme follows a Markov chain, and since the molecules are essentially independent of each other, the expected number of molecules in state A or B at any given time is ''n'' times the probability that a given molecule is in that state.
===Queueing theory===

{{Main|Queueing theory}}

Numerous queueing models use continuous-time Markov chains. For example, an [[M/M/1 queue]] is a CTMC on the non-negative integers where upward transitions from ''i'' to ''i'' + 1 occur at rate ''λ'' according to a [[Poisson process]] and describe job arrivals, while transitions from ''i'' to ''i'' – 1 (for ''i'' ≥ 1) occur at rate ''μ'' (job service times are exponentially distributed) and describe completed services (departures) from the queue.
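
The transition structure just described gives a tridiagonal rate matrix; the sketch below (assuming NumPy; the function name is illustrative, and the state space is truncated at a hypothetical maximum for computation) builds it explicitly:

<syntaxhighlight lang="python">
import numpy as np

def mm1_generator(lam, mu, n_max):
    """Rate matrix of an M/M/1 queue truncated at n_max jobs."""
    Q = np.zeros((n_max + 1, n_max + 1))
    for i in range(n_max + 1):
        if i < n_max:
            Q[i, i + 1] = lam   # arrival: i -> i + 1 at rate lambda
        if i > 0:
            Q[i, i - 1] = mu    # service completion: i -> i - 1 at rate mu
        Q[i, i] = -Q[i].sum()   # diagonal makes the row sum to zero
    return Q

print(mm1_generator(lam=1.0, mu=2.0, n_max=4))
</syntaxhighlight>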
==Extensions==

A time-dependent (time-inhomogeneous) CTMC is defined as above, but with the transition rate matrix allowed to vary with time, Q(''t'').
==See also==

* [[Master equation]] (physics)
* [[Semi-Markov process]]
* [[Variable-order Markov model]]
* [[Spectral expansion solution]]
* [[Matrix geometric solution method]]
==References==

<references/>

* {{cite book|title=Introduction to the Numerical Solution of Markov Chains|author=William J. Stewart|year=1994|publisher=Princeton University Press|pages=17–23|isbn=0-691-03699-3}}
{{Queueing theory}}

{{DEFAULTSORT:Continuous-Time Markov Process}}
[[Category:Markov processes]]