In [[mathematics]], the '''matrix exponential''' is a [[matrix function]] on [[square matrix|square matrices]] analogous to the ordinary [[exponential function]]. Abstractly, the matrix exponential gives the connection between a matrix [[Lie algebra]] and the corresponding [[Lie group]].


Let {{mvar|X}}  be an {{math|''n''×''n''}} [[real number|real]] or [[complex number|complex]] [[matrix (mathematics)|matrix]]. The exponential of {{mvar|X}}, denoted by {{math|''e''<sup>''X''</sup>}} or {{math|exp(''X'')}}, is the {{math|''n''×''n''}} matrix given by the [[power series]]

:<math>e^X = \sum_{k=0}^\infty{1 \over k!}X^k.</math>

The above series always converges, so the exponential of {{mvar|X}} is well-defined. Note that if {{mvar|X}} is a 1×1 matrix, the matrix exponential of {{mvar|X}} is a 1×1 matrix whose single element is the ordinary [[Exponential function|exponential]] of the single element of {{mvar|X}}.
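
A quick numerical sketch of this definition (a truncated partial sum of the series, with SciPy's <code>scipy.linalg.expm</code> used only as an independent check; the truncation depth <code>K</code> is an arbitrary choice) might look like:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

def expm_series(X, K=30):
    """Approximate e^X by the partial sum sum_{k=0}^{K} X^k / k!."""
    n = X.shape[0]
    term = np.eye(n)            # k = 0 term: X^0 / 0! = I
    result = np.eye(n)
    for k in range(1, K + 1):
        term = term @ X / k     # X^k / k!, built incrementally
        result = result + term
    return result

X = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(np.allclose(expm_series(X), expm(X)))  # True
</syntaxhighlight>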

==Properties==
Let {{math|''X''}} and {{math|''Y''}} be {{math|''n''×''n''}} complex matrices and let {{math|''a''}} and {{math|''b''}} be arbitrary complex numbers. We denote the {{math|''n''×''n''}} [[identity matrix]] by {{math|''I''}} and the [[zero matrix]] by 0. The matrix exponential satisfies the following properties:
:<math forcemathmode="png">E=mc^2</math>


* {{math|''e''<sup>0</sup> {{=}} ''I''}}
* {{math|''e''<sup>''aX''</sup>''e''<sup>''bX''</sup> {{=}} ''e''<sup>(''a'' + ''b'')''X''</sup>}}
:<math forcemathmode="source">E=mc^2</math> -->
* {{math|''e''<sup>''X''</sup>''e''<sup>&minus;''X''</sup> {{=}} ''I''}}
* If {{math|''XY'' {{=}} ''YX''}} then {{math|''e''<sup>''X''</sup>''e''<sup>''Y''</sup> {{=}} ''e''<sup>''Y''</sup>''e''<sup>''X''</sup> {{=}} ''e''<sup>(''X''&nbsp;+&nbsp;''Y'')</sup>.}}
* If {{math|''Y''}} is [[invertible matrix|invertible]] then {{math|''e''<sup>''YXY''<sup>&minus;1</sup></sup> {{=}}''Ye''<sup>''X''</sup>''Y''<sup>&minus;1</sup>.}}
* {{math|exp(''X''<sup>T</sup>) {{=}} (exp ''X'')<sup>T</sup>}}, where {{math|''X''<sup>T</sup>}} denotes the [[transpose]] of {{math|''X''}}. It follows that if {{math|''X''}} is [[symmetric matrix|symmetric]] then {{math|''e''<sup>''X''</sup>}} is also symmetric, and that if {{math|''X''}} is [[skew-symmetric matrix|skew-symmetric]] then {{math|''e''<sup>''X''</sup>}} is [[orthogonal matrix|orthogonal]].
* {{math|exp(''X''<sup>*</sup>) {{=}} (exp ''X'')<sup>*</sup>}}, where {{math|''X''<sup>*</sup>}} denotes the [[conjugate transpose]] of {{math|''X''}}. It follows that if {{math|''X''}} is [[Hermitian matrix|Hermitian]] then {{math|''e''<sup>''X''</sup>}} is also Hermitian, and that if {{math|''X''}} is [[skew-Hermitian matrix|skew-Hermitian]] then {{math|''e''<sup>''X''</sup>}} is [[unitary matrix|unitary]].
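
These identities are straightforward to spot-check numerically. The following sketch (with matrices drawn at random via NumPy, an arbitrary choice) verifies three of them:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))

# e^X e^{-X} = I
print(np.allclose(expm(X) @ expm(-X), np.eye(3)))          # True

# exp(X^T) = (exp X)^T
print(np.allclose(expm(X.T), expm(X).T))                   # True

# e^{Y X Y^{-1}} = Y e^X Y^{-1} for invertible Y
Y = rng.standard_normal((3, 3)) + 3 * np.eye(3)            # almost surely invertible
Yi = np.linalg.inv(Y)
print(np.allclose(expm(Y @ X @ Yi), Y @ expm(X) @ Yi))     # True
</syntaxhighlight>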


===Linear differential equation systems===

{{main|matrix differential equation}}

One of the reasons for the importance of the matrix exponential is that it can be used to solve systems of linear [[ordinary differential equations]]. The solution of
: <math> \frac{d}{dt} y(t) = Ay(t), \quad y(0) = y_0, </math>
where {{mvar|A}} is a constant matrix, is given by
: <math> y(t) = e^{At} y_0. \, </math>
The matrix exponential can also be used to solve the inhomogeneous equation
: <math> \frac{d}{dt} y(t) = Ay(t) + z(t), \quad y(0) = y_0. </math>
See the [[#Applications|section on applications below]] for examples.
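
As an illustration, here is a minimal sketch of this solution formula (the matrix <code>A</code> and initial value <code>y0</code> below are arbitrary examples, and SciPy's <code>solve_ivp</code> is used only as a cross-check):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
y0 = np.array([1.0, 0.0])

def y(t):
    """Solution of y'(t) = A y(t), y(0) = y0, via the matrix exponential."""
    return expm(A * t) @ y0

# Cross-check against a general-purpose ODE integrator
sol = solve_ivp(lambda t, y: A @ y, (0.0, 1.0), y0, rtol=1e-10, atol=1e-12)
print(np.allclose(y(1.0), sol.y[:, -1], atol=1e-6))  # True
</syntaxhighlight>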


There is no closed-form solution for differential equations of the form
: <math> \frac{d}{dt} y(t) = A(t) \, y(t), \quad y(0) = y_0, </math>
where {{mvar|A}}  is not constant, but the [[Magnus series]] gives the solution as an infinite sum.


===The exponential of sums===

We know that the exponential function satisfies {{math|''e''<sup>''x''+''y''</sup> {{=}} ''e''<sup>''x''</sup> ''e''<sup>''y''</sup>}} for any real numbers (scalars) {{mvar|x}} and {{mvar|y}}. The same goes for commuting matrices: If the matrices {{mvar|X}} and {{mvar|Y}} commute (meaning that {{math|''XY'' {{=}} ''YX''}}), then
:<math>e^{X+Y} = e^Xe^Y ~.</math>

However, if they do not commute, then the above equality does not necessarily hold, in which case the [[Baker–Campbell–Hausdorff formula]] furnishes {{math|''e''<sup>''X''+''Y''</sup>}}.

The converse is false: the equation  {{math|''e''<sup>''X''+''Y''</sup> {{=}} ''e''<sup>''X''</sup> ''e''<sup>''Y''</sup>}} does not necessarily imply that {{mvar|X}} and {{mvar|Y}} commute.
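
A concrete counterexample is easy to generate numerically; here is a minimal sketch with two standard non-commuting nilpotent matrices:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0], [0.0, 0.0]])
Y = np.array([[0.0, 0.0], [1.0, 0.0]])

print(np.allclose(X @ Y, Y @ X))                    # False: X and Y do not commute
print(np.allclose(expm(X + Y), expm(X) @ expm(Y)))  # False: the identity fails
</syntaxhighlight>
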
For [[Hermitian matrix| Hermitian matrices]] there are two notable theorems related to the [[Matrix trace|trace]] of matrix exponentials.
====Golden–Thompson inequality====
{{main|Golden–Thompson inequality}}
 
If {{mvar|A}}  and {{mvar|H}}  are Hermitian matrices, then
:<math>\operatorname{tr}\exp(A+H) \leq \operatorname{tr}(\exp(A)\exp(H)). </math> <ref>{{cite book | author=Bhatia, R. | title=Matrix Analysis |series=Graduate Texts in Mathematics|isbn=978-0-387-94846-1 | year = 1997 | publisher=Springer | volume=169}}</ref>
Note that there is no requirement of commutativity. There are counterexamples showing that the Golden–Thompson inequality cannot be extended to three matrices; in any event, {{math|tr(exp(''A'')exp(''B'')exp(''C''))}} is not guaranteed to be real for Hermitian {{math|''A'', ''B'', ''C''}}. However, the next theorem accomplishes a three-matrix extension in a certain sense.
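
The inequality is simple to test numerically; a sketch (with random Hermitian matrices, an arbitrary choice):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def random_hermitian(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

A, H = random_hermitian(4), random_hermitian(4)

lhs = np.trace(expm(A + H)).real
rhs = np.trace(expm(A) @ expm(H)).real  # both traces are real for Hermitian A, H
print(lhs <= rhs)  # True: Golden–Thompson inequality
</syntaxhighlight>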
 
====Lieb's theorem====
'''Lieb's theorem''', named after [[Elliott H. Lieb]], states that, for a fixed [[Hermitian matrix]] {{mvar|H}}, the function
:<math> f(A) = \operatorname{tr} \,\exp \left (H + \log A \right) </math>
is [[Concave function|concave]] on the [[Convex cone|cone]] of [[positive-definite matrix|positive-definite matrices]].<ref>{{cite journal|doi=10.1016/0001-8708(73)90011-X | author = E. H. Lieb | title=Convex trace functions and the Wigner–Yanase–Dyson conjecture | journal=Adv. Math. | volume=11 | pages=267–288 | year=1973|issue=3|ref=harv}}
{{cite journal|doi=10.1007/BF01646492 | author=H. Epstein | title=Remarks on two theorems of E. Lieb | journal=Commun. Math. Phys. |volume=31|pages=317–325 | year=1973|issue=4|ref=harv}}</ref>
 
===The exponential map===
 
Note that the exponential of a matrix is always an [[invertible matrix]]. The inverse matrix of {{math|''e''<sup>''X''</sup>}} is given by {{math|''e''<sup>&minus;''X''</sup>}}. This is analogous to the fact that the exponential of a complex number is always nonzero. The matrix exponential then gives us a map
:<math>\exp \colon M_n(\mathbb C) \to \mathrm{GL}(n,\mathbb C)</math>
from the space of all ''n''×''n'' matrices to the [[general linear group]] of degree {{mvar|n}}, i.e. the [[group (mathematics)|group]] of all ''n''×''n'' invertible matrices. In fact, this map is [[surjective]] which means that every invertible matrix can be written as the exponential of some other matrix (for this, it is essential to consider the field '''C''' of complex numbers and not '''R''').
 
For any two matrices {{mvar|X}} and {{mvar|Y}},
:<math> \| e^{X+Y} - e^X \| \le \|Y\| e^{\|X\|} e^{\|Y\|}, </math>
 
where ||&nbsp;·&nbsp;|| denotes an arbitrary [[matrix norm]]. It follows that the exponential map is [[continuity (mathematics)|continuous]] and [[Lipschitz continuous]] on [[compact set|compact]] subsets of {{math|''M''<sub>''n''</sub>('''C''')}}.
 
The map
:<math>t \mapsto e^{tX}, \qquad t \in \mathbb R</math>
defines a [[Smooth function#Smoothness|smooth]] curve in the general linear group which passes through the identity element at ''t'' = 0.
 
In fact, this gives a [[one-parameter subgroup]] of the general linear group since
:<math>e^{tX}e^{sX} = e^{(t+s)X}.\,</math>
 
The derivative of this curve (or [[tangent vector]]) at a point ''t'' is given by
:<math>\frac{d}{dt}e^{tX} = Xe^{tX} = e^{tX}X. \qquad (1)</math>
The derivative at ''t'' = 0 is just the matrix ''X'', which is to say that ''X'' generates this one-parameter subgroup.
 
More generally,<ref>{{cite journal|doi=10.1063/1.1705306 | author = R. M. Wilcox | title=Exponential Operators and Parameter Differentiation in Quantum Physics | journal=Journal of Mathematical Physics | volume=8 | page=962–982 | year=1967|ref=harv|issue=4}}</ref> for a generic {{mvar|t}}-dependent exponent, {{math|''X(t)''}},
{{Equation box 1
|indent =:
|equation = <math>\frac{d}{dt}e^{X(t)} = \int_0^1 e^{\alpha X(t)} \frac{dX(t)}{dt} e^{(1-\alpha) X(t)}\,d\alpha ~.  </math> 
|cellpadding= 
|border
|border colour = #0073CF
|bgcolor=#F9FFF7}}
 
Taking the above expression {{math|''e''<sup>''X''(''t'')</sup>}} outside the integral sign and expanding the integrand with the help of the [[Baker–Campbell–Hausdorff formula|Hadamard lemma]], one can obtain the following useful expression for the derivative of the matrix exponential,
:<math>\left(\frac{d}{dt}e^{X(t)}\right)e^{-X(t)} = \frac{d}{dt}X(t) + \frac{1}{2!}[X(t),\frac{d}{dt}X(t)] + \frac{1}{3!}[X(t),[X(t),\frac{d}{dt}X(t)]]+\cdots </math>
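
The boxed integral formula above can be checked directly against a finite-difference derivative; a sketch (the linear path {{math|''X''(''t'') {{=}} ''X''<sub>0</sub> + ''tV''}} is an arbitrary choice):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

rng = np.random.default_rng(2)
X0 = rng.standard_normal((3, 3))
V = rng.standard_normal((3, 3))          # dX/dt for the path X(t) = X0 + t V

X = lambda t: X0 + t * V
t, h = 0.7, 1e-6

# Left side: centered finite-difference derivative of e^{X(t)}
lhs = (expm(X(t + h)) - expm(X(t - h))) / (2 * h)

# Right side: integral_0^1 e^{a X(t)} (dX/dt) e^{(1-a) X(t)} da
rhs, _ = quad_vec(lambda a: expm(a * X(t)) @ V @ expm((1 - a) * X(t)), 0.0, 1.0)

print(np.allclose(lhs, rhs, atol=1e-5))  # True, up to finite-difference error
</syntaxhighlight>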
 
===The determinant of the matrix exponential===
 
By [[Jacobi's formula]], for any complex square matrix the following identity holds:
{{Equation box 1
|indent =:
|equation = <math> \det (e^A)= e^{\operatorname{tr}(A)}~.</math>
|cellpadding= 6
|border
|border colour = #0073CF
|bgcolor=#F9FFF7}}
 
In addition to providing a computational tool, this formula demonstrates that a matrix exponential is always an [[invertible matrix]]. This follows from the fact that the right-hand side of the above equation is always non-zero, so that {{math|det(''e<sup>A</sup>'') ≠ 0}}, which means that {{math|''e<sup>A</sup>''}} must be invertible.
 
In the real-valued case, the formula also shows that the map
:<math>\exp \colon M_n(\mathbb R) \to \mathrm{GL}(n,\mathbb R)</math>
is not [[surjective function|surjective]], in contrast to the complex case mentioned earlier. This follows from the fact that, for real-valued matrices, the right-hand side of the formula is always positive, while there exist invertible matrices with a negative determinant.
 
==Computing the matrix exponential==
 
Finding reliable and accurate methods to compute the matrix exponential is difficult, and this is still a topic of considerable current research in mathematics and numerical analysis. Both [[Matlab]] and [[GNU Octave]] use [[Padé approximant]]s.<ref>{{cite web|url=http://www.mathworks.de/help/techdoc/ref/expm.html |title=Matrix exponential - MATLAB expm - MathWorks Deutschland |publisher=Mathworks.de |date=2011-04-30 |accessdate=2013-06-05}}</ref><ref>{{cite web|url=http://www.network-theory.co.uk/docs/octave3/octave_200.html |title=GNU Octave - Functions of a Matrix |publisher=Network-theory.co.uk |date=2007-01-11 |accessdate=2013-06-05}}</ref> Several methods are listed below.
 
===Diagonalizable case===
 
If a matrix is [[diagonal matrix|diagonal]]:
 
:<math>A=\begin{bmatrix} a_1 & 0 & \ldots & 0 \\
0 & a_2 & \ldots & 0  \\ \vdots & \vdots & \ddots & \vdots \\
0 & 0 & \ldots & a_n \end{bmatrix} </math>,
 
then its exponential can be obtained by just exponentiating every entry on the main diagonal:
 
:<math>e^A=\begin{bmatrix} e^{a_1} & 0 & \ldots & 0 \\
0 & e^{a_2} & \ldots & 0  \\ \vdots & \vdots & \ddots & \vdots \\
0 & 0 & \ldots & e^{a_n} \end{bmatrix} </math>.
 
This also allows one to exponentiate [[diagonalizable matrix|diagonalizable matrices]]. If {{math|''A'' {{=}} ''UDU''<sup>&minus;1</sup>}} and {{mvar|D}} is diagonal, then {{math|''e''<sup>''A''</sup> {{=}} ''Ue''<sup>''D''</sup>''U''<sup>&minus;1</sup>}}. Application of [[Sylvester's formula]] yields the same result. The reason is that multiplication of diagonal matrices amounts to element-wise multiplication of their diagonal entries; in particular, the "one-dimensional" exponentiation is applied element-wise along the diagonal.
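
A sketch of this route (eigendecomposition via NumPy; the symmetric example matrix guarantees diagonalizability):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # symmetric, hence diagonalizable

# A = U D U^{-1}; exponentiate the eigenvalues entry-wise
d, U = np.linalg.eig(A)
eA = U @ np.diag(np.exp(d)) @ np.linalg.inv(U)

print(np.allclose(eA, expm(A)))  # True
</syntaxhighlight>
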
===Projection case===
If the matrix in question is a [[projection matrix]] (idempotent), then its matrix exponential is {{math|''e''<sup>''P''</sup> {{=}} ''I'' + (''e'' &minus; 1)''P''}}, which is easy to show upon expansion of the definition of the exponential,
:<math>e^P = I + \sum_{k=1}^{\infty} \frac{P^k}{k!}=I+\left(\sum_{k=1}^{\infty} \frac{1}{k!}\right)P=I+(e-1)P      ~.</math>
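
For instance (a minimal check, with an arbitrarily chosen projection):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

P = np.array([[1.0, 0.0],
              [0.0, 0.0]])
assert np.allclose(P @ P, P)  # idempotent: P is a projection

print(np.allclose(expm(P), np.eye(2) + (np.e - 1) * P))  # True
</syntaxhighlight>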
 
===Nilpotent case===
 
A matrix ''N'' is [[nilpotent matrix|nilpotent]] if ''N''<sup>''q''</sup> = 0 for some integer ''q''. In this case, the matrix exponential ''e''<sup>''N''</sup> can be computed directly from the series expansion, as the series terminates after a finite number of terms:
 
:<math>e^N = I + N + \frac{1}{2}N^2 + \frac{1}{6}N^3 + \cdots + \frac{1}{(q-1)!}N^{q-1} ~.</math>
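
For example (a sketch with a strictly upper-triangular matrix, for which {{math|''N''<sup>3</sup> {{=}} 0}}):

<syntaxhighlight lang="python">
import numpy as np
from math import factorial
from scipy.linalg import expm

N = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])   # N^3 = 0

# The series terminates: e^N = I + N + N^2/2!
eN = sum(np.linalg.matrix_power(N, k) / factorial(k) for k in range(3))
print(np.allclose(eN, expm(N)))  # True
</syntaxhighlight>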
 
===Generalization===
When the [[Minimal polynomial (linear algebra)|minimal polynomial]] of a matrix ''X'' can be factored into a product of first-degree polynomials, it can be expressed as a sum
:<math>X = A + N \,</math>
where
*''A'' is diagonalizable
*''N'' is nilpotent
*''A'' commutes with ''N'' (i.e. ''AN'' = ''NA'')
This is the [[Jordan–Chevalley decomposition]].
 
This means that we can compute the exponential of ''X'' by reducing to the previous two cases:
:<math>e^X = e^{A+N} = e^A e^N. \,</math>
Note that we need the commutativity of ''A'' and ''N'' for the last step to work.
 
Another (closely related) method if the field is [[algebraically closed]] is to work with the [[Jordan form]] of ''X''. Suppose that ''X'' = ''PJP''<sup>&nbsp;&minus;1</sup> where ''J'' is the Jordan form of ''X''. Then
 
:<math>e^{X}=Pe^{J}P^{-1}.\,</math>
 
Also, since
: <math>J=J_{a_1}(\lambda_1)\oplus J_{a_2}(\lambda_2)\oplus\cdots\oplus J_{a_n}(\lambda_n),</math>
 
: <math>
\begin{align}
e^{J} & {} = \exp \big( J_{a_1}(\lambda_1)\oplus J_{a_2}(\lambda_2)\oplus\cdots\oplus J_{a_n}(\lambda_n) \big) \\
& {} = \exp \big( J_{a_1}(\lambda_1) \big) \oplus \exp \big( J_{a_2}(\lambda_2) \big) \oplus\cdots\oplus \exp \big( J_{a_n}(\lambda_n) \big).
\end{align}
</math>
 
Therefore, we need only know how to compute the matrix exponential of a Jordan block. But each Jordan block is of the form
:<math>J_{a}(\lambda) = \lambda I + N \,</math>
where ''N'' is a special nilpotent matrix. The matrix exponential of this block is given by
:<math>e^{\lambda I + N} = e^{\lambda}e^N. \,</math>
 
===Evaluation by Laurent series===
By virtue of the [[Cayley–Hamilton theorem]] the matrix exponential is expressible as a polynomial of degree {{mvar|n}}−1.
If {{mvar|P}} and {{math|''Q<sub>t</sub>''}} are nonzero polynomials in one variable, such that {{math|''P''(''A'') {{=}} 0}}, and if the [[meromorphic function]]
:<math>f(z)=\frac{e^{t z}-Q_t(z)}{P(z)}</math>
is [[entire function|entire]], then
:<math>e^{t A} = Q_t(A)</math>.
To prove this, multiply the first of the two above equalities by {{math|''P''(''z'')}} and replace {{mvar|z}} by {{mvar|A}}.
 
Such a polynomial {{math|''Q<sub>t</sub>(z)''}} can be found as follows (see [[Sylvester's formula]]). Letting {{mvar|a}} be a root of {{mvar|P}}, {{math|''Q<sub>a,t</sub>(z)''}} is solved from the product of {{mvar|P}} by the [[Laurent series#Principal part|principal part]] of the [[Laurent series]] of {{mvar|f}} at {{mvar|a}}: it is proportional to the relevant [[Frobenius covariant]]. Then the sum ''S<sub>t</sub>'' of the ''Q<sub>a,t</sub>'', where {{mvar|a}} runs over all the roots of {{mvar|P}}, can be taken as a particular {{math|''Q<sub>t</sub>''}}. All the other ''Q<sub>t</sub>'' will be obtained by adding a multiple of {{mvar|P}} to {{math|''S<sub>t</sub>(z)''}}. In particular, {{math|''S<sub>t</sub>(z)''}}, the [[Sylvester's formula|Lagrange-Sylvester polynomial]], is the only {{math|''Q<sub>t</sub>''}} whose degree is less than that of {{mvar|P}}.
 
'''Example''': Consider the case of an arbitrary  2-by-2 matrix,
:<math>A:=\begin{bmatrix}
a & b \\
c & d \end{bmatrix}.</math>
 
The exponential matrix {{math|e<sup>''tA''</sup>}}, by virtue of the [[Cayley–Hamilton theorem]], must be of  the form
::<math>e^{tA}=s_0(t)\,I+s_1(t)\,A</math>.
(For any complex number {{mvar|z}} and any '''''C'''''-algebra {{mvar|B}}, we denote again by {{mvar|z}}  the product of {{mvar|z}} by the unit of {{mvar|B}}.) Let {{mvar|α}} and {{mvar|β}}  be the roots of the [[characteristic polynomial]] of {{mvar|A}},
:<math>P(z)=z^2-(a+d)\ z+ ad-bc= (z-\alpha)(z-\beta) ~ .</math>
 
Then we have
:<math>S_t(z)= e^{\alpha t} \frac{z-\beta}{\alpha-\beta}  + e^{\beta t} \frac{z-\alpha}{\beta-\alpha}  ~, </math>
and hence
:<math>s_0(t)=\frac{\alpha\,e^{\beta t}
-\beta\,e^{\alpha t}}{\alpha-\beta},\quad
s_1(t)=\frac{e^{\alpha t}-e^{\beta t}}{\alpha-\beta}\quad</math>
if  {{math|''α'' ≠ ''β''}}; while, if  {{math|''α'' {{=}} ''β''}},
:<math>S_t(z)= e^{\alpha t} ( 1+ t (z-\alpha  ))  ~, </math>
so that
:<math>s_0(t)=(1-\alpha\,t)\,e^{\alpha t},\quad
s_1(t)=t\,e^{\alpha t}~.</math>
 
Defining
:<math>s \equiv \frac{\alpha + \beta}{2}=\frac{\operatorname{tr} A}{2}~, \qquad \qquad  q\equiv \frac{\alpha-\beta}{2}=\pm\sqrt{-\det\left(A-s I\right)},</math>
we have
:<math>s_0(t) = e^{s t}\left(\cosh (q t) - s \frac{\sinh (q t)}{q}\right),\qquad s_1(t) =e^{s t}\frac{\sinh(q t)}{q},</math>
where {{math|sinh(''qt'')/''q''}} is 0 if {{mvar|t}} = 0, and {{mvar|t}} if {{mvar|q}} = 0.
Thus,
{{Equation box 1
|indent =:
|equation = <math>e^{tA}=e^{s t}\left( (\cosh (q t) - s \frac{\sinh (q t)}{q})~I~+\frac{\sinh(q t)}{q} A\right) ~.</math>
|cellpadding= 6
|border
|border colour = #0073CF
|bgcolor=#F9FFF7}}
Thus, as indicated above, since the matrix {{mvar|A}} decomposes into the sum of two mutually commuting pieces, the traceful piece and the traceless piece,
:<math> A= sI + (A-sI)~,</math>
the matrix exponential reduces to a plain product of the exponentials of the two respective pieces. This is a formula often used in physics, as it amounts to the analog of [[Euler's formula]] for [[Pauli_spin_matrices#Exponential_of_a_Pauli_vector|Pauli spin matrices]], that is, rotations of the doublet representation of the group [[SU(2)]].
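
The boxed 2×2 formula translates directly into code; a sketch (the handling of the confluent limit {{math|''q'' → 0}} and the complex square root are implementation choices):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

def expm_2x2(A, t=1.0):
    """Closed-form e^{tA} for a 2x2 matrix A via the traceful/traceless split."""
    s = np.trace(A) / 2.0
    q = np.sqrt(complex(-np.linalg.det(A - s * np.eye(2))))  # q^2 = -det(A - sI)
    if abs(q) < 1e-12:
        c0, c1 = 1.0 - s * t, t          # limits of the two coefficients as q -> 0
    else:
        c0 = np.cosh(q * t) - s * np.sinh(q * t) / q
        c1 = np.sinh(q * t) / q
    out = np.exp(s * t) * (c0 * np.eye(2) + c1 * A)
    return out.real if np.allclose(out.imag, 0.0) else out

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.allclose(expm_2x2(A), expm(A)))  # True
</syntaxhighlight>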
 
 
The polynomial {{math|''S<sub>t</sub>''}} can also be given the following "[[interpolation]]" characterization. Define {{math|''e<sub>t</sub>(z) ≡ e<sup>tz</sup>''}}, and {{mvar|n}} ≡ deg {{mvar|P}}. Then {{math|''S<sub>t</sub>(z)''}} is the unique degree {{math|< ''n''}} polynomial which satisfies {{math|''S<sub>t</sub><sup>(k)</sup>(a)'' {{=}} ''e<sub>t</sub><sup>(k)</sup>(a)''}} whenever {{mvar|k}} is less than the multiplicity of {{mvar|a}} as a root of {{mvar|P}}. We assume, as we obviously can, that {{mvar|P}} is the [[Minimal polynomial (linear algebra)|minimal polynomial]] of {{mvar|A}}. We further assume that {{mvar|A}} is a [[diagonalizable matrix]]. In particular, the roots of {{mvar|P}} are simple, and the "[[interpolation]]" characterization indicates that {{math|''S<sub>t</sub>''}} is given by the [[Lagrange interpolation]] formula, so it is the [[Sylvester's formula|Lagrange−Sylvester polynomial]].
 
At the other extreme, if {{math| ''P'' {{=}}  ''(z−a)<sup>n</sup>''}}, then
:<math>S_t=e^{at}\ \sum_{k=0}^{n-1}\ \frac{t^k}{k!}\ (z-a)^k ~.</math>
The simplest case not covered by the above observations is when <math>P=(z-a)^2\,(z-b)</math> with  {{math|''a'' ≠ ''b''}}, which yields
:<math>S_t=e^{at}\ \frac{z-b}{a-b}\ \Bigg(1+\left(t+\frac{1}{b-a}\right)(z-a)\Bigg)+e^{bt}\ \frac{(z-a)^2}{(b-a)^2}\quad.</math>
 
=== Evaluation by implementation of [[Sylvester's formula]]===
A practical, expedited computation of the above reduces to the following rapid steps.
Recall from above that an ''n''-by-''n'' matrix  {{math|exp(''tA'')}} amounts to a linear combination of the first {{mvar|n}}−1 powers of {{mvar|A}}  by the [[Cayley-Hamilton theorem]].  For [[diagonalizable matrix|diagonalizable]] matrices, as illustrated above, e.g. in the 2 by 2 case,  [[Sylvester's formula]] yields  {{math|exp(''tA'') {{=}} ''B<sub>α</sub>'' exp(''tα'')+''B<sub>β</sub>'' exp(''tβ'')}}, where the {{mvar|B}}s are  the [[Frobenius covariant]]s of {{mvar|A}}.  It is easiest, however, to simply solve for these {{mvar|B}}s directly, by evaluating this expression and its first derivative at {{mvar|t}}=0, in terms of {{mvar|A}} and {{mvar|I}},  to find the same answer as above.
 
But this simple procedure also works for [[defective matrix|defective]] matrices, in a generalization due to Buchheim.<ref>Rinehart, R. F. (1955). "The equivalence of definitions of a matric function". ''The American Mathematical Monthly'', '''62''' (6), 395-414.</ref> This is illustrated here for a 4-by-4 example of a matrix which is not diagonalizable, and the {{mvar|B}}s are not projection matrices.
 
Consider
:<math>A =
\begin{pmatrix}
1 & 1 & 0 & 0 \\
0 & 1 & 1 & 0 \\
0 & 0 & 1 & -1/8 \\
0 & 0 & 1/2 & 1/2
\end{pmatrix}  ~,
</math>
with eigenvalues {{math| ''λ''<sub>1</sub>{{=}}3/4}}  and  {{math| ''λ''<sub>2</sub>{{=}}1}}, each with a
multiplicity of two.
 
Consider the exponential of each eigenvalue multiplied by {{mvar|t}},  {{math|exp(''λ<sub>i</sub>t'')}}. Multiply each such by the corresponding undetermined coefficient matrix {{math|''B''<sub>''i''</sub>}}.  If the eigenvalues have an algebraic multiplicity greater than 1, then repeat the process,  but now multiplying by an extra factor of {{mvar|t}}  for each repetition, to ensure linear independence. (If one eigenvalue had a multiplicity of three, then there would be the three terms: <math>B_{i_1} e^{\lambda_i t}, ~ B_{i_2} t e^{\lambda_i t}, ~ B_{i_3} t^2 e^{\lambda_i t} </math>. By contrast, when all eigenvalues are distinct, the {{mvar|B}}s are just the [[Frobenius covariant]]s, and solving for them as below just amounts to the inversion of the [[Vandermonde matrix]] of these 4 eigenvalues.) 
 
Sum all such terms,  here four such:
:<math>
e^{A t} = B_{1_1} e^{\lambda_1 t} + B_{1_2} t e^{\lambda_1 t} + B_{2_1} e^{\lambda_2 t} + B_{2_2} t e^{\lambda_2 t} ,
</math>
:<math>
e^{A t} = B_{1_1} e^{3/4 t} + B_{1_2} t e^{3/4 t} + B_{2_1} e^{1 t} + B_{2_2} t e^{1 t}
</math>.
 
To solve for all of the unknown matrices  {{mvar|B}}  in terms of the first three powers of  {{mvar|A}} and the identity, we need four equations, the above one providing one such at {{mvar|t}} =0. Further, differentiate it with respect to {{mvar|t}},
:<math>
A e^{A t} = 3/4 B_{1_1} e^{3/4 t} + \left( 3/4 t + 1 \right) B_{1_2} e^{3/4 t} + 1 B_{2_1} e^{1 t} + \left(1 t + 1 \right) B_{2_2} e^{1 t}  ~,
</math>
and again,
:<math>
\begin{align}
A^2 e^{A t} =& (3/4)^2 B_{1_1} e^{3/4 t} + \left( (3/4)^2 t + ( 3/4 + 1 \cdot 3/4) \right) B_{1_2} e^{3/4 t} + B_{2_1} e^{1 t}\\ +& \left(1^2 t + (1 + 1 \cdot 1 )\right) B_{2_2} e^{1 t} \\  =& (3/4)^2 B_{1_1} e^{3/4 t} + \left( (3/4)^2 t + 3/2 \right) B_{1_2} e^{3/4 t} + B_{2_1} e^{t} + \left(t + 2\right) B_{2_2} e^{t} ~,
\end{align}
</math>
and once more,
:<math>
\begin{align}
A^3 e^{A t} =& (3/4)^3 B_{1_1} e^{3/4 t} + \left( (3/4)^3 t + \left( (3/4)^2 + (3/2) \cdot 3/4 \right) \right) B_{1_2} e^{3/4 t}\\ +& B_{2_1} e^{1 t} + \left(1^3 t + (1 + 2) \cdot 1 \right) B_{2_2} e^{1 t} \\ =&  (3/4)^3 B_{1_1} e^{3/4 t}\! + \left( (3/4)^3 t\! + 27/16 \right) B_{1_2} e^{3/4 t}\! + B_{2_1} e^{t}\! + \left(t + 3\cdot 1\right) B_{2_2} e^{t}
\end{align}
</math>.
(In the general case, {{mvar|n}}−1 derivatives need be taken.)
 
Setting {{mvar|t}}=0 in these four equations, the  four coefficient matrices  {{mvar|B}}s may be solved for,
:<math>
\begin{align}
I =& B_{1_1} + B_{2_1} \\
A =& 3/4 B_{1_1} + B_{1_2} + B_{2_1} + B_{2_2} \\
A^2 =& (3/4)^2 B_{1_1} + (3/2) B_{1_2} + B_{2_1} + 2 B_{2_2} \\
A^3 =& (3/4)^3 B_{1_1} + (27/16) B_{1_2} + B_{2_1} + 3 B_{2_2}
\end{align} </math> ,
to yield 
:<math>
\begin{align}
B_{1_1} =& 128 A^3 - 336 A^2 + 288 A - 80 I \\
B_{1_2} =& 16 A^3 - 44 A^2 + 40 A - 12 I \\
B_{2_1} =&-128 A^3 + 336 A^2 - 288 A + 80 I\\
B_{2_2} =& 16 A^3 - 40 A^2 + 33 A - 9 I
\end{align}
</math> .
 
Substituting the value of {{mvar|A}} yields the coefficient matrices
:<math>
\begin{align}
B_{1_1} =& \begin{pmatrix}0 & 0 & 48 & -16\\ 0 & 0 & -8 & 2\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{pmatrix}\\
B_{1_2} =& \begin{pmatrix}0 & 0 & 4 & -2\\ 0 & 0 & -1 & 1/2\\ 0 & 0 & 1/4 & -1/8\\ 0 & 0 & 1/2 & -1/4 \end{pmatrix}\\
B_{2_1} =& \begin{pmatrix}1 & 0 & -48 & 16\\ 0 & 1 & 8 & -2\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{pmatrix}\\
B_{2_2} =& \begin{pmatrix}0 & 1 & 8 & -2\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{pmatrix} 
\end{align}
</math>
so the final answer is
:<math>
{e}^{tA}\!=\!\begin{pmatrix}{e}^{t} & t{e}^{t} & \left( 8t-48\right) {e}^{t}\!+\left( 4t+48\right){e}^{3t/4} & \left( 16-2\,t\right){e}^{t}\!+\left( -2t-16\right){e}^{3t/4}\\ 0 & {e}^{t} & 8{e}^{t}\!+\left( -t-8\right) {e}^{3t/4} & -\frac{4{e}^{t}+\left(-t-4\right){e}^{3t/4}}{2}\\ 0 & 0 & \frac{\left( t+4\right) {e}^{3t/4}}{4} & -\frac{t {e}^{3t/4}}{8}\\ 0 & 0 & \frac{t{e}^{3t/4}}{2} & -\frac{\left( t-4\right) {e}^{3t/4}}{4}\end{pmatrix}
</math>.
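
This closed form is readily verified against a direct numerical computation; a sketch (evaluating at the arbitrary time {{math|''t'' {{=}} 1}}):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0, 0.0,  0.0  ],
              [0.0, 1.0, 1.0,  0.0  ],
              [0.0, 0.0, 1.0, -0.125],
              [0.0, 0.0, 0.5,  0.5  ]])

t = 1.0
e, f = np.exp(t), np.exp(0.75 * t)   # e^t and e^{3t/4}
closed_form = np.array(
    [[e, t*e, (8*t - 48)*e + (4*t + 48)*f, (16 - 2*t)*e + (-2*t - 16)*f],
     [0, e,   8*e + (-t - 8)*f,            -(4*e + (-t - 4)*f) / 2     ],
     [0, 0,   (t + 4)*f / 4,               -t*f / 8                    ],
     [0, 0,   t*f / 2,                     -(t - 4)*f / 4              ]])

print(np.allclose(closed_form, expm(t * A)))  # True
</syntaxhighlight>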
 
This procedure is much shorter than [[Matrix_differential_equation#Putzer Algorithm for computing eAt|Putzer's algorithm]], which is sometimes utilized in such cases.
 
==Illustrations==
 
Suppose that we want to compute the exponential of
 
: <math>B=\begin{bmatrix}
21 & 17 & 6 \\
-5 & -1 & -6 \\
4 & 4 & 16 \end{bmatrix}.</math>
 
Its Jordan form is
 
: <math>J = P^{-1}BP = \begin{bmatrix}
4 & 0 & 0 \\
0 & 16 & 1 \\
0 & 0 & 16 \end{bmatrix},</math>
 
where the matrix ''P'' is given by
 
: <math>P=\begin{bmatrix}
-\frac14 & 2 & \frac54 \\
\frac14 & -2 & -\frac14 \\
0 & 4 & 0 \end{bmatrix}.</math>
 
Let us first calculate exp(''J''). We have
 
: <math>J=J_1(4)\oplus J_2(16) \, </math>
 
The exponential of a 1×1 matrix is just the exponential of the one entry of the matrix, so exp(''J''<sub>1</sub>(4)) =&nbsp;[''e''<sup>4</sup>]. The exponential of ''J''<sub>2</sub>(16) can be calculated by the formula ''e''<sup>(λ''I''&nbsp;+&nbsp;''N'')</sup> =&nbsp;''e''<sup>λ</sup> ''e''<sup>N</sup> mentioned above; this yields<ref>This can be generalized; in general, the exponential of ''J''<sub>''n''</sub>(''a'') is an upper triangular matrix with ''e''<sup>''a''</sup>/0! on the main diagonal, ''e''<sup>''a''</sup>/1! on the one above, ''e''<sup>''a''</sup>/2! on the next one, and so on.</ref>
 
: <math>
\begin{align}
\exp \left( \begin{bmatrix} 16 & 1 \\ 0 & 16 \end{bmatrix} \right)
& = e^{16} \exp \left( \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \right) \\[6pt]
& = e^{16} \left(\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + {1 \over 2!}\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} + \cdots \right)
= \begin{bmatrix} e^{16} & e^{16} \\ 0 & e^{16} \end{bmatrix}.
\end{align}
</math>
 
Therefore, the exponential of the original matrix ''B'' is
 
: <math>
\begin{align}
\exp(B)
& = P \exp(J) P^{-1}
= P \begin{bmatrix} e^4 & 0 & 0 \\ 0 & e^{16} & e^{16} \\ 0 & 0 & e^{16}  \end{bmatrix} P^{-1} \\[6pt]
& = {1\over 4} \begin{bmatrix}
  13e^{16} - e^4 & 13e^{16} - 5e^4 & 2e^{16} - 2e^4 \\
  -9e^{16} + e^4 & -9e^{16} + 5e^4 & -2e^{16} + 2e^4 \\
  16e^{16}      & 16e^{16}        & 4e^{16}
\end{bmatrix}.
\end{align}
</math>
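
The same result can be confirmed numerically; a brief sketch:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

B = np.array([[21.0, 17.0,  6.0],
              [-5.0, -1.0, -6.0],
              [ 4.0,  4.0, 16.0]])

e4, e16 = np.exp(4.0), np.exp(16.0)
analytic = np.array([[13*e16 -   e4, 13*e16 - 5*e4,  2*e16 - 2*e4],
                     [-9*e16 +   e4, -9*e16 + 5*e4, -2*e16 + 2*e4],
                     [16*e16,        16*e16,         4*e16       ]]) / 4

print(np.allclose(expm(B), analytic))  # True
</syntaxhighlight>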
 
==Applications==
===Linear differential equations===
 
The matrix exponential has applications to systems of [[linear differential equation]]s.  (See also [[matrix differential equation]].)  Recall from earlier in this article that a homogeneous differential equation of the form
: <math> \mathbf{y}' = A\mathbf{y} </math>
has solution {{math|''e''<sup>''At''</sup> '''''y'''''(0)}}. If we consider the vector
: <math> \mathbf{y}(t) = \begin{pmatrix} y_1(t) \\ \vdots \\y_n(t) \end{pmatrix}  ~,</math>
we can express a system of inhomogeneous coupled linear differential equations as
: <math> \mathbf{y}'(t) = A\mathbf{y}(t)+\mathbf{b}(t).</math>
If we make an [[ansatz]] to use an integrating factor of {{math|''e''<sup>−''At''</sup>}} and multiply throughout, we obtain
: <math>e^{-At}\mathbf{y}'-e^{-At}A\mathbf{y} = e^{-At}\mathbf{b}</math>
: <math>e^{-At}\mathbf{y}'-Ae^{-At}\mathbf{y} = e^{-At}\mathbf{b}</math>
: <math> \frac{d}{dt} (e^{-At}\mathbf{y}) = e^{-At}\mathbf{b}~.</math>
 
The second step is possible due to the fact that, if {{math|''AB'' {{=}} ''BA''}}, then {{math|''e''<sup>''At''</sup>''B'' {{=}} ''Be''<sup>''At''</sup>}}. So, calculating {{math|''e''<sup>''At''</sup>}} leads to the solution to the system, by simply integrating the third step with respect to {{mvar|t}}.
 
====Example (homogeneous)====
Consider  the system
:<math>\begin{matrix}
x' &=& 2x&-y&+z \\
y' &=&  &3y&-z \\
z' &=& 2x&+y&+3z \end{matrix}</math>
 
We have the associated matrix
:<math>A=\begin{bmatrix}
2 & -1 &  1 \\
0 &  3 & -1 \\
2 &  1 &  3 \end{bmatrix}  ~.</math>
 
The matrix exponential is
:<math>e^{tA}=\frac{1}{2}\begin{bmatrix}
    e^{2t}(1+e^{2t}-2t)  & -2te^{2t}    &  e^{2t}(-1+e^{2t}) \\
  -e^{2t}(-1+e^{2t}-2t) & 2(t+1)e^{2t} & -e^{2t}(-1+e^{2t}) \\
    e^{2t}(-1+e^{2t}+2t) & 2te^{2t}    &  e^{2t}(1+e^{2t})  \end{bmatrix}~,</math>
so the general solution of the homogeneous system is
: <math>\begin{bmatrix}x \\y \\ z\end{bmatrix}=
\frac{x(0)}{2}\begin{bmatrix}e^{2t}(1+e^{2t}-2t) \\-e^{2t}(-1+e^{2t}-2t)\\e^{2t}(-1+e^{2t}+2t)\end{bmatrix}
+\frac{y(0)}{2}\begin{bmatrix}-2te^{2t}\\2(t+1)e^{2t}\\2te^{2t}\end{bmatrix}
+\frac{z(0)}{2}\begin{bmatrix}e^{2t}(-1+e^{2t})\\-e^{2t}(-1+e^{2t})\\e^{2t}(1+e^{2t})\end{bmatrix} ~,</math>
amounting to
:<math>
\begin{align}
x & = \tfrac{1}{2}\left[x(0)(e^{2t}(1+e^{2t}-2t)) + y(0)(-2te^{2t}) + z(0)(e^{2t}(-1+e^{2t}))\right] \\
y & = \tfrac{1}{2}\left[x(0)(-e^{2t}(-1+e^{2t}-2t)) + y(0)(2(t+1)e^{2t}) + z(0)(-e^{2t}(-1+e^{2t}))\right] \\
z & = \tfrac{1}{2}\left[x(0)(e^{2t}(-1+e^{2t}+2t)) + y(0)(2te^{2t}) + z(0)(e^{2t}(1+e^{2t}))\right]  ~.
\end{align}
</math>
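
As a check, the matrix exponential above agrees with a direct numerical computation; a sketch at the arbitrary time {{math|''t'' {{=}} 0.5}}:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, -1.0,  1.0],
              [0.0,  3.0, -1.0],
              [2.0,  1.0,  3.0]])

t = 0.5
g = np.exp(2 * t)   # e^{2t}
closed_form = 0.5 * np.array(
    [[ g*(1 + g - 2*t), -2*t*g,        g*(-1 + g)],
     [-g*(-1 + g - 2*t), 2*(t + 1)*g, -g*(-1 + g)],
     [ g*(-1 + g + 2*t), 2*t*g,        g*(1 + g) ]])

print(np.allclose(closed_form, expm(t * A)))  # True
</syntaxhighlight>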
 
====Example (inhomogeneous)====
Consider now the inhomogeneous system
:<math>\begin{matrix}
x' &=& 2x & - & y & + & z & + & e^{2t} \\
y' &=&    &  & 3y& - & z & \\
z' &=& 2x & + & y & + & 3z & + & e^{2t} \end{matrix} ~.</math>
 
We again have
:<math>A= \left[ \begin{array}{rrr}
2 & -1 &  1 \\
0 &  3 & -1 \\
2 &  1 &  3 \end{array} \right] ~,</math>
and
:<math>\mathbf{b}=e^{2t}\begin{bmatrix}1 \\0\\1\end{bmatrix}.</math>
 
From before, we already have the general solution to the homogeneous equation. Since the sum of the homogeneous and particular solutions gives the general solution to the inhomogeneous problem, we need only find the particular solution.
 
We have, by above,
: <math>\mathbf{y}_p = e^{tA}\int_0^t e^{(-u)A}\begin{bmatrix}e^{2u} \\0\\e^{2u}\end{bmatrix}\,du+e^{tA}\mathbf{c}</math>
 
Using the expression for {{math|''e''<sup>''tA''</sup>}} found in the homogeneous example above, with {{mvar|t}} replaced by &minus;''u'', the integrand simplifies to
: <math>e^{-uA}\begin{bmatrix}e^{2u} \\0\\e^{2u}\end{bmatrix} = \begin{bmatrix} e^{-2u}+u \\ 1-u-e^{-2u} \\ e^{-2u}-u \end{bmatrix} ~,</math>
so that
: <math>\int_0^t e^{-uA}\mathbf{b}(u)\,du = \begin{bmatrix} \tfrac{1}{2}\left(1-e^{-2t}\right)+\tfrac{1}{2}t^2 \\ t-\tfrac{1}{2}t^2-\tfrac{1}{2}\left(1-e^{-2t}\right) \\ \tfrac{1}{2}\left(1-e^{-2t}\right)-\tfrac{1}{2}t^2 \end{bmatrix} ~.</math>

Multiplying by {{math|''e''<sup>''tA''</sup>}} then yields
: <math>\mathbf{y}_p = \frac{e^{2t}}{2}\begin{bmatrix} e^{2t}-1-t^2 \\ 1+2t+t^2-e^{2t} \\ e^{2t}-1+t^2 \end{bmatrix}+e^{tA}\mathbf{c} ~,</math>
the requisite particular solution determined through variation of parameters.
Note '''c''' = '''y'''<sub>''p''</sub>(0). For more rigor, see the following generalization.
 
===Inhomogeneous case generalization:  variation of parameters===
For the inhomogeneous case, we can use [[integrating factor]]s (a method akin to [[variation of parameters]]). We seek a particular solution of the form {{math|'''y'''<sub>p</sub>(''t'') {{=}} exp(''tA'')&thinsp;'''z'''&thinsp;(''t'')&thinsp;}}, 
: <math>
\begin{align}
\mathbf{y}_p'(t) & = (e^{tA})'\mathbf{z}(t)+e^{tA}\mathbf{z}'(t) \\[6pt]
& = Ae^{tA}\mathbf{z}(t)+e^{tA}\mathbf{z}'(t) \\[6pt]
& = A\mathbf{y}_p(t)+e^{tA}\mathbf{z}'(t)~.
\end{align}
</math>
 
For {{math|'''''y'''''<sub>''p''</sub>}} to be a solution,
: <math>
\begin{align}
e^{tA}\mathbf{z}'(t) & = \mathbf{b}(t) \\[6pt]
\mathbf{z}'(t) & = (e^{tA})^{-1}\mathbf{b}(t) \\[6pt]
\mathbf{z}(t) & = \int_0^t e^{-uA}\mathbf{b}(u)\,du+\mathbf{c} ~.
\end{align}
</math>
 
Thus,
: <math>
\begin{align}
\mathbf{y}_p(t) & {} = e^{tA}\int_0^t e^{-uA}\mathbf{b}(u)\,du+e^{tA}\mathbf{c} \\
& {} = \int_0^t e^{(t-u)A}\mathbf{b}(u)\,du+e^{tA}\mathbf{c}
\end{align} ~,
</math>
where {{math|'''''c'''''}} is determined by the initial conditions of the problem.
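
In code, this particular solution can be evaluated by numerical quadrature; a sketch (reusing the matrix and forcing term of the inhomogeneous example above, with SciPy's <code>quad_vec</code> doing the integration):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

A = np.array([[2.0, -1.0,  1.0],
              [0.0,  3.0, -1.0],
              [2.0,  1.0,  3.0]])

def b(u):
    return np.exp(2 * u) * np.array([1.0, 0.0, 1.0])

def y_p(t, c):
    # y_p(t) = int_0^t e^{(t-u)A} b(u) du + e^{tA} c
    integral, _ = quad_vec(lambda u: expm((t - u) * A) @ b(u), 0.0, t)
    return integral + expm(t * A) @ c

print(y_p(0.5, np.zeros(3)))
</syntaxhighlight>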
 
More precisely, consider the equation
:<math>Y'-A\ Y=F(t)</math>
with the initial condition {{math|''Y(t<sub>0</sub>)'' {{=}} ''Y<sub>0</sub>''}}, where
{{mvar|A}}  is an {{mvar|n}}  by  {{mvar|n}}  complex matrix,
 
{{mvar|F}}  is a continuous function from some open interval {{mvar|I}} to ℂ<sup>''n''</sup>,
 
<math>t_0</math> is a point of {{mvar|I}}, and
 
<math>Y_0</math> is a vector of ℂ<sup>''n''</sup>.
 
Left-multiplying the above displayed equality by {{math|''e<sup>−tA</sup>''}} and integrating from {{math|''t''<sub>0</sub>}} to {{mvar|t}} yields
 
:<math>Y(t)=e^{(t-t_0)A}\ Y_0+\int_{t_0}^t e^{(t-x)A}\ F(x)\ dx  ~.</math>
 
We claim that the solution to the equation
:<math>P(d/dt)\ y = f(t)</math>
with the initial conditions <math>y^{(k)}(t_0)=y_k</math> for 0 ≤ {{math|''k < n''}} is
:<math>y(t)=\sum_{k=0}^{n-1}\ y_k\ s_k(t-t_0)+\int_{t_0}^t s_{n-1}(t-x)\ f(x)\ dx ~,</math>
where the notation is as follows:
 
<math>P\in\mathbb{C}[X]</math> is a monic polynomial of degree {{math|''n'' > 0}},
 
{{mvar|f}}  is a continuous complex valued function defined on some open interval  {{mvar|I}},
 
<math>t_0</math> is a point of  {{mvar|I}},
 
<math>y_k</math> is a complex number, and
 
{{math|''s<sub>k</sub>(t)''}}  is the coefficient of <math>X^k</math> in the polynomial denoted by <math>S_t\in\mathbb{C}[X]</math> in Subsection [[matrix exponential#Evaluation_by_Laurent_series|Evaluation by Laurent series]] above.
 
To justify this claim, we transform our order {{mvar|n}} scalar equation into an order one vector equation by the usual  [[Ordinary differential equation#Reduction to a first order system|reduction to a first order system]]. Our vector equation takes the form
 
:<math>\frac{dY}{dt}-A\ Y=F(t),\quad Y(t_0)=Y_0,</math>
 
where {{mvar|A}} is the [[transpose]] [[companion matrix]] of {{mvar|P}}. We solve this equation as explained above, computing the matrix exponentials by the observation made in Subsection [[matrix exponential#Evaluation_by_Laurent_series|Evaluation by Laurent series]] above.
 
In the case {{mvar|n}} = 2  we get the following statement. The solution to
:<math>y''-(\alpha+\beta)\ y'
+\alpha\,\beta\ y=f(t),\quad
y(t_0)=y_0,\quad y'(t_0)=y_1</math>
is
:<math>y(t)=y_0\ s_0(t-t_0)+y_1\ s_1(t-t_0)
+\int_{t_0}^t s_1(t-x)\,f(x)\ dx,</math>
where the functions {{math|''s''<sub>0</sub>}}  and {{math|''s''<sub>1</sub>}} are as in  Subsection [[matrix exponential#Evaluation_by_Laurent_series|Evaluation by Laurent series]] above.
 
==See also==
{{Div col}}
*[[Matrix function]]
*[[Matrix logarithm]]
*[[Exponential function]]
*[[Exponential map]]
*[[Magnus expansion]]
*[[Vector flow]]
*[[Golden–Thompson inequality]]
*[[Phase-type distribution]]
*[[Lie product formula]]
*[[Baker–Campbell–Hausdorff formula]]
*[[Frobenius covariant]]
*[[Sylvester's formula]]
{{Div col end}}
 
==References==
{{Reflist}}
 
* {{Cite book | last1=Horn | first1=Roger A. | last2=Johnson | first2=Charles R. | title=Topics in Matrix Analysis | publisher=[[Cambridge University Press]] | isbn=978-0-521-46713-1 | year=1991 | ref=harv}}.
* {{Cite journal | last1=Moler | first1=Cleve | author1-link=Cleve Moler | last2=Van Loan | first2=Charles F. | author2-link=Charles F. Van Loan | title=Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later | year=2003 | journal=SIAM Review | issn=1095-7200 | volume=45 | issue=1 | pages=3–49 | url=http://www.cs.cornell.edu/cv/researchpdf/19ways+.pdf | doi=10.1137/S00361445024180 | ref=harv }}.
 
==External links==
* {{mathworld|urlname=MatrixExponential|title=Matrix Exponential}}
* [http://math.fullerton.edu/mathews/n2003/MatrixExponentialMod.html Module for the Matrix Exponential]
 
{{DEFAULTSORT:Matrix Exponential}}
[[Category:Matrix theory]]
[[Category:Lie groups]]
[[Category:Exponentials]]
