# Lyapunov exponent

In mathematics, the Lyapunov exponent or Lyapunov characteristic exponent of a dynamical system is a quantity that characterizes the rate of separation of infinitesimally close trajectories. Quantitatively, two trajectories in phase space with initial separation ${\displaystyle \delta \mathbf {Z} _{0}}$ diverge (provided that the divergence can be treated within the linearized approximation) at a rate given by

${\displaystyle |\delta \mathbf {Z} (t)|\approx e^{\lambda t}|\delta \mathbf {Z} _{0}|\,}$

where ${\displaystyle \lambda }$ is the Lyapunov exponent.

The rate of separation can be different for different orientations of the initial separation vector. Thus, there is a spectrum of Lyapunov exponents, equal in number to the dimensionality of the phase space. It is common to refer to the largest one as the maximal Lyapunov exponent (MLE), because it determines a notion of predictability for a dynamical system. A positive MLE is usually taken as an indication that the system is chaotic (provided some other conditions are met, e.g., phase space compactness). Note that an arbitrary initial separation vector will typically contain some component in the direction associated with the MLE, and because of the exponential growth rate, the effect of the other exponents will be obliterated over time.

The exponent is named after Aleksandr Lyapunov.

## Definition of the maximal Lyapunov exponent

The maximal Lyapunov exponent can be defined as follows:

${\displaystyle \lambda =\lim _{t\to \infty }\lim _{\delta \mathbf {Z} _{0}\to 0}{\frac {1}{t}}\ln {\frac {|\delta \mathbf {Z} (t)|}{|\delta \mathbf {Z} _{0}|}}.}$

The limit ${\displaystyle \delta \mathbf {Z} _{0}\to 0}$ ensures the validity of the linear approximation at any time.[1]

For a discrete-time system (a map or fixed-point iteration) ${\displaystyle x_{n+1}=f(x_{n})}$, with an orbit starting at ${\displaystyle x_{0}}$, this translates into:

${\displaystyle \lambda (x_{0})=\lim _{n\to \infty }{\frac {1}{n}}\sum _{i=0}^{n-1}\ln |f'(x_{i})|}$
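As a quick numerical illustration of this formula (a sketch with illustrative parameter choices, not from the source), consider the logistic map ${\displaystyle f(x)=rx(1-x)}$ at ${\displaystyle r=4}$, whose maximal Lyapunov exponent is known to equal ${\displaystyle \ln 2}$:

```python
import math

# Estimate the maximal Lyapunov exponent of the logistic map f(x) = r*x*(1-x)
# at r = 4 by averaging ln|f'(x_i)| along an orbit, as in the formula above.
# The exact value is ln 2. (r, x0, and the iteration counts are arbitrary.)
r = 4.0
x = 0.2                     # arbitrary initial condition
for _ in range(1000):       # discard a transient before averaging
    x = r * x * (1 - x)

n = 200_000
total = 0.0
for _ in range(n):
    total += math.log(abs(r - 2 * r * x))   # ln|f'(x)|, with f'(x) = r - 2*r*x
    x = r * x * (1 - x)

mle = total / n             # ≈ ln 2 ≈ 0.6931
```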

## Definition of the Lyapunov spectrum

For a dynamical system with evolution equation ${\displaystyle f^{t}}$ in an n-dimensional phase space, the spectrum of Lyapunov exponents

${\displaystyle \{\lambda _{1},\lambda _{2},\ldots ,\lambda _{n}\}\,,}$

in general, depends on the starting point ${\displaystyle x_{0}}$. (However, we will usually be interested in the attractor (or attractors) of a dynamical system, and there will normally be one set of exponents associated with each attractor. The choice of starting point may determine which attractor the system ends up on, if there is more than one. Note: Hamiltonian systems do not have attractors, so this particular discussion does not apply to them.) The Lyapunov exponents describe the behavior of vectors in the tangent space of the phase space and are defined from the Jacobian matrix

${\displaystyle J^{t}(x_{0})=\left.{\frac {df^{t}(x)}{dx}}\right|_{x_{0}}}$

The ${\displaystyle J^{t}}$ matrix describes how a small change at the point ${\displaystyle x_{0}}$ propagates to the final point ${\displaystyle f^{t}(x_{0})}$. The limit

${\displaystyle L(x_{0})=\lim _{t\to \infty }\left(J^{t}\,(J^{t})^{\mathrm {T} }\right)^{1/2t}}$

defines a matrix ${\displaystyle L(x_{0})}$ (the conditions for the existence of the limit are given by the Oseledec theorem). If ${\displaystyle \Lambda _{i}(x_{0})}$ are the eigenvalues of ${\displaystyle L(x_{0})}$, then the Lyapunov exponents ${\displaystyle \lambda _{i}}$ are defined by

${\displaystyle \lambda _{i}(x_{0})=\ln \Lambda _{i}(x_{0})}$

The set of Lyapunov exponents will be the same for almost all starting points of an ergodic component of the dynamical system.
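For a linear map the limit defining ${\displaystyle L(x_{0})}$ can be checked directly: with ${\displaystyle f(x)=Ax}$ the Jacobian after ${\displaystyle n}$ steps is ${\displaystyle A^{n}}$, and the exponents converge to the logarithms of the moduli of the eigenvalues of ${\displaystyle A}$. A minimal sketch (the matrix below is an arbitrary illustrative choice):

```python
import numpy as np

# For the linear map x_{k+1} = A x_k, the Jacobian after n steps is A^n, so the
# finite-time Lyapunov exponents are log(singular values of A^n) / n. As n
# grows they approach the logs of |eigenvalues of A|, here ln 2 and ln 0.5.
A = np.array([[2.0, 1.0],
              [0.0, 0.5]])
n = 20
M = np.linalg.matrix_power(A, n)              # J^n = A^n
sing = np.linalg.svd(M, compute_uv=False)     # singular values of J^n
exponents = np.log(sing) / n                  # finite-time Lyapunov exponents
```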

## Lyapunov exponent for time-varying linearization

To introduce the Lyapunov exponent, consider a fundamental matrix ${\displaystyle X(t)}$ (e.g., for the linearization along a stationary solution ${\displaystyle x_{0}}$ of a continuous system, the fundamental matrix is ${\displaystyle \exp \left(\left.{\frac {df^{t}(x)}{dx}}\right|_{x_{0}}t\right)}$), consisting of the linearly independent solutions of the first-approximation system. The singular values ${\displaystyle \{\alpha _{j}{\big (}X(t){\big )}\}_{1}^{n}}$ of the matrix ${\displaystyle X(t)}$ are the square roots of the eigenvalues of the matrix ${\displaystyle X(t)^{*}X(t)}$. The largest Lyapunov exponent ${\displaystyle \lambda _{\max }}$ is then[2]

${\displaystyle \lambda _{\max }=\max \limits _{j}\limsup _{t\to \infty }{\frac {1}{t}}\ln \alpha _{j}{\big (}X(t){\big )}.}$

A. M. Lyapunov proved that if the system of the first approximation is regular (e.g., all systems with constant and periodic coefficients are regular) and its largest Lyapunov exponent is negative, then the solution of the original system is asymptotically Lyapunov stable. Later, O. Perron showed that the requirement of regularity of the first approximation is essential.

In 1930, O. Perron constructed an example of a second-order system whose first approximation has negative Lyapunov exponents along a zero solution of the original system while, at the same time, this zero solution of the original nonlinear system is Lyapunov unstable. Furthermore, in a certain neighborhood of this zero solution almost all solutions of the original system have positive Lyapunov exponents. It is also possible to construct a converse example, in which the first approximation has positive Lyapunov exponents along a zero solution of the original system while this zero solution of the original nonlinear system is Lyapunov stable.[3][4] This sign inversion of the Lyapunov exponents of solutions of the original system and of the system of first approximation with the same initial data was subsequently called the Perron effect.[3][4]

Perron's counterexample shows that a negative largest Lyapunov exponent does not, in general, indicate stability, and that a positive largest Lyapunov exponent does not, in general, indicate chaos.

Therefore, time-varying linearization requires additional justification.[4]

## Basic properties

If the system is conservative (i.e. there is no dissipation), a volume element of the phase space will stay the same along a trajectory. Thus the sum of all Lyapunov exponents must be zero. If the system is dissipative, the sum of Lyapunov exponents is negative.

If the system is a flow and the trajectory does not converge to a single point, one exponent is always zero—the Lyapunov exponent corresponding to the eigenvalue of ${\displaystyle L}$ with an eigenvector in the direction of the flow.

## Significance of the Lyapunov spectrum

The Lyapunov spectrum can be used to give an estimate of the rate of entropy production and of the fractal dimension of the considered dynamical system. In particular from the knowledge of the Lyapunov spectrum it is possible to obtain the so-called Kaplan–Yorke dimension ${\displaystyle D_{KY}}$, which is defined as follows:

${\displaystyle D_{KY}=k+\sum _{i=1}^{k}{\frac {\lambda _{i}}{|\lambda _{k+1}|}},}$

where ${\displaystyle k}$ is the maximum integer such that the sum of the ${\displaystyle k}$ largest exponents is still non-negative. ${\displaystyle D_{KY}}$ represents an upper bound for the information dimension of the system.[5] Moreover, the sum of all the positive Lyapunov exponents gives an estimate of the Kolmogorov–Sinai entropy, according to Pesin's theorem.[6]
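The definition of ${\displaystyle D_{KY}}$ translates directly into code. A minimal sketch (the sample spectrum is the commonly quoted approximate Lyapunov spectrum of the Lorenz system at its classical parameters):

```python
def kaplan_yorke_dimension(exponents):
    """Kaplan-Yorke dimension: k + (sum of k largest exponents)/|lambda_{k+1}|,
    where k is the largest integer for which that partial sum is non-negative."""
    exps = sorted(exponents, reverse=True)
    partial, k = 0.0, 0
    for lam in exps:
        if partial + lam >= 0:
            partial += lam
            k += 1
        else:
            break
    if k == len(exps):          # partial sums never go negative
        return float(k)
    if k == 0:                  # even the largest exponent is negative
        return 0.0
    return k + partial / abs(exps[k])

# Approximate Lorenz spectrum at the classical parameters (literature values):
d = kaplan_yorke_dimension([0.906, 0.0, -14.572])   # ≈ 2.06
```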

The multiplicative inverse of the largest Lyapunov exponent is sometimes referred to in the literature as the Lyapunov time, and defines the characteristic e-folding time. For chaotic orbits the Lyapunov time is finite, whereas for regular orbits it is infinite.

## Numerical calculation

Generally the calculation of Lyapunov exponents, as defined above, cannot be carried out analytically, and in most cases one must resort to numerical techniques. An early example, which also constituted the first demonstration of the exponential divergence of chaotic trajectories, was carried out by R. H. Miller in 1964.[7] Currently, the most commonly used numerical procedure estimates the ${\displaystyle L}$ matrix based on averaging several finite time approximations of the limit defining ${\displaystyle L}$.

One of the most used and effective numerical techniques to calculate the Lyapunov spectrum for a smooth dynamical system relies on periodic Gram–Schmidt orthonormalization of the Lyapunov vectors to avoid a misalignment of all the vectors along the direction of maximal expansion.[8][9][10][11]
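A minimal sketch of this procedure for a smooth map, using a QR decomposition for the periodic re-orthonormalization step (the Hénon map with its standard parameters is used here as an illustrative test case; its largest exponent is approximately 0.42, and the exponents sum to ${\displaystyle \ln b}$ because the Jacobian determinant is constant):

```python
import numpy as np

def lyapunov_spectrum_henon(a=1.4, b=0.3, n_steps=100_000, n_transient=1_000):
    """Estimate the Lyapunov spectrum of the Henon map by evolving an
    orthonormal tangent-space frame and re-orthonormalizing it with QR at
    each step (the Gram-Schmidt/Benettin procedure sketched above)."""
    x, y = 0.1, 0.1
    for _ in range(n_transient):                # settle onto the attractor
        x, y = 1.0 - a * x * x + y, b * x
    q = np.eye(2)                               # orthonormal tangent vectors
    sums = np.zeros(2)
    for _ in range(n_steps):
        jac = np.array([[-2.0 * a * x, 1.0],    # Jacobian of the Henon map
                        [b,            0.0]])
        x, y = 1.0 - a * x * x + y, b * x
        q, r = np.linalg.qr(jac @ q)            # re-orthonormalize the frame
        sums += np.log(np.abs(np.diag(r)))      # accumulate stretching rates
    return sums / n_steps

spectrum = lyapunov_spectrum_henon()            # roughly [0.42, -1.62]
```

Because the Hénon Jacobian has constant determinant ${\displaystyle -b}$, the two estimated exponents sum to ${\displaystyle \ln b}$ at every step, which is a useful sanity check on the implementation.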

For the calculation of Lyapunov exponents from limited experimental data, various methods have been proposed. However, there are many difficulties with applying these methods and such problems should be approached with care. The main difficulty is that the data does not fully explore the phase space, rather it is confined to the attractor which has very limited (if any) extension along certain directions. These thinner or more singular directions within the data set are the ones associated with the more negative exponents. The use of nonlinear mappings to model the evolution of small displacements from the attractor has been shown to dramatically improve the ability to recover the Lyapunov spectrum,[12][13] provided the data has a very low level of noise. The singular nature of the data and its connection to the more negative exponents has also been explored.[14]

## Local Lyapunov exponent

Whereas the (global) Lyapunov exponent gives a measure for the total predictability of a system, it is sometimes interesting to estimate the local predictability around a point ${\displaystyle x_{0}}$ in phase space. This may be done through the eigenvalues of the Jacobian matrix ${\displaystyle J(x_{0})}$. These eigenvalues are also called local Lyapunov exponents.[15] (A word of caution: unlike the global exponents, these local exponents are not invariant under a nonlinear change of coordinates.)

## Conditional Lyapunov exponent

This term is normally used in the context of synchronization of chaos, in which two systems are coupled, usually in a unidirectional manner, so that there is a drive (or master) system and a response (or slave) system. The conditional exponents are those of the response system, with the drive system treated simply as the source of a (chaotic) drive signal. Synchronization occurs when all of the conditional exponents are negative.[16]
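As a hedged sketch of the idea (the coupled logistic maps, coupling strength, and initial conditions below are illustrative assumptions, not from the source): a response map ${\displaystyle y_{n+1}=(1-\varepsilon )f(y_{n})+\varepsilon f(x_{n})}$ driven by ${\displaystyle x_{n+1}=f(x_{n})}$ has conditional exponent ${\displaystyle \ln(1-\varepsilon )+\lambda }$ once synchronized, so for the logistic map at ${\displaystyle r=4}$ (where ${\displaystyle \lambda =\ln 2}$) the conditional exponent is negative for ${\displaystyle \varepsilon >1/2}$:

```python
import math

# Drive: x -> f(x); response: y -> (1-eps)*f(y) + eps*f(x). The conditional
# Lyapunov exponent is the average of ln|(1-eps)*f'(y_n)| along the response
# orbit. For eps = 0.6 it is about ln(0.4) + ln 2 = ln 0.8 < 0, so the
# response locks onto the drive. (Illustrative parameter choices.)
f = lambda u: 4.0 * u * (1.0 - u)
df = lambda u: 4.0 - 8.0 * u
eps = 0.6
x, y = 0.2, 0.7                      # different initial conditions
n = 20_000
total = 0.0
for _ in range(n):
    total += math.log(abs((1.0 - eps) * df(y)))
    x, y = f(x), (1.0 - eps) * f(y) + eps * f(x)

cond_exponent = total / n            # ≈ ln 0.8 ≈ -0.223
synchronized = abs(x - y) < 1e-8     # the conditional exponent is negative
```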

## References

1. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
2. {{#invoke:citation/CS1|citation |CitationClass=book }}
3. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
4. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
5. {{#invoke:citation/CS1|citation |CitationClass=book }}
6. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
7. Template:Cite doi
8. Template:Cite doi
9. Template:Cite doi
10. Template:Cite doi
11. Template:Cite doi
12. Template:Cite doi
13. Template:Cite doi
14. Template:Cite doi
15. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
16. See, e.g., Template:Cite doi
