# Fixed-point iteration

In numerical analysis, fixed-point iteration is a method of computing fixed points of iterated functions.

More specifically, given a function $f$ defined on the real numbers with real values, and given a point $x_{0}$ in the domain of $f$, the fixed-point iteration is

$x_{n+1}=f(x_{n}),\,n=0,1,2,\dots$

which gives rise to the sequence $x_{0},x_{1},x_{2},\dots$ of iterated function applications. If this sequence converges to a point $x$ and $f$ is continuous, then $x$ is a fixed point of $f$, i.e., $f(x)=x$ .

More generally, the function $f$ can be defined on any metric space with values in that same space.
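The iteration described above can be sketched in a few lines of Python. The function name `fixed_point_iteration` is an illustrative assumption, not a library API; iterating $\cos$ is a classic demonstration because $\cos$ is a contraction near its fixed point.

```python
import math

def fixed_point_iteration(f, x0, n_steps):
    """Apply x_{n+1} = f(x_n) for n_steps steps and return the final iterate."""
    x = x0
    for _ in range(n_steps):
        x = f(x)
    return x

# Iterating cos from x0 = 1 converges to the unique solution of cos(x) = x
# (the Dottie number, approximately 0.739085).
x = fixed_point_iteration(math.cos, 1.0, 100)
print(x)
```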

## Examples

• The fixed-point iteration $x_{n+1}=\sin x_{n}$ with initial value $x_{0}=2$ converges to 0. This example does not satisfy the assumptions of the Banach fixed point theorem and so its speed of convergence is very slow.
• The requirement that f is continuous is important, as the following example shows. The iteration
$x_{n+1}={\begin{cases}{\frac {x_{n}}{2}},&x_{n}\neq 0\\1,&x_{n}=0\end{cases}}$ converges to 0 for all values of $x_{0}$ . However, 0 is not a fixed point of the function

$f(x)={\begin{cases}{\frac {x}{2}},&x\neq 0\\1,&x=0\end{cases}}$ as this function is not continuous at $x=0$ , and in fact has no fixed points.
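Both examples above can be checked numerically. The sketch below (plain Python, no external libraries) runs each iteration and inspects the final iterate:

```python
import math

# Example 1: x_{n+1} = sin(x_n) from x0 = 2. Convergence to 0 is very slow;
# asymptotically the iterates behave like sqrt(3/n).
x = 2.0
for _ in range(1000):
    x = math.sin(x)
print(x)  # still near 0.05 after 1000 steps

# Example 2: the discontinuous map that halves nonzero inputs but sends 0 to 1.
def g(t):
    return t / 2 if t != 0 else 1.0

y = 1.0
for _ in range(60):
    y = g(y)
print(y)  # 2**-60, about 8.7e-19: iterates approach 0, yet g(0) = 1
```

The iterates of `g` get arbitrarily close to 0 without 0 being a fixed point, illustrating why continuity is needed in the theorem.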

## Applications

• Newton's method for finding roots of a given differentiable function f(x) is
$x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}$ .
If we write $g(x)=x-{\frac {f(x)}{f'(x)}}$ , we may rewrite the Newton iteration as the fixed-point iteration $x_{n+1}=g(x_{n})$ .
If this iteration converges to a fixed point $x$ of $g$, then
$x=g(x)=x-{\frac {f(x)}{f'(x)}}$ , so $f(x)/f'(x)=0$ .
Since $f'(x)$ is finite, this implies $f(x)=0$: $x$ is a root of $f$. Under the assumptions of the Banach fixed point theorem, the Newton iteration, framed as a fixed-point method, demonstrates linear convergence. However, a more detailed analysis shows quadratic convergence, i.e.,
$|x_{n+1}-x|\leq C|x_{n}-x|^{2}$ for some constant $C>0$, under certain circumstances.
• The Picard–Lindelöf theorem, which shows that ordinary differential equations have solutions, is essentially an application of the Banach fixed point theorem to a special sequence of functions which forms a fixed point iteration, constructing the solution to the equation. Solving an ODE in this way is called Picard iteration, Picard's method, or the Picard iterative process.
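The Newton iteration above, rewritten as the fixed-point iteration $x_{n+1}=g(x_{n})$, can be sketched as follows (Python; the function names and the example $f(x)=x^{2}-2$ are illustrative assumptions):

```python
def newton_as_fixed_point(f, fprime, x0, n_steps):
    """Run Newton's method as the fixed-point iteration x_{n+1} = g(x_n),
    where g(x) = x - f(x)/f'(x)."""
    def g(x):
        return x - f(x) / fprime(x)
    x = x0
    for _ in range(n_steps):
        x = g(x)
    return x

# Example: f(x) = x^2 - 2, whose positive root is sqrt(2).
root = newton_as_fixed_point(lambda x: x * x - 2, lambda x: 2 * x, 1.0, 6)
print(root)
```

The quadratic convergence is visible in practice: starting from 1.0, six iterations already reach the root to machine precision.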
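As a concrete instance of Picard iteration, consider the toy problem $y'=y$, $y(0)=1$ (chosen here only for illustration; its exact solution is $e^{t}$). Each iterate $y_{k+1}(t)=1+\int _{0}^{t}y_{k}(s)\,ds$ is a polynomial, represented below as a coefficient list:

```python
import math

def picard_step(coeffs, y0=1.0):
    """One Picard iterate for y' = y: y_{k+1}(t) = y0 + integral_0^t y_k(s) ds.
    coeffs[i] is the coefficient of t**i in the current iterate."""
    return [y0] + [c / (i + 1) for i, c in enumerate(coeffs)]

y = [1.0]  # y_0(t) = 1, the constant initial guess
for _ in range(10):
    y = picard_step(y)

# After k steps the iterate is the degree-k Taylor polynomial of e^t,
# so evaluating it should closely match exp.
t = 0.5
approx = sum(c * t ** i for i, c in enumerate(y))
print(approx, math.exp(t))
```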

## Properties

If a function $f$ defined on the real line with real values is Lipschitz continuous with Lipschitz constant $L<1$ (that is, $|f(x)-f(y)|\leq L|x-y|$ for all $x,y$), then $f$ has precisely one fixed point, and the fixed-point iteration converges towards that fixed point for any initial guess $x_{0}$. This is the real-line case of the Banach fixed point theorem.
Proof of this theorem:
Since $f$ is Lipschitz continuous with Lipschitz constant $L<1$ , then for the sequence $\{x_{n},n=0,1,2...\}$ , we have:
$|x_{2}-x_{1}|=|f(x_{1})-f(x_{0})|\leq L|x_{1}-x_{0}|$ ,
$|x_{3}-x_{2}|=|f(x_{2})-f(x_{1})|\leq L|x_{2}-x_{1}|$ ,
$\cdots$ ,
and
$|x_{n}-x_{n-1}|=|f(x_{n-1})-f(x_{n-2})|\leq L|x_{n-1}-x_{n-2}|$ .
Combining the above inequalities yields:
$|x_{n}-x_{n-1}|\leq L^{n-1}|x_{1}-x_{0}|$ .
Since $L<1$ , $L^{n-1}\rightarrow 0$ as $n\rightarrow \infty$ .
Therefore, for $m>n$, the triangle inequality gives $|x_{m}-x_{n}|\leq \sum _{k=n}^{m-1}|x_{k+1}-x_{k}|\leq (L^{n}+L^{n+1}+\cdots +L^{m-1})|x_{1}-x_{0}|\leq {\frac {L^{n}}{1-L}}|x_{1}-x_{0}|$ , which tends to 0 as $n\rightarrow \infty$ . Hence $\{x_{n}\}$ is a Cauchy sequence and thus converges to a point $x^{*}$ .
For the iteration $x_{n}=f(x_{n-1})$ , letting $n$ go to infinity on both sides and using the continuity of $f$ , we obtain $x^{*}=f(x^{*})$ . This shows that $x^{*}$ is a fixed point of $f$ , so the iteration converges to a fixed point.
This property is very useful because not every fixed-point iteration converges. When constructing a fixed-point iteration, it is important to verify that it converges. There are several fixed-point theorems that guarantee the existence of a fixed point, but since the iteration function is continuous, we can usually use the above theorem to test whether an iteration converges. The proof of the theorem generalized to metric spaces is similar.
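The inequality $|x_{n}-x_{n-1}|\leq L^{n-1}|x_{1}-x_{0}|$ used in the proof can be checked numerically for a concrete contraction. The sketch below uses $f(x)=x/2+1$, an arbitrary illustrative choice with Lipschitz constant $L=1/2$ and fixed point $2$:

```python
L = 0.5

def f(x):
    return L * x + 1  # contraction with Lipschitz constant L = 1/2, fixed point 2

x_prev, x = 0.0, f(0.0)
first_gap = abs(x - x_prev)  # |x_1 - x_0|
bound_holds = True
for n in range(2, 30):
    x_prev, x = x, f(x)
    # Check |x_n - x_{n-1}| <= L^(n-1) |x_1 - x_0|, with a little slack
    # for floating-point rounding.
    bound_holds = bound_holds and abs(x - x_prev) <= L ** (n - 1) * first_gap + 1e-15
print(bound_holds, x)
```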