Fixed-point iteration


In numerical analysis, fixed-point iteration is a method of computing fixed points of iterated functions.

More specifically, given a function $f$ defined on the real numbers with real values and given a point $x_0$ in the domain of $f$, the fixed-point iteration is

$$x_{n+1} = f(x_n), \quad n = 0, 1, 2, \dots$$

which gives rise to the sequence $x_0, x_1, x_2, \dots$, which is hoped to converge to a point $x$. If $f$ is continuous, then one can prove that the obtained $x$ is a fixed point of $f$, i.e.,

$$f(x) = x.$$

More generally, the function $f$ can be defined on any metric space with values in that same space.
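
A minimal Python sketch of this procedure (the helper name fixed_point_iteration, the tolerance, the iteration cap, and the cosine example are illustrative choices, not part of any standard library):

    import math

    def fixed_point_iteration(f, x0, tol=1e-10, max_iter=1000):
        """Iterate x_{n+1} = f(x_n) starting from x0.

        Stops when two successive iterates are within tol of each other or
        after max_iter steps; convergence is not guaranteed for arbitrary f.
        """
        x = x0
        for _ in range(max_iter):
            x_next = f(x)
            if abs(x_next - x) < tol:
                return x_next
            x = x_next
        return x  # may not have converged

    # Example: x_{n+1} = cos(x_n) converges to the Dottie number, about 0.739085
    print(fixed_point_iteration(math.cos, 1.0))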

Examples

The fixed-point iteration $x_{n+1} = \sin x_n$ with initial value $x_0 = 2$ converges to 0. This example does not satisfy the assumptions of the Banach fixed-point theorem and so its speed of convergence is very slow.
  • The requirement that $f$ is continuous is important, as the following example shows. The iteration

$$x_{n+1} = \begin{cases} \dfrac{x_n}{2}, & x_n \ne 0 \\ 1, & x_n = 0 \end{cases}$$

converges to 0 for all values of $x_0$. However, 0 is not a fixed point of the function

$$f(x) = \begin{cases} \dfrac{x}{2}, & x \ne 0 \\ 1, & x = 0 \end{cases}$$

as this function is not continuous at $x = 0$, and in fact has no fixed points.
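
Both examples above can be checked numerically; a small Python sketch (starting values and loop lengths are arbitrary choices for illustration):

    import math

    # Sine iteration from the caption above: converges to 0, but very slowly
    x = 2.0
    for _ in range(100_000):
        x = math.sin(x)
    print(x)  # still around 0.005 after 100,000 steps

    # Discontinuous map from the example above: the iterates tend to 0,
    # yet 0 is not a fixed point of the map
    def g(t):
        return t / 2 if t != 0 else 1.0

    y = 1.0
    for _ in range(60):
        y = g(y)
    print(y, g(0.0))  # y is about 8.7e-19, but g(0) == 1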

Applications

  • Newton's method for finding roots of a given differentiable function $f(x)$ is

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$

If we write $g(x) = x - \dfrac{f(x)}{f'(x)}$, we may rewrite the Newton iteration as the fixed-point iteration $x_{n+1} = g(x_n)$ (see the sketch after this list).
If this iteration converges to a fixed point $x$ of $g$, then
$$x = g(x) = x - \frac{f(x)}{f'(x)},$$ so $\dfrac{f(x)}{f'(x)} = 0$.
Since $1/f'(x)$ is nonzero wherever it is defined, this forces $f(x) = 0$; that is, $x$ is a root of $f$. Under the assumptions of the Banach fixed-point theorem, the Newton iteration, framed as a fixed-point method, demonstrates at least linear convergence. However, a more detailed analysis shows quadratic convergence, i.e.,
$$|x_{n+1} - x| \le C\,|x_n - x|^2$$ for some constant $C$, under certain circumstances.
  • The Picard–Lindelöf theorem, which shows that ordinary differential equations have solutions, is essentially an application of the Banach fixed point theorem to a special sequence of functions which forms a fixed point iteration, constructing the solution to the equation. Solving an ODE in this way is called Picard iteration, Picard's method, or the Picard iterative process.
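
As a concrete illustration of the Newton item above, a short Python sketch that runs Newton's method in its fixed-point form $x_{n+1} = g(x_n)$ with $g(x) = x - f(x)/f'(x)$; the test function $f(x) = x^2 - 2$ and the step count are arbitrary choices for illustration:

    def newton_as_fixed_point(f, fprime, x0, n_steps=8):
        """Newton's method written as the fixed-point iteration
        x_{n+1} = g(x_n) with g(x) = x - f(x)/f'(x)."""
        def g(x):
            return x - f(x) / fprime(x)
        x = x0
        for _ in range(n_steps):
            x = g(x)
        return x

    # Illustrative test problem: f(x) = x**2 - 2, whose positive root is sqrt(2)
    root = newton_as_fixed_point(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
    print(root)  # ~1.4142135623730951; the error roughly squares at every step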
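
Similarly, a rough sketch of Picard iteration for the illustrative test problem $y' = y$, $y(0) = 1$ on $[0, 1]$: each sweep replaces the candidate solution with $t \mapsto y_0 + \int_0^t f(s, y(s))\,ds$, here approximated on a fixed grid with the trapezoidal rule (the grid size and sweep count are arbitrary choices):

    import math

    def picard_iteration(f, y0, t, n_sweeps=20):
        """Approximate Picard iteration on a grid t[0..m]:
        y_{k+1}(t) = y0 + integral_0^t f(s, y_k(s)) ds,
        with the integral evaluated by the trapezoidal rule."""
        y = [y0] * len(t)                  # initial guess: the constant function y0
        for _ in range(n_sweeps):
            integrand = [f(ti, yi) for ti, yi in zip(t, y)]
            new_y = [y0]
            acc = 0.0
            for i in range(1, len(t)):
                acc += 0.5 * (integrand[i - 1] + integrand[i]) * (t[i] - t[i - 1])
                new_y.append(y0 + acc)
            y = new_y
        return y

    # Test problem y' = y, y(0) = 1; the exact solution is exp(t)
    t = [i / 100 for i in range(101)]
    y = picard_iteration(lambda s, v: v, 1.0, t)
    print(y[-1], math.exp(1.0))  # the two values agree to a few decimal places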

Properties

If a function $f$ defined on the real line with real values is Lipschitz continuous with Lipschitz constant $L < 1$, then $f$ has precisely one fixed point, and the fixed-point iteration converges towards that fixed point for any initial guess $x_0$.

Proof of this theorem:
Since $f$ is Lipschitz continuous with Lipschitz constant $L < 1$, then for the sequence $\{x_n\}$, we have:
$$|x_2 - x_1| = |f(x_1) - f(x_0)| \le L\,|x_1 - x_0|,$$
$$|x_3 - x_2| = |f(x_2) - f(x_1)| \le L\,|x_2 - x_1| \le L^2\,|x_1 - x_0|,$$
$$\vdots$$
and
$$|x_{n+1} - x_n| \le L^n\,|x_1 - x_0|.$$
Combining the above inequalities yields, for any $m > n$:
$$|x_m - x_n| \le \sum_{k=n}^{m-1} |x_{k+1} - x_k| \le \left(L^n + L^{n+1} + \cdots\right)|x_1 - x_0| = \frac{L^n}{1 - L}\,|x_1 - x_0|.$$
Since $L < 1$, $L^n \to 0$ as $n \to \infty$.
Therefore, $\{x_n\}$ is a Cauchy sequence and thus it converges to a point $x^*$.
For the iteration $x_{n+1} = f(x_n)$, letting $n$ go to infinity on both sides of the equation and using the continuity of $f$, we obtain $x^* = f(x^*)$. This shows that $x^*$ is a fixed point of $f$. So we have proved that the iteration eventually converges to a fixed point.
This property is very useful because not every fixed-point iteration converges. When constructing a fixed-point iteration, it is therefore important to make sure it converges. There are several fixed-point theorems that guarantee the existence of a fixed point, but since the iteration function is continuous, we can usually use the above theorem to test whether an iteration converges. The proof of the theorem generalized to metric spaces is similar.
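
A small numerical sketch of the bounds used in the proof; the contraction $f(x) = \tfrac{1}{2}\cos x + 1$, with Lipschitz constant $L = \tfrac{1}{2}$, is an arbitrary example chosen for illustration:

    import math

    # An arbitrary contraction chosen for illustration: |f'(x)| = |0.5*sin(x)| <= 0.5
    def f(x):
        return 0.5 * math.cos(x) + 1.0

    L, x0 = 0.5, 0.0
    xs = [x0]
    for _ in range(30):
        xs.append(f(xs[-1]))

    # Successive differences shrink at least geometrically: |x_{n+1}-x_n| <= L^n |x_1-x_0|
    for n in range(5):
        print(abs(xs[n + 1] - xs[n]), "<=", L**n * abs(xs[1] - xs[0]))

    # A-priori bound from the proof: |x* - x_n| <= L^n / (1 - L) * |x_1 - x_0|
    x_star = xs[-1]  # the last iterate approximates the fixed point
    for n in range(5):
        assert abs(x_star - xs[n]) <= L**n / (1 - L) * abs(xs[1] - xs[0]) + 1e-12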
