In [[numerical analysis]], '''inverse quadratic interpolation''' is a [[root-finding algorithm]], meaning that it is an algorithm for solving equations of the form ''f''(''x'') = 0. The idea is to use [[polynomial interpolation|quadratic interpolation]] to approximate the [[inverse function|inverse]] of ''f''. This algorithm is rarely used on its own, but it is important because it forms part of the popular [[Brent's method]].

==The method==
The inverse quadratic interpolation algorithm is defined by the [[recurrence relation]]
:<math> x_{n+1} = \frac{f_{n-1}f_n}{(f_{n-2}-f_{n-1})(f_{n-2}-f_n)} x_{n-2} + \frac{f_{n-2}f_n}{(f_{n-1}-f_{n-2})(f_{n-1}-f_n)} x_{n-1} + \frac{f_{n-2}f_{n-1}}{(f_n-f_{n-2})(f_n-f_{n-1})} x_n, </math>
where ''f''<sub>''k''</sub> = ''f''(''x''<sub>''k''</sub>). As can be seen from the recurrence relation, this method requires three initial values, ''x''<sub>0</sub>, ''x''<sub>1</sub> and ''x''<sub>2</sub>.
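One step of this recurrence can be sketched in Python. This is an illustrative sketch, not code from the article: the name <code>iqi_step</code> is chosen here for exposition, and the three function values are assumed pairwise distinct so the denominators do not vanish.

```python
def iqi_step(x_prev2, x_prev1, x_curr, f):
    """One inverse-quadratic-interpolation step from three iterates."""
    # Function values at the three most recent iterates.
    f0, f1, f2 = f(x_prev2), f(x_prev1), f(x_curr)
    # The recurrence from the text: each iterate is weighted by the
    # Lagrange basis polynomial in y, evaluated at y = 0.
    return (f1 * f2 / ((f0 - f1) * (f0 - f2)) * x_prev2
            + f0 * f2 / ((f1 - f0) * (f1 - f2)) * x_prev1
            + f0 * f1 / ((f2 - f0) * (f2 - f1)) * x_curr)
```

For example, with ''f''(''x'') = ''x''<sup>2</sup> − 2 and starting values 1, 1.5 and 2, one step gives 148/105 ≈ 1.4095, already close to √2 ≈ 1.4142.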
==Explanation of the method==
We use the three preceding iterates, ''x''<sub>''n''−2</sub>, ''x''<sub>''n''−1</sub> and ''x''<sub>''n''</sub>, with their function values, ''f''<sub>''n''−2</sub>, ''f''<sub>''n''−1</sub> and ''f''<sub>''n''</sub>. Applying the [[Lagrange polynomial|Lagrange interpolation formula]] to do quadratic interpolation on the inverse of ''f'' yields
:<math> f^{-1}(y) = \frac{(y-f_{n-1})(y-f_n)}{(f_{n-2}-f_{n-1})(f_{n-2}-f_n)} x_{n-2} + \frac{(y-f_{n-2})(y-f_n)}{(f_{n-1}-f_{n-2})(f_{n-1}-f_n)} x_{n-1} + \frac{(y-f_{n-2})(y-f_{n-1})}{(f_n-f_{n-2})(f_n-f_{n-1})} x_n. </math>
Since we are looking for a root of ''f'', we substitute ''y'' = ''f''(''x'') = 0 into this equation, which yields the recurrence formula above.
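The derivation can be checked numerically: the quadratic below interpolates the inverse of ''f'' through the three points, so evaluating it at ''y'' = ''f''<sub>''k''</sub> must reproduce ''x''<sub>''k''</sub>, and evaluating at ''y'' = 0 gives the next iterate. The function name and argument layout are illustrative, not from the article.

```python
def inverse_quadratic(y, xs, fs):
    """Lagrange quadratic through (f0, x0), (f1, x1), (f2, x2):
    interpolates x as a function of y, i.e. the inverse of f."""
    x0, x1, x2 = xs
    f0, f1, f2 = fs
    return ((y - f1) * (y - f2) / ((f0 - f1) * (f0 - f2)) * x0
            + (y - f0) * (y - f2) / ((f1 - f0) * (f1 - f2)) * x1
            + (y - f0) * (y - f1) / ((f2 - f0) * (f2 - f1)) * x2)
```

With ''f''(''x'') = ''x''<sup>2</sup> − 2 and nodes 1, 1.5, 2 (function values −1, 0.25, 2), the interpolant returns each node exactly at its own function value, and at ''y'' = 0 it returns the same next iterate as the recurrence.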
==Behaviour==
The asymptotic behaviour is very good: generally, the iterates ''x''<sub>''n''</sub> converge quickly to the root once they are close to it. However, performance is often quite poor if the starting values are not close to the actual root. For instance, if two of the function values ''f''<sub>''n''−2</sub>, ''f''<sub>''n''−1</sub> and ''f''<sub>''n''</sub> coincide, the denominators in the recurrence vanish and the algorithm fails completely. Thus, inverse quadratic interpolation is seldom used as a stand-alone algorithm.
The order of convergence is approximately 1.8, as can be shown by an analysis similar to that of the [[secant method]].
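Both behaviours can be observed with a short driver loop. This is a bare sketch under the assumptions above; a robust solver such as Brent's method falls back to bisection instead of failing when function values coincide.

```python
def iqi_solve(f, x0, x1, x2, tol=1e-10, max_iter=50):
    """Iterate inverse quadratic interpolation until |f(x)| < tol."""
    xs = [x0, x1, x2]
    for _ in range(max_iter):
        f0, f1, f2 = f(xs[0]), f(xs[1]), f(xs[2])
        if abs(f2) < tol:  # latest iterate is (numerically) a root
            return xs[2]
        if f0 == f1 or f0 == f2 or f1 == f2:
            # Two coincident function values: the denominators below
            # vanish and the method breaks down, as described above.
            raise ZeroDivisionError("coincident function values")
        xn = (f1 * f2 / ((f0 - f1) * (f0 - f2)) * xs[0]
              + f0 * f2 / ((f1 - f0) * (f1 - f2)) * xs[1]
              + f0 * f1 / ((f2 - f0) * (f2 - f1)) * xs[2])
        xs = [xs[1], xs[2], xn]
    return xs[2]
```

Started near the root of ''x''<sup>2</sup> − 2, the loop converges in a handful of iterations; started with ''x''<sub>0</sub> = −1 and ''x''<sub>2</sub> = 1 on the same function, ''f''<sub>0</sub> = ''f''<sub>2</sub> = −1 and the very first step fails.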
==Comparison with other root-finding methods==
As noted in the introduction, inverse quadratic interpolation is used in [[Brent's method]].
Inverse quadratic interpolation is also closely related to some other root-finding methods. Using [[linear interpolation]] instead of quadratic interpolation gives the [[secant method]]. Interpolating ''f'' instead of the inverse of ''f'' gives [[Muller's method]].
==See also==
* [[Successive parabolic interpolation]] is a related method that uses parabolas to find extrema rather than roots.
==References==
*[[James F. Epperson]], [http://books.google.com/books?id=Mp8-z5mHptcC&lpg=PP1&pg=PA182#v=onepage&q&f=false An introduction to numerical methods and analysis], pages 182–185, Wiley-Interscience, 2007. ISBN 978-0-470-04963-1
[[Category:Root-finding algorithms]]
Revision as of 19:04, 6 January 2014