Durand–Kerner method

In [[numerical analysis]], the '''Durand–Kerner method''' established 1960–66 and named after E. Durand and Immo Kerner, also called the '''method of [[Karl Weierstrass|Weierstrass]]''', established 1859–91 and named after [[Karl Weierstrass]], is a [[root-finding algorithm]] for solving [[polynomial]] [[equation (mathematics)|equation]]s. In other words, the method can be used to solve numerically the equation
 
: &fnof;(''x'') = 0
 
where &fnof; is a given polynomial, which can be taken to be scaled so that the leading coefficient is 1.
 
==Explanation==
The explanation is for equations of [[Degree of a polynomial|degree]] four. It is easily generalized to other degrees.
 
Let the polynomial &fnof; be defined by
 
:&fnof;(''x'') = ''x''<sup>4</sup> + ''ax''<sup>3</sup> + ''bx''<sup>2</sup> + ''cx'' + ''d''
 
for all ''x''.
 
The known numbers ''a, b, c, d'' are the [[coefficient]]s.
 
Let the (complex) numbers ''P,Q,R,S'' be the roots of this polynomial &fnof;.
 
Then
 
:&fnof;(''x'') = (''x'' &minus; ''P'')(''x'' &minus; ''Q'')(''x'' &minus; ''R'')(''x'' &minus; ''S'')
 
for all ''x''.  One can isolate the value ''P'' from this equation,
 
:<math>P=x-\frac{f(x)}{(x-Q)(x-R)(x-S)}.</math>
 
So if used as a [[fixed point (mathematics)|fixed point]] [[iteration]]
:<math>x_1:=x_0-\frac{f(x_0)}{(x_0-Q)(x_0-R)(x_0-S)},</math>
it is strongly stable in that every initial point ''x<sub>0</sub>'' ≠ ''Q,R,S''
delivers after one iteration the root ''P=x<sub>1</sub>''.
 
Furthermore, if one replaces the zeros ''Q'', ''R'' and ''S''
by approximations ''q'' ≈ ''Q'', ''r'' ≈ ''R'',  ''s'' ≈ ''S'',
such that ''q,r,s'' are not equal to ''P'', then ''P''
is still a fixed point of the perturbed fixed point iteration
 
:<math>x_{k+1}:=x_k-\frac{f(x_k)}{(x_k-q)(x_k-r)(x_k-s)},</math>
since
 
:<math>P-\frac{f(P)}{(P-q)(P-r)(P-s)} = P - 0 = P.</math>
 
Note that the denominator is still different from zero.
This fixed point iteration is a [[contraction mapping]]
for ''x'' around ''P''.
 
The key to the method is to combine
the fixed point iteration for ''P'' with similar iterations
for ''Q,R,S'' into a simultaneous iteration for all roots.
 
Initialize ''p, q, r, s'':
 
:''p''<sub>0</sub> := (0.4 + 0.9&nbsp;i)<sup>0</sup> ;
:''q''<sub>0</sub> := (0.4 + 0.9&nbsp;i)<sup>1</sup> ;
:''r''<sub>0</sub> := (0.4 + 0.9&nbsp;i)<sup>2</sup> ;
:''s''<sub>0</sub> := (0.4 + 0.9&nbsp;i)<sup>3</sup> ;
 
There is nothing special about choosing 0.4&nbsp;+&nbsp;0.9&nbsp;i except that it is neither a [[real number]] nor a [[root of unity]].
 
Make the substitutions for ''n'' = 1,2,3,&middot;&middot;&middot;
:{|
|-
|<math> p_n = p_{n-1} - \frac{f(p_{n-1})}{ (p_{n-1}-q_{n-1})(p_{n-1}-r_{n-1})(p_{n-1}-s_{n-1}) }; </math>
|-
|<math> q_n = q_{n-1} - \frac{f(q_{n-1})}{ (q_{n-1}-p_n)(q_{n-1}-r_{n-1})(q_{n-1}-s_{n-1}) }; </math>
|-
|<math> r_n = r_{n-1} - \frac{f(r_{n-1})}{ (r_{n-1}-p_n)(r_{n-1}-q_n)(r_{n-1}-s_{n-1}) }; </math>
|-
|<math> s_n = s_{n-1} - \frac{f(s_{n-1})}{ (s_{n-1}-p_n)(s_{n-1}-q_n)(s_{n-1}-r_n) }. </math>
|}
 
Re-iterate until the numbers ''p, q, r, s''
essentially stop changing relative to the desired precision.
Then they have the values ''P, Q, R, S'' in some order
and in the chosen precision. So the problem is solved.
 
Note that you must use [[complex number]] arithmetic,
and that the roots are found simultaneously rather than one at a time.
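
As an illustration of the procedure just described, here is a minimal Python sketch (the function name <code>durand_kerner</code>, the tolerance and the iteration cap are illustrative choices, not part of the method). It performs the substitutions above in order, reusing values already updated within the current sweep, and stops once the approximations essentially stop changing.

<syntaxhighlight lang="python">
def durand_kerner(coeffs, tol=1e-12, max_iter=100):
    """Approximate all roots of the monic polynomial
    x^n + coeffs[0]*x^(n-1) + ... + coeffs[n-1]."""
    n = len(coeffs)

    def f(x):
        # Horner evaluation of the monic polynomial at x.
        result = 1 + 0j
        for c in coeffs:
            result = result * x + c
        return result

    # Initial guesses: powers of 0.4 + 0.9i, which is neither real
    # nor a root of unity.
    z = [(0.4 + 0.9j) ** k for k in range(n)]

    for _ in range(max_iter):
        max_step = 0.0
        for k in range(n):
            # Weierstrass correction: divide f(z_k) by the product of the
            # differences to all other current approximations.
            denom = 1 + 0j
            for j in range(n):
                if j != k:
                    denom *= z[k] - z[j]
            step = f(z[k]) / denom
            z[k] -= step
            max_step = max(max_step, abs(step))
        if max_step < tol:  # approximations have essentially stopped changing
            break
    return z
</syntaxhighlight>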
 
==Variations==
This iteration procedure, like the [[Gauss–Seidel method]] for linear equations,
computes one number at a time based on the already computed numbers.
A variant of this procedure, like the [[Jacobi method]],
computes the whole vector of root approximations at once.
Both variants are effective root-finding algorithms.
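
A minimal sketch of one such Jacobi-style sweep, in which every update is computed from the previous vector of approximations only (names and calling convention are illustrative; <code>f</code> is assumed to evaluate the monic polynomial):

<syntaxhighlight lang="python">
def jacobi_sweep(f, z):
    """One Jacobi-style iteration: all updates use only the old vector z."""
    n = len(z)
    z_new = []
    for k in range(n):
        denom = 1 + 0j
        for j in range(n):
            if j != k:
                denom *= z[k] - z[j]
        z_new.append(z[k] - f(z[k]) / denom)
    return z_new
</syntaxhighlight>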
 
One could also choose the initial values for ''p,q,r,s'' by some other procedure, even randomly, but in a way that
*they are inside some not too large circle that also contains the roots of &fnof;(''x''), e.g. the circle around the origin with radius <math>1+\max(|a|,|b|,|c|,|d|)</math> (where 1, ''a, b, c, d'' are the coefficients of &fnof;(''x'')), and that
*they are not too close to each other, which may increasingly become a concern as the degree of the polynomial increases.
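
One such procedure is sketched below, assuming the polynomial is given by its non-leading coefficients: the starting points are spread at equally spaced angles on the circle with the radius bound above, with a small arbitrary phase offset so that none of them is real or coincides with another.

<syntaxhighlight lang="python">
import cmath
import math

def initial_guesses(coeffs):
    """Starting points inside a circle containing all roots of the monic
    polynomial with non-leading coefficients coeffs."""
    n = len(coeffs)
    radius = 1 + max(abs(c) for c in coeffs)  # radius bound from the text
    offset = 0.1234                           # arbitrary phase offset
    return [radius * cmath.exp(1j * (2 * math.pi * k / n + offset))
            for k in range(n)]
</syntaxhighlight>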
 
== Example ==
This example is from the 1992 reference (Bo Jacoby, ''Numerisk løsning af ligninger''). The equation solved is {{nowrap|1=''x''<sup>3</sup> − 3''x''<sup>2</sup> + 3''x'' − 5 = 0}}. The first 4 iterations move ''p'', ''q'', ''r'' seemingly chaotically, but then the roots are located to 1 decimal. After iteration number 5 we have 4 correct decimals, and the subsequent iteration number 6 confirms that the computed roots are fixed. This general behaviour is characteristic of the method.
 
::{|class="wikitable"
|----
!it.-no.
!p
!q
!r
|----
!0
| +1.0000+0.0000i
| +0.4000+0.9000i
| &minus;0.6500+0.7200i
|----
!1
| +1.3608+2.0222i
| &minus;0.3658+2.4838i
| &minus;2.3858&minus;0.0284i
|----
!2
| +2.6597+2.7137i
| +0.5977+0.8225i
| &minus;0.6320&minus;1.6716i
|----
! 3
| +2.2704+0.3880i
| +0.1312+1.3128i
| +0.2821&minus;1.5015i
|----
! 4 
| +2.5428&minus;0.0153i
| +0.2044+1.3716i
| +0.2056&minus;1.3721i
|----
! 5 
| +2.5874+0.0000i
| +0.2063+1.3747i
| +0.2063&minus;1.3747i
|----
! 6 
| +2.5874+0.0000i
| +0.2063+1.3747i
| +0.2063&minus;1.3747i
|----
|}
Note that the equation has one real root and one pair of complex conjugate roots, and that the sum of the roots is 3.
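
Using the Python sketch from the Explanation section (whose default starting values are exactly the powers of 0.4&nbsp;+&nbsp;0.9i shown in row 0 of the table), the computed roots agree with the table's final values to the printed precision; intermediate iterates may differ slightly from the table depending on the exact update order and rounding.

<syntaxhighlight lang="python">
# x^3 - 3x^2 + 3x - 5 = 0: the non-leading coefficients are -3, 3, -5.
roots = durand_kerner([-3, 3, -5])
for root in roots:
    print(f"{root.real:+.4f} {root.imag:+.4f}i")
# Expected, in some order: +2.5874 +0.0000i, +0.2063 +1.3747i, +0.2063 -1.3747i
</syntaxhighlight>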
 
==Derivation of the method via Newton's method==
 
For every ''n''-tuple of complex numbers, there is exactly one monic polynomial of degree ''n'' that has them as its zeros (keeping multiplicities). This polynomial is given by multiplying all the corresponding linear factors, that is
 
:<math>
  g_{\vec z}(X)=(X-z_1)\cdots(X-z_n).
</math>
 
This polynomial has coefficients that depend on the prescribed zeros,
 
:<math>g_{\vec z}(X)=X^n+g_{n-1}(\vec z)X^{n-1}+\cdots+g_0(\vec z).</math>
 
Those coefficients are, up to a sign, the [[elementary symmetric polynomial]]s <math>\alpha_1(\vec z),\dots,\alpha_n(\vec z)</math> of degrees ''1,...,n''.
 
To find all the roots of a given polynomial <math>f(X)=X^n+c_{n-1}X^{n-1}+\cdots+c_0</math> with coefficient vector <math>(c_{n-1},\dots,c_0)</math> simultaneously is now the same as finding a solution vector to the system
 
:<math>\begin{matrix}
c_0&=&g_0(\vec z)&=&(-1)^n\alpha_n(\vec z)&=&(-1)^nz_1\cdots z_n\\
c_1&=&g_1(\vec z)&=&(-1)^{n-1}\alpha_{n-1}(\vec z)\\
&\vdots&\\
c_{n-1}&=&g_{n-1}(\vec z)&=&-\alpha_1(\vec z)&=&-(z_1+z_2+\cdots+z_n).
\end{matrix}
</math>
 
The Durand–Kerner method is obtained as the multidimensional [[Newton's method]] applied to this system. It is algebraically more comfortable to treat those identities of coefficients as the identity of the corresponding polynomials, <math>g_{\vec z}(X)=f(X)</math>. In Newton's method one looks, given some initial vector <math>\vec z</math>, for an increment vector <math>\vec w</math> such that <math>g_{\vec z+\vec w}(X)=f(X)</math> is satisfied up to second and higher order terms in the increment. For this one solves the identity
 
:<math>f(X)-g_{\vec z}(X)=\sum_{k=1}^n\frac{\partial g_{\vec z}(X)}{\partial z_k}w_k=-\sum_{k=1}^n w_k\prod_{j\ne k}(X-z_j).</math>
 
If the numbers <math>z_1,\dots,z_n</math> are pairwise different, then the polynomials in the terms of the right hand side form a basis of the ''n''-dimensional space <math>\mathbb C[X]_{n-1}</math> of polynomials of degree at most ''n''&nbsp;&minus;&nbsp;1. Thus a solution <math>\vec w</math> to the increment equation exists in this case. The coordinates of the increment <math>\vec w</math> are simply obtained by evaluating the increment equation
 
:<math>-\sum_{k=1}^n w_k\prod_{j\ne k}(X-z_j)=f(X)-\prod_{j=1}^n(X-z_j)</math>
 
at the points <math>X=z_k</math>, which results in
 
:<math>
-w_k\prod_{j\ne k}(z_k-z_j)=-w_kg_{\vec z}'(z_k)=f(z_k)
</math>, that is <math>
w_k=-\frac{f(z_k)}{\prod_{j\ne k}(z_k-z_j)}.
</math>
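
The resulting increment vector is exactly the vector of Weierstrass updates used above; a minimal sketch computing it for a given approximation vector (names are illustrative, and <code>f</code> is assumed to evaluate the monic polynomial):

<syntaxhighlight lang="python">
def weierstrass_updates(f, z):
    """Newton increment for the coefficient system:
    w_k = -f(z_k) / prod_{j != k} (z_k - z_j)."""
    n = len(z)
    w = []
    for k in range(n):
        denom = 1 + 0j
        for j in range(n):
            if j != k:
                denom *= z[k] - z[j]
        w.append(-f(z[k]) / denom)
    return w
</syntaxhighlight>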
 
==Root inclusion via Gerschgorin's circles==
 
In the [[quotient ring]] (algebra) of [[residue class]]es modulo &fnof;(''X''), the multiplication by ''X'' defines an [[endomorphism]] that has the zeros of &fnof;(''X'') as [[eigenvalue]]s with the corresponding multiplicities. Choosing a basis, the multiplication operator is represented by its coefficient matrix ''A'', the [[companion matrix]] of &fnof;(''X'') for this basis.
 
Since every polynomial can be reduced modulo &fnof;(''X'') to a polynomial of degree ''n''&nbsp;&minus;&nbsp;1 or lower, the space of residue classes can be identified with the space of polynomials of degree bounded by ''n''&nbsp;&minus;&nbsp;1.
A problem specific basis can be taken from [[Lagrange interpolation]] as the set of ''n'' polynomials
 
:<math>b_k(X)=\prod_{1\le j\le n,\;j\ne k}(X-z_j),\quad k=1,\dots,n,</math>
 
where <math>z_1,\dots,z_n\in\mathbb C</math> are pairwise different complex numbers. Note that the kernel functions for the Lagrange interpolation are <math>L_k(X)=\frac{b_k(X)}{b_k(z_k)}</math>.
 
For the multiplication operator applied to the basis polynomials one obtains from the Lagrange interpolation
{|
|-
|<math>X\cdot b_k(X)\mod f(X)=X\cdot b_k(X)-f(X)</math>
|<math>=\sum_{j=1}^n\Big(z_j\cdot b_k(z_j)-f(z_j)\Big)\cdot \frac{b_j(X)}{b_j(z_j)}</math>
|-
|
|<math>=z_k\cdot b_k(X)+\sum_{j=1}^n w_j\cdot b_j(X)</math>,
|}
where <math>w_j=-\frac{f(z_j)}{b_j(z_j)}</math> are again the Weierstrass updates.
 
The companion matrix of &fnof;(''X'') is therefore
: <math> A = \mathrm{diag}(z_1,\dots,z_n)
  +\begin{pmatrix}1\\\vdots\\1\end{pmatrix}\cdot\left(w_1,\dots,w_n\right).
</math>
 
From the transposed matrix case of the [[Gershgorin circle theorem]] it follows that all eigenvalues of ''A'', that is, all roots of &fnof;(''X''), are contained in the union of the disks <math>D(a_{k,k},r_k)</math> with a radius <math>r_k=\sum_{j\ne k}\big|a_{j,k}\big|</math>.
 
Here one has <math>a_{k,k}=z_k+w_k</math>, so the centers are the next iterates of the Weierstrass iteration, and radii <math>r_k=(n-1)\left|w_k\right|</math> that are multiples of the Weierstrass updates. If the roots of &fnof;(''X'') are all well isolated (relative to the computational precision) and the points <math>z_1,\dots,z_n\in\mathbb C</math> are sufficiently close approximations to these roots, then all the disks will become disjoint, so each one contains exactly one zero. The midpoints of the circles will be better approximations of the zeros.
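
A minimal sketch of these inclusion disks, taking the current approximations and their Weierstrass updates as input (the function name is illustrative):

<syntaxhighlight lang="python">
def inclusion_disks(z, w):
    """Gerschgorin disks for the companion matrix above:
    center z_k + w_k, radius (n - 1) * |w_k|."""
    n = len(z)
    return [(z[k] + w[k], (n - 1) * abs(w[k])) for k in range(n)]
</syntaxhighlight>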
 
Every conjugate matrix <math>TAT^{-1}</math> of ''A'' is also a companion matrix of &fnof;(''X''). Choosing ''T'' as a diagonal matrix leaves the structure of ''A'' invariant. The root close to <math>z_k</math> is contained in any isolated circle with center <math>z_k</math> regardless of ''T''. Choosing the optimal diagonal matrix ''T'' for every index results in better estimates (see ref. Petkovic et al. 1995).
 
==Convergence results==
 
The connection between the Taylor series expansion and Newton's method suggests that the distance from <math>z_k+w_k</math> to the corresponding root is of the order <math>O(|w_k|^2)</math>, if the root is well isolated from nearby roots and the approximation is sufficiently close to the root. So after the approximation is close, Newton's method converges ''quadratically''; that is: the error is squared with every step (which will greatly reduce the error once it is less than 1). In the case of the Durand–Kerner method, convergence is quadratic if the vector <math>\vec z=(z_1,\dots,z_n)</math> is close to some permutation of the vector of the roots of &fnof;.
 
For the conclusion of linear convergence there is a more specific result (see ref. Petkovic et al. 1995). If the initial vector <math>\vec z</math> and its vector of Weierstrass updates <math>\vec w=(w_1,\dots,w_n)</math> satisfy the inequality
 
:<math>\max_{1\le k\le n}\big|w_k\big| \le \frac1{5n} \min_{1\le j<k\le n}\big|z_k-z_j\big|,</math>
 
then this inequality also holds for all iterates, all inclusion disks <math>\textstyle D\left(z_k+w_k,(n-1)|w_k|\right)</math>
are disjoint and linear convergence with a contraction factor of ''1/2'' holds. Further, the inclusion disks can in this case be chosen as
 
:<math>\textstyle D\left(z_k+w_k,\frac14 |w_k|\right)\qquad k = 1,\dots, n,</math>
 
each containing exactly one zero of &fnof;.
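
A minimal sketch of this sufficient condition, given an approximation vector and its Weierstrass updates (the function name is illustrative):

<syntaxhighlight lang="python">
def guarantees_linear_convergence(z, w):
    """Check max_k |w_k| <= 1/(5n) * min_{j<k} |z_k - z_j|."""
    n = len(z)
    max_update = max(abs(wk) for wk in w)
    min_separation = min(abs(z[k] - z[j])
                         for j in range(n) for k in range(j + 1, n))
    return max_update <= min_separation / (5 * n)
</syntaxhighlight>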
 
==References==
 
* {{cite conference|last=Weierstraß|first= Karl|authorlink=Karl Weierstraß|title=Neuer Beweis des Satzes, dass jede ganze rationale Function einer Veränderlichen dargestellt werden kann als ein Product aus linearen Functionen derselben Veränderlichen|booktitle=Sitzungsberichte der königlich preussischen Akademie der Wissenschaften zu Berlin|year=1891|url=http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige?band=10-sitz/1891-2&seite:int=00000565}}
* {{cite conference|last=Durand|first=E.|booktitle=Solutions Numériques des Equations Algébriques, vol. 1|editors=Masson et al|title=Equations du type ''F''(''x'')&nbsp;=&nbsp;0: Racines d'un polynome|year= 1960}}
* {{cite journal|last= Kerner|first= Immo O.|title=Ein Gesamtschrittverfahren zur Berechnung der Nullstellen von Polynomen|journal=Numerische Mathematik|volume=8|year= 1966|pages= 290–294|url= http://www.springerlink.com/content/q5p055l61pm63206|doi=10.1007/BF02162564 }}
* {{cite journal|author=Prešić, Marica|title=A convergence theorem for a method for simultaneous determination of all zeros of a polynomial|journal=Publications de l'institut mathematique (Beograd) (N.S.)|volume=28|pages=158–168 |year=1980 | issue=42}}
* {{cite journal|author=Petkovic, M.S., Carstensen, C. and Trajkovic, M.|title=Weierstrass formula and zero-finding methods|journal=Numerische Mathematik|volume=69|year=1995|pages=353–372|url=http://www.springerlink.com/content/x467nejrv3c8hq8j|doi=10.1007/s002110050097}}
*  Bo Jacoby, ''Nulpunkter for polynomier'', CAE-nyt (a periodical for Dansk CAE Gruppe [Danish CAE Group]), 1988.
*  Agnethe Knudsen, ''Numeriske Metoder'' (lecture notes), Københavns Teknikum.
*  Bo Jacoby, ''Numerisk løsning af ligninger'', Bygningsstatiske meddelelser (Published by Danish Society for Structural Science and Engineering) volume 63 no. 3-4, 1992, pp.&nbsp;83–105.
* {{cite book|last=Gourdon|first=Xavier|title=Combinatoire, Algorithmique et Geometrie des Polynomes|publisher=Ecole Polytechnique|location= Paris|year=1996|url=http://algo.inria.fr/gourdon/thesis.html}}
* [[Victor Pan]] (May 2002): [http://www.cs.gc.cuny.edu/tr/techreport.php?id=26 ''Univariate Polynomial Root-Finding with Lower Computational Precision and Higher Convergence Rates'']. Tech-Report, City University of New York
* {{cite journal|first= Arnold|last= Neumaier|title= Enclosing clusters of zeros of polynomials|journal= Journal of Computational and Applied Mathematics|volume= 156 |year=2003|url=http://www.mat.univie.ac.at/~neum/papers.html#polzer|doi= 10.1016/S0377-0427(03)00380-7|pages= 389}}
* Jan Verschelde, ''[http://www2.math.uic.edu/~jan/mcs471f03/Project_Two/proj2/node2.html The method of Weierstrass (also known as the Durand-Kerner method)]'', 2003.
 
==External links==
* ''[http://home.roadrunner.com/~jbmatthews/misc/groots.html Ada Generic_Roots using the Durand-Kerner Method]'' &mdash; an [[Open-Source|open-source]] implementation in [[Ada programming language|Ada]]
* ''[http://sites.google.com/site/drjohnbmatthews/polyroots Polynomial Roots]'' &mdash; an [[Open-Source|open-source]] implementation in [[Java programming language|Java]]
* ''[http://www.cpc.wmin.ac.uk/~spiesf/Solve/solve.html Roots Extraction from Polynomials : The Durand-Kerner Method]'' &mdash; contains a [[Java applet]] demonstration
 
{{DEFAULTSORT:Durand-Kerner method}}
[[Category:Root-finding algorithms]]
