Durand–Kerner method
In numerical analysis, the Durand–Kerner method, discovered by Karl Weierstrass in 1891 and rediscovered independently by Durand in 1960 and Kerner in 1966, is a root-finding algorithm for solving polynomial equations. In other words, the method can be used to solve numerically the equation

$$f(x) = 0,$$

where $f$ is a given polynomial, which can be taken to be scaled so that the leading coefficient is 1.
Explanation
This explanation considers equations of degree four. It is easily generalized to other degrees. Let the polynomial $f$ be defined by

$$f(x) = x^4 + ax^3 + bx^2 + cx + d$$

for all $x$. The known numbers $a, b, c, d$ are the coefficients.

Let the numbers $P, Q, R, S$ be the roots of this polynomial $f$. Then

$$f(x) = (x - P)(x - Q)(x - R)(x - S)$$

for all $x$. One can isolate the value $P$ from this equation:

$$P = x - \frac{f(x)}{(x - Q)(x - R)(x - S)}.$$
So if used as a fixed-point iteration

$$x_1 := x_0 - \frac{f(x_0)}{(x_0 - Q)(x_0 - R)(x_0 - S)},$$

it is strongly stable in that every initial point $x_0 \ne Q, R, S$ delivers after one iteration the root $P = x_1$.
Furthermore, if one replaces the zeros $Q$, $R$ and $S$ by approximations $q \approx Q$, $r \approx R$, $s \approx S$, such that $q, r, s$ are not equal to $P$, then $P$ is still a fixed point of the perturbed fixed-point iteration

$$x_{k+1} := x_k - \frac{f(x_k)}{(x_k - q)(x_k - r)(x_k - s)},$$

since

$$P - \frac{f(P)}{(P - q)(P - r)(P - s)} = P - 0 = P.$$

Note that the denominator is still different from zero.
This fixed-point iteration is a contraction mapping for $x$ around $P$.

The key to the method is to combine the fixed-point iteration for $P$ with similar iterations for $Q, R, S$ into a simultaneous iteration for all roots.
Initialize $p, q, r, s$:

$$p_0 := (0.4 + 0.9i)^0, \quad q_0 := (0.4 + 0.9i)^1, \quad r_0 := (0.4 + 0.9i)^2, \quad s_0 := (0.4 + 0.9i)^3.$$

There is nothing special about choosing $0.4 + 0.9i$ except that it is neither a real number nor a root of unity.
Make the substitutions for $n = 1, 2, 3, \dots$:

$$p_n = p_{n-1} - \frac{f(p_{n-1})}{(p_{n-1} - q_{n-1})(p_{n-1} - r_{n-1})(p_{n-1} - s_{n-1})},$$
$$q_n = q_{n-1} - \frac{f(q_{n-1})}{(q_{n-1} - p_n)(q_{n-1} - r_{n-1})(q_{n-1} - s_{n-1})},$$
$$r_n = r_{n-1} - \frac{f(r_{n-1})}{(r_{n-1} - p_n)(r_{n-1} - q_n)(r_{n-1} - s_{n-1})},$$
$$s_n = s_{n-1} - \frac{f(s_{n-1})}{(s_{n-1} - p_n)(s_{n-1} - q_n)(s_{n-1} - r_n)}.$$
Re-iterate until the numbers $p, q, r, s$ essentially stop changing relative to the desired precision. They then have the values $P, Q, R, S$ in some order and in the chosen precision. So the problem is solved.
Note that complex number arithmetic must be used, and that the roots are found simultaneously rather than one at a time.
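As a concrete illustration, here is a minimal sketch of the above scheme in Python (the function name, tolerance, and iteration cap are illustrative choices, not part of the method):

```python
def durand_kerner4(a, b, c, d, tol=1e-12, max_iter=200):
    """Simultaneously approximate all four roots of x^4 + a x^3 + b x^2 + c x + d."""
    def f(x):
        # Horner evaluation of the monic quartic.
        return (((x + a) * x + b) * x + c) * x + d

    # Initialize with powers of 0.4 + 0.9j, as described above.
    base = 0.4 + 0.9j
    p, q, r, s = base ** 0, base ** 1, base ** 2, base ** 3

    for _ in range(max_iter):
        # Each update reuses the freshest values already computed in this pass.
        p_new = p - f(p) / ((p - q) * (p - r) * (p - s))
        q_new = q - f(q) / ((q - p_new) * (q - r) * (q - s))
        r_new = r - f(r) / ((r - p_new) * (r - q_new) * (r - s))
        s_new = s - f(s) / ((s - p_new) * (s - q_new) * (s - r_new))
        shift = max(abs(p_new - p), abs(q_new - q),
                    abs(r_new - r), abs(s_new - s))
        p, q, r, s = p_new, q_new, r_new, s_new
        if shift < tol:  # the approximations essentially stopped changing
            break
    return p, q, r, s

# Example: x^4 - 10x^2 + 9 = (x-1)(x+1)(x-3)(x+3); returns ±1 and ±3 in some order.
print(durand_kerner4(0, -10, 0, 9))
```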
Variations
This iteration procedure, like the Gauss–Seidel method for linear equations, computes one number at a time based on the already computed numbers.
A variant of this procedure, like the Jacobi method,
computes a vector of root approximations at a time.
Both variants are effective root-finding algorithms.
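A sketch of the Jacobi-style variant for a general monic polynomial follows; here every update in a pass is computed from the previous vector of approximations only (the helper names are illustrative):

```python
from functools import reduce
from operator import mul

def durand_kerner_jacobi(coeffs, z, steps=50):
    """Jacobi-style passes: each new approximation uses only the previous
    vector z, never values computed earlier in the same pass.
    coeffs = [a_{n-1}, ..., a_0] of the monic polynomial x^n + a_{n-1} x^{n-1} + ... + a_0."""
    def f(x):
        return reduce(lambda acc, c: acc * x + c, coeffs, 1)  # Horner scheme

    for _ in range(steps):
        z = [zk - f(zk) / reduce(mul, (zk - zj for j, zj in enumerate(z) if j != k), 1)
             for k, zk in enumerate(z)]
    return z
```

Called with, e.g., `durand_kerner_jacobi([0, -10, 0, 9], [(0.4 + 0.9j) ** k for k in range(4)])`, it approximates the same four roots as the sketch above.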
One could also choose the initial values for $p, q, r, s$ by some other procedure, even randomly, but in a way that

- they are inside some not-too-large circle containing also the roots of $f$, e.g. the circle around the origin with radius $1 + \max(|a|, |b|, |c|, |d|)$ (where $1, a, b, c, d$ are the coefficients of $f$), and
- they are not too close to each other, which may increasingly become a concern as the degree of the polynomial increases (see the sketch after this list).
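A sketch of such an initialization under these constraints (the radius is the Cauchy bound stated above; the spreading heuristic is an illustrative choice):

```python
import cmath
import random

def initial_guesses(coeffs, seed=0):
    """Starting points inside the circle |z| <= 1 + max(|a_{n-1}|, ..., |a_0|),
    which contains all roots of the monic polynomial (Cauchy's bound)."""
    rng = random.Random(seed)
    n = len(coeffs)
    radius = 1 + max(abs(c) for c in coeffs)
    # Evenly spaced angles with a small random jitter keep the points
    # inside the circle and not too close to each other.
    return [0.5 * radius * cmath.exp(2j * cmath.pi * (k + 0.1 * rng.random()) / n)
            for k in range(n)]
```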
If the coefficients are real and the polynomial has odd degree, then it must have at least one real root. To find this, use a real value of $p_0$ as the initial guess and make $q_0$ and $r_0$, etc., complex conjugate pairs. Then the iteration will preserve these properties; that is, $p_n$ will always be real, and $q_n$ and $r_n$, etc., will always be conjugate. In this way, the $p_n$ will converge to a real root $P$. Alternatively, make all of the initial guesses real; they will remain so.
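For instance, a conjugate-symmetric set of starting values for a real cubic could be set up as follows (the particular values are illustrative); per the observation above, the iteration then keeps $p_n$ real and $q_n$, $r_n$ conjugate:

```python
def symmetric_init_cubic():
    """Initial guesses for a cubic with real coefficients: one real point
    and a complex conjugate pair, a pattern the iteration preserves."""
    p0 = 1.0 + 0.0j        # real starting point, tracks the real root
    q0 = 0.4 + 0.9j        # arbitrary non-real starting point
    r0 = q0.conjugate()    # its conjugate partner
    return p0, q0, r0
```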
Example
This example is from the reference 1992. The equation solved is $x^3 = 3x^2 - 3x + 5$. The first 4 iterations move $p, q, r$ seemingly chaotically, but then the roots are located to 1 decimal. After iteration number 5 we have 4 correct decimals, and the subsequent iteration number 6 confirms that the computed roots are fixed. This general behaviour is characteristic for the method. Also notice that, in this example, the roots are used as soon as they are computed in each iteration; in other words, the computation of each column uses the values of the previously computed columns. Note that the equation has one real root and one pair of complex conjugate roots, and that the sum of the roots is 3.
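Assuming the cubic stated above, the experiment can be reproduced with a short script (a sketch; the reuse of freshly computed roots matches the remark about the columns):

```python
def f(x):
    # x^3 = 3x^2 - 3x + 5 rewritten as f(x) = x^3 - 3x^2 + 3x - 5 = 0
    return ((x - 3) * x + 3) * x - 5

base = 0.4 + 0.9j
p, q, r = base ** 0, base ** 1, base ** 2
for n in range(1, 7):
    # Each root is used as soon as it is computed within the iteration.
    p = p - f(p) / ((p - q) * (p - r))
    q = q - f(q) / ((q - p) * (q - r))
    r = r - f(r) / ((r - p) * (r - q))
    print(n, p, q, r)

print("sum of roots:", p + q + r)  # should be close to 3, as noted above
```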
Derivation of the method via Newton's method
For every $n$-tuple of complex numbers $(z_1, \dots, z_n)$, there is exactly one monic polynomial of degree $n$ that has them as its zeros (counting multiplicities). This polynomial is given by multiplying all the corresponding linear factors, that is

$$g_z(X) = (X - z_1) \cdots (X - z_n).$$

This polynomial has coefficients that depend on the prescribed zeros,

$$g_z(X) = X^n + g_{n-1}(z) X^{n-1} + \cdots + g_0(z).$$

Those coefficients are, up to a sign, the elementary symmetric polynomials of degrees $1, \dots, n$.

To find all the roots of a given polynomial $f(X) = X^n + c_{n-1} X^{n-1} + \cdots + c_0$ with coefficient vector $(c_{n-1}, \dots, c_0)$ simultaneously is now the same as to find a solution vector $z$ to the system

$$g_k(z) = c_k, \quad k = 0, \dots, n - 1.$$

The Durand–Kerner method is obtained as the multidimensional Newton's method applied to this system. It is algebraically more convenient to treat those identities of coefficients as the identity of the corresponding polynomials, $g_z(X) = f(X)$. In Newton's method one looks, given some initial vector $z$, for an increment vector $w$ such that $g_{z+w}(X) = f(X)$ is satisfied up to second and higher order terms in the increment. For this one solves the identity

$$f(X) - g_z(X) = -\sum_{k=1}^{n} w_k \prod_{j \ne k} (X - z_j).$$

If the numbers $z_1, \dots, z_n$ are pairwise different, then the polynomials in the terms of the right-hand side form a basis of the $n$-dimensional space of polynomials with maximal degree $n - 1$. Thus a solution $w$ to the increment equation exists in this case. The coordinates of the increment are simply obtained by evaluating the increment equation at the points $X = z_k$, which results in

$$w_k = -\frac{f(z_k)}{\prod_{j \ne k} (z_k - z_j)},$$

that is, the Weierstrass updates.
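In code, one simultaneous Newton step is exactly one pass of the Weierstrass iteration; a sketch (assuming $f$ is given as a callable for the monic polynomial):

```python
from functools import reduce
from operator import mul

def weierstrass_step(f, z):
    """One multidimensional Newton step for the coefficient system: returns
    z_k + w_k with w_k = -f(z_k) / prod_{j != k} (z_k - z_j)."""
    w = [-f(zk) / reduce(mul, (zk - zj for j, zj in enumerate(z) if j != k), 1)
         for k, zk in enumerate(z)]
    return [zk + wk for zk, wk in zip(z, w)]
```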
Root inclusion via Gershgorin's circles
In the quotient ring of residue classes modulo $f(X)$, the multiplication by $X$ defines an endomorphism that has the zeros of $f(X)$ as eigenvalues with the corresponding multiplicities. Choosing a basis, the multiplication operator is represented by its coefficient matrix $A$, the companion matrix of $f(X)$ for this basis.

Since every polynomial can be reduced modulo $f(X)$ to a polynomial of degree $n - 1$ or lower, the space of residue classes can be identified with the space of polynomials of degree bounded by $n - 1$.
A problem-specific basis can be taken from Lagrange interpolation as the set of $n$ polynomials

$$b_k(X) = \prod_{j \ne k} (X - z_j), \quad k = 1, \dots, n,$$

where $z_1, \dots, z_n$ are pairwise different complex numbers. Note that the kernel functions for the Lagrange interpolation are $L_k(X) = \frac{b_k(X)}{b_k(z_k)}$.

For the multiplication operator applied to the basis polynomials one obtains from the Lagrange interpolation

$$X \cdot b_k(X) \bmod f(X) = z_k \cdot b_k(X) + \sum_{j=1}^{n} w_j \cdot b_j(X),$$

where $w_j = -\frac{f(z_j)}{\prod_{k \ne j} (z_j - z_k)}$ are again the Weierstrass updates.

The companion matrix of $f(X)$ is therefore

$$A = \operatorname{diag}(z_1, \dots, z_n) + \begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix} \cdot (w_1, \dots, w_n).$$
From the transposed matrix case of the Gershgorin circle theorem it follows that all eigenvalues of $A$, that is, all roots of $f(X)$, are contained in the union of the disks $D(a_{k,k}, r_k)$ with radius $r_k = \sum_{j \ne k} |a_{j,k}|$.

Here one has $a_{k,k} = z_k + w_k$, so the centers are the next iterates of the Weierstrass iteration, and the radii $r_k = (n - 1)\,|w_k|$ are multiples of the Weierstrass updates. If the roots of $f(X)$ are all well isolated and the points $z_1, \dots, z_n$ are sufficiently close approximations to these roots, then all the disks will become disjoint, so each one contains exactly one zero. The midpoints of the circles will be better approximations of the zeros.

Every conjugate matrix $T A T^{-1}$ of $A$ is as well a companion matrix of $f(X)$. Choosing $T$ as a diagonal matrix leaves the structure of $A$ invariant. The root close to $z_k$ is contained in any isolated circle with center $z_k$ regardless of $T$. Choosing the optimal diagonal matrix $T$ for every index results in better estimates.
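A sketch that computes these a posteriori inclusion disks (assuming, as above, that $f$ is a callable for the monic polynomial; names illustrative):

```python
from functools import reduce
from operator import mul

def inclusion_disks(f, z):
    """Gershgorin-type root inclusion disks: centers are the next iterates
    z_k + w_k, radii are (n - 1)|w_k|, per the column sums of A above."""
    n = len(z)
    disks = []
    for k, zk in enumerate(z):
        wk = -f(zk) / reduce(mul, (zk - zj for j, zj in enumerate(z) if j != k), 1)
        disks.append((zk + wk, (n - 1) * abs(wk)))
    return disks
```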
Convergence results
The connection between the Taylor series expansion and Newton's method suggests that the distance from $z_k + w_k$ to the corresponding root is of the order $O(|w_k|^2)$, if the root is well isolated from nearby roots and the approximation $z_k$ is sufficiently close to the root. So after the approximation is close, Newton's method converges quadratically; that is, the error is squared with every step. In the case of the Durand–Kerner method, convergence is quadratic if the vector $z = (z_1, \dots, z_n)$ is close to some permutation of the vector of the roots of $f$.

For the conclusion of linear convergence there is a more specific result. If the initial vector $z$ and its vector of Weierstrass updates $w = (w_1, \dots, w_n)$ satisfies the inequality

$$\max_{k=1,\dots,n} |w_k| \le \frac{1}{2n} \min_{j \ne k} |z_j - z_k|,$$

then this inequality also holds for all iterates, all inclusion disks are disjoint, and linear convergence with a contraction factor of 1/2 holds. Further, the inclusion disks can in this case be chosen as

$$D\!\left(z_k + w_k,\; |w_k|\right), \quad k = 1, \dots, n,$$

each containing exactly one zero of $f$.
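The disjointness of the inclusion disks, and with it the onset of this convergence regime, can be monitored numerically; a minimal sketch over (center, radius) pairs such as those returned by the sketch in the previous section:

```python
def disks_disjoint(disks):
    """Closed disks (center, radius) are pairwise disjoint when each pair of
    centers is farther apart than the sum of the corresponding radii."""
    return all(abs(c1 - c2) > r1 + r2
               for i, (c1, r1) in enumerate(disks)
               for c2, r2 in disks[i + 1:])
```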