High-Order Iterative Methods for Repeated Roots: A Constructive Recapitulation
1. Introduction
This note is mainly concerned with the numerical approximation of function roots by various iterative methods [1] - [8]. A multiplicity greater than one of a function root may greatly impede the convergence of the method used to iteratively locate it. The multiplicity index m of the sought root is often unknown beforehand. It may well be a latent property of the root, not cursorily revealed, nor readily available. In this note, we examine and assess computational procedures to evaluate the multiplicity index m of the iteratively approached root. We further constructively review the iterative methods [9] [10] [11] for approaching a root of known or unknown multiplicity. Then we suggest several other useful, alternatingly converging and contrarily converging iterative methods to bound the sought root.
2. The Taylor Representation of an Osculating Function
Approximation by polynomials is fundamental to numerical analysis. Taylor’s theorem plays a crucial role.
The monomial power function
(1)
is ever smaller, as n is ever bigger, near x = 0. Function
in Equation (1) is actually such that
(2)
Such a function is said to be osculating of degree n.
In its most concise form, Taylor’s theorem generalizes this claim to any differentiable function. It asserts that if Equation (2) holds true for any other differentiable function
, then this function is expressible as
(3)
for x to the right of zero. Equation (3) is said to be the Taylor form of osculating function
of Equation (2). Power n is said to be the degree of osculation of
at, here,
.
We take in Equation (3)
(4)
and obtain from it the asymptotic form of intermediate value
(5)
for x close to zero.
If the point of osculation of function
is generally a, then
(6)
See also [12] [13].
To observe a higher order approximation to
we start with
(7)
We presume that
(8)
and readily have from Equation (7) that
(9)
3. Repeated Taylor Expansion Examples
Example one. We start with
(10)
and fix parameters
so as to have the osculation
(11)
and then
(12)
In view of Equation (11) the Taylor form of
is here
(13)
or further, in view of Equation (11), approximately, if x is small
(14)
Here
(15)
nearly. We assemble Equations (12), (13) and (14) to obtain the better polynomial approximation
(16)
becoming with
of Equation (15)
(17)
Example two. We write
(18)
and fix parameters
so that
(19)
and have the osculating remainder or residual
(20)
The Taylor form of this osculating residual
is then
(21)
Here
, nearly, with which we have
(22)
Example three. We start with
(23)
and fix parameters
and
so as to have both
and
,
(24)
The Taylor form of osculating function
in the above Equation (24) is
(25)
and we write
(26)
Returning to the approximation
(27)
we finally reach
(28)
Example four. The osculating-approximating function need not be a polynomial. Indeed, we write
(29)
and fix parameters
so as to have
,
,
. The Taylor form of function
is here
(30)
and
(31)
4. Estimation of the Root Multiplicity Index
Assuming that the power series expansion of function
, whose root a we are seeking to iteratively locate, is of the form
(32)
we obtain from it the first-order estimate for the root multiplicity index m
(33)
and
(34)
Then, the second-order estimation
(35)
For example, for
, we compute, at
, the first-order and second-order approximations
(36)
respectively. For
and
we compute from the above Equation (34)
(37)
respectively, both for
.
We have also that, whatever k
(38)
from which we have for
(39)
By the Padé rational approximation
(40)
we have the first-order estimate
(41)
For example, for
we obtain from the above Equation (41), for
the corresponding root multiplicity index approximations:
for the exact
.
We notice that, whatever m, root a of
is isolated, namely always of multiplicity
. Indeed
(42)
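As a numerical illustration of such estimation, a minimal Python sketch follows. It uses the widely known first-order estimate m ≈ f'(x)²/(f'(x)² − f(x)f''(x)), that is, m ≈ 1/u'(x) with u = f/f'; this formula is supplied from the literature for illustration, and the trial function is our own choice, not one from the text.

```python
# Sketch: estimating the multiplicity index m of a nearby root, assuming
# the common estimate m ~ f'(x)^2 / (f'(x)^2 - f(x) f''(x)) = 1/u'(x),
# where u = f/f'. Trial function is illustrative only.

def multiplicity_estimate(f, df, d2f, x):
    """First-order estimate of the multiplicity index of the root near x."""
    return df(x) ** 2 / (df(x) ** 2 - f(x) * d2f(x))

# f(x) = (x - 1)^3 has a root of multiplicity 3 at x = 1.
f   = lambda x: (x - 1) ** 3
df  = lambda x: 3 * (x - 1) ** 2
d2f = lambda x: 6 * (x - 1)

print(multiplicity_estimate(f, df, d2f, 1.2))  # -> 3.0 for this pure power
```

For a pure power the estimate is exact; for a general function it sharpens as x nears the root.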
5. Second Order Iterative Methods
Our prevailing task is, at first, to stepwise and steadily approach isolated root a of function
,
,
. Being at point
,
, we write the Taylor expansion
(43)
and readily obtain from it, by the linearization
,
, the eminent Newton method
(44)
The Newton method is a quadratic iterative method to an isolated root
(45)
where
.
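A minimal Python sketch of Newton's method (44) follows; the trial functions are our own illustrative choices. The second run exhibits the degradation, from quadratic to merely linear convergence, that a double root inflicts on the method, which is the very difficulty this note addresses.

```python
# Sketch of Newton's method, Equation (44): x_{i+1} = x_i - f(x_i)/f'(x_i).
def newton(f, df, x, steps):
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

# Quadratic convergence to the simple root sqrt(2) of f(x) = x^2 - 2:
r = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0, 6)

# On the double root x = 1 of f(x) = (x - 1)^2 the same method is only
# linear: each step merely halves the error.
r2 = newton(lambda x: (x - 1) ** 2, lambda x: 2 * (x - 1), 2.0, 10)
print(r, abs(r2 - 1))  # r ~ 1.4142135..., while r2 is still ~1e-3 away
```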
6. Rectification for a Multiple Root by Undetermined Coefficients
We recast Newton’s method in the open-ended form
(46)
with the undetermined coefficient P, and have by Equation (32) that at a root of multiplicity
(47)
Quadratic convergence is restored to the method, as is well known, with
. In the equation above, A and B are the coefficients in the power series expansion of
in Equation (32). But, for this, we need to know in advance the multiplicity index m of root a.
To have a quadratic method that does not need a priori knowledge of the multiplicity index m of the searched root, we take the approximation for m of Equation (33) and obtain the method, quadratic for any m,
(48)
Method (48) is also obtained by applying Newton’s method to
. See Equation (42).
The advantage of method (48) is that it is quadratic; its drawback is that it requires
.
For example, for
(49)
we have by method (48) that
(50)
for root
of any multiplicity index m.
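The remark that method (48) is Newton's method applied to u = f/f' invites a direct sketch. In the closed form used below, the step is x − f f'/(f'² − f f''), which is what Newton's method on u reduces to; the trial function is our own choice (a double root), not the paper's Equation (49).

```python
import math

# Sketch of method (48): Newton's method applied to u(x) = f(x)/f'(x),
# i.e. x_{i+1} = x_i - f f' / (f'^2 - f f'').
# Quadratic for a root of any multiplicity m, at the price of needing f''.
def modified_newton(f, df, d2f, x, steps):
    for _ in range(steps):
        denom = df(x) ** 2 - f(x) * d2f(x)
        if denom == 0.0:  # converged to machine precision
            break
        x = x - f(x) * df(x) / denom
    return x

# Illustrative trial function (our choice): f(x) = (e^x - 1)^2,
# a double root at x = 0.
f   = lambda x: (math.exp(x) - 1) ** 2
df  = lambda x: 2 * (math.exp(x) - 1) * math.exp(x)
d2f = lambda x: 2 * math.exp(x) * (2 * math.exp(x) - 1)

print(modified_newton(f, df, d2f, 1.0, 8))  # ~0, despite multiplicity 2
```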
7. Two-Step Method
To avoid the computation of
in the modified Newton method (48), we suggest the two-step method
(51)
To observe its working we apply the method to the trial function
of Equation (49) and have for the two-step method (51)
(52)
and then
(53)
8. Super-Linear Method
With
the modified Newton’s method of Equation (46) becomes super-linear and ultimately of alternating convergence [8]. For example, for
,
,
, we generate, starting with
the sequence
(54)
9. Third Order Iterative Methods and Higher
We now propose to replace
in Newton’s method by the better
(55)
to have the mid-point method
(56)
which now rises in order to become cubic
(57)
still with no need for
. See also Traub [1] page 164 Equations (8-12).
The linearization
(58)
reproduces out of Equation (56) the classical method of Halley
(59)
which is likewise cubic
(60)
but requires the, possibly costly, second derivative
of
.
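Halley's classical method (59) admits a compact sketch, sketched below with its standard closed form x − 2ff'/(2f'² − ff''); the cube-root trial run is our own illustrative choice.

```python
# Sketch of Halley's classical method, Equation (59):
#   x_{i+1} = x_i - 2 f f' / (2 f'^2 - f f''),
# cubically convergent to a simple root, at the cost of the second derivative.
def halley(f, df, d2f, x, steps):
    for _ in range(steps):
        fx, dfx = f(x), df(x)
        x = x - 2 * fx * dfx / (2 * dfx ** 2 - fx * d2f(x))
    return x

# Illustrative run (our choice of trial function): the cube root of 2.
r = halley(lambda x: x ** 3 - 2, lambda x: 3 * x * x, lambda x: 6 * x, 1.0, 5)
print(r)  # ~1.2599210...
```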
Power series expansion turns the rational Halley’s method into a polynomial in
method
(61)
which is still cubic
(62)
provided that
.
Moreover,
(63)
is quartic (64)
provided that
.
We generalize the method in Equation (63) as
(65)
and recover a one-sided, cubically convergent method to a root, even one of multiplicity m, for
(66)
(67)
10. Correction of Halley’s Method for Multiple Roots
We rewrite Halley’s method of Equation (59) with the open coefficients P and Q as
(68)
and determine by power series expansion that if (see also [11])
(69)
then convergence remains cubic for a root of any known multiplicity
(70)
11. A Third-Order, One-Sided, Chord Method
Having secured
(71)
we pass through the two points
the chord, or secant, line
(72)
We take the root of
, as the next
, and have
(73)
which are, all three, cubic,
(74)
respectively. See also Traub [1] page 180 Equations (8-55).
Convergence of this method is one sided, namely, if
, then also
, and if
, then also
.
For example, for
we obtain by the method of the first Equation (73) the two oppositely converging sequences
(75)
by which root
is bounded, or bracketed, as
(76)
12. Quartic and Quintic Methods
Once
is computed by Equation (73), just one additional computation of
makes the pseudo-Newton’s method
(77)
for
in Equation (72), quartic
(78)
From a parabola passing through the data
and
we obtain a better approximation for
, and with it the method
(79)
which we verify to be quintic
(80)
13. Fourth-Order, Parabolic Interpolant Method
We can do still better with a parabolic interpolation that includes the already evaluated
. We pass through the data
and
(81)
the parabola
(82)
where
, and
(83)
We take the nearest root of
, as
and have, by expansion in powers of r, the ultimate quartic method
(84)
(85)
Method (84) is the expanded and ultimately truncated form of several published fourth-order chord methods, among them the rational method of Ostrowski [3], appendix G.
(86)
(87)
See in particular Traub [1] page 184 Equations (8-78).
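For comparison with the chord-based quartic methods above, here is a Python sketch of the classical two-step rational form of Ostrowski's fourth-order method as commonly stated in the literature; it is supplied for illustration and is not a transcription of Equations (77)-(87). The trial function is our own choice.

```python
# Sketch of Ostrowski's classical fourth-order two-step method
# (standard literature form, not the expanded form (84) of the text):
#   y = x - f(x)/f'(x)
#   x <- y - [f(x) / (f(x) - 2 f(y))] * f(y) / f'(x)
# One f' and two f evaluations per step, order four.
def ostrowski(f, df, x, steps):
    for _ in range(steps):
        fx, dfx = f(x), df(x)
        if fx == 0.0:                  # exact root hit
            break
        y = x - fx / dfx
        fy = f(y)
        if fx - 2.0 * fy == 0.0:       # guard the rational factor
            break
        x = y - fx / (fx - 2.0 * fy) * fy / dfx
    return x

r = ostrowski(lambda x: x ** 3 - 2, lambda x: 3 * x * x, 1.0, 4)
print(r)  # ~1.2599210... (cube root of 2)
```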
14. From a Cubic to a Quartic Method
We write the second order,
, version of the Taylor-Lagrange formula
(88)
and take
to have the iterative method
(89)
We approximate the solution of the increment equation
(90)
by the power series
(91)
and have, by substitution and collection that
(92)
from which we have, with
, that
(93)
and so on.
The methods
(94)
or
(95)
are both (the first is a Chebyshev method) cubic
(96)
respectively, provided that
.
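The first method in Equation (94) is identified above as a Chebyshev method; the classical Chebyshev iteration, in its standard literature form x − (f/f')(1 + ff''/(2f'²)), is sketched below in Python with a trial function of our own choosing.

```python
# Sketch of the classical cubic Chebyshev method:
#   x_{i+1} = x_i - (f/f') * (1 + f f'' / (2 f'^2)).
def chebyshev(f, df, d2f, x, steps):
    for _ in range(steps):
        fx, dfx = f(x), df(x)
        u = fx / dfx
        x = x - u * (1 + fx * d2f(x) / (2 * dfx ** 2))
    return x

# Illustrative run (our choice): the simple root sqrt(2) of x^2 - 2.
r = chebyshev(lambda x: x * x - 2, lambda x: 2 * x, lambda x: 2.0, 1.0, 5)
print(r)  # ~1.4142135...
```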
The method
(97)
converges cubically to a root of any known multiplicity
(98)
Since here
we propose to try
(99)
We recalculate
, and verify that the second method in Equation (94) above rises, by this choice of
to a quartic
(100)
15. Oppositely Converging Super-Quadratic Methods
We start here with
(101)
for undetermined coefficient P, and have
(102)
We ask that
(103)
for parameter k, or, in view of Equation (38), that
(104)
by which the iterative method in Equation (101) turns into
(105)
for any k, and
(106)
This super-quadratic method converges from above if
, and from below if
.
The interest in the method
(107)
is that it ultimately converges oppositely to Newton’s method,
(108)
as seen by comparing Equation (108) with Equation (45).
For example, for
we compute, starting at
, from Newton’s method and from method (107), respectively,
(109)
and root a is bounded, or bracketed, thereby as
(110)
The average of Newton’s method and the method of Equation (107) is cubic,
(111)
16. Alternatingly Converging Super-Linear and Super-Cubic Methods
We start by modifying Newton’s method into
(112)
for any k, and for which
(113)
indicating that, invariably, the method converges, at least asymptotically, alternatingly, if
.
For example, for
,
,
, we compute by method (112) the super-linearly converging sequence
(114)
and root
is bracketed as
(115)
For a higher order method we start with the originally quartic
(116)
of Equation (63), and have that
(117)
indicating that this super-cubic method converges alternatingly if parameter
.
For example, for
we generate by method (116) with
the alternating sequence
(118)
and root
is bracketed as
(119)
17. Still Higher Order Methods
Starting with
(120)
we have the suggested iterative method
(121)
and we take
(122)
with
(123)
The resulting methods
(124)
are both quartic
(125)
provided that
.
Recalculating
at
(126)
the method rises to quintic
(127)
18. Higher-Order Methods, Multiple Root
We reconsider the typical third-order method in Equation (61), but with the arbitrary coefficients P and Q
(128)
By power series expansion we have that
(129)
and cubic convergence is maintained with
(130)
See also Petković [9], Osada [10], Traub [1], Chun [11], and in particular Osada, who suggested the remarkably simple cubic method
(131)
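A Python sketch of the cubic multiple-root method attributed to Osada [10] follows; the formula is supplied from the literature, not reconstructed from Equation (131), and the trial function is our own illustrative choice.

```python
import math

# Sketch of the cubic multiple-root method attributed to Osada [10]
# (literature form, with the multiplicity m known in advance):
#   x_{i+1} = x_i - (1/2) m (m+1) f/f' + (1/2) (m-1)^2 f'/f''.
def osada(f, df, d2f, m, x, steps):
    for _ in range(steps):
        dfx, d2fx = df(x), d2f(x)
        if dfx == 0.0 or d2fx == 0.0:  # at (or numerically on) the root
            break
        x = x - 0.5 * m * (m + 1) * f(x) / dfx \
              + 0.5 * (m - 1) ** 2 * dfx / d2fx
    return x

# Illustrative trial function (our choice): f(x) = (x-1)^2 e^x, m = 2.
f   = lambda x: (x - 1) ** 2 * math.exp(x)
df  = lambda x: (x - 1) * (x + 1) * math.exp(x)
d2f = lambda x: (x * x + 2 * x - 1) * math.exp(x)

print(osada(f, df, d2f, 2, 1.5, 5))  # ~1.0
```

For the pure power (x − a)^m the step recovers x − a exactly, which is a quick consistency check on the coefficients.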
A still higher order method is
(132)
For
, for example, we determine that
(133)
makes the method quartic
(134)
To have a cubic method that does not require a priori knowledge of the root multiplicity, we replace f by u to have
(135)
for which
(136)
Still higher order methods are thus readily generated. See also [14] [15] [16] and [17].
19. Conclusions
We have demonstrated in this paper the usefulness of a number of practical, high order methods for the iterative location of the roots of single-variable nonlinear equations. We have presented algorithms to estimate the multiplicity of a considered root. Special attention was given to algorithms thriftily applicable to multiple roots of known, as well as initially unknown, multiplicity.
We have further demonstrated the advantage of super-linear and super-cubic methods that converge contrarily or alternatingly, enabling us not only to confidently approach the root, but also to bound and bracket it as we progress.