High-Order Iterative Methods, Repeating Roots: A Constructive Recapitulation

Abstract

This paper considers practical, high-order methods for the iterative location of the roots of nonlinear equations, one at a time. Special attention is paid to algorithms that remain applicable to multiple roots, of initially known or unknown multiplicity. Efficient methods are presented for the evaluation of the multiplicity index of the root being sought. Also reviewed here are super-linear and super-cubic methods that converge contrarily or alternatingly, enabling us not only to approach the root briskly and confidently, but also to actually bound and bracket it as we progress.


Fried, I. (2022) High-Order Iterative Methods Repeating Roots a Constructive Recapitulation. Applied Mathematics, 13, 131-146. doi: 10.4236/am.2022.132011.

1. Introduction

This note is mainly concerned with the numerical approximation of function roots by various iterative methods [1] - [8]. A multiplicity greater than one of a function root may greatly impede the convergence of the method used to iteratively locate it. The multiplicity index m of the sought root is often unknown beforehand. It may well be a latent property of the root, not cursorily revealed, nor readily available. In this note, we examine and assess computational procedures to evaluate the multiplicity index m of the iteratively approached root. We further constructively review iterative methods [9] [10] [11] for approaching a root of known or unknown multiplicity. Then we suggest several other useful, alternatingly converging and contrarily converging, iterative methods to bound the sought root.

2. The Taylor Representation of an Osculating Function

Approximation by polynomials is fundamental to numerical analysis. Taylor's theorem plays a crucial role in it.

The monomial power function

f(x) = x^n, \quad n \ge 1 \qquad (1)

is ever smaller, as n is ever bigger, near x = 0 . Function f ( x ) in Equation (1) is actually such that

f(0) = f'(0) = f''(0) = \cdots = f^{(n-1)}(0) = 0, \quad f^{(n)}(0) \neq 0. \qquad (2)

Such a function is said to be osculating of degree n.

In its most concise form, Taylor's theorem generalizes this claim to any sufficiently differentiable function. It asserts that if Equation (2) holds true for some other differentiable function f(x), then this function is expressible as

f(x) = \frac{1}{n!}x^n f^{(n)}(\xi) \quad \text{or} \quad f(x) = \left(\frac{1}{n!}f^{(n)}(\xi)\right)x^n, \quad 0 < \xi < x \qquad (3)

for x to the right of zero. Equation (3) is said to be the Taylor form of osculating function f ( x ) of Equation (2). Power n is said to be the degree of osculation of f ( x ) at, here, x = 0 .

We take in Equation (3)

f(x) = Ax^n + Bx^{n+1}, \quad A \neq 0 \qquad (4)

and obtain from it the asymptotic form of intermediate value ξ

\xi = \frac{x}{n+1} \qquad (5)

for x close to zero.

If the point of osculation of function f ( x ) is generally a, then

\lim_{x \to a^{+}} \frac{\xi - a}{x - a} = \frac{1}{n+1}. \qquad (6)

See also [12] [13].

To observe a higher order approximation to ξ we start with

f(x) = x(P + Qx + Rx^2 + Sx^3) = x f'(\xi) = x(P + 2Q\xi + 3R\xi^2 + 4S\xi^3). \qquad (7)

We presume that

\xi = Ax + Bx^2 \qquad (8)

and readily have from Equation (7) that

\xi = \frac{1}{2}x + \frac{1}{8}\frac{R}{Q}x^2. \qquad (9)

3. Repeated Taylor Expansion Examples

Example one. We start with

r(x) = e^x - (c_0 + c_1 x + c_2 x^2) \qquad (10)

and fix parameters c 0 , c 1 , c 2 so as to have the osculation

r(0) = 0, \quad r'(0) = 0, \quad r''(0) = 0, \quad r'''(0) = e^0 = 1 \neq 0 \qquad (11)

and then

e^x = 1 + x + \frac{1}{2}x^2 + r(x). \qquad (12)

In view of Equation (11) the Taylor form of r ( x ) is here

r(x) = \frac{1}{3!}x^3 r'''(\xi) \quad \text{or} \quad r(x) = \frac{1}{3!}x^3 e^{\xi}, \quad 0 < \xi < x \qquad (13)

or further, approximately, if x is small,

r(x) = \frac{1}{6}x^3\left(1 + \xi + \frac{1}{2}\xi^2\right) \quad \text{for} \quad e^{\xi} \doteq 1 + \xi + \frac{1}{2}\xi^2. \qquad (14)

Here

\xi = \frac{1}{4}x \qquad (15)

nearly. We assemble Equations (12), (13) and (14) to gain the better polynomial approximation

e^x = 1 + x + \frac{1}{2}x^2 + \frac{1}{6}x^3\left(1 + \xi + \frac{1}{2}\xi^2\right) \qquad (16)

becoming with ξ of Equation (15)

e^x = 1 + x + \frac{1}{2}x^2 + \frac{1}{6}x^3 + \frac{1}{24}x^4 + \frac{1}{192}x^5 + r(x), \quad r(x \to 0) = \frac{1}{320}x^5. \qquad (17)

Example two. We write

r(x) = \ln(1+x) - (c_0 + c_1 x) \qquad (18)

and fix parameters c 0 , c 1 so that

r(0) = 0, \quad r'(0) = 0, \quad r''(0) = -1 \neq 0 \qquad (19)

and have the osculating remainder or residual

r(x) = \ln(1+x) - x. \qquad (20)

The Taylor form of this osculating residual r ( x ) is then

r(x) = \frac{1}{2}x^2 r''(\xi) \quad \text{or} \quad r(x) = -\frac{1}{2}x^2(1+\xi)^{-2}. \qquad (21)

Here ξ = x / 3 , nearly, with which we have

\ln(1+x) = x - \frac{9}{2}\left(\frac{x}{3+x}\right)^2 + r(x), \quad r(x \to 0) = -\frac{1}{12}x^4. \qquad (22)

Example three. We start with

r(x) = \sqrt{1+x} - (c_0 + c_1 x) \qquad (23)

and fix parameters c_0 and c_1 so as to have both r(0) = 0 and r'(0) = 0,

r(x) = \sqrt{1+x} - 1 - \frac{1}{2}x, \quad r(0) = 0, \quad r'(0) = 0, \quad r''(0) = -\frac{1}{4} \neq 0. \qquad (24)

The Taylor form of osculating function r ( x ) in the above Equation (24) is

r(x) = \frac{1}{2}x^2 r''(\xi), \quad 0 < \xi < x \qquad (25)

and we write

\sqrt{1+x} = 1 + \frac{1}{2}x + r(x) = 1 + \frac{1}{2}x - \frac{1}{8}x^2(1+\xi)^{-3/2} = 1 + \frac{1}{2}x - \frac{1}{8}x^2\frac{\sqrt{1+\xi}}{(1+\xi)^2}. \qquad (26)

Returning to the approximation

\sqrt{1+\xi} \doteq 1 + \frac{1}{2}\xi, \quad \xi = \frac{1}{3}x \qquad (27)

we finally reach

\sqrt{1+x} = 1 + \frac{1}{2}x - \frac{3}{16}x^2\frac{6+x}{(3+x)^2} + r(x), \quad r(x \to 0) = -\frac{13}{1152}x^4. \qquad (28)

Example four. The osculating-approximating function need not be a polynomial. Indeed, we write

r(x) = e^x - (c_1\cos x + c_2\sin x) \qquad (29)

and fix parameters c_1, c_2 so as to have r(0) = 0, r'(0) = 0, r''(0) = 2 \neq 0. The Taylor form of function r(x) is here

r(x) = \frac{1}{2}x^2 r''(\xi) = \frac{1}{2}x^2\left(e^{\xi} + \cos\xi + \sin\xi\right), \quad \xi = \frac{1}{3}x \qquad (30)

and

e^x = \cos x + \sin x + r(x), \quad r(x \to 0) = x^2. \qquad (31)

4. Estimation of the Root Multiplicity Index

Assuming that the power series expansion of function f ( x ) , whose root a we are seeking to iteratively locate, is of the form

f(x) = (x-a)^m\left(A + B(x-a) + C(x-a)^2 + \cdots\right), \quad m \ge 1 \qquad (32)

we obtain from it the first-order estimate for the root multiplicity index m

u' = \left(\frac{f}{f'}\right)' = \frac{f'^2 - f f''}{f'^2} = \frac{1}{m} - \frac{2}{m^2}\frac{B}{A}(x-a) + O((x-a)^2), \quad u = \frac{f}{f'} \qquad (33)

and

-\frac{1}{2}\frac{u''}{u'^2} = \frac{B}{A} + O(x-a), \quad u = \frac{f}{f'}. \qquad (34)

Then, the second-order estimation

u'^2 - 2u u'' = \frac{1}{m^2} - \frac{6}{m^4}\frac{B^2(1+m) - 2ACm}{A^2}(x-a)^2 + O((x-a)^3), \quad u = \frac{f}{f'}. \qquad (35)

For example, for f = x 2 + x 3 , m = 2 , we compute, at x = 0.1 , the first-order and second-order approximations

m = 2.18 \quad \text{and} \quad m = 2.03 \qquad (36)

respectively. For x = 0.1 and x = 0.01 we compute from the above Equation (34)

\frac{B}{A} = 0.78 \quad \text{and} \quad \frac{B}{A} = 0.9753 \qquad (37)

respectively, both for the exact B/A = 1.

We have also that, whatever k

\frac{f(x_0 - k u_0)}{f(x_0)} = \left(1 - \frac{k}{m}\right)^m + O(x_0 - a) \qquad (38)

from which we have for k = 1

\ln r = m \ln\left(1 - \frac{1}{m}\right), \quad r = \frac{f(x_0 - u_0)}{f(x_0)}. \qquad (39)

By the Padé rational approximation

\ln(1+x) = x\frac{x+6}{4x+6} + r(x), \quad r(x \to 0) = -\frac{1}{36}x^4 \qquad (40)

we have the first-order estimate

m = \frac{1}{6}\frac{1 + 4\ln r}{1 + \ln r}, \quad r = \frac{f(x_0 - u_0)}{f(x_0)}, \quad u_0 = \frac{f_0}{f_0'}. \qquad (41)

For example, for f(x) = x^3 + x^4 we obtain from the above Equation (41), for x = \{1.0, 0.5, 0.1\}, the corresponding root multiplicity index approximations m = \{3.72, 3.51, 3.14\}, for the exact m = 3.
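The estimate of Equation (41) needs just two function values and one derivative value. A small illustrative sketch (the naming is ours, not the paper's), hard-coded for the example f(x) = x^3 + x^4:

```python
import math

def multiplicity_estimate(f, fp, x0):
    """First-order estimate (41) of the root multiplicity index m:
    m = (1/6)(1 + 4 ln r)/(1 + ln r), with r = f(x0 - u0)/f(x0)."""
    u0 = f(x0) / fp(x0)           # Newton increment u0 = f0/f0'
    r = f(x0 - u0) / f(x0)        # residual ratio after one Newton step
    lr = math.log(r)
    return (1 + 4 * lr) / (6 * (1 + lr))

f  = lambda x: x**3 + x**4        # triple root at x = 0
fp = lambda x: 3 * x**2 + 4 * x**3

estimates = [multiplicity_estimate(f, fp, x) for x in (1.0, 0.5, 0.1)]
print([round(m, 2) for m in estimates])   # tends to the exact m = 3 as x -> 0
```

The estimate sharpens as the iterate approaches the root, in line with the O(x_0 - a) error of Equation (38).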

We notice that, whatever m, root a of u = f/f' is isolated, namely always of multiplicity m = 1. Indeed

u = \frac{f}{f'} = \frac{1}{m}(x-a) - \frac{1}{m^2}\frac{B}{A}(x-a)^2 + O((x-a)^3). \qquad (42)
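The two estimators of Equations (33) and (35) can be checked numerically. The sketch below (our own, illustrative naming) forms u' and u'' analytically for the example f(x) = x^2 + x^3 and reproduces the approximations of Equation (36):

```python
import math

# f(x) = x^2 + x^3 has a double root (m = 2) at x = 0
f  = lambda x: x**2 + x**3
f1 = lambda x: 2 * x + 3 * x**2
f2 = lambda x: 2 + 6 * x
f3 = lambda x: 6.0

def m_first_order(x):
    # Equation (33): u' = 1 - f f''/f'^2 = 1/m + O(x - a)
    uprime = 1 - f(x) * f2(x) / f1(x)**2
    return 1 / uprime

def m_second_order(x):
    # Equation (35): u'^2 - 2 u u'' = 1/m^2 + O((x - a)^2)
    u = f(x) / f1(x)
    uprime = 1 - f(x) * f2(x) / f1(x)**2
    udprime = (-(f1(x) * f2(x) + f(x) * f3(x)) / f1(x)**2
               + 2 * f(x) * f2(x)**2 / f1(x)**3)
    return 1 / math.sqrt(uprime**2 - 2 * u * udprime)

print(round(m_first_order(0.1), 2), round(m_second_order(0.1), 2))  # 2.18 2.03
```

The second-order estimate is visibly the sharper of the two, as Equation (35)'s O((x-a)^2) error promises.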

5. Second Order Iterative Methods

Our prevailing task is, at first, to stepwise and steadily approach isolated root a of function f(x), f(a) = 0, f'(a) \neq 0. Being at point x = x_0, f'(x_0) \neq 0, we write the Taylor expansion

f(x_0 + \delta x) = f(x_0) + \delta x f'(\xi), \quad x_0 < \xi < x_0 + \delta x \qquad (43)

and readily obtain from it, by the linearization f ( x 0 + δ x ) = 0 , ξ = x 0 , the eminent Newton method

x_1 = x_0 + \delta x = x_0 - u_0, \quad \delta x = -u_0, \quad u_0 = \frac{f_0}{f_0'}, \quad f_0 = f(x_0), \quad f_0' = f'(x_0). \qquad (44)

The Newton method is a quadratic iterative method to an isolated root

x_1 - a = \left(\frac{1}{2}\frac{f''}{f'}\right)(x_0 - a)^2 + O((x_0 - a)^3) \qquad (45)

where f' = f'(a) \neq 0 and f'' = f''(a) < \infty.
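For f(x) = x + x^2, whose simple root is a = 0, Newton's method can be replayed in exact rational arithmetic; the iteration map reduces to x_1 = x_0^2/(1+2x_0), and the error is squared at every step, as Equation (45) promises (illustrative sketch, our own naming):

```python
from fractions import Fraction

f  = lambda x: x + x * x          # simple root at a = 0
fp = lambda x: 1 + 2 * x

def newton(x):
    return x - f(x) / fp(x)       # Equation (44)

x = Fraction(1, 2)
seq = []
for _ in range(4):
    x = newton(x)
    seq.append(x)
print(seq)   # [1/8, 1/80, 1/6560, 1/43046720]: quadratic error decay
```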

6. Rectification for a Multiple Root by Undetermined Coefficients

We recast Newton’s method in the open-ended form

x_1 = x_0 - P u_0, \quad u_0 = \frac{f_0}{f_0'} \qquad (46)

with the undetermined coefficient P, and have by Equation (32) that at a root of multiplicity m 1

x_1 - a = \left(1 - \frac{P}{m}\right)(x_0 - a) + \frac{P}{m^2}\frac{B}{A}(x_0 - a)^2 + O((x_0 - a)^3). \qquad (47)

Quadratic convergence is restored to the method, as is well known, with P = m . In the equation above, A and B are the coefficients in the power series expansion of f ( x ) in Equation (32). But, for this, we need to know in advance the multiplicity index m of root a.

To have a quadratic method that does not need the a-priori knowledge of the multiplicity index m of the searched root, we take the m approximation of Equation (33) and obtain the, quadratic for any m, method

x_1 = x_0 - \frac{f_0 f_0'}{f_0'^2 - f_0 f_0''}. \qquad (48)

Method (48) is also obtained by applying Newton's method to u = f/f'. See Equation (42).

The advantage of method (48) is that it is quadratic; its drawback is that it requires f''.

For example, for f ( x )

f = x^m(A + Bx) \qquad (49)

we have by method (48) that

x_1 = -\frac{ABx_0^2}{A^2 m + 2ABm x_0 + B^2(1+m)x_0^2} \qquad (50)

for root a = 0 of any multiplicity index m.
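A sketch contrasting plain Newton with the derivative-augmented method (48) on the double root of f(x) = x^2 + x^3 (illustrative, our own naming). Newton creeps in linearly with ratio 1 - 1/m = 1/2, while method (48) is quadratic without being told m:

```python
f  = lambda x: x**2 + x**3        # double root at x = 0
f1 = lambda x: 2 * x + 3 * x**2
f2 = lambda x: 2 + 6 * x

def newton(x):
    return x - f(x) / f1(x)

def modified(x):                   # Equation (48): Newton applied to u = f/f'
    return x - f(x) * f1(x) / (f1(x)**2 - f(x) * f2(x))

xn = xm = 0.5
for _ in range(4):
    xn, xm = newton(xn), modified(xm)
print(xn, xm)   # Newton still ~4e-2 away; method (48) already ~7e-13
```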

7. Two-Step Method

To avoid the computation of f'' in the modified Newton method (48), we suggest the two-step method

x_1 = x_0 - m_0 u_0, \quad u_0 = \frac{f_0}{f_0'}, \quad u_1 = \frac{f_1}{f_1'}, \quad m_1 = \frac{x_1 - x_0}{u_1 - u_0} = m_0\frac{u_0}{u_0 - u_1}, \quad x_2 = x_1 - m_1 u_1. \qquad (51)

To observe its working we apply the method to the trial function f ( x ) of Equation (49) and have for the two-step method (51)

m_1 = m + \frac{B}{A}\frac{2m - m_0}{m}x_0 + O(x_0^2) \qquad (52)

and then

x_2 = \frac{B}{A}\frac{m_0 - m}{m^2}x_0^2 + O(x_0^3). \qquad (53)
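The two-step method (51) in a minimal sketch (our own naming), started with the deliberately wrong guess m_0 = 1 on the double root of f(x) = x^2 + x^3; one sweep already produces a serviceable multiplicity estimate m_1, per Equation (52), and a much-improved iterate x_2, per Equation (53):

```python
f  = lambda x: x**2 + x**3        # double root, exact m = 2
f1 = lambda x: 2 * x + 3 * x**2

def two_step(x0, m0):
    u0 = f(x0) / f1(x0)
    x1 = x0 - m0 * u0              # first step with the assumed m0
    u1 = f(x1) / f1(x1)
    m1 = m0 * u0 / (u0 - u1)       # updated multiplicity estimate
    x2 = x1 - m1 * u1              # corrected second step
    return m1, x2

m1, x2 = two_step(0.1, 1.0)
print(m1, x2)   # m1 near 2; x2 much closer to the root than x0 = 0.1
```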

8. Super-Linear Method

With P = m(1-k), k < 0, the modified Newton's method of Equation (46) becomes super-linear and ultimately of alternating convergence [8]. For example, for f(x) = x^2 + x^3, m = 2, k = -1/8, we generate, starting with x_0 = 1, the sequence

x_1 = \{1, \; 1.0\times 10^{-1}, \; -7.6\times 10^{-3}, \; 9.8\times 10^{-4}, \; -1.2\times 10^{-4}, \; 1.5\times 10^{-5}, \; -1.9\times 10^{-6}\}. \qquad (54)
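A sketch of this super-linear iteration, Equation (46) with P = m(1-k) (illustrative, our own naming). For f(x) = x^2 + x^3, m = 2, k = -1/8, the coefficient is P = 9/4, and after the first step successive iterates straddle the root:

```python
f  = lambda x: x**2 + x**3        # double root at x = 0
f1 = lambda x: 2 * x + 3 * x**2
P  = 2 * (1 + 1 / 8)              # P = m(1 - k) with m = 2, k = -1/8

def step(x):
    return x - P * f(x) / f1(x)

x, seq = 1.0, [1.0]
for _ in range(6):
    x = step(x)
    seq.append(x)
print(seq)   # signs eventually alternate with ratio about k = -1/8
```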

9. Third Order Iterative Methods-and Higher

We propose to replace now ξ = x 0 in Newton’s method by the better

\xi = \frac{x_0 + x_1}{2} = x_0 + \frac{1}{2}\delta x, \quad \delta x = -u_0 = -\frac{f_0}{f_0'} \qquad (55)

to have the mid-point method

x_2 = x_0 - \frac{f(x_0)}{f'(x_0 + \frac{1}{2}\delta x)}, \quad \delta x = -u_0 = -\frac{f_0}{f_0'} \qquad (56)

which now rises in order to become cubic

x_2 - a = \left(\frac{1}{24}\frac{6f''^2 - f'f'''}{f'^2}\right)(x_0 - a)^3 + O((x_0 - a)^4) \qquad (57)

still with no need for f''(x_0). See also Traub [1], page 164, Equations (8-12).
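The mid-point method (56) in a short sketch (our own naming), on f(x) = x + x^2; each sweep needs one value of f and one of f', and the error is cut cubically, with no f'' required:

```python
f  = lambda x: x + x * x
fp = lambda x: 1 + 2 * x

def midpoint_newton(x):
    u = f(x) / fp(x)
    return x - f(x) / fp(x - u / 2)   # derivative sampled at the half-step, Eq. (56)

x1 = midpoint_newton(0.1)
x2 = midpoint_newton(x1)
print(x1, x2)   # about 7.5e-4, then about 4e-10: cubic convergence
```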

The linearization

f'(x_0 + \tfrac{1}{2}\delta x) = f'(x_0) + \tfrac{1}{2}\delta x f''(\xi), \quad \xi = x_0, \quad \delta x = -u_0 = -\frac{f(x_0)}{f'(x_0)} = -\frac{f_0}{f_0'} \qquad (58)

reproduces out of Equation (56) the classical method of Halley

x_1 = x_0 - f_0\frac{\det\begin{bmatrix} 1 & f_0 \\ 0 & 2f_0' \end{bmatrix}}{\det\begin{bmatrix} f_0' & f_0 \\ f_0'' & 2f_0' \end{bmatrix}} = x_0 - \frac{2f_0 f_0'}{2f_0'^2 - f_0 f_0''} = x_0 - \frac{1}{1 - \frac{1}{2}(f_0''/f_0')u_0}u_0 \qquad (59)

which is likewise cubic

x_1 - a = \left(\frac{1}{12}\frac{3f''^2 - 2f'f'''}{f'^2}\right)(x_0 - a)^3 + O((x_0 - a)^4) \qquad (60)

but requires the possibly costly second derivative f''(x) of f(x).

Power series expansion turns the rational Halley's method into a method polynomial in u_0:

x_1 = x_0 - u_0 - \frac{f_0''}{2f_0'}u_0^2, \quad u_0 = \frac{f_0}{f_0'} \qquad (61)

which is still cubic

x_1 - a = \frac{3f''^2 - f'f'''}{6f'^2}(x_0 - a)^3 + O((x_0 - a)^4) \qquad (62)

provided that f' = f'(a) \neq 0.

Moreover,

x_1 = x_0 - u_0 - \frac{f_0''}{2f_0'}u_0^2 - \frac{3f_0''^2 - f_0'f_0'''}{6f_0'^2}u_0^3, \quad u_0 = \frac{f_0}{f_0'} \qquad (63)

is quartic:

x_1 - a = \frac{15f''^3 - 10f'f''f''' + f'^2f''''}{24f'^3}(x_0 - a)^4 + O((x_0 - a)^5) \qquad (64)

provided that f'(a) \neq 0.

We generalize the method in Equation (63) as

x_1 = x_0 - P u_0 - Q\frac{f_0''}{2f_0'}u_0^2 - R\frac{3f_0''^2 - f_0'f_0'''}{6f_0'^2}u_0^3, \quad u_0 = \frac{f_0}{f_0'} \qquad (65)

and recover a one-sided cubic convergence method to a root, even of multiplicity m, for

P = \frac{1}{3}m(4 - m^2), \quad Q = m^3, \quad R = \frac{1}{2}m^3, \qquad (66)

x_1 - a = \frac{2}{m^2}\left(\frac{B}{A}\right)^2(x_0 - a)^3 + O((x_0 - a)^4). \qquad (67)

10. Correction of Halley’s Method for Multiple Roots

We rewrite Halley’s method of Equation (59) with the open coefficients P and Q as

x_1 = x_0 - P\frac{f_0 f_0'}{Q f_0'^2 - f_0 f_0''} \qquad (68)

and determine by power series expansion that if (see also [11])

P = 2, \quad Q = 1 + \frac{1}{m} \qquad (69)

then convergence remains cubic for a root of any known multiplicity m \ge 1:

x_1 - a = \frac{(m+1)B^2 - 2mAC}{2m^2A^2}(x_0 - a)^3 + O((x_0 - a)^4). \qquad (70)
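A sketch of the multiplicity-corrected Halley method, Equations (68) and (69) (our own naming). For the double root of f(x) = x^2 + x^3 we set Q = 1 + 1/m = 3/2 and observe the cubic approach:

```python
f  = lambda x: x**2 + x**3        # double root, m = 2
f1 = lambda x: 2 * x + 3 * x**2
f2 = lambda x: 2 + 6 * x
m  = 2
P, Q = 2, 1 + 1 / m               # Equation (69)

def halley_m(x):
    return x - P * f(x) * f1(x) / (Q * f1(x)**2 - f(x) * f2(x))   # Equation (68)

x1 = halley_m(0.5)
x2 = halley_m(x1)
print(x1, x2)   # about 1.7e-2, then about 4e-6
```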

11. A Third-Order, One-Sided, Chord Method

Having secured

x_0, \quad f_0 = f(x_0), \quad f_0' = f'(x_0), \quad x_1 = x_0 - u_0, \quad u_0 = \frac{f_0}{f_0'}, \quad f_1 = f(x_1) \qquad (71)

we pass through the two points ( x 0 , f 0 ) , ( x 1 , f 1 ) the chord, or secant, line

g(x) = Px + Q, \quad P = \frac{f_1 - f_0}{x_1 - x_0}, \quad Q = \frac{x_1 f_0 - x_0 f_1}{x_1 - x_0}. \qquad (72)

We take the root of g(x) = 0 as the next iterate x_2, and have

x_2 = x_0 - \frac{1}{1-r}u_0 \quad \text{or} \quad x_2 = x_0 - (1+r)u_0 \quad \text{or} \quad x_2 = x_0 - (1+r+r^2)u_0, \quad r = \frac{f_1}{f_0} \qquad (73)

which are, all three, cubic,

x_2 - a = \frac{1}{4}\left(\frac{f''}{f'}\right)^2(x_0-a)^3 + O((x_0-a)^4), \quad x_2 - a = \frac{1}{2}\left(\frac{f''}{f'}\right)^2(x_0-a)^3 + O((x_0-a)^4),

x_2 - a = \frac{1}{4}\left(\frac{f''}{f'}\right)^2(x_0-a)^3 + O((x_0-a)^4) \qquad (74)

respectively. See also Traub [1], page 180, Equations (8-55).

Convergence of this method is one-sided, namely, if x_0 - a > 0, then also x_2 - a > 0, and if x_0 - a < 0, then also x_2 - a < 0.

For example, for f(x) = x + x^2 we obtain by the method of the first of Equations (73) the two oppositely converging sequences

x_2 = \left\{-\frac{1}{6}, -\frac{1}{126}, -\frac{1}{1953126}\right\} \quad \text{and} \quad x_2 = \left\{\frac{1}{4}, \frac{1}{124}, \frac{1}{1953124}\right\} \qquad (75)

by which root a = 0 is bounded, or bracketed, as

-\frac{1}{1953126} < a < \frac{1}{1953124}. \qquad (76)
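The bracketing can be replayed in exact rational arithmetic. A sketch (our own naming) iterating the first of the chord formulas (73) for f(x) = x + x^2, from the two starting guesses -1/6 and 1/4, reproducing the bounds of Equation (76):

```python
from fractions import Fraction

f  = lambda x: x + x * x          # root a = 0
fp = lambda x: 1 + 2 * x

def chord_step(x0):                # first method of Equation (73)
    u0 = f(x0) / fp(x0)
    x1 = x0 - u0                   # auxiliary Newton point
    r = f(x1) / f(x0)
    return x0 - u0 / (1 - r)

lo, hi = Fraction(-1, 6), Fraction(1, 4)
for _ in range(2):
    lo, hi = chord_step(lo), chord_step(hi)
print(lo, hi)   # -1/1953126 and 1/1953124, bracketing the root a = 0
```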

12. Quartic and Quintic Methods

Once x 2 is computed by Equation (73), just one additional computation of f ( x ) makes the pseudo-Newton’s method

x_3 = x_2 - \frac{f(x_2)}{g'(x_2)} = x_2 - \frac{f(x_2)}{A}, \quad A = \frac{f_1 - f_0}{x_1 - x_0} = (1-r)f_0', \quad r = \frac{f_1}{f_0} \qquad (77)

for g ( x ) in Equation (72), quartic

x_3 - a = \frac{1}{4}\left(\frac{f''}{f'}\right)^3(x_0 - a)^4 + O((x_0 - a)^5). \qquad (78)

From a parabola passing through the data (x_0, f_0, f_0') and (x_1, f_1) we obtain a better approximation for f'(x_2), and with it the method

x_3 = x_2 - \frac{f(x_2)}{2A - f_0'} = x_2 - \frac{1}{1-2r}\frac{f_2}{f_0'}, \quad f_2 = f(x_2), \quad r = \frac{f_1}{f_0} \qquad (79)

which we verify to be quintic

x_3 - a = \left(\frac{1}{12}\frac{f''^2(3f''^2 - 2f'f''')}{f'^4}\right)(x_0 - a)^5 + O((x_0 - a)^6). \qquad (80)

13. Fourth-Order, Parabolic Interpolant Method

We can do still better with a parabolic interpolant that includes the already evaluated f_0'. We pass through the data (x_0, f_0, f_0') and (x_1, f_1),

x_0, \quad f_0 = f(x_0), \quad f_0' = f'(x_0), \quad x_1 = x_0 - u_0, \quad u_0 = \frac{f_0}{f_0'}, \quad f_1 = f(x_1) \qquad (81)

the parabola

q(x) = Ax^2 + Bx + C, \quad A = r\frac{f_0'^2}{f_0}, \quad B = f_0' - 2r\frac{f_0'^2}{f_0}x_0, \quad C = f_0 - f_0'x_0 + r\frac{f_0'^2}{f_0}x_0^2 \qquad (82)

where x_1 = x_0 - u_0, f_1 = f(x_1), r = f_1/f_0, and

r = \frac{f_1}{f_0} = \left(\frac{1}{2}\frac{f''}{f'}\right)(x_0 - a) + O((x_0 - a)^2). \qquad (83)

We take the root of q(x) = 0 nearest x_0 as x_2 and have, by expansion in powers of r, the ultimate quartic method

x_2 = \frac{-B + \sqrt{B^2 - 4AC}}{2A} \quad \text{or} \quad x_2 = x_0 - (1 + r + 2r^2)u_0, \quad u_0 = \frac{f_0}{f_0'}, \qquad (84)

x_2 - a = \frac{1}{24}\frac{f''}{f'^3}\left(15f''^2 - 2f'f'''\right)(x_0 - a)^4 + O((x_0 - a)^5). \qquad (85)

Method (84) is the expanded and ultimately truncated form of several published fourth-order chord methods, among them the rational method of Ostrowski [3], Appendix G:

x_2 = x_0 - \frac{1-r}{1-2r}u_0 = x_0 - (1 + r + 2r^2 + 4r^3 + \cdots)u_0, \quad f_1 = f(x_0 - u_0), \quad r = \frac{f_1}{f_0} \qquad (86)

x_2 - a = \left(\frac{1}{24}\frac{f''(3f''^2 - 2f'f''')}{f'^3}\right)(x_0 - a)^4 + O((x_0 - a)^5). \qquad (87)

See in particular Traub [1], page 184, Equations (8-78).
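Ostrowski's two-point method (86) in a compact sketch (our own naming); two f-values and one f'-value per sweep give fourth-order convergence on f(x) = x + x^2:

```python
f  = lambda x: x + x * x
fp = lambda x: 1 + 2 * x

def ostrowski(x0):
    u0 = f(x0) / fp(x0)
    f1 = f(x0 - u0)
    r = f1 / f(x0)
    return x0 - (1 - r) / (1 - 2 * r) * u0   # Equation (86)

x1 = ostrowski(0.5)
x2 = ostrowski(x1)
print(x1, x2)   # 0.0125, then about 2e-8: quartic convergence
```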

14. From a Cubic to a Quartic Method

We write the second order, n = 2 , version of the Taylor-Lagrange formula

f(x) = f(x_0) + \delta x f'(x_0) + \frac{1}{2}\delta x^2 f''(\xi), \quad \delta x = x - x_0 \qquad (88)

and take f ( x ) = 0 , ξ = x 0 to have the iterative method

x_1 = x_0 + \delta x, \quad 0 = f(x_0) + \delta x f'(x_0) + \frac{1}{2}\delta x^2 f''(x_0). \qquad (89)

We approximate the solution of the increment equation

f_0 + \delta x f_0' + \frac{1}{2}\delta x^2 f_0'' = 0 \qquad (90)

by the power series

\delta x = (P + Qf_0 + Rf_0^2 + Sf_0^3 + Tf_0^4 + \cdots)f_0 \qquad (91)

and have, by substitution and collection, that

(1 + Pf_0') + \left(Qf_0' + \frac{1}{2}P^2f_0''\right)f_0 + \left(Rf_0' + PQf_0''\right)f_0^2 + \left(Sf_0' + \left(\frac{1}{2}Q^2 + PR\right)f_0''\right)f_0^3 + \cdots = 0 \qquad (92)

from which we have, with s_0 = f_0''/f_0', that

P = -\frac{1}{f_0'}, \quad Q = -\frac{1}{2}P^2 s_0, \quad R = -PQs_0, \quad S = -\left(\frac{1}{2}Q^2 + PR\right)s_0, \quad T = -(QR + PS)s_0 \qquad (93)

and so on.
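The recursion (93) is easy to drive to any depth. The sketch below (our own naming) builds P through T for sample values f_0 = 0.1, f_0' = 1, f_0'' = 0.5 and checks the series increment against the exact root of the quadratic increment Equation (90):

```python
import math

f0, fp0, fpp0 = 0.1, 1.0, 0.5     # sample data for the increment equation (90)
s0 = fpp0 / fp0

P = -1 / fp0
Q = -0.5 * P * P * s0
R = -P * Q * s0
S = -(0.5 * Q * Q + P * R) * s0
T = -(Q * R + P * S) * s0          # Equation (93)

dx_series = (P + Q * f0 + R * f0**2 + S * f0**3 + T * f0**4) * f0
dx_exact = (-fp0 + math.sqrt(fp0**2 - 2 * f0 * fpp0)) / fpp0   # root of (90)
print(dx_series, dx_exact)         # agree to about 1e-7
```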

The methods

x_1 = x_0 + Pf_0 + Qf_0^2 \quad \text{and} \quad x_2 = x_0 + Pf_0 + Qf_0^2 + Rf_0^3 \qquad (94)

or

x_1 = x_0 - \left(1 + \frac{1}{2}v_0\right)u_0, \quad x_2 = x_0 - \left(1 + \frac{1}{2}v_0 + \frac{1}{2}v_0^2\right)u_0, \quad u = \frac{f}{f'}, \quad s = \frac{f''}{f'}, \quad v = us \qquad (95)

are both cubic (the first is the Chebyshev method):

x_1 - a = \frac{1}{6}\frac{3f''^2 - f'f'''}{f'^2}(x_0-a)^3 + O((x_0-a)^4), \quad x_2 - a = -\left(\frac{1}{3!}\frac{f'''}{f'}\right)(x_0-a)^3 + O((x_0-a)^4), \qquad (96)

respectively, provided that f'(a) \neq 0.

The method

x_1 = x_0 - (P + Qv_0)u_0, \quad u_0 = \frac{f_0}{f_0'}, \quad v_0 = \frac{f_0 f_0''}{f_0'^2}, \quad P = \frac{m(3-m)}{2}, \quad Q = \frac{1}{2}m^2 \qquad (97)

converges cubically to a root of any known multiplicity m \ge 1:

x_1 - a = \frac{(3+m)B^2 - 2mAC}{2A^2m^2}(x_0 - a)^3 + O((x_0 - a)^4). \qquad (98)

Since here n = 2 we propose to try

\xi = \frac{2}{3}x_0 + \frac{1}{3}x_1. \qquad (99)

We recalculate f''(\xi) and verify that the second method in Equation (94) rises, by this choice of \xi, to quartic:

x_1 - a = \frac{1}{72}\left(45\frac{f''^3}{f'^3} - 6\frac{f''f'''}{f'^2} + \frac{f''''}{f'}\right)(x_0 - a)^4 + O((x_0 - a)^5). \qquad (100)

15. Oppositely Converging Super-Quadratic Methods

We start here with

x_2 = x_0 - (1 + Pr)u_0, \quad u_0 = \frac{f_0}{f_0'}, \quad r = \frac{f_1}{f_0}, \quad f_1 = f(x_1), \quad x_1 = x_0 - u_0 \qquad (101)

for undetermined coefficient P, and have

x_2 - a = \frac{1}{2}\frac{f''}{f'}(1-P)(x_0-a)^2 + O((x_0-a)^3). \qquad (102)

We ask that

\frac{f''}{f'}(1-P) = 2k\left(\frac{f''}{f'}\right)^2 \qquad (103)

for parameter k, or, in view of Equation (83), that

P = 1 - 4k\frac{r}{u_0} \qquad (104)

by which the iterative method in Equation (101) turns into

x_2 = x_0 - (1 + r)u_0 + 4kr^2 \qquad (105)

for any k, and

x_2 - a = k\left(\frac{f''}{f'}\right)^2(x_0 - a)^2 + O((x_0 - a)^3). \qquad (106)

This super-quadratic method converges from above if k > 0 , and from below if k < 0 .

The interest in the method

x_2 = x_0 - \frac{2f_0}{f_0 - f_1}u_0, \quad x_1 = x_0 - 2u_0, \quad f_1 = f(x_1), \quad u_0 = \frac{f_0}{f_0'} \qquad (107)

is that it ultimately converges oppositely to Newton’s method,

x_2 - a = -\frac{1}{2}\frac{f''}{f'}(x_0 - a)^2 + O((x_0 - a)^3) \qquad (108)

as seen by comparing Equation (108) with Equation (45).

For example, for f ( x ) = x + x 2 we compute, starting at x 0 = 1 / 2 , from Newton’s method and from method (107), respectively,

x_1 = \left\{\frac{1}{2}, \frac{1}{8}, \frac{1}{80}, \frac{1}{6560}, \frac{1}{43046720}\right\},

x_2 = \left\{\frac{1}{2}, -\frac{1}{10}, -\frac{1}{82}, -\frac{1}{6562}, -\frac{1}{43046722}\right\},

\frac{1}{2}(x_1 + x_2) = \frac{1}{2}\left\{1, \frac{1}{40}, \frac{1}{3280}, \frac{1}{21523360}, \frac{1}{926510094425920}\right\} \qquad (109)

and root a is bounded, or bracketed, thereby as

-\frac{1}{43046722} < a < \frac{1}{43046720}. \qquad (110)

The average of Newton’s method and the method of Equation (107) is cubic,

\frac{1}{2}(x_1 + x_2) - a = \left(\frac{1}{6}\frac{f'''}{f'}\right)(x_0 - a)^3 + O((x_0 - a)^4). \qquad (111)
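A sketch (our own naming) running Newton and the contrarily converging method (107) side by side on f(x) = x + x^2, in exact rational arithmetic, reproducing the two-sided bounds of Equation (110):

```python
from fractions import Fraction

f  = lambda x: x + x * x          # root a = 0
fp = lambda x: 1 + 2 * x

def newton(x):
    return x - f(x) / fp(x)

def opposite(x):                   # Equation (107)
    u0 = f(x) / fp(x)
    f1 = f(x - 2 * u0)
    return x - 2 * f(x) / (f(x) - f1) * u0

xn = xo = Fraction(1, 2)
for _ in range(4):
    xn, xo = newton(xn), opposite(xo)
print(xn, xo)   # 1/43046720 and -1/43046722, bracketing the root
```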

16. Alternatingly Converging Super-Linear and Super-Cubic Methods

We start by modifying Newton’s method into

x_1 = x_0 - (1+k)\frac{f_0}{f_0'} \qquad (112)

for any k, and for which

x_1 - a = -k(x_0 - a) + O((x_0 - a)^2) \qquad (113)

indicating that, invariably, the method converges, at least asymptotically, alternatingly, if k > 0 .

For example, for f ( x ) = x + x 2 , k = 1 / 8 , x 0 = 1 , we compute by method (112) the super-linearly converging sequence

x_1 = \{1, \; 2.5\times 10^{-1}, \; 1.6\times 10^{-2}, \; -1.7\times 10^{-3}, \; 2.1\times 10^{-4}, \; -2.7\times 10^{-5}\} \qquad (114)

and root a = 0 is bracketed as

-2.6706\times 10^{-5} < a < 2.1406\times 10^{-4}. \qquad (115)
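Method (112) in a sketch (our own naming) for f(x) = x + x^2, k = 1/8; once the iterates settle into the alternating regime, consecutive values bound the root from both sides:

```python
f  = lambda x: x + x * x
fp = lambda x: 1 + 2 * x
k  = 1 / 8

def step(x):
    return x - (1 + k) * f(x) / fp(x)   # Equation (112)

x, seq = 1.0, [1.0]
for _ in range(5):
    x = step(x)
    seq.append(x)
print(seq)   # compare Equation (114)
```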

For a higher order method we start with the originally quartic

x_2 = x_0 - (1 + r + Qr^2)u_0, \quad u_0 = \frac{f_0}{f_0'}, \quad r = \frac{f_1}{f_0}, \quad f_1 = f(x_0 - u_0) \qquad (116)

of Equation (84), and have that

x_2 - a = -k\left(\frac{f''}{f'}\right)^2(x_0 - a)^3 + O((x_0 - a)^4), \quad k = \frac{1}{4}(Q - 2) \qquad (117)

indicating that this super-cubic method converges alternatingly if parameter k > 0.

For example, for f ( x ) = x + x 2 we generate by method (116) with k = 1 the alternating sequence

x_2 = \{1, \; -0.012, \; 8.34\times 10^{-6}, \; -2.32\times 10^{-15}\} \qquad (118)

and root a = 0 is bracketed as

-2.32\times 10^{-15} < a < 8.34\times 10^{-6}. \qquad (119)
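A sketch (our own naming) of the super-cubic alternating method, Equation (116) with k = 1, that is Q = 6, on f(x) = x + x^2:

```python
f  = lambda x: x + x * x
fp = lambda x: 1 + 2 * x
Q  = 6                             # k = (Q - 2)/4 = 1

def step(x0):
    u0 = f(x0) / fp(x0)
    r = f(x0 - u0) / f(x0)
    return x0 - (1 + r + Q * r**2) * u0   # Equation (116)

x1 = step(1.0)
x2 = step(x1)
x3 = step(x2)
print(x1, x2, x3)   # approximately -0.0123, 8.4e-6, -2.3e-15
```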

17. Still Higher Order Methods

Starting with

f(x) = f_0 + \delta x f_0' + \frac{1}{2}\delta x^2 f_0'' + \frac{1}{6}\delta x^3 f'''(\xi) \qquad (120)

we have the suggested iterative method

x_1 = x_0 + \delta x, \quad f_0 + \delta x f_0' + \frac{1}{2}\delta x^2 f_0'' + \frac{1}{6}\delta x^3 f_0''' = 0 \qquad (121)

and we take

\delta x = (P + Qf_0 + Rf_0^2 + Sf_0^3 + \cdots)f_0 \qquad (122)

with

P = -\frac{1}{f_0'}, \quad Q = \frac{1}{2}P^3f_0'', \quad R = P^2\left(Qf_0'' + \frac{1}{6}P^2f_0'''\right), \quad S = P\left(\frac{1}{2}Q^2f_0'' + PRf_0'' + \frac{1}{2}P^2Qf_0'''\right). \qquad (123)

The resulting methods

x_1 = x_0 + (P + Qf_0 + Rf_0^2)f_0 \quad \text{and} \quad x_2 = x_0 + (P + Qf_0 + Rf_0^2 + Sf_0^3)f_0 \qquad (124)

are both quartic

x_2 - a = -\left(\frac{1}{24}\frac{f''''}{f'}\right)(x_0 - a)^4 + O((x_0 - a)^5) \qquad (125)

provided that f'(a) \neq 0.

Recalculating f'''(\xi) at

\xi = \frac{3}{4}x_0 + \frac{1}{4}x_1 \qquad (126)

the method rises to quintic

x_2 - a = \frac{1}{960f'^4}\left(840f''^4 - 840f'f''^2f''' + 80f'^2f'''^2 - 3f'^3f^{(5)}\right)(x_0 - a)^5 + O((x_0 - a)^6). \qquad (127)

18. Higher-Order Methods, Multiple Root

We reconsider the typical third-order method in Equation (61), but with the arbitrary coefficients P and Q

x_1 = x_0 - Pu_0 - Q\frac{f_0''}{f_0'}u_0^2, \quad u_0 = \frac{f_0}{f_0'}. \qquad (128)

By power series expansion we have that

x_1 - a = \frac{1}{m^2}(m^2 - mP + Q - mQ)(x_0 - a) + \frac{B}{Am^3}(mP - 3Q + mQ)(x_0 - a)^2 + O((x_0 - a)^3) \qquad (129)

and cubic convergence is maintained with

P = \frac{1}{2}m(3 - m), \quad Q = \frac{1}{2}m^2. \qquad (130)

See also Petković [9], Osada [10], Traub [1], and Chun [11], and in particular Osada, who suggested the remarkably simple cubic method

x_1 = x_0 - \frac{1}{2}m(m+1)\frac{f_0}{f_0'} + \frac{1}{2}(m-1)^2\frac{f_0'}{f_0''}, \quad m > 1. \qquad (131)
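Osada's method (131) in a sketch (our own naming), on the double root of f(x) = x^2 + x^3 with m = 2, so that x_1 = x_0 - 3u_0 + f_0'/(2f_0''):

```python
f  = lambda x: x**2 + x**3        # double root, m = 2
f1 = lambda x: 2 * x + 3 * x**2
f2 = lambda x: 2 + 6 * x
m  = 2

def osada(x):
    # Equation (131): x - m(m+1)/2 * f/f' + (m-1)^2/2 * f'/f''
    return x - 0.5 * m * (m + 1) * f(x) / f1(x) + 0.5 * (m - 1)**2 * f1(x) / f2(x)

x1 = osada(0.5)
x2 = osada(x1)
print(x1, x2)   # about 3.2e-2, then about 3e-5: cubic convergence
```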

A still higher order method is

x_1 = x_0 + Pf_0 + Q(m-1)f_0' + R(m-1)(m-2)f_0'', \quad m > 2. \qquad (132)

For m = 3 , for example, we determine that

R = \frac{1}{f_0'}, \quad Q = \frac{f_0''}{6f_0'^2}, \quad P = \frac{2f_0''^2}{3f_0'^3} + \frac{5f_0'''}{9f_0'^2} \qquad (133)

makes the method quartic

x_1 - a = \frac{8B^3 + 20ABC - 15A^2D}{3A^3}(x_0 - a)^4 + O((x_0 - a)^5). \qquad (134)

To have a cubic method that does not require a-priori knowledge of the root multiplicity, we replace f by u = f/f' to have

x_1 = x_0 - v_0 - \frac{1}{2}\frac{u_0''}{u_0'}v_0^2, \quad v = \frac{u}{u'}, \quad u = \frac{f}{f'} \qquad (135)

for which

x_1 - a = \frac{(1-m)B^2 + 2mAC}{m^2A^2}(x_0 - a)^3 + O((x_0 - a)^4). \qquad (136)

Still higher order methods are readily thus generated. See also [14] [15] [16] and [17].
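The u-based method (135) needs f''' but no knowledge of m. A sketch (our own naming) on the double root of f(x) = x^2 + x^3, with u' and u'' formed from f and its derivatives:

```python
f  = lambda x: x**2 + x**3        # double root; the method is not told m
f1 = lambda x: 2 * x + 3 * x**2
f2 = lambda x: 2 + 6 * x
f3 = lambda x: 6.0

def u_chebyshev(x):                # Equation (135): Chebyshev's method applied to u = f/f'
    u = f(x) / f1(x)
    up = 1 - f(x) * f2(x) / f1(x)**2
    upp = (-(f1(x) * f2(x) + f(x) * f3(x)) / f1(x)**2
           + 2 * f(x) * f2(x)**2 / f1(x)**3)
    v = u / up
    return x - v - 0.5 * (upp / up) * v**2

x1 = u_chebyshev(0.5)
x2 = u_chebyshev(x1)
print(x1, x2)   # about -1.6e-2, then about 1e-6: cubic, m-free
```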

19. Conclusions

We have demonstrated in this paper the usefulness of a number of practical, high-order methods for the iterative location of the roots of single-variable nonlinear equations. We have presented here algorithms to estimate the multiplicity of a considered root. Special attention was given to algorithms thriftily applicable to multiple roots of initially known or unknown multiplicity.

We have further demonstrated the advantage of super-linear and super-cubic methods that converge contrarily or alternatingly, enabling us not only to confidently approach the root, but also to actually bound and bracket it as we progress.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Traub, J.F. (1977) Iterative Methods for the Solution of Equations. Chelsea Publishing Company, New York.
[2] Sharma, J.R. (2005) A Composite Third Order Newton-Steffensen Method for Solving Nonlinear Equations. Applied Mathematics and Computation, 169, 242-246.
https://doi.org/10.1016/j.amc.2004.10.040
[3] Ostrowski, A. (1960) Solution of Equations and Systems of Equations. Academic Press, New York.
[4] Petković, M. (2009) On Optimal Multipoint Methods for Solving Nonlinear Equations. Novi Sad Journal of Mathematics, 39, 123-130.
[5] Neta, B. (1979) A Sixth-Order Family of Methods for Nonlinear Equations. International Journal of Computer Mathematics, 7, 157-161.
https://doi.org/10.1080/00207167908803166
[6] Khattri, S.K. and Argyros, I.K. (2010) How to Develop Fourth and Seventh Order Iterative Methods. Novi Sad Journal of Mathematics, 40, 61-67.
[7] Bi, W.H., Ren, H.M. and Wu, Q.B. (2009) Three-Step Iterative Methods with Eighth-Order Convergence for Solving Nonlinear Equations. Journal of Computational and Applied Mathematics, 225, 105-112.
https://doi.org/10.1016/j.cam.2008.07.004
[8] Fried, I. (2013, May) High-Order Iterative Bracketing Methods. International Journal for Numerical Methods in Engineering, 94, 708-714.
https://doi.org/10.1002/nme.4467
[9] Petković, M.S., Petković, L.D. and Džunić, J. (2010) Accelerating Generators of Iterative Methods for Finding Multiple Roots of Nonlinear Equations. Computers and Mathematics with Applications, 59, 2784-2793.
https://doi.org/10.1016/j.camwa.2010.01.048
[10] Osada, N. (1994) An Optimal Multiple Root-Finding Method of Order Three. Journal of Computational and Applied Mathematics, 51, 131-133.
https://doi.org/10.1016/0377-0427(94)00044-1
[11] Chun, C., Bae, H.J. and Neta, B. (2009) New Families of Nonlinear Third-Order Solvers for Finding Multiple Roots. Computers and Mathematics with Applications, 57, 1574-1582.
https://doi.org/10.1016/j.camwa.2008.10.070
[12] Beesack, P.R. (1966) A General Form of the Remainder in Taylor’s Theorem. The American Mathematical Monthly, 73, 131-133.
https://doi.org/10.2307/2313928
[13] Poffald, E.I. (1990) The Remainder in Taylor’s Formula. The American Mathematical Monthly, 97, 205-213.
https://doi.org/10.1080/00029890.1990.11995575
[14] Fried, I. (2014) Effective High-Order Iterative Methods via the Asymptotic Form of the Taylor-Lagrange Remainder. Journal of Applied Mathematics, Article ID: 108976.
https://doi.org/10.1155/2014/108976
[15] Fried, I. (2014) The Systematic Formation of High-Order Iterative Methods. Journal of Applied and Computational Mathematics, 3, 1000165.
https://doi.org/10.4172/2168-9679.1000165
[16] Liu, X. and Pan, Y. (2020) The Numerical Solutions of Systems of Nonlinear Integral Equations with the Spline Functions. Journal of Applied Mathematics and Physics, 8, 470-480.
https://doi.org/10.4236/jamp.2020.83037
[17] Lin, G., Gao, Y. and Sun, Y. (2017) On Local Existence and Blow-Up of Solutions for Nonlinear Wave Equations of Higher-Order Kirchhoff Type with Strong Dissipation. International Journal of Modern Nonlinear Theory and Application, 6, 11-25.
https://doi.org/10.4236/ijmnta.2017.61002
