Two Families of Multipoint Root-Solvers Using Inverse Interpolation with Memory

Abstract

In this paper, a general family of derivative-free (n + 1)-point iterative methods using n + 1 evaluations of the function, and a general family of n-point iterative methods using n evaluations of the function and only one evaluation of its derivative, are constructed by inverse interpolation with the memory on the previous step for finding a simple root of a nonlinear equation. Their orders of convergence are proved, respectively. Finally, the proposed methods and their basins of attraction are demonstrated by numerical examples.

Liu, Z. and Zheng, Q. (2023) Two Families of Multipoint Root-Solvers Using Inverse Interpolation with Memory. Journal of Applied Mathematics and Physics, 11, 746-759. doi: 10.4236/jamp.2023.113050.

1. Introduction

Newton's method is a well-known iterative method for solving nonlinear problems in scientific computation. For a nonlinear equation $f(x) = 0$, Newton's method reads (see [1]):

$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}, \quad k = 0, 1, \ldots.$$

Furthermore, Steffensen's method is a derivative-free iterative method, and a self-accelerating Steffensen's method is introduced in Traub's book ([2]) as the following:

$$x_{k+1} = x_k - \frac{f(x_k)}{f[x_k, x_k + \beta_k f(x_k)]}, \quad k = 0, 1, \ldots, \tag{1}$$

where $f[u,v] := \frac{f(u) - f(v)}{u - v}$ and $\beta_k = -\frac{1}{f[x_{k-1}, x_{k-1} + \beta_{k-1} f(x_{k-1})]}$, by using recursively the memory on the previous step without any new functional evaluation.
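
For concreteness, the following is a minimal Python sketch of the self-accelerating iteration (1); the function name, the default value of $\beta_0$ and the stopping rule are our own choices, not prescribed in [2]:

```python
def steffensen_with_memory(f, x0, beta0=-0.1, tol=1e-12, max_iter=50):
    """Self-accelerating Steffensen's method (1): beta_k is recycled from
    the previous step's divided difference at no extra functional cost."""
    x, beta = x0, beta0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + beta * fx              # auxiliary point x_k + beta_k f(x_k)
        dd = (f(w) - fx) / (w - x)     # divided difference f[x_k, x_k + beta_k f(x_k)]
        x = x - fx / dd                # Steffensen step
        beta = -1.0 / dd               # self-acceleration: beta_{k+1} = -1/f[...]
    return x
```

For example, `steffensen_with_memory(lambda x: x**2 - 2, 1.5)` converges to $\sqrt{2}$.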

The efficiency index of an iterative method (IM) is defined as $E = p^{1/d}$, where p is the order of the IM and d is the number of function evaluations per step. Kung and Traub conjectured in 1974 that a multipoint iteration based on n + 1 evaluations without memory has optimal order of convergence $2^n$ [3]. Thus, Newton's method and Steffensen's method are methods of optimal second order, and their efficiency indices are both $\sqrt{2} \approx 1.4142$. The self-accelerating Steffensen's method achieves super convergence of order $1 + \sqrt{2} \approx 2.4142$ with memory, and its efficiency index is $\sqrt{1 + \sqrt{2}} \approx 1.5538$. A one-parameter multipoint iteration of optimal order 16 in [4] [5] consists of a Newton substep, a modified Newton substep and two substeps of inverse interpolation in succession, requires four evaluations of $f$ and only one evaluation of $f'$ per step, and reaches the efficiency index $16^{1/5} \approx 1.7411$. General multipoint iterations of optimal order have been constructed by inverse interpolation in [3] [6] [7] and direct interpolation in [5] [7] - [12], and reach the efficiency index $2^{n/(n+1)}$ with n + 1 evaluations without memory. Furthermore, self-accelerations of general multipoint iterations with memory from the current and previous iterations can achieve better convergence and efficiency [5] [7] [10] [11] [12].

Recently, the family of (n + 1)-point iterative methods of optimal order $2^n$ with n + 1 self-accelerating parameters was proposed by using Newton's interpolation in [12] as follows:

$$\left\{\begin{array}{l}
x_{k,0} = x_k, \quad x_{k,1} = x_{k,0} + \beta_0 f(x_{k,0}),\\[4pt]
x_{k,2} = x_{k,1} - \dfrac{f(x_{k,1})}{f[x_{k,1}, x_{k,0}] + \beta_1 f(x_{k,0})},\\[8pt]
x_{k,j+1} = x_{k,j} - \dfrac{f(x_{k,j})}{f[x_{k,j}, x_{k,j-1}] + \cdots + f[x_{k,j}, \ldots, x_{k,0}](x_{k,j} - x_{k,j-1}) \cdots (x_{k,j} - x_{k,1}) + \beta_j (x_{k,j} - x_{k,j-1}) \cdots (x_{k,j} - x_{k,0})}, \quad j = 2, \ldots, n,\\[8pt]
x_{k+1} = x_{k,n+1}, \qquad k = 0, 1, \ldots,
\end{array}\right. \tag{2}$$

where $f[u,v,w] := \frac{f[u,v] - f[v,w]}{u - w}$, and so forth. This family generalized the two-point two-parameter Steffensen's method in [13] and the general parametric families in [7] [9] [10] [11] by using n + 1 parameters, and achieved convergence of order $(2^{n+1} - 1 + \sqrt{2^{2(n+1)} + 1})/2$ by using the parameters with the memory on all points in the previous step as the following:

$$\beta_0 = -\frac{1}{N'_{n+1}(x_{k,0})}, \quad \beta_1 = -\frac{N''_{n+2}(x_{k,1})}{2 N'_{n+2}(x_{k,1})}, \quad \beta_j = \frac{N^{(j+1)}_{n+j+1}(x_{k,j})}{(j+1)!}, \quad j = 2, \ldots, n, \tag{3}$$

where $N_{n+j+1}(t)$ is the Newton interpolating polynomial of degree n + j + 1 for $j = 0, \ldots, n$, such that $N_{n+j+1}(x_{k,i}) = f(x_{k,i})$ ($0 \le i \le j$) and $N_{n+j+1}(x_{k-1,i}) = f(x_{k-1,i})$ ($0 \le i \le n$). When n = 4, the efficiency index of (2) without memory is also $16^{1/5} \approx 1.7411$, and the efficiency index of (2) with (3), i.e., with memory, is $\{(31 + \sqrt{1025})/2\}^{1/5} \approx 1.9938$. The topic of basins of attraction was addressed in [14] - [20] for derivative-free methods and various other methods.

In this paper, a general family of derivative-free (n + 1)-point iterative methods and another general family of n-point iterative methods using a first derivative are constructed by inverse interpolation with the memory on the points of the previous step in Sections 2 and 3, respectively. The proposed families are tested by numerical examples for solving nonlinear equations, and the basins of attraction of the methods are illustrated in Section 4. Finally, conclusions are made.

2. General Derivative-Free (n + 1)-Point Iteration with Memory

Let $x_{k,0} = x_k$ be an approximation of the simple root of $f(x)$ and $x_{k,1} = x_{k,0} + \gamma_0 f(x_{k,0})$ be a further approximation. Instead of regarding $f$ as a mapping from the independent variable to the dependent variable, we now read the obtained discrete information inversely, as a mapping $f^{-1}$ from the dependent variable to the independent variable. Then we have an inverse Newton interpolating polynomial of degree one (see, e.g., [2] [3]):

$$Q_1(t) = x_{k,1} + f^{-1}[f(x_{k,1}), f(x_{k,0})](t - f(x_{k,1})),$$

such that $Q_1(f(x_{k,1})) = x_{k,1}$ and $Q_1(f(x_{k,0})) = x_{k,0}$, where $f^{-1}[f(u), f(v)] = \frac{u - v}{f(u) - f(v)}$. The next approximation of the root can be obtained as $Q_1(0)$:

$$x_{k+1} = Q_1(0) = x_{k,0} + \gamma_0 f(x_{k,0}) - \frac{\gamma_0 f(x_{k,0}) f(x_{k,1})}{f(x_{k,1}) - f(x_{k,0})} = x_{k,0} - \frac{f(x_{k,0})}{f[x_{k,0}, x_{k,0} + \gamma_0 f(x_{k,0})]},$$

which obviously gives Steffensen's method and the self-accelerating Steffensen's method.

However, we can use all the information up to the previous step, i.e., the known discrete information of $f^{-1}$ collected in Table 1, here with $n = 1$.

Table 1. The known discrete information of $f^{-1}$ for $j = 1, \ldots, n$ and $k > 0$.

$$\begin{array}{c|cccccc}
t & f(x_{k,j}) & \cdots & f(x_{k,0}) & f(x_{k-1,n}) & \cdots & f(x_{k-1,0})\\ \hline
f^{-1}(t) & x_{k,j} & \cdots & x_{k,0} & x_{k-1,n} & \cdots & x_{k-1,0}
\end{array}$$

We then have the inverse Newton interpolating polynomial of degree three as the following:

$$\begin{aligned}
Q_3(t) = {} & x_{k,1} + f^{-1}[f(x_{k,1}), f(x_{k,0})](t - f(x_{k,1})) + f^{-1}[f(x_{k,1}), f(x_{k,0}), f(x_{k-1,1})](t - f(x_{k,1}))(t - f(x_{k,0}))\\
& + f^{-1}[f(x_{k,1}), f(x_{k,0}), f(x_{k-1,1}), f(x_{k-1,0})](t - f(x_{k,1}))(t - f(x_{k,0}))(t - f(x_{k-1,1})),
\end{aligned}$$

such that $Q_3(f(x_{k,1})) = x_{k,1}$, $Q_3(f(x_{k,0})) = x_{k,0}$, $Q_3(f(x_{k-1,1})) = x_{k-1,1}$ and $Q_3(f(x_{k-1,0})) = x_{k-1,0}$, where $f^{-1}[f(u), f(v)] = \frac{u - v}{f(u) - f(v)}$ and

$$f^{-1}[f(u), f(v), f(w)] = \frac{f^{-1}[f(u), f(v)] - f^{-1}[f(u), f(w)]}{f(v) - f(w)},$$

and so forth.

$Q_3(0)$ approximates the root of $f(x)$ better than $Q_1(0)$ does. We suggest $x_{k+1} = Q_3(0)$ and propose a derivative-free iteration with memory as the following:

$$\left\{\begin{array}{l}
x_{k,0} = x_k, \quad x_{k,1} = x_{k,0} + \gamma_0 f(x_{k,0}),\\[4pt]
x_{k+1} = x_{k,2} = x_{k,1} + f^{-1}[f(x_{k,1}), f(x_{k,0})](-f(x_{k,1})) + f^{-1}[f(x_{k,1}), f(x_{k,0}), f(x_{k-1,1})](-f(x_{k,1}))(-f(x_{k,0}))\\[4pt]
\qquad\qquad + f^{-1}[f(x_{k,1}), f(x_{k,0}), f(x_{k-1,1}), f(x_{k-1,0})](-f(x_{k,1}))(-f(x_{k,0}))(-f(x_{k-1,1})).
\end{array}\right.$$

Furthermore, by the inverse Newton interpolating polynomial of degree $n + j + 1$ interpolating the data in Table 1, we construct a general family of (n + 1)-point iterations with memory as the following:

$$\left\{\begin{array}{l}
x_{k,0} = x_k, \quad x_{k,1} = x_{k,0} + \gamma_0 f(x_{k,0}),\\[4pt]
x_{k,j+1} = x_{k,j} + f^{-1}[f(x_{k,j}), f(x_{k,j-1})](-f(x_{k,j})) + \cdots\\[4pt]
\qquad + f^{-1}[f(x_{k,j}), \ldots, f(x_{k,0})](-f(x_{k,j})) \cdots (-f(x_{k,1}))\\[4pt]
\qquad + f^{-1}[f(x_{k,j}), \ldots, f(x_{k,0}), f(x_{k-1,n})](-f(x_{k,j})) \cdots (-f(x_{k,0})) + \cdots\\[4pt]
\qquad + f^{-1}[f(x_{k,j}), \ldots, f(x_{k,0}), f(x_{k-1,n}), \ldots, f(x_{k-1,0})]\\[4pt]
\qquad\quad \times (-f(x_{k,j})) \cdots (-f(x_{k,0}))(-f(x_{k-1,n})) \cdots (-f(x_{k-1,1})), \quad j = 1, \ldots, n,\\[4pt]
x_{k+1} = x_{k,n+1}, \qquad k = 0, 1, \ldots,
\end{array}\right. \tag{4}$$

where $\gamma_0$ is a constant.
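
To make the construction concrete, here is a minimal Python sketch of one full step of family (4), built on the Newton divided-difference table of $f^{-1}$ over the nodes of Table 1. The helper names (`inv_dd`, `newton_eval`, `step_family4`) are ours, function values are recomputed rather than cached for clarity, and no safeguard against coincident $f$-values is included:

```python
def inv_dd(ts, xs):
    """Newton divided differences of f^{-1}: nodes ts are f-values,
    data xs are x-values.  Returns [f^{-1}[t0], f^{-1}[t0,t1], ...]."""
    n, col, coeffs = len(ts), list(xs), [xs[0]]
    for order in range(1, n):
        col = [(col[i + 1] - col[i]) / (ts[i + order] - ts[i])
               for i in range(n - order)]
        coeffs.append(col[0])
    return coeffs

def newton_eval(t, ts, coeffs):
    """Evaluate the Newton form of the inverse interpolating polynomial at t."""
    result, prod = 0.0, 1.0
    for c, node in zip(coeffs, ts):
        result += c * prod
        prod *= (t - node)
    return result

def step_family4(f, x_k, prev_xs, gamma0, n):
    """One step x_k -> x_{k+1} of family (4);
    prev_xs = [x_{k-1,n}, ..., x_{k-1,0}] are the memory nodes of Table 1."""
    xs = [x_k, x_k + gamma0 * f(x_k)]            # x_{k,0}, x_{k,1}
    for _ in range(n):                           # substeps j = 1, ..., n
        nodes = xs[::-1] + prev_xs               # newest node first, then memory
        ts = [f(x) for x in nodes]
        coeffs = inv_dd(ts, nodes)
        xs.append(newton_eval(0.0, ts, coeffs))  # x_{k,j+1} = Q(0)
    return xs                                    # xs[-1] is x_{k+1} = x_{k,n+1}
```

Since the memory nodes do not exist at $k = 0$, the first step can be run on the current nodes alone, consistent with the remark in Section 4 that the first substep of the first step is a Steffensen step.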

Theorem 1. Let $f: D \subset \mathbb{R} \to \mathbb{R}$ be a sufficiently differentiable function with a simple root $a \in D$, where $D$ is an open set, and let $x_0$ be close enough to $a$. Then the family (4) satisfies the error equation

$$\begin{aligned}
e_{k+1} = {} & \frac{(f^{-1})^{(2n+2)}(0)}{(2n+2)!} (f'(a))^{2n+2} \prod_{i=2}^{n} \left[ \frac{(f^{-1})^{(n+i+1)}(0)}{(n+i+1)!} (f'(a))^{n+i+1} \right]^{2^{n-i}}\\
& \times (1 + \gamma_0 f'(a))^{2^{n-1}} (e_k^2 e_{k-1,n} \cdots e_{k-1,0})^{2^{n-1}} + o\!\left( (e_k^2 e_{k-1,n} \cdots e_{k-1,0})^{2^{n-1}} \right),
\end{aligned} \tag{5}$$

where $e_k = x_k - a$ and $e_{k,j} = x_{k,j} - a$, $k = 0, 1, \ldots$, and achieves convergence of order at least $(3 \cdot 2^{n-1} - 1 + \sqrt{9 \cdot 2^{2n-2} + 2^n + 1})/2$.

Proof. Suppose that $e_{k,j} = C_j e_k^{p_j} + o(e_k^{p_j})$ and $e_{k+1} = C e_k^r + o(e_k^r)$. Then

$$e_{k,j} = C_j C^{p_j} e_{k-1}^{r p_j} + o(e_{k-1}^{r p_j}), \quad e_{k+1} = C^{r+1} e_{k-1}^{r^2} + o(e_{k-1}^{r^2}),$$

and

$$e_{k,1} = x_{k,1} - a = e_{k,0} + \gamma_0 f[x_{k,0}, a] e_{k,0} = (1 + \gamma_0 f[x_{k,0}, a]) e_k = (1 + \gamma_0 f'(a)) C e_{k-1}^r + o(e_{k-1}^r).$$

Noticing the definition of the divided difference, for $j = 1, \ldots, n$, we have

$$\begin{aligned}
e_{k,j+1} = {} & f^{-1}[f(a), f(x_{k,j}), \ldots, f(x_{k,0}), f(x_{k-1,n}), \ldots, f(x_{k-1,0})]\\
& \times (-1)^{n+j+2} f[x_{k,j}, a] \cdots f[x_{k,0}, a]\, f[x_{k-1,n}, a] \cdots f[x_{k-1,0}, a]\; e_{k,j} \cdots e_{k,0}\, e_{k-1,n} \cdots e_{k-1,0}.
\end{aligned}$$

Expanding $e_{k,j}, e_{k,j-1}, \ldots, e_{k,2}$ recursively by the same relation and using $e_{k,1} = (1 + \gamma_0 f[x_{k,0}, a]) e_k$, we obtain

$$\begin{aligned}
e_{k,j+1} = {} & \frac{(f^{-1})^{(n+j+2)}(0)}{(n+j+2)!} (f'(a))^{n+j+2} \prod_{i=2}^{j} \left[ \frac{(f^{-1})^{(n+i+1)}(0)}{(n+i+1)!} (f'(a))^{n+i+1} \right]^{2^{j-i}}\\
& \times (1 + \gamma_0 f'(a))^{2^{j-1}} e_k^{2^j} (e_{k-1,n} \cdots e_{k-1,0})^{2^{j-1}} + o\!\left( e_k^{2^j} (e_{k-1,n} \cdots e_{k-1,0})^{2^{j-1}} \right)\\
= {} & \frac{(f^{-1})^{(n+j+2)}(0)}{(n+j+2)!} (f'(a))^{n+j+2} \prod_{i=2}^{j} \left[ \frac{(f^{-1})^{(n+i+1)}(0)}{(n+i+1)!} (f'(a))^{n+i+1} \right]^{2^{j-i}}\\
& \times (1 + \gamma_0 f'(a))^{2^{j-1}} C^{2^j} e_{k-1}^{2^j r} \left( C_n \cdots C_1 e_{k-1}^{p_n + \cdots + p_1 + 1} \right)^{2^{j-1}} + o\!\left( e_{k-1}^{2^j r + 2^{j-1}(p_n + \cdots + p_1 + 1)} \right).
\end{aligned}$$

So,

$$\begin{cases}
r p_1 = r,\\
r p_{j+1} = 2^j r + 2^{j-1}(p_n + \cdots + p_1 + 1), \quad j = 1, \ldots, n,\\
r = p_{n+1}.
\end{cases}$$

Thus, $r^2 = (3 \cdot 2^{n-1} - 1) r + 2^n$, and $r = (3 \cdot 2^{n-1} - 1 + \sqrt{9 \cdot 2^{2n-2} + 2^n + 1})/2$.

The parameter in the multipoint iteration (4) should be expressed by using the memory as well as possible. According to the asymptotic convergence constant in (5), besides other choices such as

$$\gamma_0 = -f^{-1}[f(x_{k,0}), f(x_{k-1,n})],$$

we choose the expression of the parameter here to be the following:

$$\begin{aligned}
\gamma_0 = {} & -f^{-1}[f(x_{k,0}), f(x_{k-1,n})] - f^{-1}[f(x_{k,0}), f(x_{k-1,n}), f(x_{k-1,n-1})](-f(x_{k-1,n})) - \cdots\\
& - f^{-1}[f(x_{k,0}), f(x_{k-1,n}), \ldots, f(x_{k-1,0})](-f(x_{k-1,n}))(-f(x_{k-1,n-1})) \cdots (-f(x_{k-1,1})),
\end{aligned} \tag{6}$$

which approximates $-f^{-1}[f(x_{k,0}), f(a)] = -1/f[x_{k,0}, a]$ and hence annihilates the factor $1 + \gamma_0 f'(a)$ in (5).
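
Under the same conventions, the self-acceleration (6) is cheap to compute, since it reuses only previously stored points; the following is a minimal sketch reusing the hypothetical helper `inv_dd` from the listing above:

```python
def gamma0_update(f, x_k0, prev_xs):
    """Parameter (6): Newton-form approximation of -f^{-1}[f(x_{k,0}), f(a)]
    ~ -1/f'(a), built on prev_xs = [x_{k-1,n}, ..., x_{k-1,0}]."""
    nodes = [x_k0] + prev_xs
    ts = [f(x) for x in nodes]          # t0 = f(x_{k,0}), t1 = f(x_{k-1,n}), ...
    coeffs = inv_dd(ts, nodes)          # f^{-1}[t0], f^{-1}[t0,t1], ...
    total, prod = 0.0, 1.0
    for i in range(1, len(coeffs)):
        total += coeffs[i] * prod       # f^{-1}[t0,...,t_i] (0-t_1)...(0-t_{i-1})
        prod *= -ts[i]
    return -total
```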

Theorem 2. Let $f: D \subset \mathbb{R} \to \mathbb{R}$ be a sufficiently differentiable function with a simple root $a \in D$, where $D$ is an open set, and let $x_0$ be close enough to $a$. Then the family (4) with the self-acceleration (6) satisfies the error equation:

$$e_{k+1} = \frac{(f^{-1})^{(2n+2)}(0)}{(2n+2)!} (f'(a))^{2n+2} \prod_{i=1}^{n} \left[ \frac{(f^{-1})^{(n+i+1)}(0)}{(n+i+1)!} (f'(a))^{n+i+1} \right]^{2^{n-i}} (e_k e_{k-1,n} \cdots e_{k-1,0})^{2^n} + o\!\left( (e_k e_{k-1,n} \cdots e_{k-1,0})^{2^n} \right), \tag{7}$$

where $e_k = x_k - a$ and $e_{k,j} = x_{k,j} - a$, $k = 0, 1, \ldots$, and achieves convergence of order at least $(2^{n+1} - 1 + \sqrt{2^{2n+2} + 1})/2$.

Proof. By the proof of Theorem 1, for $j = 0$, we have

$$\begin{aligned}
e_{k,1} = {} & (1 + \gamma_0 f[x_{k,0}, a]) e_k = (f^{-1}[f(x_{k,0}), f(a)] + \gamma_0) f[x_{k,0}, a] e_k\\
= {} & f^{-1}[f(x_{k,0}), f(x_{k-1,n}), \ldots, f(x_{k-1,0}), f(a)] (-f[x_{k-1,n}, a] e_{k-1,n}) \cdots (-f[x_{k-1,0}, a] e_{k-1,0})\, f[x_{k,0}, a] e_k\\
= {} & (-1)^{n+1} \frac{(f^{-1})^{(n+2)}(0)}{(n+2)!} (f'(a))^{n+2}\, C C_n \cdots C_1 e_{k-1}^{r + p_n + \cdots + p_1 + 1} + o\!\left( e_{k-1}^{r + p_n + \cdots + p_1 + 1} \right).
\end{aligned}$$

For $j > 0$, replacing the factor $(1 + \gamma_0 f[x_{k,0}, a])^{2^{j-1}}$ in the proof of Theorem 1 by $\left( (f^{-1}[f(x_{k,0}), f(a)] + \gamma_0) f[x_{k,0}, a] \right)^{2^{j-1}}$ and expanding $e_{k,1}$ as above, we have

$$\begin{aligned}
e_{k,j+1} = {} & \frac{(f^{-1})^{(n+j+2)}(0)}{(n+j+2)!} (f'(a))^{n+j+2} \prod_{i=1}^{j} \left[ \frac{(f^{-1})^{(n+i+1)}(0)}{(n+i+1)!} (f'(a))^{n+i+1} \right]^{2^{j-i}}\\
& \times (e_k e_{k-1,n} \cdots e_{k-1,0})^{2^j} + o\!\left( (e_k e_{k-1,n} \cdots e_{k-1,0})^{2^j} \right)\\
= {} & \frac{(f^{-1})^{(n+j+2)}(0)}{(n+j+2)!} (f'(a))^{n+j+2} \prod_{i=1}^{j} \left[ \frac{(f^{-1})^{(n+i+1)}(0)}{(n+i+1)!} (f'(a))^{n+i+1} \right]^{2^{j-i}}\\
& \times C^{2^j} e_{k-1}^{2^j r} \left( C_n \cdots C_1 e_{k-1}^{p_n + \cdots + p_1 + 1} \right)^{2^j} + o\!\left( e_{k-1}^{2^j (r + p_n + \cdots + p_1 + 1)} \right).
\end{aligned}$$

So,

$$\begin{cases}
r p_{j+1} = 2^j (r + p_n + \cdots + p_1 + 1), \quad j = 0, \ldots, n,\\
r = p_{n+1}.
\end{cases}$$

Thus, $r^2 = (2^{n+1} - 1) r + 2^n$, and $r = (2^{n+1} - 1 + \sqrt{2^{2n+2} + 1})/2$.

3. General n-Point Iteration with Memory and a First Derivative

Let $x_{k,0}$ be an approximation of the simple root of $f(x) = 0$. By an inverse interpolation of degree one on this single point, we have

$$f^{-1}(t) \approx Q_1(t; f(x_{k,0}), f(x_{k,0})) = x_{k,0} + f^{-1}[f(x_{k,0}), f(x_{k,0})](t - f(x_{k,0})),$$

where the confluent divided difference at the doubled node is $f^{-1}[f(x_{k,0}), f(x_{k,0})] = (f^{-1})'(f(x_{k,0})) = \frac{1}{f'(x_{k,0})}$ as usual, and so forth. Therefore, by using $Q_1(0; f(x_{k,0}), f(x_{k,0}))$ to approximate the root, we have

$$x_{k+1} = x_{k,0} + f^{-1}[f(x_{k,0}), f(x_{k,0})](-f(x_{k,0})) = x_{k,0} - \frac{f(x_{k,0})}{f'(x_{k,0})},$$

which is Newton's method.

which is Newton’s method. However, by using the most information up to the previous step, we have the inverse interpolation of degree two:

Q 2 ( t ; f ( x k ,0 ) , f ( x k ,0 ) , f ( x k 1,0 ) ) = x k ,0 + f 1 [ f ( x k ,0 ) , f ( x k ,0 ) ] ( t f ( x k ,0 ) ) + f 1 [ f ( x k ,0 ) , f ( x k ,0 ) , f ( x k 1,0 ) ] ( t f ( x k ,0 ) ) 2 .

We suggest a one-point method with memory by Q 2 ( 0 ; f ( x k ,0 ) , f ( x k ,0 ) , f ( x k 1,0 ) ) as follows:

x k + 1 = x k , 0 + f 1 [ f ( x k , 0 ) , f ( x k , 0 ) ] ( f ( x k , 0 ) ) + f 1 [ f ( x k , 0 ) , f ( x k , 0 ) , f ( x k 1 , 0 ) ] ( f ( x k , 0 ) ) 2 . (8)
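
A minimal Python sketch of the one-point method with memory (8) follows; the function name and stopping rule are ours, `df` is the analytic derivative $f'$, and an extra starting point `x_prev` plays the role of the previous iterate at the first step:

```python
def one_point_with_memory(f, df, x0, x_prev, tol=1e-12, max_iter=50):
    """Method (8): a Newton step corrected by a second-order inverse
    divided difference that reuses the previous iterate (memory)."""
    x_old, fx_old, x = x_prev, f(x_prev), x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        d1 = 1.0 / df(x)                     # f^{-1}[f(x), f(x)] = 1/f'(x)
        d01 = (x - x_old) / (fx - fx_old)    # f^{-1}[f(x), f(x_old)]
        d2 = (d01 - d1) / (fx_old - fx)      # f^{-1}[f(x), f(x), f(x_old)]
        x_old, fx_old = x, fx
        x = x + d1 * (-fx) + d2 * fx**2      # x_{k+1} = Q_2(0) as in (8)
    return x
```

Its convergence order is $1 + \sqrt{2}$ (the $n = 1$ case of Theorem 3 below), at one evaluation of $f$ and one of $f'$ per step.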

Furthermore, we construct a family of n-point iterations with the memory on the whole previous step by inverse interpolation as follows:

$$\left\{\begin{array}{l}
x_{k,0} = x_k,\\[4pt]
x_{k,1} = x_{k,0} + f^{-1}[f(x_{k,0}), f(x_{k,0})](-f(x_{k,0})) + f^{-1}[f(x_{k,0}), f(x_{k,0}), f(x_{k-1,n-1})](-f(x_{k,0}))^2 + \cdots\\[4pt]
\qquad + f^{-1}[f(x_{k,0}), f(x_{k,0}), f(x_{k-1,n-1}), \ldots, f(x_{k-1,0})](-f(x_{k,0}))^2 (-f(x_{k-1,n-1})) \cdots (-f(x_{k-1,1})),\\[4pt]
x_{k,j+1} = x_{k,j} + f^{-1}[f(x_{k,j}), f(x_{k,j-1})](-f(x_{k,j})) + \cdots\\[4pt]
\qquad + f^{-1}[f(x_{k,j}), \ldots, f(x_{k,1}), f(x_{k,0}), f(x_{k,0})](-f(x_{k,j})) \cdots (-f(x_{k,1}))(-f(x_{k,0}))\\[4pt]
\qquad + f^{-1}[f(x_{k,j}), \ldots, f(x_{k,0}), f(x_{k,0}), f(x_{k-1,n-1})](-f(x_{k,j})) \cdots (-f(x_{k,1}))(-f(x_{k,0}))^2 + \cdots\\[4pt]
\qquad + f^{-1}[f(x_{k,j}), \ldots, f(x_{k,0}), f(x_{k,0}), f(x_{k-1,n-1}), \ldots, f(x_{k-1,0})]\\[4pt]
\qquad\quad \times (-f(x_{k,j})) \cdots (-f(x_{k,1}))(-f(x_{k,0}))^2 (-f(x_{k-1,n-1})) \cdots (-f(x_{k-1,1})), \quad j = 1, \ldots, n-1,\\[4pt]
x_{k+1} = x_{k,n}, \qquad k = 0, 1, \ldots.
\end{array}\right. \tag{9}$$
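
Implementing (9) only requires the divided-difference table of $f^{-1}$ with the node $f(x_{k,0})$ doubled. The following is a minimal sketch of such a confluent table builder (our naming; it assumes only adjacent doubled nodes, which is all (9) needs, and uses $(f^{-1})'(f(x_{k,0})) = 1/f'(x_{k,0})$ at the repeated node):

```python
def inv_dd_confluent(ts, xs, dinv):
    """Divided differences of f^{-1} allowing adjacent repeated nodes.
    ts: f-values (repeats allowed), xs: x-values,
    dinv[i]: (f^{-1})'(ts[i]) = 1/f'(x_i), used when ts[i+1] == ts[i]."""
    n, col, coeffs = len(ts), list(xs), [xs[0]]
    for order in range(1, n):
        new = []
        for i in range(n - order):
            if ts[i + order] == ts[i]:       # confluent pair (order 1 only here)
                new.append(dinv[i])
            else:
                new.append((col[i + 1] - col[i]) / (ts[i + order] - ts[i]))
        col = new
        coeffs.append(col[0])
    return coeffs
```

Each substep of (9) is then the Newton-form value $Q(0)$ on the nodes $f(x_{k,j}), \ldots, f(x_{k,0}), f(x_{k,0}), f(x_{k-1,n-1}), \ldots, f(x_{k-1,0})$, e.g., via the `newton_eval` helper sketched in Section 2.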

Theorem 3. Let $f: D \subset \mathbb{R} \to \mathbb{R}$ be a sufficiently differentiable function with a simple root $a \in D$, where $D$ is an open set, and let $x_0$ be close enough to $a$. Then the family of n-point iterations (9) satisfies the error equation:

$$e_{k+1} = \frac{(f^{-1})^{(2n+1)}(0)}{(2n+1)!} (f'(a))^{2n+1} \prod_{i=1}^{n-1} \left[ \frac{(f^{-1})^{(n+i+1)}(0)}{(n+i+1)!} (f'(a))^{n+i+1} \right]^{2^{n-1-i}} (e_k^2 e_{k-1,n-1} \cdots e_{k-1,0})^{2^{n-1}} + o\!\left( (e_k^2 e_{k-1,n-1} \cdots e_{k-1,0})^{2^{n-1}} \right), \tag{10}$$

where $e_k = x_k - a$ and $e_{k,j} = x_{k,j} - a$, $k = 0, 1, \ldots$, and achieves convergence of order at least $(3 \cdot 2^{n-1} - 1 + \sqrt{9 \cdot 2^{2n-2} - 2^n + 1})/2$.

Proof. Denote $e_k = x_k - a$ and $e_{k,j} = x_{k,j} - a$, $j = 0, \ldots, n$. Suppose that $e_{k,j} = C_j e_k^{p_j} + o(e_k^{p_j})$ and $e_{k+1} = C e_k^r + o(e_k^r)$. Then

$$e_{k,j} = C_j C^{p_j} e_{k-1}^{r p_j} + o(e_{k-1}^{r p_j}) \quad \text{and} \quad e_{k+1} = C^{r+1} e_{k-1}^{r^2} + o(e_{k-1}^{r^2}).$$

When $n = 1$, we have

$$\begin{aligned}
e_{k,1} = {} & x_{k,0} - a + f^{-1}[f(x_{k,0}), f(x_{k,0})](-f(x_{k,0})) + f^{-1}[f(x_{k,0}), f(x_{k,0}), f(x_{k-1,0})](f(x_{k,0}))^2\\
= {} & f^{-1}[f(x_{k,0}), f(x_{k,0}), f(x_{k-1,0}), f(a)] (f[x_{k,0}, a] e_{k,0})^2 (f[x_{k-1,0}, a] e_{k-1,0})\\
= {} & \frac{(f^{-1})^{(3)}(0)}{3!} (f'(a))^3 e_k^2 e_{k-1,0} + o(e_k^2 e_{k-1,0}) = \frac{(f^{-1})^{(3)}(0)}{3!} (f'(a))^3 C^2 e_{k-1}^{2r+1} + o(e_{k-1}^{2r+1}).
\end{aligned}$$

So, we have

$$\begin{cases}
r p_1 = 2r + 1,\\
r = p_1.
\end{cases}$$

Thus, $r^2 = 2r + 1$ and $r = 1 + \sqrt{2}$.

When $n > 1$, for $j = 0, \ldots, n-1$, we have

$$\begin{aligned}
e_{k,j+1} = {} & f^{-1}[f(a), f(x_{k,j}), \ldots, f(x_{k,1}), f(x_{k,0}), f(x_{k,0}), f(x_{k-1,n-1}), \ldots, f(x_{k-1,0})]\\
& \times (-1)^{n+j+2} f[x_{k,j}, a] \cdots f[x_{k,1}, a]\, (f[x_{k,0}, a])^2\, f[x_{k-1,n-1}, a] \cdots f[x_{k-1,0}, a]\; e_{k,j} \cdots e_{k,1} e_{k,0}^2\, e_{k-1,n-1} \cdots e_{k-1,0}.
\end{aligned}$$

Expanding $e_{k,j}, \ldots, e_{k,1}$ recursively by the same relation, we obtain

$$\begin{aligned}
e_{k,j+1} = {} & \frac{(f^{-1})^{(n+j+2)}(0)}{(n+j+2)!} (f'(a))^{n+j+2} \prod_{i=1}^{j} \left[ \frac{(f^{-1})^{(n+i+1)}(0)}{(n+i+1)!} (f'(a))^{n+i+1} \right]^{2^{j-i}}\\
& \times (e_k^2 e_{k-1,n-1} \cdots e_{k-1,0})^{2^j} + o\!\left( (e_k^2 e_{k-1,n-1} \cdots e_{k-1,0})^{2^j} \right)\\
= {} & \frac{(f^{-1})^{(n+j+2)}(0)}{(n+j+2)!} (f'(a))^{n+j+2} \prod_{i=1}^{j} \left[ \frac{(f^{-1})^{(n+i+1)}(0)}{(n+i+1)!} (f'(a))^{n+i+1} \right]^{2^{j-i}}\\
& \times \left( C^2 C_{n-1} \cdots C_1 e_{k-1}^{2r + p_{n-1} + \cdots + p_1 + 1} \right)^{2^j} + o\!\left( e_{k-1}^{2^j (2r + p_{n-1} + \cdots + p_1 + 1)} \right).
\end{aligned}$$

So,

$$\begin{cases}
r p_{j+1} = 2^j (2r + p_{n-1} + \cdots + p_1 + 1), \quad j = 0, \ldots, n-1,\\
r = p_n.
\end{cases}$$

Thus, $r^2 = (3 \cdot 2^{n-1} - 1) r + 2^{n-1}$, and $r = (3 \cdot 2^{n-1} - 1 + \sqrt{9 \cdot 2^{2n-2} - 2^n + 1})/2$.
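
As a quick sanity check on the three order equations, the following sketch solves them numerically for $n = 1, \ldots, 4$ and reproduces the value $(31 + \sqrt{1025})/2 \approx 31.5078$ quoted in the conclusion for (4) with (6) at $n = 4$:

```python
from math import sqrt

for n in range(1, 5):
    r4  = (3 * 2**(n - 1) - 1 + sqrt(9 * 4**(n - 1) + 2**n + 1)) / 2  # Theorem 1
    r46 = (2**(n + 1) - 1 + sqrt(4**(n + 1) + 1)) / 2                 # Theorem 2
    r9  = (3 * 2**(n - 1) - 1 + sqrt(9 * 4**(n - 1) - 2**n + 1)) / 2  # Theorem 3
    print(f"n={n}: order(4)={r4:.4f}  order(4)+(6)={r46:.4f}  order(9)={r9:.4f}")
```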

4. Numerical Examples

The proposed families (4), (4) with (6), and (9), as well as the existing family (2) with and without memory, are demonstrated on the nonlinear equations in the following examples. For general families of biparametric multipoint iterations with and without memory, as well as other related discussions, please refer to, e.g., [10] [11]. The computational order of convergence is defined by:

$$\mathrm{COC} = \frac{\log(|x_n - a| / |x_{n-1} - a|)}{\log(|x_{n-1} - a| / |x_{n-2} - a|)}.$$
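
In code, COC can be estimated from the last three iterates (a minimal sketch; the signature is ours):

```python
from math import log

def coc(x_nm2, x_nm1, x_n, a):
    """Computational order of convergence from the last three iterates."""
    return (log(abs(x_n - a) / abs(x_nm1 - a))
            / log(abs(x_nm1 - a) / abs(x_nm2 - a)))
```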

Example 1. The numerical results for $f(x) = e^{x-2} - 1$ in Table 2 agree with the convergence rates in Theorems 1-3.

Example 2. The numerical results in Table 3 are for the nonlinear functions:

$$f_1(x) = x^2 - e^x - 3x + 1, \quad a = 0, \quad x_0 = 0.2,$$
$$f_2(x) = e^{x^2} + \sin x - 1, \quad a = 0, \quad x_0 = 0.25,$$
$$f_3(x) = e^{-x} - \arctan x - 1, \quad a = 0, \quad x_0 = 0.2.$$

Example 3. The basins of attraction of the existing family (2) with (3) based on direct Newton interpolation, and of the family (4) with (6) and the family (9) based on inverse interpolation, for solving $f(z) = z^3 - 1 = 0$ with the stopping criterion $\min\{|e_n|, |f(z_n)|\} < 10^{-6}$ on a square region of the complex plane, are shown in Figure 1. The colors "r", "g", "b", "c", "w", "y", "m" and "k" are assigned to the iteration counts {0, 1, 2}, 3, 4, {5, 6}, {7, 8}, {9, 10, 11}, {12, 13, 14, 15} and the default (non-convergence), respectively. The basin of attraction of a method based on inverse interpolation may be a little smaller, but not much worse, than that based on direct interpolation, since at the first step (k = 0) the first substep is a Steffensen step for (4) with (6) and a Newton step for (9), respectively.

Figure 1. Basins of attraction of (2) with (3), (4) with (6), and (9), in the 1st, 2nd and 3rd rows, respectively (left: n = 1, middle: n = 2, right: n = 3).
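
Plots in the style of Figure 1 can be produced with a sketch like the following. The grid, resolution and color mapping are our own choices; `method_step(z, z_prev)` stands for one full step of whichever iteration is tested and must handle `z_prev is None` at $k = 0$, e.g., by a Steffensen or Newton first step as remarked above:

```python
import numpy as np
import matplotlib.pyplot as plt

def basin_counts(method_step, grid, max_iter=16, tol=1e-6):
    """Iteration counts for f(z) = z**3 - 1 over a complex grid."""
    counts = np.zeros(grid.shape, dtype=int)
    for idx, z0 in np.ndenumerate(grid):
        z, z_prev, k = z0, None, 0
        while k < max_iter and abs(z**3 - 1) >= tol:  # criterion |f(z_k)| < tol
            z, z_prev = method_step(z, z_prev), z
            k += 1
        counts[idx] = k
    return counts

# Example usage on [-2, 2] x [-2, 2] with a user-supplied step function:
# xs = np.linspace(-2, 2, 400)
# grid = xs[None, :] + 1j * xs[:, None]
# plt.imshow(basin_counts(my_step, grid), extent=[-2, 2, -2, 2]); plt.show()
```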

Table 2. Numerical results for $f(x) = e^{x-2} - 1$, $a = 2$, $x_0 = 1.8$.

Table 3. Numerical results for $f_i(x)$, $i = 1, 2, 3$.

5. Conclusion

In this paper, we construct a derivative-free family (4) of (n + 1)-point iterations and another family (9) of n-point iterations using a first derivative by inverse interpolatory polynomials with memory for finding the simple root of a nonlinear equation. The general families (4) and (9) use n + 1 functional evaluations per step with the memory on the previous step to achieve super convergence of orders $(3 \cdot 2^{n-1} - 1 + \sqrt{9 \cdot 2^{2n-2} + 2^n + 1})/2$ and $(3 \cdot 2^{n-1} - 1 + \sqrt{9 \cdot 2^{2n-2} - 2^n + 1})/2$, respectively. The general family (4) with (6) achieves super convergence of order $(2^{n+1} - 1 + \sqrt{2^{2n+2} + 1})/2$, which is the same as that of the existing family (2) with (3). When n = 4, as a special case, both of them achieve super convergence of order $(31 + \sqrt{1025})/2 \approx 31.5078$ and have the efficiency index $\{(31 + \sqrt{1025})/2\}^{1/5} \approx 1.9938$. The application of the memory is handier in the proposed families than that of (3) in (2). The basins of attraction of the related multipoint iterations with memory are also demonstrated. The effectiveness and practical convenience of the proposed families are confirmed by the numerical examples.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Ortega, J.M. and Rheinboldt, W.C. (1970) Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York.
[2] Traub, J.F. (1964) Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs, New Jersey.
[3] Kung, H.T. and Traub, J.F. (1974) Optimal Order of One-Point and Multipoint Iteration. Journal of the ACM, 21, 634-651.
https://doi.org/10.1145/321850.321860
[4] Neta, B. (1981) On a Family of Multipoint Methods for Non-Linear Equations. International Journal of Computer Mathematics, 9, 353-361.
https://doi.org/10.1080/00207168108803257
[5] Petković, M.S., Neta, B., Petković, L.D. and Džunić, J. (2013) Chapter 2—Two-Point Methods. Multipoint Methods for Solving Nonlinear Equations, Academic Press, Cambridge, 27-83.
https://doi.org/10.1016/B978-0-12-397013-8.00002-9
[6] Neta, B. and Petković, M.S. (2010) Construction of Optimal Order Nonlinear Solvers Using Inverse Interpolation. Applied Mathematics and Computation, 217, 2448-2455.
https://doi.org/10.1016/j.amc.2010.07.045
[7] Džunić, J. and Petković, M.S. (2012) On Generalized Multipoint Root-Solvers with Memory. Journal of Computational and Applied Mathematics, 236, 2909-2920.
https://doi.org/10.1016/j.cam.2012.01.035
[8] Petković, M.S. (2010) On a General Class of Multipoint Root-Finding Methods of High Computational Efficiency. SIAM Journal on Numerical Analysis, 47, 4402-4414.
https://doi.org/10.1137/090758763
[9] Zheng, Q., Li, J. and Huang, F. (2011) An Optimal Steffensen-Type Family for Solving Nonlinear Equations. Applied Mathematics and Computation, 217, 9592-9597.
https://doi.org/10.1016/j.amc.2011.04.035
[10] Džunić, J. and Petković, M.S. (2014) On Generalized Biparametric Multipoint Root Finding Methods with Memory. Journal of Computational and Applied Mathematics, 255, 362-375.
https://doi.org/10.1016/j.cam.2013.05.013
[11] Zheng, Q., Zhao, X. and Liu, Y.F. (2015) An Optimal Biparametric Multipoint Family and Its Self-Acceleration with Memory for Solving Nonlinear Equations. Algorithms, 8, 1111-1120.
https://doi.org/10.3390/a8041111
[12] Wang, X.F. and Zhang, T. (2015) Efficient N-Point Iterative Methods with Memory for Solving Nonlinear Equations. Numerical Algorithms, 70, 357-375.
https://doi.org/10.1007/s11075-014-9951-8
[13] Zheng, Q., Huang, F.X., Guo, X.H. and Feng, X.L. (2013) Doubly-Accelerated Steffensen's Methods with Memory and Their Applications on Solving Nonlinear ODEs. Journal of Computational Analysis and Applications, 15, 886-891.
[14] Zheng, Q., Wang, J., Zhao, P. and Zhang, L. (2009) A Steffensen-Like Method and Its Higher-Order Variants. Applied Mathematics and Computation, 214, 10-16.
https://doi.org/10.1016/j.amc.2009.03.053
[15] Chicharro, F., Cordero, A., Gutiérrez, J.M. and Torregrosa, J.R. (2013) Complex Dynamics of Derivative-Free Methods for Nonlinear Equations. Applied Mathematics and Computation, 219, 7023-7035.
https://doi.org/10.1016/j.amc.2012.12.075
[16] Chicharro, F., Cordero, A. and Torregrosa, J. (2015) Dynamics and Fractal Dimension of Steffensen-Type Methods. Algorithms, 8, 271-279.
https://doi.org/10.3390/a8020271
[17] Amat, S., Busquier, S. and Plaza, S. (2004) Dynamics of a Family of Third-Order Iterative Methods That Do Not Require Using Second Derivatives. Applied Mathematics and Computation, 154, 735-746.
https://doi.org/10.1016/S0096-3003(03)00747-1
[18] Amat, S., Busquier, S., Bermúdez, C. and Plaza, S. (2012) On Two Families of High Order Newton Type Methods. Applied Mathematics Letters, 25, 2209-2217.
https://doi.org/10.1016/j.aml.2012.06.004
[19] Scott, M., Neta, B. and Chun, C. (2011) Basin Attractors for Various Methods. Applied Mathematics and Computation, 218, 2584-2599.
https://doi.org/10.1016/j.amc.2011.07.076
[20] Neta, B., Scott, M. and Chun, C. (2012) Basins of Attraction for Several Methods to Find Simple Roots of Nonlinear Equations. Applied Mathematics and Computation, 218, 10548-10556.
https://doi.org/10.1016/j.amc.2012.04.017

Copyright © 2024 by authors and Scientific Research Publishing Inc. This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.