Application of the Economization of Power Series to Solving the Schrödinger Equation for the Gaussian Potential via the Asymptotic Iteration Method

Abstract

This paper presents economized power series for the Gaussian function. The economization is accomplished by utilizing the “usual” and the “shifted” Chebyshev polynomials of the first kind. The resulting economized series are applied to the solution of the radial Schrödinger equation with the attractive Gaussian potential via the asymptotic iteration method (AIM). The obtained bound state energies are compared with those given by the same method when the Taylor expansion is used to approximate the Gaussian potential. We also compare them with those obtained from the exact Hamiltonian diagonalization on a finite basis of Coulomb Sturmian functions.


1. Introduction

The easiest and most obvious way to obtain a polynomial approximation to a given function $f(x)$ is to use a truncated Taylor series of the form $\sum_{j=0}^{N} a_j x^j$ or, more generally, $\sum_{j=0}^{N} a_j (x - x_0)^j$ [1] [2]. In this truncation approach, the more terms are retained, the higher the accuracy of the approximation.

However, this method suffers from an uneven distribution of errors in the approximation: the closer the evaluated point is to the origin of expansion, the higher the accuracy, and vice versa. This means that, for a desired level of accuracy, points far from the origin will need substantially more terms than those close to the origin of expansion. For computational purposes, however, it may be undesirable to require as many as $N + 1$ terms when N is large. Indeed, it may be unnecessary to use more than a few terms, especially if interest in the function $f(x)$ is restricted to a small range $x_0 \le x \le x_1$ of the argument.

The powers of a variable x appeared originally in purely algebraic problems [2]. With the development of calculus, the great importance of power expansions became evident. The expansion discovered by Taylor in 1715 and by Maclaurin in 1742 allows one to predict the evolution of a function from its value and all its derivatives at one particular point [3]. The "Taylor series" thus became one of the cornerstones of analytical research and was particularly useful in establishing the existence of solutions of differential equations [2]. It should be recalled, however, that the Taylor expansion suffers from slow convergence for points far from the origin of expansion. This problem can be alleviated by using minimization methods such as the least-squares (LS) algorithm [3] [4]. In this case, the function $f(x)$ is approximated by a finite-degree polynomial $\sum_{k=0}^{N} c_k x^k$ whose coefficients $c_k$ are selected such that

$$J = \int_a^b w(x)\left( f(x) - \sum_{k=0}^{N} c_k x^k \right)^2 dx \qquad (1)$$

is a minimum, where $w(x)$ is an arbitrary weighting function and $[a, b]$ is the interval on which the function is approximated. The minimization in Equation (1) yields [1]

$$\sum_{k=0}^{N} c_k \int_a^b w(x)\, x^{k} x^{n}\, dx = \int_a^b w(x) f(x)\, x^{n}\, dx, \quad n = 0, 1, 2, \ldots, N \qquad (2)$$

We have to indicate that the system of equations (2) is difficult to solve because it requires the computation of a full two-dimensional matrix. The reason is that the function $f(x)$ is approximated in the non-orthogonal power-series basis $(1, x, x^2, \ldots)$. This can be avoided if the function is approximated in an orthogonal basis. That is, if the orthogonal basis is given by $P_0(x), P_1(x), P_2(x), \ldots$, then the coefficients $c_k$ are determined by

$$c_k \int_a^b w(x)\, [P_k(x)]^2\, dx = \int_a^b w(x) f(x) P_k(x)\, dx, \quad k = 0, 1, 2, \ldots \qquad (3)$$

Using an orthogonal basis causes the off-diagonal terms to vanish, and can occasionally lead to the so-called "economized power series". As a side note, we should indicate that much attention has also been paid to the problem of inventing methods of summing a series in such a way that it becomes convergent, even though the original series, if added term by term, increases to infinity [5].
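
For instance, with the Chebyshev weight $w(x) = (1 - x^2)^{-1/2}$ and $P_k(x) = T_k(x)$, Equation (3) gives closed-form coefficients for the Gaussian function studied below (a standard result, quoted here for illustration):

$$c_0 = e^{-1/2} I_0\!\left(\tfrac12\right), \qquad c_{2k} = \frac{2}{\pi}\int_{-1}^{1}\frac{e^{-x^2}\, T_{2k}(x)}{\sqrt{1-x^2}}\, dx = 2(-1)^k e^{-1/2} I_k\!\left(\tfrac12\right), \quad k \ge 1,$$

where $I_k$ denotes the modified Bessel function of the first kind; numerically, $c_2 \approx -0.3128$, in agreement with the coefficient of $T_2(x)$ in the expansion (28) below.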

Economization of power series is a procedure that replaces a very accurate (or even exact) polynomial approximation $\sum_{j=0}^{N} a_j x^j$ of degree N by an "economized" polynomial $\sum_{j=0}^{n} e_j x^j$ of smaller degree n such that, in the range of interest, the absolute error introduced by the replacement is less than some acceptable value E [6]:

$$\left| \sum_{j=0}^{N} a_j x^j - \sum_{j=0}^{n} e_j x^j \right| < E, \quad x_0 \le x \le x_1 \qquad (4)$$

The procedure of economization, or telescoping as it is sometimes called [2] [6], is accomplished by utilizing the properties of Chebyshev polynomials of the first kind [2] [6] [7] [8], among which is the minimax property [9] [10]. According to the minimax principle, the Chebyshev approximation of a given degree is the one that minimizes the maximum error.
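
Concretely, the minimax property states that, among all monic polynomials of degree n, the scaled Chebyshev polynomial $2^{1-n} T_n(x)$ has the smallest possible maximum magnitude on $[-1, 1]$:

$$\max_{-1 \le x \le 1} \left| x^n + b_{n-1} x^{n-1} + \cdots + b_0 \right| \;\ge\; \max_{-1 \le x \le 1} \left| 2^{1-n} T_n(x) \right| = 2^{1-n}$$

for every choice of the coefficients $b_j$. This is why truncating a Chebyshev expansion spreads the error nearly uniformly over the interval, instead of concentrating it near the endpoints as a truncated Taylor series does.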

We have to emphasize that the economization algorithm has several distinct phases [6] [11] [12]. More precisely, the economization of power series proceeds in four basic steps:

Step 1. Expand $f(x)$ in a Taylor series valid on the interval $[-1, 1]$. Truncate this series to obtain a polynomial

$$P_N(x) = a_0 + a_1 x + \cdots + a_N x^N, \qquad (5)$$

which approximates $f(x)$ within a prescribed error tolerance E for all x in $[-1, 1]$.

Step 2. Expand $P_N(x)$ in a Chebyshev series,

$$P_N(x) = \tfrac12 c_0 + c_1 T_1(x) + \cdots + c_N T_N(x), \qquad (6)$$

making use of the following matrix equation [8]:

$$\begin{bmatrix} \tfrac12 \\ x \\ 2x^2 \\ 2^2 x^3 \\ 2^3 x^4 \\ 2^4 x^5 \\ 2^5 x^6 \\ 2^6 x^7 \\ 2^7 x^8 \end{bmatrix} = \begin{bmatrix} 1 & & & & & & & & \\ 0 & 1 & & & & & & & \\ 2 & 0 & 1 & & & & & & \\ 0 & 3 & 0 & 1 & & & & & \\ 6 & 0 & 4 & 0 & 1 & & & & \\ 0 & 10 & 0 & 5 & 0 & 1 & & & \\ 20 & 0 & 15 & 0 & 6 & 0 & 1 & & \\ 0 & 35 & 0 & 21 & 0 & 7 & 0 & 1 & \\ 70 & 0 & 56 & 0 & 28 & 0 & 8 & 0 & 1 \end{bmatrix} \begin{bmatrix} \tfrac12 T_0(x) \\ T_1(x) \\ T_2(x) \\ T_3(x) \\ T_4(x) \\ T_5(x) \\ T_6(x) \\ T_7(x) \\ T_8(x) \end{bmatrix} \qquad (7)$$

Step 3. Truncate this Chebyshev series to a smaller number of terms by retaining the first n terms, choosing n so that the maximum error given by

$$|f(x) - M_n(x)| \le E + |c_{n+1}| + \cdots + |c_N| \qquad (8)$$

is acceptable, where $M_n(x)$ denotes the resulting smaller Chebyshev series:

$$M_n(x) = \tfrac12 c_0 + c_1 T_1(x) + \cdots + c_n T_n(x). \qquad (9)$$

Step 4. Replace each $T_j(x)$ ($j = 0, 1, \ldots, n$) by its polynomial form, which leads to

$$f(x) \approx e_0 + e_1 x + \cdots + e_n x^n, \qquad (10)$$

using the following matrix equation [8]:

$$\begin{bmatrix} \tfrac12 T_0(x) \\ T_1(x) \\ T_2(x) \\ T_3(x) \\ T_4(x) \\ T_5(x) \\ T_6(x) \\ T_7(x) \\ T_8(x) \end{bmatrix} = \begin{bmatrix} 1 & & & & & & & & \\ 0 & 1 & & & & & & & \\ -2 & 0 & 1 & & & & & & \\ 0 & -3 & 0 & 1 & & & & & \\ 2 & 0 & -4 & 0 & 1 & & & & \\ 0 & 5 & 0 & -5 & 0 & 1 & & & \\ -2 & 0 & 9 & 0 & -6 & 0 & 1 & & \\ 0 & -7 & 0 & 14 & 0 & -7 & 0 & 1 & \\ 2 & 0 & -16 & 0 & 20 & 0 & -8 & 0 & 1 \end{bmatrix} \begin{bmatrix} \tfrac12 \\ x \\ 2x^2 \\ 2^2 x^3 \\ 2^3 x^4 \\ 2^4 x^5 \\ 2^5 x^6 \\ 2^6 x^7 \\ 2^7 x^8 \end{bmatrix} \qquad (11)$$

If necessary in step 1, i.e., when we have an interval $[a, b]$ other than $[-1, 1]$, make a transformation of the independent variable so that the expansion is valid on that interval, by means of the expression [6] [11]

$$y = \frac{x - (b+a)/2}{(b-a)/2}. \qquad (12)$$

In this case, it is necessary to change the variable back to x after step 4, making use of the expression [13]

$$x = \tfrac12 (b - a)\, y + \tfrac12 (b + a). \qquad (13)$$

For the special domain $0 \le x \le 1$, we can write

$$x = (y + 1)/2, \qquad y = 2x - 1. \qquad (14)$$

In this domain, the Chebyshev polynomials are denoted $T_n^*(x)$ and defined by [14]: $T_0^*(x) = \tfrac12$ and $T_n^*(x) = T_n(2x - 1)$ for $n \ge 1$, $0 \le x \le 1$. They are called shifted Chebyshev polynomials of the first kind.
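
As an illustration of Steps 1-4, the following Python sketch (our addition; the computations reported below were done in Maple 18) economizes the degree-14 Maclaurin series of $\exp(-x^2)$ down to degree 10 on $[-1, 1]$, using NumPy's conversions between the monomial and Chebyshev bases:

```python
# A sketch of Steps 1-4 for f(x) = exp(-x^2) on [-1, 1], with N = 14 and
# n = 10; numpy's poly2cheb/cheb2poly play the roles of Equations (7) and (11).
from math import factorial

import numpy as np
from numpy.polynomial import chebyshev as C
from numpy.polynomial import polynomial as P

N, n = 14, 10

# Step 1: truncated Maclaurin series of exp(-x^2)
a = np.zeros(N + 1)
for j in range(N // 2 + 1):
    a[2 * j] = (-1) ** j / factorial(j)

# Step 2: expand P_N(x) in a Chebyshev series (Equation (6))
c = C.poly2cheb(a)

# Step 3: truncate, keeping terms up to T_n; the discarded coefficients
# bound the additional error, as in Equation (8)
extra = np.sum(np.abs(c[n + 1:]))

# Step 4: re-expand the truncated Chebyshev series in powers of x (Equation (10))
e = C.cheb2poly(c[:n + 1])

x = np.linspace(-1.0, 1.0, 2001)
print("discarded Chebyshev mass:", extra)
print("max error of economized series:", np.max(np.abs(np.exp(-x**2) - P.polyval(x, e))))
```

Running this sketch should reproduce a maximum error of roughly $2.3 \times 10^{-5}$, in line with the figures quoted for the economized series $\tilde{f}_G^{\{10\}}(x)$ in Section 2.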

Note that Equations (7) and (11) can, in general, be summarized as [8]

$$\tilde{x} = C\, \tilde{T} \qquad (15)$$

and

$$\tilde{T} = P\, \tilde{x}, \qquad (16)$$

respectively, where:

・ $\tilde{T}$ and $\tilde{x}$ are $(n+1)$-element vectors, i.e.,

$$\tilde{T}^{t} = \left[ \tfrac12 T_0(x), T_1(x), T_2(x), \ldots, T_n(x) \right] \qquad (17)$$

and

$$\tilde{x}^{t} = \left[ \tfrac12, x, 2x^2, \ldots, 2^{n-1} x^n \right]; \qquad (18)$$

・ P and C are lower triangular matrices such that [8]

$$P = [P_{ij}]_{i,j=0}^{n}, \quad C = [C_{ij}]_{i,j=0}^{n}, \quad P_{00} = 1, \quad P_{10} = 0, \quad P_{11} = 1; \qquad (19)$$

$$P_{20} = -2, \quad P_{21} = 0, \quad P_{22} = 1; \qquad (20)$$

$$\left.\begin{array}{l} P_{i,0} = -P_{i-2,0} \\ P_{i,j} = P_{i-1,j-1} - P_{i-2,j}, \quad j = 1, \ldots, i \end{array}\right\} \quad i = 3, \ldots, n; \qquad (21)$$

$$C_{ii} = 1, \quad i = 0, \ldots, n; \qquad (22)$$

$$\left.\begin{array}{l} C_{i,0} = 2 C_{i-1,1} \\ C_{i,j} = C_{i-1,j-1} + C_{i-1,j+1}, \quad j = 1, \ldots, i \end{array}\right\} \quad i = 1, \ldots, n-1; \qquad (23)$$

$$C_{n,0} = 2 C_{n-1,1}; \quad C_{n,j} = C_{n-1,j-1} + C_{n-1,j+1}, \quad j = 1, \ldots, n-1. \qquad (24)$$
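
The recursions (19)-(24) are straightforward to implement; the short Python sketch below (our illustration) builds P and C and checks that they are inverses of one another, as Equations (15)-(16) require:

```python
# A sketch of the recursions (19)-(24): build the lower triangular matrices
# P (Chebyshev -> scaled powers) and C (scaled powers -> Chebyshev) and check
# that they are mutually inverse, as Equations (15)-(16) require.
import numpy as np

def build_P(n):
    P = np.zeros((n + 1, n + 1))
    P[0, 0] = 1.0                                   # Equation (19)
    P[1, 1] = 1.0
    P[2, 0], P[2, 2] = -2.0, 1.0                    # Equation (20)
    for i in range(3, n + 1):
        P[i, 0] = -P[i - 2, 0]                      # Equation (21)
        for j in range(1, i + 1):
            P[i, j] = P[i - 1, j - 1] - P[i - 2, j]
    return P

def build_C(n):
    C = np.eye(n + 1)                               # Equation (22)
    for i in range(1, n + 1):
        C[i, 0] = 2.0 * C[i - 1, 1]                 # Equations (23)-(24)
        for j in range(1, i):
            C[i, j] = C[i - 1, j - 1] + C[i - 1, j + 1]
    return C

n = 8
assert np.allclose(build_C(n) @ build_P(n), np.eye(n + 1))
print(build_C(n))   # reproduces the matrix of Equation (7)
```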

The main purpose of this paper is to develop a technique for generating a polynomial approximation of the Gaussian function which, among all polynomial approximations of the same degree, has a very small maximum error. This technique is based on the telescoping procedure for power series proposed by Lanczos [2] and on polynomial fitting of the error in the approximation, with the objective of economizing a sufficiently accurate truncated Maclaurin series of the Gaussian function. The resulting economized series will be used to compute bound state energies associated with the attractive Gaussian potential via the Asymptotic Iteration Method (AIM). Similar computations were made by Mutuk [15], who applied the AIM to the Gaussian potential using a truncated Maclaurin series to approximate the function $\exp(-r^2)$.

The rest of this paper is organized as follows. In Section 2, we apply the procedure of economization to the Gaussian function by using firstly the Chebyshev polynomials of the first kind $T_j(x)$, and secondly the shifted polynomials $T_j^*(x)$. For each economized series, the exact error is calculated and fitted by a power series having the same degree as the initial non-economized finite power series. The new finite series obtained by adding the approximate error to the associated economized series in turn undergoes the procedure of economization, which leads to a much more efficient economized power series. The originality of our work lies precisely in this repeated application of the economization method, which alleviates one of the most harmful aspects of the telescoping method, i.e., the low accuracy of the economized series around the origin of expansion [16]. Section 3 contains a brief introduction to the AIM for the Gaussian potential, using the economized series obtained in Section 2 to approximate the Gaussian function. We also present and comment on our results concerning bound state energies of the attractive Gaussian potential for a given well depth, and compare them with those given by exact Hamiltonian diagonalization on a finite basis of Coulomb Sturmian functions. The conclusion is given in Section 4.

2. Gaussian Function Economization

We here consider the Gaussian function of the form

$$f_G(x) = \exp(-x^2) \qquad (25)$$

and the interval [−1, 1] for the independent variable x. The Maclaurin series expansion of this function is given by

$$\exp(-x^2) = \sum_{j=0}^{\infty} \frac{(-1)^j}{j!}\, x^{2j}. \qquad (26)$$

We denote by $f_G^{\{N\}}(x)$ the Nth-degree truncated Maclaurin series of $f_G(x)$, and we choose $N = 14$. We have:

$$f_G^{\{14\}}(x) = \sum_{j=0}^{7} \frac{(-1)^j}{j!}\, x^{2j}. \qquad (27)$$

Expanding $f_G^{\{14\}}(x)$ in a Chebyshev series, we obtain

$$f_G^{\{14\}}(x) = \frac{739773}{1146880} - \frac{205029}{655360} T_2(x) + \frac{114127}{2949120} T_4(x) - \frac{18943}{5898240} T_6(x) + \frac{293}{1474560} T_8(x) - \frac{61}{5898240} T_{10}(x) + \frac{1}{2949120} T_{12}(x) - \frac{1}{41287680} T_{14}(x), \qquad (28)$$

where we have used the relations given in Equation (7). Of course, if we expand the Chebyshev polynomials again in terms of power series of x, we obtain the same polynomial back.

Let us truncate the Chebyshev series (28) by neglecting the last two terms, and denote by $\tilde{f}_G^{\{10\}}(x)$ the resulting expression, i.e.,

$$\tilde{f}_G^{\{10\}}(x) = \frac{739773}{1146880} - \frac{205029}{655360} T_2(x) + \frac{114127}{2949120} T_4(x) - \frac{18943}{5898240} T_6(x) + \frac{293}{1474560} T_8(x) - \frac{61}{5898240} T_{10}(x) \qquad (29)$$

Replacing each $T_j(x)$ ($j = 2, 4, \ldots, 10$) by its polynomial form (see Equation (11)), we obtain the tenth-degree economized power series of $\exp(-x^2)$ associated with the finite series $f_G^{\{14\}}(x)$, which is a polynomial of degree 14:

$$\exp(-x^2) \approx \tilde{f}_G^{\{10\}}(x) = \frac{2752511}{2752512} - \frac{2949041}{2949120} x^2 + \frac{184201}{368640} x^4 - \frac{15227}{92160} x^6 + \frac{99}{2560} x^8 - \frac{61}{11520} x^{10} \qquad (30)$$

Figure 1 shows four errors, $E_G^{\{10\}}(x)$, $E_G^{\{12\}}(x)$, $E_G^{\{14\}}(x)$ and $\tilde{E}_G^{\{10\}}(x)$, in the approximation of $\exp(-x^2)$, calculated as the differences between the Gaussian function and the truncated power series $f_G^{\{10\}}(x)$, $f_G^{\{12\}}(x)$, $f_G^{\{14\}}(x)$ and $\tilde{f}_G^{\{10\}}(x)$, i.e., $E_G^{\{10\}}(x) = \exp(-x^2) - f_G^{\{10\}}(x)$, $E_G^{\{12\}}(x) = \exp(-x^2) - f_G^{\{12\}}(x)$, $E_G^{\{14\}}(x) = \exp(-x^2) - f_G^{\{14\}}(x)$ and $\tilde{E}_G^{\{10\}}(x) = \exp(-x^2) - \tilde{f}_G^{\{10\}}(x)$. The definition of the function $E_G^{e\{10\}}(x)$, whose graph is also shown in Figure 1, will be given below.

Figure 1. Plots of different errors in the approximation of $\exp(-x^2)$. (a) Graph of $E_G^{\{10\}}(x)$; (b) graph of $E_G^{\{12\}}(x)$; (c) curves of $\tilde{E}_G^{\{10\}}(x)$ (solid line) and $E_G^{\{14\}}(x)$ (symbols); (d) graph of $E_G^{e\{10\}}(x)$.

We see that the tenth-degree economized power series $\tilde{f}_G^{\{10\}}(x)$ approximates $\exp(-x^2)$ on $[-1, 1]$ better than the tenth-degree Maclaurin series, and nearly as well as the twelfth- and fourteenth-degree Maclaurin series $f_G^{\{12\}}(x)$ and $f_G^{\{14\}}(x)$. Indeed, its maximum error (at $x = \pm 1$) is $2.26131782 \times 10^{-5}$, whereas the error of approximation equals $1.21277450477565 \times 10^{-3}$ for $f_G^{\{10\}}(x)$, $1.7611438411323396 \times 10^{-4}$ for $f_G^{\{12\}}(x)$ and $2.229831429946445 \times 10^{-5}$ for $f_G^{\{14\}}(x)$. We "economize" in the sense that we get about the same precision with a lower-degree polynomial.

We have to add that a much more efficient economized power series can be obtained by first adding to the series $\tilde{f}_G^{\{10\}}(x)$ its associated error fitted by a high-degree polynomial, and then applying the procedure of economization to the resulting polynomial. To this end, we discretize the problem on the interval [0, 1] and evaluate the function $\tilde{E}_G^{\{10\}}(x)$ at $X_k = kh$ (for $k = 0, 1, \ldots, p-1$), where p is the number of mesh points and h the step size, thus creating two p-component real vectors X and Y such that $X_k = kh$ and $Y_k = \tilde{E}_G^{\{10\}}(X_k)$, $k = 0, 1, 2, \ldots, p-1$, where $X_k$ and $Y_k$ denote the k-th components of the vectors X and Y, respectively. We then appeal to the Maple 18 software (the Fit command) to construct the (2K)th-degree polynomial $P_{2K}(x)$ of the form $\sum_{j=0}^{K} B_j x^{2j}$, $K > 5$, that best fits the above set of data points $(X_k, Y_k)$, $k = 0, 1, 2, \ldots, p-1$. It is worth noting that, in Maple, the Fit command fits a model function to given data by minimizing the least-squares error. In the case we are concerned with, the calling sequence is Fit($P_{2K}(x)$, X, Y, x), where $P_{2K}(x)$ is to be replaced by $\sum_{j=0}^{K} B_j x^{2j}$ and $B_0, B_1, \ldots, B_K$ are adjustable parameters to be computed. With $K = 7$ and $p = 101$, we find:

$$\begin{cases} B_0 = 3.629622950984207597588 \times 10^{-7} \\ B_1 = -2.673950577493475324288 \times 10^{-5} \\ B_2 = 3.21731897337007 \times 10^{-4} \\ B_3 = -1.4340314605314449260836 \times 10^{-3} \\ B_4 = 2.95693533428559792089 \times 10^{-3} \\ B_5 = -2.95236680605017687949 \times 10^{-3} \\ B_6 = 1.279508864767511361964 \times 10^{-3} \\ B_7 = -1.22788990618675818847685 \times 10^{-4} \end{cases} \qquad (31)$$
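
For readers without Maple, the following Python sketch reproduces the spirit of the Fit call with an ordinary least-squares solve; the array e10 holds the coefficients of Equation (30), and the computed B should approximate the values in (31):

```python
# A Python stand-in for the Maple Fit call: least-squares fit of the error
# E~_G{10}(x) by an even polynomial sum_j B_j x^(2j) with K = 7 and p = 101.
import numpy as np
from numpy.polynomial import polynomial as P

# coefficients of the economized series (30), lowest power first
e10 = np.array([2752511/2752512, 0, -2949041/2949120, 0, 184201/368640, 0,
                -15227/92160, 0, 99/2560, 0, -61/11520])

K, p = 7, 101
X = np.linspace(0.0, 1.0, p)                 # X_k = k h with h = 0.01
Y = np.exp(-X**2) - P.polyval(X, e10)        # Y_k = E~_G{10}(X_k)

V = np.vander(X**2, K + 1, increasing=True)  # columns (x^2)^0, ..., (x^2)^7
B, *_ = np.linalg.lstsq(V, Y, rcond=None)
print(B)                                     # should approximate Equation (31)
```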

Applying the procedure of economization to the fourteenth-degree polynomial $\tilde{f}_G^{\{10\}}(x) + P_{14}(x)$, we find a new tenth-degree economized series, which we denote by $f_G^{e\{10\}}(x)$:

$$f_G^{e\{10\}}(x) = 0.9999995697531816685 - 0.9999686090106535367159\, x^2 + 0.4996268919513314237\, x^4 - 0.1650294823388696094779\, x^6 + 0.03835801149191082515380786\, x^8 - 0.00510734148478025040218\, x^{10} \qquad (32)$$

The function $E_G^{e\{10\}}(x)$ defined by the expression

$$E_G^{e\{10\}}(x) = \exp(-x^2) - f_G^{e\{10\}}(x) \qquad (33)$$

is shown in Figure 1. It is clear that the series $f_G^{e\{10\}}(x)$ is more accurate than all the above power series.

So far in this section, we have used the $T_j(x)$ polynomials to economize the fourteenth-degree Maclaurin series of the Gaussian function on the domain $-1 \le x \le 1$. In what follows, the economization will be done on the interval [0, 1] using the shifted Chebyshev polynomials of the first kind $T_j^*(x)$. We get

$$\begin{aligned} f_G^{\{14\}}(x) = f_G^{(*)\{14\}}(x) = {} & \frac{24725514565}{33822867546} - \frac{671360027}{2013265920} T_1^*(x) - \frac{1528406863}{32212254720} T_2^*(x) \\ & + \frac{431317481}{24159191040} T_3^*(x) + \frac{5406881}{16106127360} T_4^*(x) - \frac{747037}{1610612736} T_5^*(x) \\ & + \frac{1148881}{96636764160} T_6^*(x) + \frac{70579}{9395240960} T_7^*(x) - \frac{4397}{8053063680} T_8^*(x) \\ & - \frac{1547}{12079595520} T_9^*(x) - \frac{1}{2147483648} T_{10}^*(x) - \frac{7}{8053063680} T_{11}^*(x) \\ & - \frac{19}{48318382080} T_{12}^*(x) - \frac{1}{24159191040} T_{13}^*(x) - \frac{1}{676457349120} T_{14}^*(x) \end{aligned} \qquad (34)$$

where the asterisk in brackets in the expression $f_G^{(*)\{14\}}(x)$ refers to the use of the $T_j^*(x)$ polynomials in the approximation of the Gaussian function $f_G(x)$.

It is worth noting that Equation (34) is obtained by using the following expression [14]:

$$\begin{bmatrix} \tfrac12 (4x)^0 \\ \tfrac12 (4x)^1 \\ \vdots \\ \tfrac12 (4x)^N \end{bmatrix} = \tilde{A} \begin{bmatrix} T_0^* \\ T_1^* \\ \vdots \\ T_N^* \end{bmatrix} \qquad (35)$$

where:

・ $N = 14$;

・ $T_0^* = \tfrac12$, and thus $1 = 2 T_0^*$;

・ $\tilde{A} = [a_{ij}]_{i,j=0}^{N}$ is an $(N+1) \times (N+1)$ lower triangular matrix such that [14]

$$a_{ii} = 1, \quad i = 0, 1, \ldots, N; \qquad a_{ij} = 0, \quad j > i; \qquad (36)$$

$$a_{i+1,0} = 2(a_{i,0} + a_{i,1}), \quad i = 0, 1, 2, \ldots, N-1 \qquad (37)$$

and

$$a_{ij} = a_{i-1,j-1} + 2 a_{i-1,j} + a_{i-1,j+1}, \quad i, j = 1, 2, \ldots, N. \qquad (38)$$

It follows immediately from Equation (35) that

$$\begin{bmatrix} T_0^* \\ T_1^*(x) \\ \vdots \\ T_N^*(x) \end{bmatrix} = \tilde{A}^{-1} \begin{bmatrix} \tfrac12 (4x)^0 \\ \tfrac12 (4x)^1 \\ \vdots \\ \tfrac12 (4x)^N \end{bmatrix} \qquad (39)$$

Since $|T_j^*(x)| \le 1$ for all j and all $x \in [0, 1]$, the last six terms on the right-hand side of Equation (34) are rather tiny in magnitude ($1.280672 \times 10^{-7}$, $4.65661287 \times 10^{-10}$, $8.69234403 \times 10^{-10}$, $3.93225087 \times 10^{-10}$, $4.13921144 \times 10^{-11}$ and $1.4782898 \times 10^{-12}$, respectively). We can therefore chop off these terms (keeping terms up to $T_8^*(x)$) without risk of appreciable change in the final results, and then re-expand back into a monomial series. Doing this gives the following eighth-degree polynomial:

$$\begin{aligned} \tilde{f}_G^{(*)\{8\}}(x) = {} & \frac{338228631227}{338228674560} + \frac{10451}{503316480}\, x - \frac{335730191}{335544320}\, x^2 + \frac{1073473}{188743680}\, x^3 + \frac{1974601}{4194304}\, x^4 \\ & + \frac{330473}{3932160}\, x^5 - \frac{14501951}{47185920}\, x^6 + \frac{457969}{3440640}\, x^7 - \frac{4397}{245760}\, x^8 \end{aligned} \qquad (40)$$

where the asterisk in brackets refers to the fact that the polynomial $\tilde{f}_G^{(*)\{8\}}(x)$ results from expanding a series of shifted Chebyshev polynomials of the first kind into a monomial series. We remark that coefficients of all orders between 0 and 8 are present in the series (40), contrary to the result obtained when the economization is done with the $T_j(x)$ polynomials.
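
The shifted-Chebyshev economization just described can be sketched in Python by combining the change of variable (13)-(14) with ordinary Chebyshev conversions, since $T_j^*(x) = T_j(2x - 1)$ (our illustration; NumPy's convention uses $T_0$ rather than $\tfrac12 T_0$, which only rescales the first coefficient):

```python
# A sketch of economization on [0, 1] with shifted Chebyshev polynomials:
# substitute x = (y + 1)/2 (Equation (14)), expand in ordinary Chebyshev
# polynomials of y, truncate after T_8*, and change the variable back.
from math import factorial

import numpy as np
from numpy.polynomial import chebyshev as C
from numpy.polynomial import polynomial as P

a = np.zeros(15)
for j in range(8):
    a[2 * j] = (-1) ** j / factorial(j)      # Maclaurin series of exp(-x^2), Eq. (27)

# substitute x = (y + 1)/2 to move from x in [0, 1] to y in [-1, 1]
ay = np.zeros(1)
for k, ak in enumerate(a):
    ay = P.polyadd(ay, ak * P.polypow([0.5, 0.5], k))

c = C.poly2cheb(ay)                          # shifted-Chebyshev coefficients, cf. Eq. (34)
e_y = C.cheb2poly(c[:9])                     # keep terms up to T_8*, as in the text

# change the variable back: y = 2x - 1 (Equation (14))
e_x = np.zeros(1)
for k, ek in enumerate(e_y):
    e_x = P.polyadd(e_x, ek * P.polypow([-1.0, 2.0], k))

x = np.linspace(0.0, 1.0, 2001)
print(np.max(np.abs(np.exp(-x**2) - P.polyval(x, e_x))))   # cf. the series (40)
```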

Expanding in a shifted Chebyshev series the fourteenth-degree polynomial obtained by adding to the series (40) its associated error, i.e., $\exp(-x^2) - \tilde{f}_G^{(*)\{8\}}(x)$, fitted by a polynomial of degree 14, and then truncating the resulting series by keeping terms up to $T_{10}^*(x)$, we obtain, after re-expanding back into a monomial series, a tenth-degree economized polynomial of the form

$$f_G^{e(*)\{10\}}(x) = d_0 + d_1 x + d_2 x^2 + \cdots + d_{10} x^{10}, \qquad (41)$$

whose coefficients $d_j$ are exact rational numbers. Note that this expression can be used to approximate $\exp(-x^2)$ on $[-1, 1]$ by multiplying all terms of odd exponent by the sign of the independent variable x.

In Figure 2, we show the plot of the error in the approximation of $\exp(-x^2)$ based on the use of Equation (41), together with the plots of $\tilde{E}_G^{(*)\{8\}}(x) = \exp(-x^2) - \tilde{f}_G^{(*)\{8\}}(x)$, $E_G^{e\{10\}}(x)$ and $E_G^{\{14\}}(x)$.

Figure 2. Comparison of $E_G^{e(*)\{10\}}(x)$ with other errors in the approximation of $\exp(-x^2)$. (a) Graph of $E_G^{\{14\}}(x)$; (b) graph of $\tilde{E}_G^{(*)\{8\}}(x)$; (c) graph of $E_G^{e\{10\}}(x)$; (d) graph of $E_G^{e(*)\{10\}}(x)$.

We see that the $T_j^*(x)$ polynomials economize the Gaussian function $\exp(-x^2)$ more efficiently than the $T_j(x)$ ones, but there is a price to pay: for a given degree $2p$, the $T_j^*(x)$ polynomials lead to an economized polynomial whose number of terms, i.e., $2p + 1$, is almost twice the number of terms, $p + 1$, in the economized power series obtained when the $T_j(x)$ polynomials are used.

3. Application to the Asymptotic Iteration Method for the Gaussian Potential

3.1. Basic Equations of the Asymptotic Iteration Method (AIM)

In this subsection, we briefly outline the asymptotic iteration method; the details can be found in [17] and [18].

The AIM was introduced to solve second-order homogeneous linear differential equations of the form [17] [18] [19]

$$y''(x) = \lambda_0(x)\, y'(x) + s_0(x)\, y(x) \qquad (42)$$

where $\lambda_0(x) \neq 0$ and $s_0(x)$ have sufficiently many continuous derivatives in some interval, not necessarily bounded. The differential Equation (42) has the general solution [17] [18]

$$y(x) = \exp\left( -\int^{x} \alpha(\xi)\, d\xi \right) \left[ c_2 + c_1 \int^{x} \exp\left( \int^{\xi} \big( \lambda_0(\eta) + 2\alpha(\eta) \big)\, d\eta \right) d\xi \right] \qquad (43)$$

where

$$\frac{s_k(x)}{\lambda_k(x)} = \frac{s_{k-1}(x)}{\lambda_{k-1}(x)} \equiv \alpha(x) \qquad (44)$$

for sufficiently large k.

In Equation (44), $\lambda_k(x)$ and $s_k(x)$ are defined as follows [17] [18]:

$$\lambda_k(x) = \lambda'_{k-1}(x) + s_{k-1}(x) + \lambda_0(x)\, \lambda_{k-1}(x), \quad k = 1, 2, 3, \ldots; \qquad (45)$$

$$s_k(x) = s'_{k-1}(x) + s_0(x)\, \lambda_{k-1}(x), \quad k = 1, 2, 3, \ldots \qquad (46)$$

The convergence (quantization) condition of the method, as given in (44), can also be written as follows [15] [20]:

$$\delta_k(x) = \lambda_k(x)\, s_{k-1}(x) - \lambda_{k-1}(x)\, s_k(x) = 0, \quad k = 1, 2, 3, \ldots \qquad (47)$$

For a given radial potential such as the Gaussian one, the radial Schrödinger equation is converted to the form of Equation (42). Once this form has been obtained, it is easy to determine $s_0(x)$ and $\lambda_0(x)$, and to calculate $s_k(x)$ and $\lambda_k(x)$ by using Equations (45) and (46). The energy eigenvalues are then obtained from the quantization condition given by Equation (47).

3.2. Asymptotic Iteration Method for Gaussian Potential

We here consider the Gaussian potential of the form

$$V(r) = -A \exp(-\lambda r^2), \quad r \in [0, +\infty) \qquad (48)$$

where A > 0 is the depth of the potential and λ > 0 determines its width. The radial Schrödinger equation (SE) for a particle with mass m that moves in three-dimensional space under the effect of the attractive Gaussian potential (48) can be written as

$$\frac{d^2 R_{nl}(r)}{dr^2} + \left( \varepsilon + \tilde{A}\, \exp(-r^2) - \frac{l(l+1)}{r^2} \right) R_{nl}(r) = 0 \qquad (49)$$

where the radial variable has been rescaled as $r \to \sqrt{\lambda}\, r$, $\tilde{A} = 2mA/(\hbar^2 \lambda)$ and $\varepsilon = 2mE/(\hbar^2 \lambda)$, E being the energy of the particle. This is a second-order linear differential equation, but its coefficients are not polynomial because of the exponential term. In order to solve this equation via the AIM, we should first model it with an equation having polynomial coefficients and then convert that model equation to the form of Equation (42). Mutuk [15] solved Equation (49) for $\lambda = 1$ via the AIM by suggesting a wave function of the form

$$R_{nl}(r) = r^{l+1} \exp(-\beta r^2)\, f_{nl}(r) \qquad (50)$$

and making use of the tenth-degree truncated Maclaurin series of $\exp(-r^2)$, i.e.,

$$\exp(-r^2) \approx 1 - r^2 + \frac{r^4}{2} - \frac{r^6}{6} + \frac{r^8}{24} - \frac{r^{10}}{120}, \qquad (51)$$

to construct a model of this equation with polynomial coefficients. He obtained a second-order linear homogeneous differential equation for the factor $f_{nl}(r)$ of the general form

$$\frac{d^2 f_{nl}(r)}{dr^2} = \lambda_0(r) \frac{d f_{nl}(r)}{dr} + s_0(r)\, f_{nl}(r) \qquad (52)$$

where

$$\lambda_0(r) = -\left( \frac{2(l+1)}{r} - 4\beta r \right), \qquad (53)$$

$$s_0(r) = \tilde{A} \left( \frac{r^{10}}{120} - \frac{r^8}{24} + \frac{r^6}{6} - \frac{r^4}{2} + r^2 - 1 \right) - \varepsilon + 2\beta \left( 2l - 2\beta r^2 + 3 \right). \qquad (54)$$

We have to emphasize that, in the AIM, energy eigenvalues are calculated from the quantization condition given by Equation (47). For each iteration, this equation depends on two variables, ε and r. The eigenvalues calculated by means of $\delta_k(r) = 0$ should, however, be independent of the choice of r. In practice this is the case for most iteration sequences, but the choice of r can be critical to the speed of convergence of the eigenvalues, as well as to the stability of the process [17] [20]. In the case of the attractive Gaussian potential, a suitable choice of r minimizes the potential or maximizes the radial wave function given by Equation (50).

In the AIM, the wave function can be written as

$$R(r) = f(r)\, g(r), \qquad (55)$$

where $f(r)$ represents the asymptotic behavior. In our case, $f(r) = r^{l+1} \exp(-\beta r^2)$. Hence, we have taken $r_0 = \sqrt{(l+1)/(2\beta)}$, which is the value of r that maximizes this asymptotic factor of the wave function. β is an arbitrary parameter related to the convergence.
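
The following Python/SymPy sketch illustrates the whole AIM procedure for the attractive Gaussian well (our illustration; the paper's computations were carried out in Maple 18 with 50-digit precision, and we assume units in which $2m = \hbar = 1$, so that $\tilde{A} = A$ and $\varepsilon = E$):

```python
# A minimal AIM sketch for the attractive Gaussian well. Swapping Eq. (32)
# or Eq. (41) in for g below changes only the polynomial model of exp(-r^2).
import sympy as sp

r, eps = sp.symbols('r epsilon')
l, beta, A = 0, sp.Integer(10), sp.Integer(400)

g = sum((-1)**j * r**(2*j) / sp.factorial(j) for j in range(6))   # Eq. (51)

lam = [4*beta*r - 2*(l + 1)/r]                       # lambda_0(r), Eq. (53)
s = [2*beta*(2*l - 2*beta*r**2 + 3) - eps - A*g]     # s_0(r), Eq. (54)

r0 = sp.sqrt(sp.Rational(l + 1, 2)/beta)             # r_0 = sqrt((l+1)/(2 beta))

for k in range(1, 21):
    lam.append(sp.expand(sp.diff(lam[k-1], r) + s[k-1] + lam[0]*lam[k-1]))  # Eq. (45)
    s.append(sp.expand(sp.diff(s[k-1], r) + s[0]*lam[k-1]))                 # Eq. (46)
    delta = (lam[k]*s[k-1] - lam[k-1]*s[k]).subs(r, r0)                     # Eq. (47)
    try:
        print(k, sp.nsolve(delta, eps, -342))        # root near the ground state E_00
    except ValueError:
        pass                                         # no nearby root at low orders yet
```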

The convergence of the eigenvalues for the cases $\beta = 5$, $\beta = 10$, $\beta = 15$, $\beta = 20$ and $\beta = 25$ is reported in Table 1, where we compute the eigenvalue associated with $n = 0$ and $l = 0$ by means of the AIM using the Maple 18 symbolic computation software. It is clear that the eigenvalues converge for all five values of β whatever the method used to approximate the Gaussian potential, contrary to the results obtained by Mutuk [15], in which the eigenvalues associated with $\beta = 25$ start to diverge when the iteration number exceeds 25. We think that the discrepancy between our results and Mutuk's for large values of β is due to the fact that, during our implementation of the AIM, the precision level was set to 50 digits, which means that our results have been computed with high precision and are more accurate.

Table 1. The convergence of the eigenvalues of the attractive Gaussian potential for different β values and various approximations of $\exp(-r^2)$, with $n = 0$ and $l = 0$. k is the iteration number. Potential parameters are $A = 400$ atomic units (a.u.) and $\lambda = 1$.

It is clear from Table 1 that the approximation of $\exp(-r^2)$ based on the use of the $T_j(r)$ polynomials has the advantage that the energies converge significantly faster towards the accurate value of $E_{00}$, i.e., $-341.8952145612$, than those calculated using the two other approximations.

Table 2 presents the results for a few values of n and l, computed by means of 50 iterations using Equations (32) and (41) to approximate $\exp(-r^2)$ (third and fourth columns). The energy eigenvalues are obtained with $\beta = 10$ because, with this value of β, the solutions are in many cases very close after a few iterations. Our results are compared with those of Mutuk [15] (second column) and with those calculated numerically by the spectral Galerkin method (SGM) [10] [21] [22] [23], based on expanding the radial wave function on a finite basis of Coulomb Sturmian functions defined by [24] [25]:

$$S_{n,l}^{\kappa}(r) = N_{n,l}^{\kappa}\, r^{l+1}\, e^{-\kappa r}\, L_{n-l-1}^{2l+1}(2\kappa r) \qquad (56)$$

where $L_m^{\alpha}(x)$ denotes the associated Laguerre polynomial and n the principal quantum number. The normalization constant $N_{n,l}^{\kappa}$, given by

$$N_{n,l}^{\kappa} = \sqrt{\frac{\kappa}{n}}\, (2\kappa)^{l+1} \left[ \frac{(n-l-1)!}{(n+l)!} \right]^{1/2}, \qquad (57)$$

is obtained from the normalization condition $\int_0^{\infty} [S_{n,l}^{\kappa}(r)]^{*}\, S_{n,l}^{\kappa}(r)\, dr = 1$.

Table 2. Comparison of the energy eigenvalues of the Gaussian potential in a.u. obtained by using the AIM for various approximations of $\exp(-r^2)$ with those calculated by means of the SGM, for different values of n and l. We have chosen $N_s = 500$ as the number of Coulomb Sturmian functions and $\kappa = 0.75$.
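
For completeness, the Coulomb Sturmian basis functions (56)-(57) are easy to evaluate with SciPy's generalized Laguerre polynomials; the sketch below (our illustration) also checks the normalization condition by quadrature:

```python
# A sketch of the Coulomb Sturmian functions (56)-(57) with SciPy's
# generalized Laguerre polynomials, plus a quadrature check of the
# normalization condition.
import numpy as np
from scipy.integrate import simpson
from scipy.special import factorial, genlaguerre

def sturmian(n, l, kappa, r):
    # normalization constant N_{n,l}^kappa, Equation (57)
    N = np.sqrt(kappa / n) * (2*kappa)**(l + 1) \
        * np.sqrt(factorial(n - l - 1) / factorial(n + l))
    return N * r**(l + 1) * np.exp(-kappa*r) * genlaguerre(n - l - 1, 2*l + 1)(2*kappa*r)

kappa = 0.75
r = np.linspace(0.0, 80.0, 20001)
S = sturmian(3, 1, kappa, r)
print(simpson(S*S, x=r))    # should be close to 1
```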

Note that spectral methods have the advantage of the "exponential convergence" property, depending on the size of the basis, which makes them more accurate than local methods. Unlike finite-difference methods, spectral methods are global methods, where the computation at any given point depends not only on information at neighboring points, but also on information from the entire domain. We note that the results associated with $f_G^{e(*)\{10\}}(r)$ approach the numerical eigenvalues reasonably well for all of the chosen values of n and l.

4. Conclusion

In this work, we have applied the procedure of economization to the Gaussian function $f_G(x) = \exp(-x^2)$ by using Chebyshev polynomials of the first kind on the one hand and the shifted ones on the other, with an application to the solution of the radial Schrödinger equation for the attractive Gaussian well via the Asymptotic Iteration Method (AIM). We have seen that the use of the $T_j^*(x)$ polynomials leads to more efficient economized power series of $\exp(-x^2)$, which can be used to model accurately the radial Schrödinger equation for the Gaussian potential with a second-order linear differential equation solvable by means of the AIM.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Bekir, E. (2019) Efficient Chebyshev Economization for Elementary Functions. Communications Faculty of Sciences University of Ankara Series A2-A3, 61, 33-56.
http://communications.science.ankara.edu.tr/index.php?series=A2-A3
[2] Lanczos, C. (1956) Applied Analysis. Prentice-Hall, Inc., Englewood Cliffs.
[3] Conte, S.D. and de Boor, C. (1980) Elementary Numerical Analysis: An Algorithmic Approach. Third Edition, McGraw-Hill, Inc., New York.
[4] Atkinson, K. and Han, W. (2004) Elementary Numerical Analysis. Third Edition, John Wiley & Sons, Inc., Hoboken.
[5] Guilpin, Ch. (1999) Manuel de Calcul Numérique Appliqué. EDP Sciences.
[6] Spanier, J. and Oldham, K.B. (1987) An Atlas of Functions. Hemisphere Publishing Corporation/Springer-Verlag, New York.
[7] Mason, J.C. and Handscomb, D.C. (2002) Chebyshev Polynomials. First Edition, Chapman and Hall/CRC, London. https://doi.org/10.1201/9781420036114
[8] Horner, J.S. (1977) Chebyshev Polynomials in the Solution of Ordinary and Partial Differential Equations. Doctor of Philosophy Thesis, Department of Mathematics, University of Wollongong, Wollongong. http://ro.uow.edu.au/theses/1543
[9] Hamming, R.W. (1973) Numerical Methods for Scientists and Engineers. Second Edition, Dover Publications, Inc., New York.
[10] Fletcher, C.A.J. (1984) Computational Galerkin Methods. Springer-Verlag, New York.
https://doi.org/10.1007/978-3-642-85949-6_2
[11] Press, W.H., Teukolsky, S.A., Vetterling, W.T. and Flannery, B.P. (2007) Numerical Recipes: The Art of Scientific Computing. Third Edition, Cambridge University Press, New York.
[12] Unruh, P.F. (1968) Chebyshev Approximations. Master's Report, Kansas State University, Manhattan. https://ia800706.us.archive.org/25/items/chebyshevapproxi00unru/chebyshevapproxi00unru.pdf
[13] Mudde, M.H. (2017) Chebyshev Approximation. Master's Thesis, University of Groningen, Faculty of Science and Engineering, Groningen.
[14] López-Bonilla, J., Ramírez-García, E. and Sasa-Caraveo, C. (2010) Power Expansion in Terms of Shifted Chebyshev-Lanczos Polynomials. Revista Notas de Matemática, 6, 18-22.
[15] Mutuk, H. (2019) Asymptotic Iteration and Variational Methods for Gaussian Potential. Pramana—Journal of Physics, 92, 66. https://doi.org/10.1007/s12043-019-1729-z
[16] Ralston, A. and Rabinowitz, P. (2001) A First Course in Numerical Analysis. Second Edition, Dover Publications, Mineola.
[17] Ciftci, H., Hall, R.L. and Saad, N. (2003) Asymptotic Iteration Method for Eigenvalue Problems. Journal of Physics A: Mathematical and General, 36, 11807-11816.
https://doi.org/10.1088/0305-4470/36/47/008
[18] Ciftci, H., Hall, R.L. and Saad, N. (2005) Construction of Exact Solutions to Eigenvalue Problems by Asymptotic Iteration Method. Journal of Physics A: Mathematical and General, 38, 1147-1155.
https://doi.org/10.1088/0305-4470/38/5/015
[19] Ciftci, H., Hall, R.L. and Saad, N. (2005) Perturbation Theory in a Framework of Iteration Methods. Physics Letters A, 340, 388-396. https://doi.org/10.1016/j.physleta.2005.04.030
[20] Karakoc, M. and Boztosun, I. (2006) Accurate Iteration and Perturbative Solutions of the Yukawa Potential. International Journal of Modern Physics E, 15, 1253-1262.
https://doi.org/10.1142/S0218301306004806
[21] Mortensen, M. (2017) Shenfun: Automating the Spectral Galerkin Method. In: Skallerud, B.H. and Andersson, H.I., Eds., Ninth National Conference on Computational Mechanics, International Center for Numerical Methods in Engineering (CIMNE), 273-298.
[22] Shen, J. (1994) Efficient Spectral-Galerkin Method I. Direct Solvers of Second- and Fourth-Order Equations Using Legendre Polynomials. SIAM Journal on Scientific Computing, 15, 1489-1505.
https://doi.org/10.1137/0915089
[23] Shen, J. (1995) Efficient Spectral-Galerkin Method II. Direct Solvers of Second- and Fourth-Order Equations Using Chebyshev Polynomials. SIAM Journal on Scientific Computing, 16, 74-87.
https://doi.org/10.1137/0916006
[24] Pont, M., Proulx, D. and Shakeshaft, R. (1991) Numerical Integration of Time-Dependent Schrödinger Equation for an Atom in a Radiation Field. Physical Review A, 44, 4486-4492.
https://doi.org/10.1103/PhysRevA.44.4486
[25] Nyengeri, H., Nizigiyimana, R., Ndenzako, E., Bigirimana, F., Niyonkuru, D. and Girukwishaka, A. (2018) Application of the Fröbenius Method to the Schrödinger Equation for a Spherically Symmetric Hyperbolic Potential. Open Access Library Journal, 5, e4950.
https://doi.org/10.4236/oalib.1104950
