Application of the Economization of Power Series to Solving the Schrödinger Equation for the Gaussian Potential via the Asymptotic Iteration Method

1. Introduction
The easiest and most obvious way to obtain a polynomial approximation to a given function f(x) is to use a truncated Taylor series of the form

f(x) \approx \sum_{k=0}^{N} \frac{f^{(k)}(0)}{k!} x^k,

or, more generally,

f(x) \approx \sum_{k=0}^{N} \frac{f^{(k)}(x_0)}{k!} (x - x_0)^k

[1] [2]. In this truncation method, the more terms are retained, the higher the accuracy of the approximation.
However, this method suffers from an uneven distribution of errors in the approximation. The closer the evaluated point is to the origin of expansion, the higher the accuracy, and vice versa. This means that, for a desired level of accuracy, points far from the origin need substantially more terms than those close to the origin of expansion. For computational purposes, however, it may be undesirable to require that many terms when N is large. Indeed, it may be unnecessary to use more than a few terms, especially if interest in the function f(x) is restricted to a small range of the argument.
The powers of a variable x appeared originally in purely algebraic problems [2]. With the development of calculus, the great importance of power expansions became evident. The expansion discovered by Taylor in 1715 and by Maclaurin in 1742 allows predicting the evolution of a function from its value and all its derivatives at one particular point [3]. The "Taylor series" thus became one of the cornerstones of analytical research and was particularly useful in establishing the existence of solutions of differential equations [2]. It should be recalled, however, that the Taylor expansion suffers from slow convergence for points far from the origin of expansion. This problem can be alleviated by using minimization methods such as the least-squares (LS) algorithm [3] [4]. In this case, the function f(x) is approximated with a finite-degree polynomial \sum_{k=0}^{n} a_k x^k whose coefficients a_k are selected such that

S = \int_a^b w(x) \left[ f(x) - \sum_{k=0}^{n} a_k x^k \right]^2 dx   (1)

is minimum, where w(x) is an arbitrary weighting function and [a, b] is the interval in which the function is approximated. The minimization in Equation (1) yields [1]

\sum_{k=0}^{n} a_k \int_a^b w(x)\, x^{j+k}\, dx = \int_a^b w(x)\, f(x)\, x^j\, dx, \quad j = 0, 1, \ldots, n.   (2)
We have to point out that the system of Equations (2) is difficult to solve because it requires the computation of a full two-dimensional matrix. The reason is that the function f(x) is approximated with the non-orthogonal power-series basis \{x^k\}. This can be avoided if the function is approximated with an orthogonal basis. That is, if the orthogonal basis is given by \{\phi_k(x)\}, then the coefficients a_k are determined independently by

a_k = \frac{\int_a^b w(x) f(x) \phi_k(x)\, dx}{\int_a^b w(x) \phi_k^2(x)\, dx}.   (3)
Using an orthogonal basis causes the off-diagonal terms to vanish, and can occasionally lead to the so-called "economized power series". As a side note, we should mention that much attention has also been paid to the problem of inventing methods of summing a series in such a way that it becomes convergent, even though the original series, if added term by term, increases to infinity [5].
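The practical difference between the two bases can be seen directly in the Gram matrices entering Equation (2). The following numpy sketch uses w(x) = 1 on [−1, 1], an illustrative basis size n = 10, and Legendre polynomials as a stand-in for a generic orthogonal basis:

```python
import numpy as np

# Gram matrix G_jk = integral of x^j * x^k over [-1, 1] for the monomial basis
# (weight w(x) = 1): a full, Hilbert-like matrix, severely ill-conditioned.
n = 10
G_mono = np.array([[2.0 / (j + k + 1) if (j + k) % 2 == 0 else 0.0
                    for k in range(n)] for j in range(n)])

# Gram matrix of the Legendre polynomials P_j, orthogonal on [-1, 1]:
# integral of P_j * P_k equals 2/(2j+1) on the diagonal and 0 elsewhere,
# so the normal equations decouple and each coefficient is obtained
# independently, as in Equation (3).
G_leg = np.diag([2.0 / (2 * j + 1) for j in range(n)])

print(np.linalg.cond(G_mono), np.linalg.cond(G_leg))
```

The monomial Gram matrix is dense and badly conditioned, while the Legendre one is diagonal; this is precisely why Equation (3) replaces the coupled linear system of Equation (2).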
Economization of power series is a procedure that replaces a very accurate (or even exact) polynomial approximation P_N(x) of degree N by an "economized" polynomial p_n(x) of a smaller degree n such that, in the range [a, b] of interest, the absolute error introduced by the replacement is less than some acceptable value E [6]:

\max_{a \le x \le b} |P_N(x) - p_n(x)| \le E.   (4)
The procedure of economization, or telescoping as it is sometimes called [2] [6], is accomplished by exploiting the properties of Chebyshev polynomials of the first kind [2] [6] [7] [8], among which is the minimax property [9] [10]. According to the minimax principle, Chebyshev approximations are the approximations which minimize the maximum error.
We have to emphasize that the economization algorithm has several distinct phases [6] [11] [12]. More precisely, the economization of power series has four basic steps:

Step 1. Expand f(x) in a Taylor series valid on the interval [-1, 1]. Truncate this series to obtain a polynomial

P_N(x) = \sum_{k=0}^{N} a_k x^k,   (5)

which approximates f(x) within a prescribed tolerance error E for all x in [-1, 1].

Step 2. Expand P_N(x) in a Chebyshev series,

P_N(x) = \sum_{k=0}^{N} b_k T_k(x),   (6)

making use of the following matrix equation [8]:

b = P a.   (7)

Step 3. Truncate this Chebyshev series to a smaller number of terms by retaining the first n + 1 terms, choosing n so that the maximum error, which (since |T_k(x)| \le 1 on [-1, 1]) is bounded by

\sum_{k=n+1}^{N} |b_k|,   (8)

is acceptable, where S_n(x) denotes the resulting small Chebyshev series:

S_n(x) = \sum_{k=0}^{n} b_k T_k(x).   (9)

Step 4. Replace S_n(x) by its polynomial form, which leads to

p_n(x) = \sum_{k=0}^{n} c_k x^k,   (10)

using the following matrix equation [8]:

c = C b.   (11)
If necessary in Step 1, i.e., when we have an interval [a, b] other than [-1, 1], make a transformation of the independent variable so that the expansion is valid on that interval, by means of the expression [6] [11]

x = \frac{(b - a)\, t + (b + a)}{2}, \quad -1 \le t \le 1.   (12)

In this case, it is necessary to change the variable back to x after Step 4, making use of the expression [13]

t = \frac{2x - (b + a)}{b - a}.   (13)
For the special domain [0, 1], we can write

x = \frac{t + 1}{2}, \quad -1 \le t \le 1.   (14)

In this domain, the Chebyshev polynomials are denoted T_n^*(x) and defined by [14]: T_0^*(x) = 1, T_1^*(x) = 2x - 1, and

T_{n+1}^*(x) = (4x - 2)\, T_n^*(x) - T_{n-1}^*(x)

for n \ge 1, 0 \le x \le 1; equivalently, T_n^*(x) = T_n(2x - 1). They are called shifted Chebyshev polynomials of the first kind.
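These definitions are easy to check symbolically. The following sympy sketch builds the T_n^* from the three-term recurrence and verifies the identity T_n^*(x) = T_n(2x - 1); the degree range is illustrative:

```python
import sympy as sp

x = sp.symbols('x')

# Shifted Chebyshev polynomials from the recurrence
# T*_0 = 1, T*_1 = 2x - 1, T*_{n+1} = (4x - 2) T*_n - T*_{n-1}.
ts = [sp.Integer(1), 2 * x - 1]
for n in range(1, 6):
    ts.append(sp.expand((4 * x - 2) * ts[n] - ts[n - 1]))

# They coincide with T_n(2x - 1): the ordinary Chebyshev polynomials
# composed with the linear map carrying [0, 1] onto [-1, 1].
for n in range(7):
    assert sp.expand(sp.chebyshevt(n, 2 * x - 1) - ts[n]) == 0

print(ts[2])   # 8*x**2 - 8*x + 1
```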
Note that Equations (7) and (11) can, in general, be summarized as [8]

b = P a   (15)

and

a = C b,   (16)

respectively, where:

・ a and b are the (N + 1)-element vectors of power and Chebyshev coefficients, i.e.,

a = (a_0, a_1, \ldots, a_N)^T   (17)

and

b = (b_0, b_1, \ldots, b_N)^T.   (18)

・ P and C are triangular matrices whose entries follow from the classical conversion formulas between powers and Chebyshev polynomials [8]. Writing the expansion of a power in Chebyshev polynomials as

x^j = 2^{1-j} \sum_{m=0}^{\lfloor j/2 \rfloor}{}' \binom{j}{m} T_{j-2m}(x),   (19)

where the prime indicates that the term containing T_0 (present when j is even) is halved, the entries of P are

(P)_{kj} = 2^{1-j} \binom{j}{(j-k)/2}, \quad 0 < k \le j, \ j - k \ \text{even},   (20)

(P)_{0j} = 2^{-j} \binom{j}{j/2} \ \text{for } j \ \text{even}, \qquad (P)_{kj} = 0 \ \text{otherwise}.   (21)

Similarly, from the explicit form of the Chebyshev polynomials,

T_0(x) = 1, \quad T_j(x) = \frac{j}{2} \sum_{m=0}^{\lfloor j/2 \rfloor} (-1)^m \frac{(j - m - 1)!}{m!\,(j - 2m)!} (2x)^{j-2m}, \quad j \ge 1,   (22)

the entries of C are

(C)_{kj} = (-1)^{(j-k)/2} \frac{j}{2} \frac{((j+k)/2 - 1)!}{((j-k)/2)!\,k!}\, 2^k, \quad 0 < k \le j, \ j - k \ \text{even},   (23)

(C)_{00} = 1, \quad (C)_{0j} = (-1)^{j/2} \ \text{for } j \ \text{even}, \ j \ge 2, \qquad (C)_{kj} = 0 \ \text{otherwise}.   (24)
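As a concrete illustration of Steps 1 through 4, here is a numpy sketch that economizes the degree-6 Maclaurin polynomial of e^x on [-1, 1]; the function and the degrees are illustrative, and numpy's poly2cheb/cheb2poly conversion routines play the role of the matrices P and C of Equations (15) and (16):

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from math import factorial

# Step 1: degree-6 Maclaurin polynomial of e^x, valid on [-1, 1].
a = np.array([1.0 / factorial(k) for k in range(7)])   # power coefficients

# Step 2: rewrite it as a Chebyshev series, b = P a (Equation (7)).
b = C.poly2cheb(a)

# Step 3: drop the last Chebyshev term; since |T_k(x)| <= 1 on [-1, 1],
# the extra error is bounded by |b[6]| (Equation (8)).
b_trunc = b[:6]

# Step 4: convert back to an ordinary degree-5 polynomial, c = C b (Equation (11)).
a_econ = C.cheb2poly(b_trunc)

x = np.linspace(-1.0, 1.0, 2001)
err_econ = np.max(np.abs(np.exp(x) - np.polynomial.polynomial.polyval(x, a_econ)))
err_tay5 = np.max(np.abs(np.exp(x) - np.polynomial.polynomial.polyval(x, a[:6])))
print(err_econ, err_tay5)   # the economized degree-5 series is several times better
```

The economized degree-5 polynomial inherits most of the accuracy of the degree-6 Taylor polynomial, while a degree-5 Taylor polynomial is noticeably worse.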
The main purpose of this paper is to develop a technique for generating a polynomial approximation of the Gaussian function which, among all polynomial approximations of the same degree, has a very small maximum error. This technique is based on the telescoping procedure of power series proposed by Lanczos [2], together with a polynomial fit of the error in the approximation, with the objective of economizing a sufficiently accurate truncated Maclaurin series of the Gaussian function. The resulting economized series will be used to compute the bound-state energies associated with the attractive Gaussian potential via the Asymptotic Iteration Method (AIM). Similar computations have been made by Mutuk [15], who applied the AIM to the Gaussian potential using a truncated Maclaurin series to approximate the Gaussian function.
The rest of this paper is organized as follows. In Section 2, we apply the procedure of economization to the Gaussian function, using first the Chebyshev polynomials of the first kind T_n(x) and then the shifted polynomials T_n^*(x). For each economized series, the exact error is calculated and fitted by a power series having the same degree as the initial non-economized finite power series. The new finite series obtained by adding the approximate error to the associated economized series in turn undergoes the procedure of economization, which leads to a much more efficient economized power series. The originality of our work lies precisely in this repeated application of the economization method, which alleviates one of the most harmful aspects of the telescoping method, namely the low accuracy of the economized series around the origin of expansion [16]. Section 3 contains a brief introduction to the AIM for the Gaussian potential, using the economized series obtained in Section 2 to approximate the Gaussian function. We also present and comment on our results concerning the bound-state energies of the attractive Gaussian potential for a given well depth, and compare them with those given by exact diagonalization of the Hamiltonian on a finite basis of Coulomb Sturmian functions. The conclusion is given in Section 4.
2. Gaussian Function Economization
We here consider the Gaussian function

f(x) = e^{-x^2}   (25)

on the interval [-1, 1] for the independent variable x. The Maclaurin series expansion of this function is given by

e^{-x^2} = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!} x^{2k}.   (26)

We denote by P_N(x) the Nth-degree truncated Maclaurin series of f(x) and we choose N = 14. We have:

P_{14}(x) = 1 - x^2 + \frac{x^4}{2} - \frac{x^6}{6} + \frac{x^8}{24} - \frac{x^{10}}{120} + \frac{x^{12}}{720} - \frac{x^{14}}{5040}.   (27)
Expanding P_{14}(x) in a Chebyshev series, we obtain:

(28)

where we have used the relations given in Equation (7). Of course, if we expand the Chebyshev polynomials back in power series of x, we recover the same polynomial.
Let us truncate the Chebyshev series (28) by neglecting the last two terms, and denote by S_{10}(x) the resulting expression, i.e.,

(29)

Replacing each Chebyshev polynomial by its polynomial form (see Equation (11)), we obtain the tenth-degree economized power series of e^{-x^2} associated with the finite series P_{14}(x), which is a polynomial of degree 14:

(30)
Figure 1 shows four errors in the approximation of e^{-x^2}, calculated as the differences between the Gaussian function and, respectively, the tenth-, twelfth- and fourteenth-degree truncated Maclaurin series and the tenth-degree economized series. The definition of the further error function whose graph is also shown in Figure 1 will be given below.
We see that the tenth-degree economized power series approximates e^{-x^2} on [-1, 1] better than the tenth-degree Maclaurin series and nearly as well as the twelfth- and fourteenth-degree Maclaurin series.

Figure 1. Plots of different errors in the approximation of the Gaussian function: panels (a), (b) and (d) each show one error curve, while panel (c) compares two error curves (solid line and symbols).
Indeed, its maximum error is close to that of the fourteenth-degree Maclaurin series, whereas the Maclaurin series of degrees 10 and 12 are noticeably less accurate. We "economize" in the sense that we get about the same precision with a lower-degree polynomial.
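The computation just described can be reproduced with numpy's Chebyshev conversion routines. A sketch (the evaluation grid is illustrative, so the measured maxima may differ in the last digits from values obtained by other means):

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from numpy.polynomial import polynomial as P
from math import factorial

# Degree-14 truncated Maclaurin series of exp(-x^2) (Equation (27)):
# coefficient (-1)^k / k! on x^(2k), k = 0..7.
a = np.zeros(15)
a[0::2] = [(-1.0) ** k / factorial(k) for k in range(8)]

b = C.poly2cheb(a)       # Chebyshev form (Equation (28))
b10 = b[:11]             # neglect the T_12 and T_14 terms (Equation (29))
e10 = C.cheb2poly(b10)   # tenth-degree economized series (Equation (30))

x = np.linspace(-1.0, 1.0, 4001)
f = np.exp(-x * x)
err_econ10 = np.max(np.abs(f - P.polyval(x, e10)))    # economized, degree 10
err_mac10 = np.max(np.abs(f - P.polyval(x, a[:11])))  # Maclaurin, degree 10
err_mac14 = np.max(np.abs(f - P.polyval(x, a)))       # Maclaurin, degree 14
print(err_econ10, err_mac10, err_mac14)
```

The economized degree-10 series is dramatically better than the degree-10 Maclaurin series and essentially as good as the degree-14 one.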
We have to add that a much more efficient economized power series can be obtained by first adding to the economized series its associated error fitted by a high-degree polynomial, and then applying the procedure of economization to the resulting polynomial. To this end, we discretize the problem on the interval [0, 1] and evaluate the error at the points x_k = (k - 1)h (for k = 1, ..., p), where p is the number of mesh points and h the step size, thus creating two p-component real vectors X and Y such that X_k = x_k and Y_k equals the error at x_k, X_k and Y_k denoting the kth components of the vectors X and Y, respectively. We then appeal to the Maple 18 software (the Fit command) to construct the (2K)th-degree even polynomial that best fits this set of data points. It is worth noting that in Maple, the Fit command fits a model function to given data by minimizing the least-squares error; in the case we are concerned with, the model is an even polynomial whose K + 1 coefficients are the adjustable parameters to be computed. With K = 7 and p = 101, we find:
(31)
Applying the procedure of economization to this fourteenth-degree polynomial, we find a new tenth-degree economized series:

(32)
The new error function, defined by the expression

(33)

is shown in Figure 1. It is clear that the new series is more accurate than all the above power series.
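The improvement step admits a compact numerical sketch; numpy's least-squares polyfit stands in for Maple's Fit command, the grid and degrees follow the text, and the rest is illustrative:

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from numpy.polynomial import polynomial as P
from math import factorial

# First economization: degree-14 Maclaurin series of exp(-x^2) -> degree 10.
a = np.zeros(15)
a[0::2] = [(-1.0) ** k / factorial(k) for k in range(8)]
e10 = C.cheb2poly(C.poly2cheb(a)[:11])

# Sample the remaining error on [0, 1] with p = 101 mesh points and fit it
# by an even polynomial of degree 14, i.e. degree K = 7 in the variable u = x^2.
xk = np.linspace(0.0, 1.0, 101)
err = np.exp(-xk * xk) - P.polyval(xk, e10)
q = P.polyfit(xk * xk, err, 7)   # least-squares fit in u

# Add the fitted error back (a better degree-14 polynomial), then
# economize the corrected polynomial down to degree 10 in turn.
corrected = np.zeros(15)
corrected[:11] = e10
corrected[0::2] += q
e10_new = C.cheb2poly(C.poly2cheb(corrected)[:11])

x = np.linspace(-1.0, 1.0, 4001)
f = np.exp(-x * x)
err_new = np.max(np.abs(f - P.polyval(x, e10_new)))
err_old = np.max(np.abs(f - P.polyval(x, e10)))
print(err_new, err_old)   # the re-economized series is markedly more accurate
```

Because the fit absorbs the truncation error of the original Maclaurin series, the second economization is left to pay only the (much smaller) cost of dropping two Chebyshev terms.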
So far in this section, we have used the Chebyshev polynomials T_n(x) to economize the fourteenth-degree Maclaurin series of the Gaussian function on the domain [-1, 1]. In what follows, the economization is done on the interval [0, 1] using the shifted Chebyshev polynomials of the first kind T_n^*(x). We get

(34)

where the asterisk refers to the use of the T_n^*(x) polynomials in the approximation of the Gaussian function e^{-x^2}.
It is worth noting that Equation (34) is obtained by using the following expression [14]:

(35)

where the vectors of power and shifted Chebyshev coefficients are related through a triangular matrix whose entries are given by [14]

(36)

(37)

and

(38)

It follows immediately from Equation (35) that

(39)
Since the coefficients of the high-order shifted Chebyshev terms decay very rapidly, the last six terms on the right-hand side of Equation (34) are rather tiny in magnitude. We can therefore chop off these terms (keeping terms up to T_8^*) without risk of appreciable change in the final results, and then re-expand back into a monomial series. Doing this gives the following eighth-degree polynomial:

(40)
(40)
where the asterisk in brackets refers to the fact that the polynomial
results from expanding a series of shifted Chebyshev polynomials of the first kind in a Maclaurin series. We remark that all coefficients of all orders between 0 and 8 are present in the series (40), which is contrary to the result obtained by doing economization using the
polynomials.
Expanding in a shifted Chebyshev series the fourteenth-degree polynomial obtained by adding to the series (40) its associated error fitted by a polynomial of degree 14, and then truncating the resulting series by keeping terms up to T_8^*, we obtain, after re-expanding back into a monomial series:

(41)

Note that this expression can be used to approximate e^{-x^2} on [-1, 1] by multiplying all terms of odd exponent by the sign of the independent variable x, the Gaussian function being even.
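The shifted-Chebyshev economization can be sketched with numpy by exploiting its domain/window machinery: mapping [0, 1] onto the Chebyshev window [-1, 1] makes the series coefficients exactly the T_n^* ones. The degrees follow the text; the rest is illustrative:

```python
import numpy as np
from numpy.polynomial import Polynomial, Chebyshev
from math import factorial

# Degree-14 Maclaurin polynomial of exp(-x^2), now treated on [0, 1].
coef = np.zeros(15)
coef[0::2] = [(-1.0) ** k / factorial(k) for k in range(8)]
p14 = Polynomial(coef, domain=[0, 1], window=[0, 1])

# Rewrite in shifted Chebyshev polynomials T*_n(x) = T_n(2x - 1):
# converting with domain [0, 1] maps onto the Chebyshev window [-1, 1],
# so cheb.coef are exactly the shifted-Chebyshev coefficients.
cheb = p14.convert(kind=Chebyshev, domain=[0, 1])

# Keep terms up to T*_8 and re-expand into an ordinary degree-8 polynomial.
econ8 = Chebyshev(cheb.coef[:9], domain=[0, 1]).convert(
    kind=Polynomial, domain=[0, 1], window=[0, 1])

x = np.linspace(0.0, 1.0, 2001)
err = np.max(np.abs(np.exp(-x * x) - econ8(x)))
print(err, econ8.degree())   # all powers x^0 .. x^8 are now present
```

Note that the truncated shifted series mixes even and odd powers of x, which is why the degree-8 result has nonzero coefficients of every order.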
In Figure 2, we show the plot of the error in the approximation of e^{-x^2} based on the use of Equation (41), together with the plots of three of the errors considered above. We see that the T_n^*(x) polynomials economize the Gaussian function e^{-x^2} more efficiently than the T_n(x) ones, but there is a price to pay: for a given degree 2p, the T_n^*(x) polynomials lead to an economized polynomial whose number of terms, i.e., 2p + 1, is almost twice the number of terms, exactly p + 1, in the economized power series obtained when the T_n(x) polynomials are used.

Figure 2. Comparison of the error based on Equation (41) with other errors in the approximation of the Gaussian function: panels (a) to (d) each show one error curve.
3. Application to the Asymptotic Iteration Method for the Gaussian Potential
3.1. Basic Equations of the Asymptotic Iteration Method (AIM)
In this subsection, we briefly outline the asymptotic iteration method; the details can be found in [17] and [18].
The AIM was introduced to solve second-order homogeneous linear differential equations of the form [17] [18] [19]

y''(x) = \lambda_0(x)\, y'(x) + s_0(x)\, y(x),   (42)

where \lambda_0(x) and s_0(x) have sufficiently many continuous derivatives in some interval, not necessarily bounded. The differential Equation (42) has the general solution [17] [18]

y(x) = \exp\left( -\int^x \alpha(t)\, dt \right) \left[ C_2 + C_1 \int^x \exp\left( \int^t \left( \lambda_0(\tau) + 2\alpha(\tau) \right) d\tau \right) dt \right],   (43)

where

\alpha(x) = \frac{s_k(x)}{\lambda_k(x)} = \frac{s_{k-1}(x)}{\lambda_{k-1}(x)}   (44)

for sufficiently large k. In Equation (44), \lambda_k(x) and s_k(x) are defined as follows [17] [18]:

\lambda_k(x) = \lambda'_{k-1}(x) + s_{k-1}(x) + \lambda_0(x)\, \lambda_{k-1}(x),   (45)

s_k(x) = s'_{k-1}(x) + s_0(x)\, \lambda_{k-1}(x).   (46)
The convergence (quantization) condition of the method, as expressed in Equation (44), can also be written as follows [15] [20]:

\delta_k(x) = \lambda_k(x)\, s_{k-1}(x) - \lambda_{k-1}(x)\, s_k(x) = 0.   (47)
For a given radial potential such as the Gaussian one, the radial Schrödinger equation is converted to the form of Equation (42). Once this form has been obtained, it is easy to identify \lambda_0(x) and s_0(x) and to calculate \lambda_k(x) and s_k(x) by using Equations (45) and (46). The energy eigenvalues are then obtained from the quantization condition given by Equation (47).
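To make the recursion concrete, here is a small sympy sketch that applies Equations (45) to (47) to the one-dimensional harmonic oscillator, a standard exactly solvable test case rather than the Gaussian potential. In units where the Schrödinger equation reads -ψ'' + x²ψ = Eψ, writing ψ = e^{-x²/2} f(x) gives f'' = 2x f' - (E - 1) f, i.e. λ_0 = 2x and s_0 = 1 - E; the quantization condition, evaluated here at x = 0, reproduces E = 2n + 1:

```python
import sympy as sp

x, E = sp.symbols('x E')

# Equation (42) data for the oscillator after factoring out exp(-x^2/2):
lam = [2 * x]   # lambda_0
s = [1 - E]     # s_0

# Equations (45) and (46): the AIM recursion.
for k in range(1, 5):
    lam.append(sp.expand(sp.diff(lam[k - 1], x) + s[k - 1] + lam[0] * lam[k - 1]))
    s.append(sp.expand(sp.diff(s[k - 1], x) + s[0] * lam[k - 1]))

# Equation (47): delta_k = lambda_k s_{k-1} - lambda_{k-1} s_k = 0,
# evaluated at a suitably chosen point (x = 0 here).
k = 4
delta = sp.expand(lam[k] * s[k - 1] - lam[k - 1] * s[k]).subs(x, 0)
roots = sorted(sp.solve(sp.Eq(delta, 0), E))
print(roots)   # the exact oscillator spectrum emerges: [1, 3, 5, 7, 9]
```

Each additional iteration adds one more exact eigenvalue, illustrating how the roots of δ_k stabilize as k grows.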
3.2. Asymptotic Iteration Method for Gaussian Potential
We here consider the Gaussian potential of the form

V(r) = -V_0\, e^{-\sigma r^2},   (48)

where V_0 > 0 is the depth of the potential and \sigma determines its width. The radial Schrödinger equation (SE) for a particle of mass m that moves in three-dimensional space under the effect of the attractive Gaussian potential (48) can be written as

u''(r) + \left[ \varepsilon - \frac{l(l+1)}{r^2} + v_0\, e^{-\sigma r^2} \right] u(r) = 0,   (49)

where u(r) = r R(r) is the reduced radial wave function, \varepsilon = 2mE/\hbar^2 and v_0 = 2mV_0/\hbar^2, E being the energy of the particle. This equation is linear, but its Gaussian coefficient is neither polynomial nor rational, which makes a direct application of the AIM impractical. In order to solve it via the AIM, we first model it with an equation having polynomial coefficients and then convert this model equation to the form of Equation (42). Mutuk [15] solved Equation (49) in this way via the AIM, by suggesting a wave function of the form

(50)

and making use of the tenth-degree truncated Maclaurin series of the Gaussian term, i.e.,

e^{-\sigma r^2} \approx \sum_{k=0}^{5} \frac{(-\sigma)^k}{k!}\, r^{2k},   (51)

to construct the model equation. He obtained a second-order linear homogeneous differential equation for the unknown factor in Equation (50) with the general form

(52)

where

(53)

(54)
We have to emphasize that in the AIM, the energy eigenvalues are calculated from the quantization condition given by Equation (47). For each iteration, this equation depends on two variables, E and r. The eigenvalues calculated by means of Equation (47) should, however, be independent of the choice of r, and this is indeed the case for most iteration sequences. The choice of r can nevertheless be critical to the speed of convergence of the eigenvalues, as well as to the stability of the process [17] [20]. A suitable choice of r minimizes the potential or maximizes the radial wave function given by Equation (50) in the case of the attractive Gaussian potential.
In the AIM, the wave function can be written as

(55)

where the first factor represents the asymptotic behavior, in our case a decaying Gaussian of the form e^{-\beta r^2}. Hence, we have taken for r the value that maximizes the radial wave function. β is an arbitrary parameter related to the convergence.
The convergence of the eigenvalues for five values of β is reported in Table 1, where we compute an eigenvalue of the attractive Gaussian potential by means of the AIM, using the Maple 18 software, which is known to be a powerful symbolic computation package. It is clear that the eigenvalues converge for all five values of β whatever the method used to approximate the Gaussian potential, contrary to the results obtained by Mutuk [15], in which the eigenvalues associated with some values of β start to diverge when the iteration number exceeds 25. We think that this discrepancy between our results and Mutuk's for large values of β is due to the fact that, during our implementation of the AIM, the precision level was set to 50 digits, which means that our results have been computed with high precision and are more accurate. It is also clear from Table 1 that the approximation of the Gaussian function based on the T_n^*(x) polynomials has the advantage that the energies converge significantly faster towards the accurate value, i.e., -341.8952145612, than those calculated using the two other approximations.

Table 1. Convergence of the eigenvalues of the attractive Gaussian potential for different β values and various approximations of the Gaussian function. k is the iteration number; the potential parameters are given in atomic units (a.u.).
Table 2 presents the results for a few values of the quantum numbers n and l, computed by means of 50 iterations using Equations (32) and (41) to approximate the Gaussian function (third and fourth columns). The energy eigenvalues are obtained with a value of β for which the solutions are in many cases very close after a few iterations. Our results are compared with Mutuk's [15] (second column) and with those calculated numerically by the spectral Galerkin method (SGM) [10] [21] [22] [23], which is based on expanding the radial wave function on a finite basis of Coulomb Sturmian functions defined by [24] [25]:

(56)

where the expression involves an associated Laguerre polynomial, n denotes the principal quantum number, and the normalization constant, given by

(57)

is obtained from the usual normalization condition.

Table 2. Comparison of the energy eigenvalues of the Gaussian potential in a.u. obtained by using the AIM for various approximations of the Gaussian function with those calculated by means of the SGM, for different values of n and l. The value 0.75 has been chosen for the Sturmian basis parameter.
Note that spectral methods have the advantageous property of "exponential convergence" with respect to the size of the basis, which makes them more accurate than local methods. Unlike finite-difference methods, spectral methods are global methods, in which the computation at any given point depends not only on information at neighboring points, but also on information from the entire domain. We observe that the results associated with the T_n^*(x)-based approximation approach the numerical eigenvalues reasonably well for all chosen values of n and l.
4. Conclusion
In this work, we have applied the procedure of economization to the Gaussian function e^{-x^2}, using Chebyshev polynomials of the first kind on the one hand and the shifted ones on the other, with an application to the solution of the radial Schrödinger equation for the attractive Gaussian well via the Asymptotic Iteration Method (AIM). We have seen that the use of the shifted polynomials T_n^*(x) leads to more efficient economized power series of the Gaussian function, which can be used to accurately model the radial Schrödinger equation for the Gaussian potential, whose Gaussian coefficient is non-polynomial, by a second-order linear differential equation solvable by means of the AIM.