The Numerical Inversion of the Laplace Transform in a Multi-Precision Environment
1. Introduction
This paper examines the performance of five algorithms for numerically inverting the Laplace transform, in standard 16-digit and multi-precision environments. The algorithms, whose derivations are outlined in Section 4, are taken from three of the four main classes of numerical methods used to invert the Laplace transform [1].
The Abate-Valko [1] and Logan schemes [2] belong to the class of inversion algorithms which deform the Bromwich contour [3]. They are closely related versions of this approach as they both use Talbot’s method for deformation of the contour [4]. Logan, however, chooses an exponential transform while Abate-Valko extends the original Talbot formulation expressing the contour in trigonometric form.
The Stehfest and Salzer-Gaver algorithms [5] are again two closely related schemes, based on the acceleration of the Gaver functional [6]. Stehfest applied a modified Salzer acceleration scheme [7] to the Gaver functional, simplifying the result to yield one of the most widely used algorithms for inverting the Laplace transform. We find, however, that when we implement a direct application of the Salzer acceleration scheme to the Gaver functional (the Salzer-Gaver scheme), with Stehfest's modifications, we do not obtain the same results as those produced by the Stehfest scheme. We conclude that Stehfest's simplification process is at least in part responsible for the difference in performance between these two versions.
Finally, we examine the Fourier series method [8], which expresses the inversion integral as a Fourier series and then uses the trapezium rule to evaluate the integral numerically. The Fourier series method differs from the other four algorithms in that no acceleration scheme is used to force convergence. As such, it is used only in a standard 16-digit precision environment and is compared with the four other schemes using standard precision.
2. The Laplace Transform
The Laplace transform is an integral transform defined as follows: let $f(t)$ be defined for $t \ge 0$; then the Laplace transform of $f(t)$ is given by

$$F(s) = \mathcal{L}\{f(t)\} = \int_0^\infty e^{-st} f(t)\,dt. \qquad (1)$$

Thus $\mathcal{L}\{f(t)\}$ is a function of $s$, denoted $F(s)$. The Laplace transform can be shown to exist for any function which can be integrated over any finite interval $0 < t < \ell$ for $\ell > 0$, and for which $f(t)$ is of exponential order, i.e.

$$|f(t)| \le M e^{at} \qquad (2)$$

as $t \to \infty$, where $M$ is a finite real number and $a$ is a small real positive number.
Analytically, the inverse Laplace transform is usually obtained using the techniques of complex contour integration, with the resulting set of standard transforms presented in tables [9].
However, applications of the Laplace transform can generate data in the Laplace domain which is not easily inverted to the real domain by analytical means. Numerical inversion techniques must then be used to convert the data from the s-space to the time domain.
3. The Inverse Laplace Transform, Perturbation and Multi-Precision
The recovery of the function $f(t)$ is via the inverse Laplace transform, which is most commonly defined via the Bromwich contour integral,

$$f(t) = \frac{1}{2\pi i}\int_{\gamma - i\infty}^{\gamma + i\infty} e^{st} F(s)\,ds, \qquad (3)$$

such that the real constant $\gamma$ lies to the right of all singularities of $F(s)$. The inversion integral is widely known to be highly perturbed [10] [11] [12]. This is due to the $e^{st}$ term in the integral, which amplifies small changes in the input data. Hence all numerical schemes are vulnerable to this perturbation, and this has to be taken into account when using the various algorithms to invert the Laplace transform.
As Abate-Valko noted [1], “In the traditional development of the inversion methods, most of the effort was directed at controlling round-off errors. This is because the process is numerically unstable in a fixed-precision, computing environment. The problem is that as the user tries to increase the accuracy there comes a point where the round off error causes the error to increase dramatically”.
In fact, Abate-Valko went further and made the claim that "for our purposes, we add the proviso that values of the transform can be computed to any desired precision as a function of the complex variable".
This suggests that working in a multi-precision environment can act to reduce errors and the resulting perturbations which exist in transforming the data from the s-space into the time domain.
4. The Algorithms
Large parts of this section are taken from our earlier work [13]. This is necessary to provide sufficient background on the derivation of the algorithms and their performance. However, we extend this work to include an analysis of the Salzer acceleration scheme applied to the Gaver functional, and we compare the resulting Salzer-Gaver scheme with the Stehfest inversion scheme.
There are over 100 algorithms available for inverting the Laplace transform, with numerous comparative studies; examples include Duffy [14], Narayanan and Beskos [15], Cohen [10] and, perhaps the most comprehensive, Davies and Martin [16]. However, for the purposes of this investigation we apply our tests using "Those algorithms that have passed the test of time" [1]. These fall into four groups,
1) Fourier series expansion.
2) Combination of Gaver Functionals.
3) Laguerre function expansion.
4) Deformation of the Bromwich contour.
4.1. The Fourier Series Method
In their survey of algorithms for inverting the Laplace transform, Davies and Martin [16] note that the Fourier series method without accelerated convergence gives good accuracy on a wide variety of functions. Since the Laplace transform is closely related to the Fourier transform, it is not surprising that inversion methods based on a Fourier series expansion yield accurate results. In fact, the two-sided Laplace transform can be derived from the Fourier transform in the following way. We can define the Fourier transform as

$$g(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\,dt. \qquad (4)$$

Then letting $s = i\omega$ we have

$$F(s) = \int_{-\infty}^{\infty} f(t)\, e^{-st}\,dt. \qquad (5)$$

This Fourier transform exists provided $f(t)$ is an absolutely integrable function, i.e.

$$\int_{-\infty}^{\infty} |f(t)|\,dt < \infty. \qquad (6)$$

As many functions do not satisfy condition (6), $f(t)$ is multiplied by the exponential dampening factor $e^{-\sigma t}$, thus

$$\int_{-\infty}^{\infty} f(t)\, e^{-\sigma t} e^{-i\omega t}\,dt, \qquad (7)$$

and letting $s = \sigma + i\omega$ we obtain the two-sided Laplace transform of $f(t)$ as

$$F(s) = \int_{-\infty}^{\infty} f(t)\, e^{-st}\,dt. \qquad (8)$$

Le Page [17] notes that the integral given by (8) can be written in two parts as follows:

$$F(s) = \int_{-\infty}^{0} f(t)\, e^{-st}\,dt + \int_{0}^{\infty} f(t)\, e^{-st}\,dt. \qquad (9)$$

The second term on the RHS in the above expression is referred to as the one-sided Laplace transform, or simply the Laplace transform. Thus $s$ is defined as a complex variable in the definition of the Laplace transform.
As before, the inverse Laplace transform is given as

$$f(t) = \frac{1}{2\pi i}\int_{a - i\infty}^{a + i\infty} e^{st} F(s)\,ds. \qquad (10)$$

With $s = a + i\omega$ in (10) this leads to the result

$$f(t) = \frac{e^{at}}{2\pi}\int_{-\infty}^{\infty} F(a + i\omega)\, e^{i\omega t}\,d\omega \qquad (11)$$

[8], which can be replaced by the cosine transform pair

$$\Re\, F(a + i\omega) = \int_0^\infty e^{-at} f(t)\cos(\omega t)\,dt \qquad (12)$$

$$f(t) = \frac{2e^{at}}{\pi}\int_0^\infty \Re\, F(a + i\omega)\cos(\omega t)\,d\omega \qquad (13)$$

or by the sine transform pair

$$\Im\, F(a + i\omega) = -\int_0^\infty e^{-at} f(t)\sin(\omega t)\,dt \qquad (14)$$

$$f(t) = -\frac{2e^{at}}{\pi}\int_0^\infty \Im\, F(a + i\omega)\sin(\omega t)\,d\omega. \qquad (15)$$

Dubner and Abate [18] applied a trapezoid rule to (13), resulting in the Fourier series approximation

$$f(t) \approx \frac{2e^{at}}{T}\left[\frac{1}{2}\Re\, F(a) + \sum_{k=1}^{\infty} \Re\, F\!\left(a + \frac{ik\pi}{T}\right)\cos\!\left(\frac{k\pi t}{T}\right)\right], \qquad (16)$$

where $f(t)$ is expanded in the interval $(0, 2T)$. For faster computation, Simon et al. [19] proposed the following version of (16):

$$f(t) \approx \frac{e^{u}}{t}\left[\frac{1}{2}\Re\, F\!\left(\frac{u}{t}\right) + \sum_{k=1}^{\infty} (-1)^k\, \Re\, F\!\left(\frac{u + ik\pi}{t}\right)\right]. \qquad (17)$$

This series can be summed much faster than (16) as there are no cosines to compute [20]. This algorithm is relatively easy to implement, with $u$ being the only real varying parameter.
However, as Crump [8] points out, for the expression in (17) the transform $F(s)$ must now be computed for a different set of $s$-values for each distinct $t$. Since this type of application occurs often in practice, in which the numerical computation of $F(s)$ is itself quite time consuming, this may not be an economical inversion algorithm to use. These drawbacks can to some extent be overcome by using the fast Fourier transform techniques in [20] [21].
Crump [8] also extends this method to one with faster convergence by making use of the already computed imaginary parts. There are several other acceleration schemes, for example those outlined by Cohen [10]; however, these acceleration methods in general require the introduction of new parameters, which for the purposes of this investigation we wish to avoid.
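As a concrete illustration, the truncated form of (17), $f(t) \approx (e^u/t)\left[\tfrac{1}{2}F(u/t) + \sum_{k=1}^{K}(-1)^k \Re F((u+ik\pi)/t)\right]$, can be sketched in a few lines. This is not the authors' code: the function name, the damping parameter $u = 6$ and the truncation point $K$ are illustrative choices, and $F(s) = 1/(s+1)$ (the transform of $e^{-t}$) is used only as an example transform.

```python
import math

def fourier_inverse(F, t, u=6.0, K=20000):
    """Truncated Fourier series inversion, cf. (17):
    f(t) ~ (e^u / t) [ F(u/t)/2 + sum_{k>=1} (-1)^k Re F((u + i k pi)/t) ]."""
    total = 0.5 * F(complex(u / t, 0.0)).real
    for k in range(1, K + 1):
        total += (-1) ** k * F(complex(u / t, k * math.pi / t)).real
    return math.exp(u) / t * total

# Example: F(s) = 1/(s+1) is the transform of f(t) = exp(-t)
approx = fourier_inverse(lambda s: 1.0 / (s + 1.0), 1.0)
```

Note that many terms are needed because the alternating tail is summed without acceleration, which is the drawback discussed above when $F$ is expensive to evaluate.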
4.2. The Stehfest Algorithm
Davies and Martin [16] cite the Stehfest [5] algorithm as providing accurate results on a wide variety of test functions. Since that time, this algorithm has become widely used for inverting the Laplace transform, being favoured due to its reported accuracy and ease of implementation.
Here we give a brief overview of the evolution of the algorithm from a probability distribution function to the Gaver functional whose asymptotic expansion leads to an acceleration scheme which yields the algorithm in its most widely used form.
Gaver [6] investigated a method for obtaining numerical information on the time-dependent behaviour of stochastic processes which often arise in queuing theory. The investigation involved examining the properties of the three-parameter class of density functions, namely

$$\rho_{n,m}(t) = a\, m \binom{n+m}{n} \left(1 - e^{-at}\right)^n e^{-mat}, \qquad t > 0, \qquad (18)$$

with $a > 0$ and $n, m$ positive integers.
After the binomial expansion of the term $\left(1 - e^{-at}\right)^n$, Gaver went on to find the expectancy $E[f(T)]$, where $T$ is the random variable with density (18). From this, Gaver was able to express the inverse Laplace transform in terms of the functional

$$f_{n,m}(a) = a\, m \binom{n+m}{n} \sum_{k=0}^{n} (-1)^k \binom{n}{k} F\big((m+k)a\big). \qquad (19)$$

With certain conditions on $n$ and $m$, Gaver makes $m = n$ and expresses (19) as

$$f_n(a) = \frac{a\,(2n)!}{n!\,(n-1)!} \sum_{k=0}^{n} (-1)^k \binom{n}{k} F\big((n+k)a\big). \qquad (20)$$

While the expression in (20) can be used to successfully invert the Laplace transform for a large class of functions, its rate of convergence is slow [9] [10]. However, Gaver [6] has shown that (20), with $a = \ln 2 / t$, has the asymptotic expansion

$$f_n(t) \sim f(t) + \sum_{j=1}^{\infty} \frac{\alpha_j}{n^j}, \qquad n \to \infty, \qquad (21)$$

where the $\alpha_j$'s are constant coefficients in the asymptotic series. Hence (21) in the limit converges to $f(t)$. For the conditions on $m$ and $n$ and justification for the substitution for $a$ referred to above, see Gaver [6]. This asymptotic expansion provides scope for applying various acceleration techniques, enabling a more viable application of the basic algorithm.
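The slow $O(1/n)$ convergence of the raw Gaver approximants can be observed numerically. The sketch below is our illustration, not Gaver's code; it uses Python's standard-library `decimal` module as a stand-in for a multi-precision environment (the paper itself uses Maple), and the transform $F(s) = 1/(s+1)$ of $f(t) = e^{-t}$ is an assumed test case.

```python
from decimal import Decimal, getcontext
from math import comb

getcontext().prec = 50  # working digits; stands in for a multi-precision setting

def gaver(F, t, n):
    # Gaver approximant f_n(t): the functional (20) evaluated with a = ln 2 / t
    a = Decimal(2).ln() / Decimal(t)
    s = sum(Decimal((-1) ** k * comb(n, k)) * F((n + k) * a)
            for k in range(n + 1))
    return a * n * comb(2 * n, n) * s

# Illustrative transform: F(s) = 1/(s+1), so f(t) = exp(-t)
F = lambda s: 1 / (s + 1)
exact = Decimal(-1).exp()            # f(1) = e^{-1}
e10 = abs(gaver(F, 1, 10) - exact)   # error of f_10(1)
e40 = abs(gaver(F, 1, 40) - exact)   # error of f_40(1); the 1/n expansion predicts a ~4x drop
```

The exact rational/decimal arithmetic matters here: in fixed double precision the binomial coefficients in (20) cause severe cancellation well before $n = 40$.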
Much of the literature alludes to the fact that a Salzer [7] acceleration scheme is used on the Gaver functional in (20) which results in the Stehfest algorithm. In fact Stehfest’s approach was a little more subtle than a direct application of the Salzer acceleration.
4.2.1. Using Salzer Acceleration
The Salzer acceleration scheme makes use of the "Toeplitz limit theorem" [7]: "this concerns the convergence of a transformation of a sequence $\{s_k\}$ where the $n$th member of the transformed sequence is a weighted mean of the first $n$ terms"

$$\tilde{s}_n = \sum_{k=1}^{n} c_{n,k}\, s_k. \qquad (22)$$

Here $\{\tilde{s}_n\}$ is the transformed sequence and $\{s_k\}$ the original sequence, which for our purposes is given by (20). The Salzer means are given by

$$c_{n,k} = (-1)^{n-k} \frac{k^n}{k!\,(n-k)!}. \qquad (23)$$

For the sake of compatibility with (22) we make the change $n \to k$ and $a = \ln 2 / t$ in (20). With this change of variables we also write

$$f_k(t) = \frac{k \ln 2}{t} \binom{2k}{k} \sum_{j=0}^{k} (-1)^j \binom{k}{j} F\!\left(\frac{(k+j)\ln 2}{t}\right).$$

This allows the sum to be taken from $k = 1$ to $n$ without the factor $(k-1)!$ in the denominator of (20). So with Salzer acceleration we can express the Salzer-Gaver algorithm as

$$f(t) \approx \sum_{k=1}^{n} (-1)^{n-k} \frac{k^n}{k!\,(n-k)!}\, f_k(t). \qquad (24)$$
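A minimal sketch of the Salzer-Gaver scheme (24), again using the standard-library `decimal` module as a stand-in multi-precision environment; the choice $M = 16$ terms, the working precision, and the test transform $F(s) = 1/(s+1)$ are our illustrative assumptions, not values taken from the paper.

```python
from decimal import Decimal, getcontext
from math import comb, factorial

getcontext().prec = 60  # generous working precision to absorb the cancellations

def gaver(F, t, k):
    # Gaver approximant f_k(t) with a = ln 2 / t, cf. (20)
    a = Decimal(2).ln() / Decimal(t)
    s = sum(Decimal((-1) ** j * comb(k, j)) * F((k + j) * a)
            for j in range(k + 1))
    return a * k * comb(2 * k, k) * s

def salzer_gaver(F, t, M=16):
    # Salzer weights (-1)^(M-k) k^M / (k! (M-k)!) applied to f_k(t), cf. (24)
    total = Decimal(0)
    for k in range(1, M + 1):
        w = Decimal((-1) ** (M - k) * k ** M) / (factorial(k) * factorial(M - k))
        total += w * gaver(F, t, k)
    return total
```

With enough working digits the acceleration turns the $O(1/n)$ convergence of the raw approximants into many correct digits, consistent with the multi-precision results reported later.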
4.2.2. Stehfest’s Acceleration Scheme
For the purposes of following Stehfest's derivation it will be convenient to rewrite (20) as

$$f_n(t) = \frac{\ln 2}{t} \cdot \frac{(2n)!}{n!\,(n-1)!} \sum_{k=0}^{n} (-1)^k \binom{n}{k} F\!\left(\frac{(n+k)\ln 2}{t}\right). \qquad (25)$$

Stehfest [5] begins by supposing we have $N$ values of the transform, $F(i \ln 2 / t)$, $i = 1, \ldots, N$, for $N$ even. Using (25) we can then determine $N/2$ values $f_n(t)$, $n = 1, \ldots, N/2$. Now each of these values satisfies the asymptotic series in (21) with the same coefficients.
As Stehfest [5] points out, the $\alpha_j$'s are the same for each of the above expansions, and by using a suitable linear combination the first $(N/2 - 1)$ error terms in (21) can be eliminated. That is,

$$f(t) \approx \sum_{n=1}^{N/2} \gamma_n\, f_n(t), \qquad (26)$$

which may be achieved by selecting the coefficients to satisfy

$$\sum_{n=1}^{N/2} \gamma_n = 1, \qquad \sum_{n=1}^{N/2} \frac{\gamma_n}{n^j} = 0, \quad j = 1, \ldots, N/2 - 1, \qquad (27)$$

which produce the same coefficients as the Salzer acceleration scheme used in (22). In fact, for any $n$, Stehfest generates the required coefficients using what is in effect a modified Salzer acceleration scheme, giving

$$\gamma_n = (-1)^{N/2 - n} \frac{n^{N/2}}{n!\,(N/2 - n)!}. \qquad (28)$$

Finally, Stehfest substitutes these results into (26) and obtains the inversion formula

$$f(t) \approx \frac{\ln 2}{t} \sum_{i=1}^{N} V_i\, F\!\left(\frac{i \ln 2}{t}\right) \qquad (29)$$

for $N$ even, and

$$V_i = (-1)^{N/2 + i} \sum_{k=\lfloor (i+1)/2 \rfloor}^{\min(i,\, N/2)} \frac{k^{N/2}\,(2k)!}{(N/2 - k)!\, k!\, (k-1)!\, (i-k)!\, (2k-i)!}. \qquad (30)$$

However, a direct application of the modified Salzer acceleration scheme in (28) to the Gaver functional in (25) does not produce the same numerical results as the expression in (29) with the weights (30), so the two schemes are not exactly equal to each other.
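For reference, the inversion formula (29) with the weights (30) can be sketched directly in standard double precision. The helper names and the common choice $N = 12$ are ours; the test transform $F(s) = 1/(s+1)$ is illustrative.

```python
import math
from math import factorial

def stehfest_weights(N):
    # Coefficients V_i of (30); N must be even
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * factorial(2 * k) /
                  (factorial(N // 2 - k) * factorial(k) * factorial(k - 1) *
                   factorial(i - k) * factorial(2 * k - i)))
        V.append((-1) ** (N // 2 + i) * s)
    return V

def stehfest(F, t, N=12):
    # Inversion formula (29) in standard double precision
    a = math.log(2.0) / t
    V = stehfest_weights(N)
    return a * sum(V[i - 1] * F(i * a) for i in range(1, N + 1))
```

The large alternating weights explain Stehfest's observed instability as $N$ grows in fixed precision: the summands in (29) cancel almost completely, so round-off is amplified.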
To show this we consider a test function $f(t)$ and its Laplace transform $F(s)$. The eight weights produced by the Salzer acceleration for $n = 8$ are exactly the same as those for $N = 16$ in Stehfest's modified Salzer acceleration scheme in (28).
However, Table 1 shows that, with these same weights, the Salzer-Gaver scheme produces different results when compared with Stehfest's scheme in (29). We believe this is due to Stehfest's simplification of the Salzer-Gaver scheme to the expression in (29).
This simplification was necessary because, as we show in our results in Section 5, Stehfest's final expression is faster and works better in standard double precision. As the algorithm was developed in 1970, this would have been far more efficient given the computing power available at the time. Again, as we show in Section 5, a direct application of a Salzer acceleration scheme to the Gaver functional is only advantageous in a multi-precision environment.
4.3. The Talbot Algorithm
Equations (4) to (8) showed that the Laplace transform can be seen as a Fourier transform of the function

$$g(t) = e^{-\sigma t} f(t), \qquad (31)$$

i.e.

$$F(\sigma + i\omega) = \int_{-\infty}^{\infty} \left[e^{-\sigma t} f(t)\right] e^{-i\omega t}\,dt. \qquad (32)$$

Table 1. Salzer-Gaver and Stehfest results for the test function.
Hence the Fourier transform inversion formula can be applied to recover the function, thus:

$$e^{-\sigma t} f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\sigma + i\omega)\, e^{i\omega t}\,d\omega. \qquad (33)$$

As $s = \sigma + i\omega$ we have that $ds = i\,d\omega$, and so

$$f(t) = \frac{1}{2\pi i}\int_{\sigma - i\infty}^{\sigma + i\infty} e^{st} F(s)\,ds. \qquad (34)$$

This result provides a direct means of obtaining the inverse Laplace transform. In practice, the integral in (34) is evaluated using a contour

$$f(t) = \frac{1}{2\pi i}\int_{B} e^{st} F(s)\,ds, \qquad (35)$$

with $B$ here denoting the Bromwich contour [3]. The contour is chosen so that it encloses all the possible singularities of $F(s)$. The idea of the contour is introduced so that the Cauchy residue theorem can be used to evaluate the integral.
However, when $f(t)$ is to be calculated using numerical quadrature, it may be more appropriate to devise a new contour. To ensure that (35) is integrable, we may wish to control the growth of the magnitude of the integrand $e^{st} F(s)$ by moving the contour to the left, giving the real part of $s$ a large negative component [1] [22].
However, the deformed contour must not be allowed to pass through any singularities of $F(s)$. This is to ensure that the transform is analytic in the region to the right of $B$.
4.3.1. Derivation of the Fixed Talbot Contour
In the derivation that follows, [1] and [22] are used as the primary basis for extending the explanation of the derivation of the Talbot algorithm for inverting the Laplace transform.
Abate-Valko [1] began with the Bromwich inversion integral along the Bromwich contour $B$ with the transform

$$F(s) = \frac{1}{s}. \qquad (36)$$

So $f(t)$ can be expressed as

$$f(t) = \frac{1}{2\pi i}\int_{B} \frac{e^{st}}{s}\,ds, \qquad (37)$$

with $s = \rho e^{i\theta}$ in (37). As Abate-Valko [1] indicated, numerically evaluating the integral in (37) is difficult due to the oscillatory nature of the integrand.
However, this evaluation can be achieved by deforming the contour $B$ into a path of constant phase, thus eliminating the oscillations in the imaginary component. These paths of constant phase are also paths of steepest descent for the real part of the integrand [1] [22] [23].
There are in general a number of contours for which the imaginary component remains constant, so we choose one on which the real part attains a maximum in the interior (a saddle point) at some point on the contour. At these saddle points $dg/ds = 0$ [17], where

$$g(s) = st - \ln s \qquad (38)$$

in (37). Thus we have

$$\frac{dg}{ds} = t - \frac{1}{s}. \qquad (39)$$

So the stationary point occurs when $s = 1/t$.
With $s = \rho e^{i\theta}$ we have

$$g(s) = \rho t\cos\theta - \ln\rho + i\left(\rho t\sin\theta - \theta\right). \qquad (40)$$

Expressing the constant-phase condition as $\Im\, g(s) = 0$, we have

$$\rho t\sin\theta - \theta = 0, \qquad (41)$$

then

$$\rho = \frac{\theta}{t\sin\theta}, \qquad (42)$$

and as

$$\Re\, s = \rho\cos\theta = \frac{\theta\cot\theta}{t}, \qquad \Im\, s = \rho\sin\theta = \frac{\theta}{t}, \qquad (43)$$

then

$$s = \frac{\theta}{t}\left(\cot\theta + i\right) \qquad (44)$$

[1].
With $r$ replacing $1/t$ as a free scaling parameter, $s$ can be parameterized, following (44), as Talbot's contour

$$s(\theta) = r\theta\left(\cot\theta + i\right), \qquad -\pi < \theta < \pi \qquad (45)$$

[4].
4.3.2. Conformal Mapping of the Talbot Contour
While the above parametrization can be used as a basis for inverting the Laplace transform, we proceed with the algorithm's development via a convenient conformal mapping as follows. Let

$$z = 2i\theta. \qquad (46)$$

Then

$$\theta\cot\theta + i\theta, \qquad (47)$$

with $z = 2i\theta$, (47) is equal to

$$\frac{z}{1 - e^{-z}}. \qquad (48)$$

Hence (45) becomes

$$s(z) = \frac{rz}{1 - e^{-z}}. \qquad (49)$$

The function

$$S(z) = \frac{rz}{1 - e^{-z}} \qquad (50)$$

maps the closed interval $[-2\pi i, 2\pi i]$ on the imaginary $z$-plane onto the curve $L$ in the $s$-plane, giving the integral

$$f(t) = \frac{1}{2\pi i}\int_{-2\pi i}^{2\pi i} e^{S(z)t} F\big(S(z)\big)\, S'(z)\,dz. \qquad (51)$$

For the details of this transformation one can refer to the study of Logan [2].
Next we follow the procedure adopted by Logan [2] for numerically integrating (51). With a change of variable, (51) becomes
(52)
where
(53)
For convenience we write
(54)
where
(55)
The integral in (54) is then rotated by $\pi/2$ so that the interval of integration is real and becomes $[-2\pi, 2\pi]$, and then we use the trapezoid rule with $n$ odd to obtain
(56)
where
(57)
[2].
4.3.3. Valko
Abate-Valko [1] deformed the Bromwich contour using the Talbot path, which has the form

$$s(\theta) = r\theta\left(\cot\theta + i\right), \qquad -\pi < \theta < \pi, \qquad (58)$$

which is the same contour as (49), with $r$ a free scaling parameter. So we have

$$f(t) = \frac{1}{2\pi i}\int_{-\pi}^{\pi} e^{t s(\theta)} F\big(s(\theta)\big)\, s'(\theta)\,d\theta, \qquad (59)$$

where

$$s'(\theta) = ir\left(1 + i\sigma(\theta)\right), \qquad \sigma(\theta) = \theta + \left(\theta\cot\theta - 1\right)\cot\theta. \qquad (60)$$

Then from (59) we find

$$f(t) = \frac{r}{\pi}\int_{0}^{\pi} \Re\left[ e^{t s(\theta)} F\big(s(\theta)\big)\left(1 + i\sigma(\theta)\right)\right] d\theta. \qquad (61)$$

They then approximate the value of the integral in (61) by using the trapezoidal rule with step size $\pi/M$ and $\theta_k = k\pi/M$,

$$f(t) \approx \frac{r}{M}\left\{ \frac{1}{2}\, e^{rt} F(r) + \sum_{k=1}^{M-1} \Re\left[ e^{t s(\theta_k)} F\big(s(\theta_k)\big)\left(1 + i\sigma(\theta_k)\right)\right] \right\}. \qquad (62)$$

Based on numerical experiments, Abate-Valko then fixed the parameter $r$ to the value

$$r = \frac{2M}{5t} \qquad (63)$$

[1]. We also use this value for the parameter $a$ in Logan's transformation.
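The fixed Talbot rule (58)-(63) is straightforward to sketch in standard double precision. The function name and the choice $M = 32$ nodes are our illustrative assumptions, and $F(s) = 1/(s+1)$ is used only as a test transform.

```python
import cmath
import math

def fixed_talbot(F, t, M=32):
    # Trapezoidal rule (62) on the Talbot contour s(theta) = r*theta*(cot(theta) + i),
    # with the fixed parameter r = 2M/(5t) of (63)
    r = 2.0 * M / (5.0 * t)
    total = 0.5 * math.exp(r * t) * F(complex(r, 0.0)).real  # theta = 0 endpoint
    for k in range(1, M):
        th = k * math.pi / M
        cot = math.cos(th) / math.sin(th)
        s = r * th * (cot + 1j)                      # contour point, cf. (58)
        sigma = th + (th * cot - 1.0) * cot          # sigma(theta), cf. (60)
        total += (cmath.exp(t * s) * F(s) * (1.0 + 1j * sigma)).real
    return (r / M) * total
```

The $\theta = \pi$ endpoint is omitted because the contour there runs off to $\Re\, s \to -\infty$ and the integrand vanishes.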
5. Results
We tested the five algorithms on the functions listed in Table 2 and Table 3 below. Functions 1 - 11 and 18 are taken from the 16 functions tested by Davies and Martin [16]. The remaining functions are selected from those tested by Abate-Valko [1].
The first set of tests was carried out using 16-digit double precision. These results are shown in Table 4. The Fourier, Logan and Abate-Valko schemes were each run with three choices of the weight parameter; for brevity we include only the results for one of these choices.
For the Stehfest and the Salzer-Gaver algorithms, best results were obtained with one particular weight in each case. This is in keeping with Stehfest's observations on the instability of this method as $M$ increases above an optimal level [5].
In multi-precision, the number of precision digits for Abate-Valko was set as a function of the number of terms $M$, following [1], and for the Salzer-Gaver and Stehfest schemes best results were obtained with the number of precision digits likewise tied to $M$.
The errors reported are

$$L = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\big(y(t_i) - \tilde{y}(t_i)\big)^2} \qquad (64)$$

and

$$L_e = \sqrt{\frac{1}{n}\sum_{i=1}^{n} e^{-t_i}\big(y(t_i) - \tilde{y}(t_i)\big)^2}, \qquad (65)$$

where $y(t_i)$ is the analytical solution and $\tilde{y}(t_i)$ is the numerical solution. Hence $L$ is the root-mean-square error and $L_e$ is the same as $L$ but weighted by the factor $e^{-t_i}$ [14].
Table 4. Standard double precision.
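The two error measures (64)-(65) can be computed as follows; the function name, sample points and test functions in the usage example are our own illustrations.

```python
import math

def rms_errors(exact, numeric, ts):
    # L of (64): root-mean-square error over the sample points ts
    # Le of (65): the same, with each squared error weighted by exp(-t_i)
    n = len(ts)
    sq = [(exact(t) - numeric(t)) ** 2 for t in ts]
    L = math.sqrt(sum(sq) / n)
    Le = math.sqrt(sum(math.exp(-t) * e for t, e in zip(ts, sq)) / n)
    return L, Le

# Usage: a numerical scheme off by a constant 0.1 has L = 0.1
ts = [0.5, 1.0, 1.5, 2.0]
L, Le = rms_errors(math.sin, lambda t: math.sin(t) + 0.1, ts)
```

The exponential weighting in $L_e$ de-emphasises errors at large $t$, where $e^{st}$ amplification makes all the schemes less reliable.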
All computations were done using a 64-bit operating system with an Intel(R) Core(TM) i7-8550U CPU. The algorithms were implemented in Maple 2018, using Maple's Digits setting to set the required precision.
5.1. Standard Double Precision
Table 4 and Table 5 show that, when compared with the other four algorithms, the Fourier series method performs with the least accuracy on all the functions tested. It also fails to reconstruct functions 8, 15, 17 and 18, with poor results for functions 4, 5 and 12.
However, for the functions which it does successfully reconstruct, it does so with an RMS accuracy of between L = 3.6(−5) and 1.2(−2). We believe that this scheme will improve greatly when an acceleration scheme is applied; this is an issue we intend to investigate in future work.
With the exception of function 7, the Bessel function, Logan's algorithm successfully inverts all the functions given in Table 2 and Table 3 with very good accuracy. We found that in standard double precision best results are obtained with a suitable choice of the parameter in (50). Table 4 and Table 5 show that for these functions the RMS error varies between 3.6(−8) and 8.4(−12).
Except for function 7, the Abate-Valko scheme successfully inverts all the functions in Table 2 and Table 3. Moreover, it does so with greater accuracy than the Logan scheme. The tables show that the RMS error varied between 6.5(−11) and 6.2(−12).
Table 5. Standard double precision continued.
Table 7. Multi-Precision continued N = 200.
Table 4 and Table 5 show that the Stehfest algorithm gives poor accuracy when inverting functions 1, 7, 10 and 11; for these functions the RMS error varies between 2.01(−2) and 9.2(−3). Its poor performance is due to the fact that the Stehfest algorithm has difficulty inverting functions of a cyclic nature [5]. However, it inverts the remaining functions with good accuracy, with an RMS error of between 0 and 2.9(−5). The tables also show that the Salzer-Gaver algorithm gives poor accuracy for functions 1, 7, 10 and 11. These are the very same functions that the Stehfest algorithm has problems inverting; again, this is due to the difficulties it encounters when inverting cyclic functions. It inverts the remaining functions with less accuracy than Stehfest, with an RMS error varying between 10(−15) and 10(−5).
5.2. Multi Precision
With the exception of function 7, the Logan and Abate-Valko algorithms successfully inverted all the remaining functions to a high degree of accuracy; see Table 6 and Table 7. Duffy [14] also remarks that when using the Talbot contour he had difficulties accurately inverting the Bessel function. This may be related to the combination of the singularity on the imaginary axis and the branching nature of the square root function.
Abate-Valko [1] stated that they were able to overcome this by increasing the weights and hence the precision as a function of t. However, we were unable to replicate their results for this function.
Overall, the Abate-Valko scheme showed far greater accuracy than Logan’s across all the functions tested. However, Logan’s algorithm was still able to produce highly accurate results with RMS errors varying between 10(−60) to 10(−63). Moreover, Table 8 shows that Logan’s scheme was able to perform the inversion of these functions with shorter elapsed times.
The Stehfest and Salzer-Gaver algorithms were able to invert all the functions to a high degree of accuracy. The Salzer-Gaver scheme was in general about twice as accurate as the Stehfest algorithm, which was less accurate than Abate-Valko's scheme. Nevertheless, the Stehfest scheme inverted the functions well within any generally desired accuracy, with the RMS error varying from 0.0 to 10(−41). Moreover, as Table 8 shows, in terms of elapsed time it was the fastest of all the algorithms, for the most part twice as fast as the Abate-Valko scheme, which in turn was at least twice as fast as the Salzer-Gaver scheme.
Table 8. Elapsed time for the selected functions.
6. Conclusions
In standard double precision, the Abate-Valko algorithm provides the best results for the numerical reconstruction of the functions tested in this paper. The Fourier algorithm had the worst performance of the five algorithms tested. Both the Stehfest and Salzer-Gaver algorithms had difficulty reconstructing functions of a cyclic nature. None of the algorithms was able to invert the Bessel function accurately.
In multi-precision, the Stehfest and the Salzer-Gaver schemes inverted all the functions with high accuracy. The Logan and Abate-Valko schemes were only able to invert the Bessel function with limited accuracy; however, they were both able to reconstruct all the other functions with a high degree of accuracy. The most accurate algorithm in multi-precision was the Salzer-Gaver scheme; however, as Table 8 shows, it also had the longest elapsed times. On the other hand, the Stehfest algorithm had the shortest elapsed times for the selected functions in Table 8. Of the two schemes that deform the Bromwich contour, Abate-Valko's was the more accurate, but Logan's could reconstruct the functions with shorter elapsed times. Therefore we conclude that when working in standard precision the Abate-Valko algorithm performed best, while in multi-precision the Stehfest algorithm is best, as it inverted all the functions with a high degree of accuracy and the shortest elapsed times.