The Numerical Inversion of the Laplace Transform in a Multi-Precision Environment

Abstract

This paper examines the performance of five algorithms for numerically inverting the Laplace transform, in standard 16-digit and multi-precision environments. The algorithms are taken from three of the four main classes of numerical methods used to invert the Laplace transform. Because the numerical inversion of the Laplace transform is a perturbed problem, rounding errors generated in numerical approximations can adversely affect the accurate reconstruction of the inverse transform. This paper demonstrates that working in a multi-precision environment can substantially reduce these errors and the resulting perturbations which arise in transforming the data from the s-space into the time domain, and in so doing overcome the main drawback of numerically inverting the Laplace transform. Our main finding is that both the Talbot and the accelerated Gaver functionals perform considerably better in a multi-precision environment, increasing the advantages of using Laplace transform methods over time-stepping procedures in solving diffusion and, more generally, parabolic partial differential equations.


1. Introduction

This paper examines the performance of five algorithms for numerically inverting the Laplace transform, in standard 16-digit and multi-precision environments. The algorithms, whose derivations are outlined in Section 4, are taken from three of the four main classes of numerical methods used to invert the Laplace transform [1].

The Abate-Valko [1] and Logan schemes [2] belong to the class of inversion algorithms which deform the Bromwich contour [3]. They are closely related versions of this approach as they both use Talbot’s method for deformation of the contour [4]. Logan, however, chooses an exponential transform while Abate-Valko extends the original Talbot formulation expressing the contour in trigonometric form.

The Stehfest and Salzer-Gaver algorithms [5] are again two closely related schemes, based on the acceleration of the Gaver functional [6]. Stehfest applied a modified Salzer acceleration scheme [7] to the Gaver functional, simplifying the result to yield one of the most widely used algorithms for inverting the Laplace transform. We find, however, that when we implement a direct application of the Salzer acceleration scheme to the Gaver functional (Salzer-Gaver), with Stehfest's modifications, we do not obtain the same results as those produced by the Stehfest scheme. We conclude that Stehfest's simplification process is at least in part responsible for the differences in performance of these two versions.

Finally, we examine the Fourier series method [8], which expresses the inversion integral as a Fourier series and then uses the trapezium rule to evaluate the integral numerically. The Fourier series method differs from the other four algorithms in that no acceleration scheme is used to force convergence. As such, it is only used in a standard 16-digit precision environment and is compared with the four other schemes using standard precision.

2. The Laplace Transform

The Laplace transform is an integral transform defined as follows:

Let f(t) be defined for t ≥ 0, then the Laplace transform of f(t) is given by,

\mathcal{L}\{f(t)\} = \int_{0}^{\infty} f(t)\, e^{-st}\, dt \qquad (1)

Thus \mathcal{L}\{f(t)\} is a function of s denoted as F(s). The Laplace transform can be shown to exist for any function which can be integrated over any finite interval 0 < t < l for l > 0, and for which f(t) is of exponential order, i.e.

|f(t)| < M e^{at} \qquad (2)

as t → ∞, where M > 0 is a finite real number and a is a small real positive number.
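As a quick numerical check of definition (1), the short sketch below evaluates the integral for f(t) = sin(t) and compares it with the known transform 1/(s² + 1); the use of the mpmath library, the test point s = 2 and the precision setting are illustrative choices and not part of the original development.

```python
# A minimal numerical check of definition (1); the library (mpmath), the test
# point s = 2 and the precision setting are illustrative choices.
from mpmath import mp, mpf, quad, sin, exp, inf

mp.dps = 30                                               # 30 significant digits

s = mpf(2)
numeric = quad(lambda t: sin(t) * exp(-s * t), [0, inf])  # Equation (1) with f(t) = sin(t)
analytic = 1 / (s ** 2 + 1)                               # known transform of sin(t)
print(numeric, analytic)                                  # the two agree to working precision
```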

Analytically the inverse Laplace transform is usually obtained using the techniques of complex contour integration with the resulting set of standard transforms presented in tables [9].

However, using the Laplace transform can generate data in the Laplace domain which is not easily invertible to the real domain by analytical means. Thus numerical inversion techniques have to be used to convert the data from the s-space to the time domain.

3. The Inverse Laplace Transform, Perturbation and Multi-Precision

The recovery of the function f ( t ) is via the inverse Laplace transform which is most commonly defined via the Bromwich contour integral,

\mathcal{L}^{-1}\{F(s)\} = f(t) = \frac{1}{2\pi i} \int_{\alpha - i\infty}^{\alpha + i\infty} F(s)\, e^{st}\, ds \qquad (3)

such that α ∈ ℝ. The inversion integral is widely known to be highly perturbed [10] [11] [12]. This is due to the e^{st} term in the integral, which amplifies small changes in the input data. Hence all numerical schemes are vulnerable to this perturbation, and this has to be taken into account when using the various algorithms to invert the Laplace transform.

As Abate-Valko noted [1], “In the traditional development of the inversion methods, most of the effort was directed at controlling round-off errors. This is because the process is numerically unstable in a fixed-precision, computing environment. The problem is that as the user tries to increase the accuracy there comes a point where the round off error causes the error to increase dramatically”.

In fact Abate-Valko went further and made the claim that “for our purposes, we add the proviso that values of the transform can be computed to any desired precision as a function of the complex variable”.

This suggests that working in a multi-precision environment can act to reduce errors and the resulting perturbations which exist in transforming the data from the s-space into the time domain.
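To illustrate the point, the short sketch below inverts F(s) = 1/(s + 1)², for which f(t) = t e^{−t}, at 16 and then 100 significant digits. The mpmath library's built-in invertlaplace routine (a Talbot-type contour method) is used here purely as a convenient stand-in for the Maple implementations described later; the test transform and digit settings are illustrative.

```python
# A sketch of the effect of working precision on numerical inversion; mpmath's
# invertlaplace routine stands in for the paper's Maple implementations, and
# the test transform and digit settings are illustrative.
from mpmath import mp, mpf, invertlaplace, exp

F = lambda s: 1 / (s + 1) ** 2          # Laplace transform of f(t) = t*exp(-t)
f = lambda t: t * exp(-t)

t = mpf(1)
for digits in (16, 100):                # roughly double precision, then multi-precision
    mp.dps = digits
    approx = invertlaplace(F, t, method='talbot')
    print(digits, abs(approx - f(t)))   # the error falls sharply as the precision grows
```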

4. The Algorithms

Large parts of this section are taken from our earlier work [13]. This is necessary to provide sufficient background on the derivation of the algorithms and their performance. However, we extend this work to include comments on Salzer acceleration and the subsequent Salzer Gaver scheme’s derivation and compare this to the Stehfest inversion scheme.

There are over 100 algorithms available for inverting the Laplace transform, with numerous comparative studies; examples include Duffy [14], Narayanan and Beskos [15], Cohen [10] and perhaps the most comprehensive by Davies and Martin [16]. However, for the purposes of this investigation we apply our tests using “Those algorithms that have passed the test of time” [1]. These fall into four groups,

1) Fourier series expansion.

2) Combination of Gaver Functionals.

3) Laguerre function expansion.

4) Deformation of the Bromwich contour.

4.1. The Fourier Series Method

In their survey of algorithms for inverting the Laplace transform, Davies and Martin [16] note that the Fourier series method without accelerated convergence gives good accuracy on a wide variety of functions. Since the Laplace Transform is closely related to the Fourier transform it is not surprising that inversion methods based on a Fourier series expansion would yield accurate results. In fact the two sided Laplace transform can be derived from the Fourier transform in the following way. We can define the Fourier transform as

\mathcal{F}\{f(t)\} = \int_{-\infty}^{\infty} f(t)\, e^{-2\pi i \nu t}\, dt \qquad (4)

Then letting v = 2πν we have

\mathcal{F}\{f(t)\} = \int_{-\infty}^{\infty} f(t)\, e^{-ivt}\, dt \qquad (5)

This Fourier transform exists provided f ( t ) is an absolutely integrable function, i.e.

\int_{-\infty}^{\infty} |f(t)|\, dt < \infty \qquad (6)

As many functions do not satisfy condition (6), f(t) is multiplied by the exponential damping factor e^{-ut}, thus

\mathcal{F}\{f(t)\, e^{-ut}\} = \int_{-\infty}^{\infty} f(t)\, e^{-ivt}\, e^{-ut}\, dt \qquad (7)

and letting s = u + iv we obtain the two-sided Laplace transform of f(t) as

\mathcal{F}\{f(t)\, e^{-ut}\} = \mathcal{L}\{f(t)\} = \int_{-\infty}^{\infty} e^{-st} f(t)\, dt \qquad (8)

Le Page [17] notes that the integral given by (8) can be written in two parts as follows:

\int_{-\infty}^{\infty} e^{-st} f(t)\, dt = \int_{-\infty}^{0} e^{-st} f(t)\, dt + \int_{0}^{\infty} e^{-st} f(t)\, dt \qquad (9)

The second term on the RHS in the above expression is referred to as the one sided Laplace transform or simply the Laplace transform. Thus, s is defined as a complex variable in the definition of the Laplace transform.

As before the inverse Laplace transform is given as:

f(t) = \frac{1}{2\pi i} \int_{u - i\infty}^{u + i\infty} e^{st} F(s)\, ds \qquad (10)

With s = u + iv in (10) this leads to the result

f(t) = \frac{2 e^{ut}}{\pi} \int_{0}^{\infty} \left[ \operatorname{Re}\{F(u+iv)\} \cos(vt) - \operatorname{Im}\{F(u+iv)\} \sin(vt) \right] dv \qquad (11)

[8], which can be replaced by the cosine transform pair

\operatorname{Re}\{F(u+iv)\} = \int_{0}^{\infty} e^{-ut} f(t) \cos(vt)\, dt \qquad (12)

f(t) = \frac{2 e^{ut}}{\pi} \int_{0}^{\infty} \operatorname{Re}\{F(u+iv)\} \cos(vt)\, dv \qquad (13)

or by the sine transform pair

\operatorname{Im}\{F(u+iv)\} = -\int_{0}^{\infty} e^{-ut} f(t) \sin(vt)\, dt \qquad (14)

f(t) = -\frac{2 e^{ut}}{\pi} \int_{0}^{\infty} \operatorname{Im}\{F(u+iv)\} \sin(vt)\, dv \qquad (15)

Dubner and Abate [18] applied a trapezoid rule to (13) resulting in the Fourier series approximation,

f(t) \approx \frac{2 e^{ut}}{T} \left[ \frac{1}{2} F(u) + \sum_{k=1}^{\infty} \operatorname{Re}\left\{ F\!\left(u + \frac{k\pi i}{T}\right) \right\} \cos\!\left(\frac{k\pi t}{T}\right) \right] \qquad (16)

where f(t) is expanded in the interval 0 ≤ t < T. For faster computation Simon et al. [19] proposed the following version of (16).

f(t) \approx \frac{e^{ut}}{t} \left[ \frac{1}{2} F(u) + \sum_{k=1}^{\infty} \operatorname{Re}\left\{ F\!\left(u + \frac{k\pi i}{t}\right) \right\} (-1)^{k} \right] \qquad (17)

This series can be summed much faster than (16) as there are no cosines to compute [20]. This algorithm is relatively easy to implement with u being the only real varying parameter.
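A minimal sketch of (17) is given below. The damping parameter u, the truncation level K and the test transform are illustrative choices, and, consistent with the remarks above, standard double precision is used and no convergence acceleration is applied, so a large K is needed.

```python
# A minimal sketch of the truncated series (17); u and K are illustrative
# tuning choices, and no acceleration scheme is applied, so convergence is slow.
import numpy as np

def fourier_invert(F, t, u, K):
    """Approximate f(t) from Equation (17), truncated after K terms."""
    total = 0.5 * F(complex(u, 0.0)).real
    for k in range(1, K + 1):
        total += (-1) ** k * F(complex(u, k * np.pi / t)).real
    return np.exp(u * t) / t * total

if __name__ == "__main__":
    F = lambda s: 1.0 / (s + 1.0)       # transform of f(t) = exp(-t)
    t = 1.0
    u = 18.0 / (2 * t)                  # illustrative damping choice
    print(fourier_invert(F, t, u, K=5000), np.exp(-t))
```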

However, as Crump [8] points out, for the expression in (17) the transform F ( s ) must now be computed for a different set of s-values for each distinct t. Since this type of application occurs often in practice in which the numerical computations of F ( s ) is itself quite time consuming this may not be an economical inversion algorithm to use. These drawbacks to some extent can be overcome by using the fast Fourier transform techniques in [20] [21].

Crump [8] also extends this method to one with faster convergence by making use of the already computed imaginary parts. There are several other acceleration schemes for example, those outlined by Cohen [10], however these acceleration methods in general require the introduction of new parameters which for the purposes of this investigation we wish to avoid.

4.2. The Stehfest Algorithm

Davies and Martin [16] cite the Stehfest [5] algorithm as providing accurate results on a wide variety of test functions. Since that time, this algorithm has become widely used for inverting the Laplace Transform, being favoured due its reported accuracy and ease of implementation.

Here we give a brief overview of the evolution of the algorithm from a probability distribution function to the Gaver functional whose asymptotic expansion leads to an acceleration scheme which yields the algorithm in its most widely used form.

Gaver [6] investigated a method for obtaining numerical information on the time dependent behavior of stochastic processes which often arise in queuing theory. The investigation involved examining the properties of the three parameter class of density functions namely

p_{n,m}(a,t) = \frac{(n+m)!}{n!\,(m-1)!} \left(1 - e^{-at}\right)^{n} a\, e^{-mat} \qquad (18)

where n and m are positive integers.

After the binomial expansion of the term (1 − e^{−at})^n, Gaver went on to find the expectation E[f(T_{n,m})], where T_{n,m} is the random variable with density (18). From this Gaver was able to express the inverse Laplace transform in terms of the functional

f_{n,m}(t) = \frac{\ln 2}{t} \frac{(n+m)!}{n!\,(m-1)!} \sum_{k=0}^{n} \binom{n}{k} (-1)^{k} F\!\left( (k+m) \frac{\ln 2}{t} \right) \qquad (19)

With certain conditions on n and m, Gaver sets n = m and expresses (19) as

f_{n}(t) = \frac{\ln 2}{t} \frac{(2n)!}{n!\,(n-1)!} \sum_{k=0}^{n} \binom{n}{k} (-1)^{k} F\!\left( (k+n) \frac{\ln 2}{t} \right) \qquad (20)

While the expression in (20) can be used to successfully invert the Laplace transform for a large class of functions, its rate of convergence is slow [9] [10]. However, Gaver [6] has shown that (20), with t = \ln 2 / a, has the asymptotic expansion

f_{n}(t) \sim f\!\left( \frac{\ln 2}{a} \right) + \frac{\alpha_{1}}{n} + \frac{\alpha_{2}}{n^{2}} + \frac{\alpha_{3}}{n^{3}} + \cdots \qquad (21)

where the α_j's are constant coefficients in the asymptotic series. Hence (21) in the limit converges to

\lim_{n \to \infty} f_{n}(t) = f\!\left( \frac{\ln 2}{a} \right)

For the conditions on m and n and justification for the substitution for a referred to above see Gaver [6]. This asymptotic expansion provides scope for applying various acceleration techniques enabling a more viable application of the basic algorithm.
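The slow convergence of (20), and the cancellation inherent in its alternating binomial sum, are easily seen in practice. The sketch below evaluates the raw Gaver functional for F(s) = 1/(s + 1); the library, test transform and values of n are illustrative, and a generous precision setting is used because the binomial coefficients grow rapidly.

```python
# A sketch of the raw Gaver functional (20); the test transform and the values
# of n are illustrative.  Extended precision is used because the alternating
# binomial sum cancels heavily.
from mpmath import mp, mpf, binomial, log, exp

def gaver(F, t, n):
    """Gaver functional f_n(t) from Equation (20)."""
    a = log(2) / mpf(t)
    c = n * binomial(2 * n, n)                  # (2n)!/(n!(n-1)!)
    return a * c * sum((-1) ** k * binomial(n, k) * F((n + k) * a)
                       for k in range(n + 1))

if __name__ == "__main__":
    mp.dps = 60
    F = lambda s: 1 / (s + 1)                   # transform of f(t) = exp(-t)
    t = mpf(1)
    for n in (5, 10, 20, 40):
        print(n, gaver(F, t, n) - exp(-t))      # the error decays only like O(1/n)
```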

Much of the literature alludes to the fact that a Salzer [7] acceleration scheme is used on the Gaver functional in (20) which results in the Stehfest algorithm. In fact Stehfest’s approach was a little more subtle than a direct application of the Salzer acceleration.

4.2.1. Using Salzer Acceleration

The Salzer acceleration scheme makes use of the “Toeplitz limit theorem” [7]; “this concerns the convergence of a transformation of a sequence {S_n} where the (n+1)th member of the transformed sequence is a weighted mean of the first (n+1) terms”

\bar{S}_{n} = \sum_{k=0}^{n} \mu_{nk} S_{k} \qquad (22)

Here \bar{S}_n is the transformed sequence and S_k the original sequence, which for our purposes is the Gaver functional f_n(t) in (20). The Salzer means are given by

\mu_{nk} = (-1)^{n+k} \frac{(1+k)^{n}}{n!} \binom{n}{k} \qquad (23)

For the sake of compatibility with (22) we make the change k → i and n → k in (20). With this change of variables we also write

\frac{(2k)!}{k!\,(k-1)!} = \frac{k\,(2k)!}{k!\,k!}

This allows the sum to be taken from k = 0 to n without (−1)! appearing in the denominator of (20). So with Salzer acceleration we can express the Salzer-Gaver algorithm as

f_{n}(t) = \frac{\ln 2}{t} \sum_{k=0}^{n} (-1)^{n+k} \frac{(k+1)^{n}}{k!\,(n-k)!} \, \frac{k\,(2k)!}{k!\,k!} \sum_{i=0}^{k} \frac{k!}{i!\,(k-i)!} (-1)^{i} F\!\left\{ (k+i) \frac{\ln 2}{t} \right\} \qquad (24)
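A direct transcription of (24), i.e. the Salzer means (23) applied to the Gaver functionals (20), is sketched below; the helper names, the order n, the test transform and the precision setting are illustrative, and, as the results in Section 5 suggest, a multi-precision environment is needed because the weights grow rapidly with n.

```python
# A sketch of the Salzer-Gaver scheme (24): the Salzer means (23) applied to
# the Gaver functionals (20).  The order n, test transform and precision are
# illustrative choices.
from mpmath import mp, mpf, binomial, factorial, log, sin

def gaver(F, t, k):
    """Gaver functional f_k(t) from Equation (20); the k = 0 term is zero."""
    if k == 0:
        return mpf(0)
    a = log(2) / mpf(t)
    return a * k * binomial(2 * k, k) * sum((-1) ** i * binomial(k, i) * F((k + i) * a)
                                            for i in range(k + 1))

def salzer_gaver(F, t, n):
    """Weighted mean (22) of the Gaver functionals, with Salzer means (23)."""
    total = mpf(0)
    for k in range(n + 1):
        mu = (-1) ** (n + k) * mpf(k + 1) ** n / (factorial(k) * factorial(n - k))
        total += mu * gaver(F, t, k)
    return total

if __name__ == "__main__":
    mp.dps = 50
    F = lambda s: 1 / (s ** 2 + 1)              # transform of sin(t)
    t = mpf(1)
    print(salzer_gaver(F, t, n=16), sin(t))
```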

4.2.2. Stehfest’s Acceleration Scheme

For the purposes of following Stehfest’s derivation it will be convenient to rewrite (20) as

f_{n}(t) = F_{n} = \frac{(2n)!\, a}{n!\,(n-1)!} \sum_{k=0}^{n} \binom{n}{k} (-1)^{k} F\big( (k+n) a \big) \qquad (25)

Stehfest [5] begins by supposing we have N values of F, namely F(a), F(2a), F(3a), …, F(Na), for N even. Using (25) we can then determine N/2 values F_1, F_2, …, F_{N/2}. Now each of these N/2 values satisfies the asymptotic series in (21) with the same coefficients.

As Stehfest [5] points out, the α_j's are the same for each of the above expansions, and by using a suitable linear combination the first (N/2 − 1) error terms in (21) can be eliminated. That is

f\!\left( \frac{\ln 2}{a} \right) = \sum_{n=1}^{N/2} a_{n} F_{N/2 + 1 - n} + O\!\left( \frac{1}{N^{N/2}} \right) \qquad (26)

which may be achieved by selecting the coefficients to satisfy

\sum_{n=1}^{N/2} a_{n} \frac{1}{\left( \frac{N}{2} + 1 - n \right)^{k}} = \delta_{k,0}, \qquad k = 0, 1, \ldots, \frac{N}{2} - 1 \qquad (27)

which produce the same coefficients as the Salzer acceleration scheme used in (22). In fact for any n, Stehfest generates the required coefficients using what is in effect a modified Salzer acceleration scheme giving

a_{n} = \frac{(-1)^{n-1}}{\left( \frac{N}{2} \right)!} \binom{N/2}{n}\, n \left( \frac{N}{2} + 1 - n \right)^{\frac{N}{2} - 1} \qquad (28)

Finally, Stehfest substitutes these results into (26) and gets the inversion formula

f(t) \approx \frac{\ln 2}{t} \sum_{j=1}^{N} A_{j} F\!\left( j \frac{\ln 2}{t} \right) \qquad (29)

for N even and

A_{j} = (-1)^{\frac{N}{2} + j} \sum_{k=\lfloor (j+1)/2 \rfloor}^{\min(j, N/2)} \frac{k^{N/2} (2k)!}{\left( \frac{N}{2} - k \right)!\, k!\, (k-1)!\, (j-k)!\, (2k-j)!} \qquad (30)

However, a direct application of the modified Salzer acceleration scheme in (28) onto the Gaver functional in (25) does not produce the same results as the expression in (30), so the two schemes are not exactly equivalent.

To show this we consider the function sin(t), whose Laplace transform is

F(s) = \frac{1}{s^{2} + 1}

The eight weights produced by the Salzer acceleration for n = 8 are exactly the same for n = 18 in Stehfest’s modified Salzer acceleration scheme in (28).

However, Table 1 shows that for sin ( t ) with these same weights, the Salzer-Gaver scheme produces different results when compared to Stehfest’s scheme in (30). We believe this is due to Stehfest’s simplification of the Salzer-Gaver scheme to the expression in (30).

This simplification was necessary because, as we show in our results in Section 5, Stehfest's final expression in (30) is faster and works better in standard double precision. As the algorithm was developed in 1970, this would be far more efficient when taking into consideration the computing power available at the time. Again, as we show in Section 5, a direct application of a Salzer acceleration scheme onto the Gaver functional is only advantageous in a multi-precision environment.
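For reference, a minimal multi-precision sketch of Stehfest's final formula (29)-(30) is given below; the value of N, the precision setting and the test transform (that of sin(t), as in Table 1) are illustrative, and mpmath stands in for the Maple environment used for the results in Section 5.

```python
# A minimal sketch of Stehfest's formula (29)-(30); N, the precision setting
# and the test transform are illustrative choices.
from mpmath import mp, mpf, factorial, log, sin

def stehfest_weights(N):
    """Coefficients A_j from Equation (30); N must be even."""
    half = N // 2
    A = []
    for j in range(1, N + 1):
        s = mpf(0)
        for k in range((j + 1) // 2, min(j, half) + 1):
            s += (mpf(k) ** half * factorial(2 * k) /
                  (factorial(half - k) * factorial(k) * factorial(k - 1) *
                   factorial(j - k) * factorial(2 * k - j)))
        A.append((-1) ** (half + j) * s)
    return A

def stehfest_invert(F, t, N):
    """Approximate f(t) via Equation (29)."""
    A = stehfest_weights(N)
    a = log(2) / mpf(t)
    return a * sum(A[j - 1] * F(j * a) for j in range(1, N + 1))

if __name__ == "__main__":
    mp.dps = 64                                 # multi-precision
    F = lambda s: 1 / (s ** 2 + 1)              # transform of sin(t), as in Table 1
    t = mpf(1)
    print(stehfest_invert(F, t, N=32), sin(t))
```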

4.3. The Talbot Algorithm

Equations (4) to (8) showed that the Laplace transform can be seen as a Fourier transform of the function

e^{-ut} f(t), \quad t > 0 \qquad (31)

i.e.

\mathcal{F}\{ e^{-ut} f(t) \} = \mathcal{L}\{ f(t) \} = F(s) \qquad (32)

Table 1. Salzer-Gaver vs. Stehfest for sin(t).

Hence the Fourier transform inversion formula can be applied to recover the function thus:

\mathcal{F}^{-1}\{ F(s) \} = e^{-ut} f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(s)\, e^{ivt}\, dv \qquad (33)

as s = u + iv we have that ds = i\,dv and so

f(t) = \frac{1}{2\pi i} \int_{u - i\infty}^{u + i\infty} F(s)\, e^{st}\, ds \qquad (34)

This result provides a direct means of obtaining the inverse Laplace transform. In practice the integral in (34) is evaluated using a contour

\frac{1}{2\pi i} \int_{B} e^{st} F(s)\, ds \qquad (35)

with B here denoting the Bromwich contour [3]. The contour is chosen so that it encloses all the possible singularities of F ( s ) . The idea of the contour is introduced so that the Cauchy residue theorem can be used to evaluate the integral.

However, when f ( t ) is to be calculated using numerical quadrature it may be more appropriate to devise a new contour. To ensure that (35) is integrable we may wish to control the growth of the magnitude of the integrand e s t by moving the contour to the left and so giving the real part of s a large negative component [1] [22].

However, the deformed contour must not be allowed to pass through any singularities of F ( s ) . This is to ensure that the transform is analytic in the region to the right of B.

4.3.1. Derivation of the Fixed Talbot Contour

In the derivation that follows [1] and [22] are used as the primary basis for extending the explanation of the derivation of the Talbot algorithm for inverting the Laplace Transform.

Abate-Valko [1] began with the Bromwich inversion integral along the Bromwich contour B with the transform

F(s) = \frac{1}{s^{\alpha}}, \quad \alpha > 0 \qquad (36)

So f ( t ) can be expressed as

f(t) = \frac{1}{2\pi i} \int_{B} e^{t(s - a \ln s)}\, ds \qquad (37)

with a = α/t in (37). As Abate-Valko [1] indicated, numerically evaluating the integral in (37) is difficult due to the oscillatory nature of the integrand.

However, this evaluation can be achieved by deforming the contour B into a path of constant phase, thus eliminating the oscillations in the imaginary component. These paths of constant phase are also paths of steepest descent for the real part of the integrand [1] [22] [23].

There are in general a number of contours for which the imaginary component remains constant, so we choose one on which the real part attains a maximum on the interior (a saddle point), and this occurs where g′(s) = 0 at some point on the contour. At these saddle points Im{g(s)} = 0 [17], where

g(s) = s - a \ln s \qquad (38)

in (37). Thus we have

g'(s) = 1 - \frac{a}{s} \qquad (39)

So the stationary point occurs when s = a .

With s = u + i v we have

\operatorname{Im}\{ u + iv - a \ln(u + iv) \} = 0 \qquad (40)

Expressing u + iv as R e^{iθ} we have

\operatorname{Im}\{ (u - a \ln R) + i(v - a\theta) \} = 0 \qquad (41)

then

v = a\theta \qquad (42)

and as

\tan(\theta) = \frac{v}{u} \qquad (43)

then

u = a\theta \cot(\theta) \qquad (44)

[1].

With v = aθ, s can then be parameterized using (42) and (44) to give Talbot's contour

s(\theta) = a\theta \left( \cot(\theta) + i \right), \qquad -\pi < \theta < +\pi \qquad (45)

[4].

4.3.2. Conformal Mapping of the Talbot Contour

While the above parametrization can be used as a basis for inverting the Laplace transform we proceed with the algorithm’s development via a convenient conformal mapping as follows.

\cot\theta = \frac{ i \left( e^{i\theta} + e^{-i\theta} \right) }{ e^{i\theta} - e^{-i\theta} } \qquad (46)

Then

\theta \cot\theta + i\theta = \frac{ 2 i \theta }{ 1 - e^{-2 i \theta} } \qquad (47)

with z = 2iθ, (47) is equal to

\frac{z}{1 - e^{-z}} \qquad (48)

Hence (45) becomes,

\frac{a z}{1 - e^{-z}} \qquad (49)

The function

s(z) = \frac{a z}{1 - e^{-z}} \qquad (50)

maps the closed interval M = [−2πi, 2πi] on the imaginary axis of the z-plane onto the curve L in the s-plane, giving the integral,

f(t) = \frac{1}{2\pi i} \int_{L} F(s)\, e^{st}\, ds \qquad (51)

For the details of this transformation one can refer to the study of Logan [2].

Next we follow the procedure as adopted by Logan [2] for numerically integrating (51). With the change of variable (51) becomes

f(t) = \frac{1}{2\pi i} \int_{M} F(s(z))\, e^{s(z) t}\, s'(z)\, dz \qquad (52)

where

s'(z) = \frac{ a \left( 1 - e^{-z} - z e^{-z} \right) }{ \left( 1 - e^{-z} \right)^{2} } \qquad (53)

For convenience we write,

f(t) = \frac{1}{2\pi i} \int_{M} I(z, t)\, dz \qquad (54)

where

I(z, t) = F(s(z))\, e^{s(z) t}\, s'(z) \qquad (55)

The integral in (54) is then rotated by π/2 so that the interval of integration becomes the real interval [−2π, 2π]; we then use the trapezoid rule with n odd and w = −iz to obtain

f(t) \approx \frac{1}{n} \left\{ I(-2\pi i) + I(2\pi i) + 2 \sum_{j=1}^{n-1} I(i w_{j}) \right\} \qquad (56)

where

w_{j} = 2\pi \left( \frac{2j}{n} - 1 \right) \qquad (57)

and we note that I(−2πi) = I(2πi) = 0 [2].
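Putting (50)-(57) together gives the following sketch of Logan's quadrature; the parameter a (taken here as 2n/(5t), anticipating the choice discussed in Section 4.3.3), the number of nodes n and the precision setting are illustrative, and the real part is taken at the end since the imaginary contributions cancel by conjugate symmetry.

```python
# A sketch of the trapezoid-rule evaluation (56)-(57) on the mapped contour
# (50), with the derivative (53); the choices of a, n and the precision are
# illustrative.
from mpmath import mp, mpf, mpc, exp, pi

def logan_invert(F, t, n, a):
    """Approximate f(t) by (56) with the nodes (57); the endpoint terms vanish."""
    s = lambda z: a * z / (1 - exp(-z))                                   # Equation (50)
    ds = lambda z: a * (1 - exp(-z) - z * exp(-z)) / (1 - exp(-z)) ** 2   # Equation (53)
    I = lambda z: F(s(z)) * exp(s(z) * t) * ds(z)                         # Equation (55)
    total = mpf(0)
    for j in range(1, n):
        w = 2 * pi * (mpf(2 * j) / n - 1)                                 # w_j from (57)
        total += I(mpc(0, w))                                             # z = i*w_j
    return (2 * total / n).real

if __name__ == "__main__":
    mp.dps = 60
    t = mpf(1)
    n = 51                                                                # n odd, as in the text
    F = lambda s: 1 / (s + 1) ** 2                                        # transform of t*exp(-t)
    print(logan_invert(F, t, n, a=2 * mpf(n) / (5 * t)), t * exp(-t))
```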

4.3.3. Valko

Abate-Valko [1] deformed the Bromwich contour using the Talbot path which has the form,

s(h) = r h \left( \cot(h) + i \right), \qquad -\pi < h < \pi \qquad (58)

which is the same as (45) with a = r and h = θ.

So we have

s'(h) = i r \left( 1 + i \sigma(h) \right) \qquad (59)

where,

\sigma(h) = h + \left( h \cot(h) - 1 \right) \cot(h) \qquad (60)

Then from (52) we find,

f(t) = \frac{r}{\pi} \int_{0}^{\pi} \operatorname{Re}\left[ e^{t s(h)} F(s(h)) \left( 1 + i \sigma(h) \right) \right] dh \qquad (61)

They then approximate the value of the integral in (61) by using the trapezoidal rule with step size π/M and h_k = kπ/M,

f(t, M) = \frac{r}{M} \left[ \frac{1}{2} F(r)\, e^{r t} + \sum_{k=1}^{M-1} \operatorname{Re}\left[ e^{t s(h_{k})} F(s(h_{k})) \left( 1 + i \sigma(h_{k}) \right) \right] \right] \qquad (62)

Based on numerical experiments, Abate-Valko then fixed the parameter r to the value,

r = \frac{2M}{5t} \qquad (63)

[1]. We also use this value for a in Logan’s transformation.
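The complete fixed-Talbot scheme (58)-(63) is summarized in the following sketch; M, the precision setting and the test transform are illustrative choices (Abate-Valko couple the number of precision digits to M, as noted in Section 5).

```python
# A minimal sketch of the fixed-Talbot scheme (58)-(63); M, the precision and
# the test transform are illustrative choices.
from mpmath import mp, mpf, mpc, cot, exp, pi, sin

def fixed_talbot(F, t, M):
    """Approximate f(t) by Equation (62) with r fixed by Equation (63)."""
    r = 2 * mpf(M) / (5 * t)                        # Equation (63)
    total = mpf(0.5) * F(r) * exp(r * t)            # the k = 0 term
    for k in range(1, M):
        h = k * pi / M                              # h_k = k*pi/M
        s = r * h * (cot(h) + mpc(0, 1))            # Talbot path, Equation (58)
        sigma = h + (h * cot(h) - 1) * cot(h)       # Equation (60)
        total += (exp(t * s) * F(s) * (1 + mpc(0, 1) * sigma)).real
    return r * total / M

if __name__ == "__main__":
    mp.dps = 50
    F = lambda s: 1 / (s ** 2 + 1)                  # transform of sin(t)
    t = mpf(1)
    print(fixed_talbot(F, t, M=40), sin(t))
```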

5. Results

We tested the five algorithms on the functions listed in Table 2 and Table 3 below.

Table 2. Test functions.

Table 3. Test functions continued.

Functions 1 - 11 and 18 are taken from the 16 functions tested by Davies and Martin [16]. The remaining functions are selected from those tested by Abate-Valko [1].

The first set of tests were carried out using 16 digits double precision. These results are shown in Table 4. The Fourier, Logan and Abate-Valko schemes were run with weights M = 50 , M = 100 and M = 200 , however for brevity we include only the result for M = 200 .

For the Stehfest and the Salzer-Gaver algorithms best results were obtained with weights of M = 16 and M = 8 respectively. This is in keeping with Stehfest’s observations on the instability of this method as M increases above an optimal level [5].

In multi-precision, the number of precision digits N for Abate-Valko was set equal to N = M [1], and for the Salzer-Gaver and Stehfest schemes, best results were obtained when the number of precision digits was set equal to N = 2M. The accuracy of each reconstruction is measured using the error norms

L = \sqrt{ \frac{1}{30} \sum_{i=1}^{30} \left[ f(t_{i}) - \tilde{f}(t_{i}) \right]^{2} } \qquad (64)

and

Table 4. Standard double precision.

L_{e} = \sqrt{ \frac{ \sum_{i=1}^{30} \left[ f(t_{i}) - \tilde{f}(t_{i}) \right]^{2} e^{-t_{i}} }{ \sum_{i=1}^{30} e^{-t_{i}} } } \qquad (65)

where f(t) is the analytical solution and \tilde{f}(t) is the numerical solution. Hence L is the root-mean-square error and L_e is the same as L but weighted by the factor e^{-t} [14].
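The two norms translate directly into code; the small sketch below assumes the analytical solution and the numerical reconstruction have already been sampled at the same 30 points t_i (the sampling points themselves are not specified here).

```python
# A small sketch of the error measures (64) and (65); the sampled values are
# assumed to be supplied by the caller.
import numpy as np

def error_norms(f_vals, f_tilde_vals, t_vals):
    """Return (L, L_e): the RMS error (64) and its exp(-t)-weighted form (65)."""
    diff2 = (np.asarray(f_vals) - np.asarray(f_tilde_vals)) ** 2
    w = np.exp(-np.asarray(t_vals))
    L = np.sqrt(diff2.mean())
    L_e = np.sqrt(np.sum(diff2 * w) / np.sum(w))
    return L, L_e
```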

All computations were done using a 64-bit operating system with an Intel(R) Core(TM) i7-8550U CPU. The algorithms were implemented in Maple 2018, using Maple's Digits command to set the required precision.

5.1. Standard Double Precision

Table 4 and Table 5 show that when compared with the other four algorithms, the Fourier series method performs with the least accuracy on all the functions tested. It also fails to reconstruct functions 8, 15, 17 and 18, with poor results for functions 4, 5 and 12.

However, for the functions which it successfully reconstructs, it does so with an RMS error of between L = 3.6(−5) and 1.2(−2). We believe that this scheme will improve greatly when an acceleration scheme is applied. This is an issue we intend to investigate in our future work.

With the exception of function 7, J_0(t), Logan's algorithm successfully inverts all the functions given in Table 2 and Table 3 with very good accuracy. We found that in SDP best results are obtained by setting a = 1 in (50). Table 4 and Table 5 show that for these functions the RMS error varies between 3.6(−8) and 8.4(−12).

Except for function 7, J_0(t), the Abate-Valko scheme successfully inverts all the functions in Table 2 and Table 3. Moreover, it does so with greater

Table 5. Standard double precision continued.

Table 6. Multi-Precision N = 200.

Table 7. Multi-Precision continued N = 200.

accuracy than the Logan scheme. The tables show that the RMS error varied between 6.5(−11) and 6.2(−12).

Table 4 and Table 5 show that the Stehfest algorithm has poor accuracy when inverting functions 1, 7, 10 and 11. For these functions the RMS error varies between 2.01(−2) and 9.2(−3). Its poor performance is due to the fact that the Stehfest algorithm has difficulty inverting functions of a cyclic nature [5]. However, it inverts the remaining functions with good accuracy, with an RMS error of between 0 and 2.9(−5).

Table 4 and Table 5 also show that the Salzer-Gaver algorithm has poor accuracy for functions 1, 7, 10 and 11. These are the very same functions that the Stehfest algorithm has problems inverting. Again this is due to the difficulties it encounters when inverting cyclic functions. It inverts the remaining functions with less accuracy than the Stehfest, with an RMS error varying between 10(−15) and 10(−5).

5.2. Multi-Precision

With the exception of function 7, the Logan and Abate-Valko algorithms successfully inverted the remaining functions to a high degree of accuracy, see Table 6 and Table 7. Duffy [14] also remarks that when using the Talbot contour he had difficulties accurately inverting the Bessel function. This may be related to the combination of the singularity on the imaginary axis and the branching nature of the square root function.

Abate-Valko [1] stated that they were able to overcome this by increasing the weights and hence the precision as a function of t. However, we were unable to replicate their results for this function.

Overall, the Abate-Valko scheme showed far greater accuracy than Logan's across all the functions tested. However, Logan's algorithm was still able to produce highly accurate results, with RMS errors varying between 10(−60) and 10(−63). Moreover, Table 8 shows that Logan's scheme was able to perform the inversion of these functions with shorter elapsed times.

The Stehfest and Salzer-Gaver algorithms were able to invert all the functions to a high degree of accuracy. The Salzer-Gaver scheme was in general about twice as accurate as the Stehfest algorithm, which was itself less accurate than Abate-Valko's scheme. Nevertheless, the Stehfest scheme inverted the functions

Table 8. Elapsed time τ for t = 100 .

well within any generally desired accuracy, with the RMS error varying from 0.0 to 10(−41). Moreover, as Table 8 shows, in terms of elapsed time it was the fastest of all the algorithms, for the most part twice as fast as the Abate-Valko scheme, which in turn was at least twice as fast as the Salzer-Gaver scheme.

6. Conclusions

In standard double precision, the Abate-Valko algorithm provides the best results for the numerical reconstruction of the functions tested in this paper. The Fourier algorithm had the worst performance of the five algorithms tested. Both the Stehfest and Salzer-Gaver algorithms had difficulty reconstructing functions of a cyclic nature. None of the algorithms was able to invert the J_0(t) function accurately.

In multi-precision, the Stehfest and the Salzer-Gaver schemes inverted all the functions with high accuracy. The Logan and Abate-Valko schemes were only able to invert J_0(t) with limited accuracy; however, they were both able to reconstruct all the other functions with a high degree of accuracy. The most accurate algorithm in multi-precision was the Salzer-Gaver scheme. However, as Table 8 shows, it also had the longest elapsed times. On the other hand, the Stehfest algorithm had the shortest elapsed times for the selected functions in Table 8. Of the two contour-deformation schemes, Abate-Valko was the more accurate, but Logan could reconstruct the functions with shorter elapsed times. Therefore we conclude that when working in standard precision the Abate-Valko algorithm performed best. However, in multi-precision, the Stehfest algorithm is best, as it inverted all the functions with a high degree of accuracy and with the shortest elapsed times.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Abate, J. and Valkó, P.P. (2004) Multi-Precision Laplace Transform Inversion. International Journal for Numerical Methods in Engineering, 60, 979-993.
https://doi.org/10.1002/nme.995
[2] Logan, J.D. (2013) Transport Modeling in Hydrogeochemical Systems, Volume 15. Springer Science & Business Media, Berlin.
[3] Spiegel, M.R. (1965) Schaum’s Outline of Theory and Problems of Laplace Transforms. McGraw-Hill, New York.
[4] Talbot, A. (1979) The Accurate Numerical Inversion of Laplace Transforms. IMA Journal of Applied Mathematics, 23, 97-120.
https://doi.org/10.1093/imamat/23.1.97
[5] Stehfest, H. (1970) Algorithm 368: Numerical Inversion of Laplace Transforms [D5]. Communications of the ACM, 13, 47-49.
https://doi.org/10.1145/361953.361969
[6] Gaver Jr., D.P. (1966) Observing Stochastic Processes, and Approximate Transform Inversion. Operations Research, 14, 444-459.
https://doi.org/10.1287/opre.14.3.444
[7] Wimp, J. (1981) Sequence Transformations and Their Applications. Academic Press, Cambridge.
[8] Crump, K.S. (1976) Numerical Inversion of Laplace Transforms Using a Fourier Series Approximation. Journal of the ACM (JACM), 23, 89-96.
https://doi.org/10.1145/321921.321931
[9] Davies, A. and Crann, D. (2004) A Handbook of Essential Mathematical Formulae. University of Hertfordshire Press, Hatfield.
[10] Cohen, A.M. (2007) Numerical Methods for Laplace Transform Inversion, Volume 5. Springer Science & Business Media, Berlin.
[11] Epstein, C.L. and Schotland, J. (2008) The Bad Truth about Laplace’s Transform. SIAM Review, 50, 504-520.
https://doi.org/10.1137/060657273
[12] Kuhlman, K.L. (2012) Comparison of Inverse Laplace Transform Algorithms for Laplace-Space Numerical Approaches. Technical Report, Sandia National Laboratories, Albuquerque.
[13] Defreitas, C.L. and Kane, S.J. (2018) The Noise Handling Properties of the Talbot Algorithm for Numerically Inverting the Laplace Transform. Journal of Algorithms & Computational Technology, 13, 1-14.
https://doi.org/10.1177/1748301818797069
[14] Duffy, D.G. (1993) On the Numerical Inversion of Laplace Transforms: Comparison of Three New Methods on Characteristic Problems from Applications. ACM Transactions on Mathematical Software (TOMS), 19, 333-359.
https://doi.org/10.1145/155743.155788
[15] Narayanan, G.V. and Beskos, D.E. (1982) Numerical Operational Methods for Timedependent Linear Problems. International Journal for Numerical Methods in Engineering, 18, 1829-1854.
https://doi.org/10.1002/nme.1620181207
[16] Davies, B. and Martin, B. (1979) Numerical Inversion of the Laplace Transform: A Survey and Comparison of Methods. Journal of Computational Physics, 33, 1-32.
https://doi.org/10.1016/0021-9991(79)90025-1
[17] Le Page, W.R. (1980) Complex Variables and the Laplace Transform for Engineers. Courier Corporation, North Chelmsford, MA.
[18] Dubner, H. and Abate, J. (1968) Numerical Inversion of Laplace Transforms by Relating Them to the Finite Fourier Cosine Transform. Journal of the ACM (JACM), 15, 115-123.
https://doi.org/10.1145/321439.321446
[19] Simon, R.M., Stroot, M.T. and Weiss, G.H. (1972) Numerical Inversion of Laplace Transforms with Application to Percentage Labeled Mitoses Experiments. Computers and Biomedical Research, 5, 596-607.
https://doi.org/10.1016/0010-4809(72)90039-0
[20] Cooley, J.W. and Tukey, J.W. (1965) An Algorithm for the Machine Calculation of Complex Fourier Series. Mathematics of Computation, 19, 297-301.
https://doi.org/10.1090/S0025-5718-1965-0178586-1
[21] Cooley, J.W., Lewis, P.A.W. and Welch, P.D. (1970) The Fast Fourier Transform Algorithm: Programming Considerations in the Calculation of Sine, Cosine and Laplace Transforms. Journal of Sound and Vibration, 12, 315-337.
https://doi.org/10.1016/0022-460X(70)90075-1
[22] Murli, A. and Rizzardi, M. (1990) Algorithm 682: Talbot’s Method of the Laplace Inversion Problems. ACM Transactions on Mathematical Software (TOMS), 16, 158-168.
https://doi.org/10.1145/78928.78932
[23] Bender, C.M. and Orszag, S.A. (2013) Advanced Mathematical Methods for Scientists and Engineers I: Asymptotic Methods and Perturbation Theory. Springer Science & Business Media, Berlin.
