The Numerical Inversion of the Laplace Transform in a Multi-Precision Environment

Colin L. Defreitas, Steve J. Kane

School of Physics, Astronomy and Mathematics, University of Hertfordshire, Hertfordshire, UK.

**DOI:** 10.4236/am.2022.135027


This paper examines the performance of five algorithms for numerically inverting the Laplace transform in standard 16-digit and multi-precision environments. The algorithms are taken from three of the four main classes of numerical methods used to invert the Laplace transform. Because the numerical inversion of the Laplace transform is a perturbed problem, rounding errors generated in numerical approximations can adversely affect the accurate reconstruction of the inverse transform. This paper demonstrates that working in a multi-precision environment can substantially reduce these errors and the perturbations that arise in transforming the data from the *s*-space into the time domain, thereby overcoming the main drawback of numerically inverting the Laplace transform. Our main finding is that both the Talbot and the accelerated Gaver functionals perform considerably better in a multi-precision environment, increasing the advantages of using Laplace transform methods over time-stepping procedures in solving diffusion and, more generally, parabolic partial differential equations.


Defreitas, C. and Kane, S. (2022) The Numerical Inversion of the Laplace Transform in a Multi-Precision Environment. *Applied Mathematics*, **13**, 401-418. doi: 10.4236/am.2022.135027.

1. Introduction

This paper examines the performance of five algorithms for numerically inverting the Laplace transform in standard 16-digit and multi-precision environments. The algorithms, whose derivations are outlined in Section 4, are taken from three of the four main classes of numerical methods used to invert the Laplace transform [1].

The Abate-Valko [1] and Logan schemes [2] belong to the class of inversion algorithms which deform the Bromwich contour [3]. They are closely related versions of this approach as they both use Talbot’s method for deformation of the contour [4]. Logan, however, chooses an exponential transform while Abate-Valko extends the original Talbot formulation expressing the contour in trigonometric form.

The Stehfest and Salzer-Gaver algorithms [5] are again two closely related schemes, based on the acceleration of the Gaver functional [6]. Stehfest applied a modified Salzer acceleration scheme [7] to the Gaver functional, simplifying the result to yield one of the most widely used algorithms for inverting the Laplace transform. We find, however, that when we implement a direct application of the Salzer acceleration scheme to the Gaver functional (Salzer-Gaver), with Stehfest’s modifications, we do not obtain the same results as those produced by the Stehfest scheme. We conclude that Stehfest’s simplification process is at least in part responsible for the differences in performance of these two versions.

Finally, we examine the Fourier series method [8], which expresses the inversion integral as a Fourier series and then uses the trapezium rule to evaluate the integral numerically. The Fourier series method differs from the other four algorithms in that no acceleration scheme is used to force convergence. As such, it is only used in a standard 16-digit precision environment and is compared with the other four schemes using standard precision.

2. The Laplace Transform

The Laplace transform is an integral transform defined as follows:

Let $f\left(t\right)$ be defined for $t\ge 0$, then the Laplace transform of $f\left(t\right)$ is given by,

$\mathcal{L}\left\{f\left(t\right)\right\}={\displaystyle {\int}_{0}^{\infty}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}f\left(t\right){\text{e}}^{-st}\text{d}t$ (1)

Thus
$\mathcal{L}\left\{f\left(t\right)\right\}$ is a function of *s* denoted as
$F\left(s\right)$. The Laplace transform can be shown to exist for any function which can be integrated over any finite interval
$0<t<l$ for
$l>0$, and for which
$f\left(t\right)$ is of exponential order, *i.e*.

$\left|f\left(t\right)\right|<M{\text{e}}^{at}$ (2)

as
$t\to \infty $, where
$M>0$ is a finite real number and *a* is a real constant.

Analytically the inverse Laplace transform is usually obtained using the techniques of complex contour integration with the resulting set of standard transforms presented in tables [9].

However, using the Laplace transform can generate data in the Laplace domain which is not easily invertible to the real domain by analytical means. Thus numerical inversion techniques have to be used to convert the data from the *s*-space to the time domain.

3. The Inverse Laplace Transform, Perturbation and Multi-Precision

The recovery of the function $f\left(t\right)$ is via the inverse Laplace transform which is most commonly defined via the Bromwich contour integral,

${\mathcal{L}}^{-1}\left\{F\left(s\right)\right\}=f\left(t\right)=\frac{1}{2\pi i}{\displaystyle {\int}_{\alpha -i\infty}^{\alpha +i\infty}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}F\left(s\right){\text{e}}^{st}\text{d}s$ (3)

such that
$\alpha \in \mathbb{R}$. The inversion integral is widely known to be highly perturbed [10] [11] [12]. This is due to the
${\text{e}}^{st}$ term in the integral, which amplifies small changes in the input data. Hence all numerical schemes are vulnerable to this perturbation, and this has to be taken into account when using the various algorithms to invert the Laplace transform.

As Abate-Valko noted [1], “In the traditional development of the inversion methods, most of the effort was directed at controlling round-off errors. This is because the process is numerically unstable in a fixed-precision, computing environment. The problem is that as the user tries to increase the accuracy there comes a point where the round off error causes the error to increase dramatically”.

In fact Abate-Valko went further and made the claim that “for our purposes, we add the proviso that values of the transform can be computed to any desired precision as a function of the complex variable”.

This suggests that working in a multi-precision environment can act to reduce errors and the resulting perturbations which exist in transforming the data from the *s*-space into the time domain.

4. The Algorithms

Large parts of this section are taken from our earlier work [13]. This is necessary to provide sufficient background on the derivation of the algorithms and their performance. However, we extend this work to include comments on Salzer acceleration and the subsequent Salzer Gaver scheme’s derivation and compare this to the Stehfest inversion scheme.

There are over 100 algorithms available for inverting the Laplace transform, with numerous comparative studies; examples include Duffy [14], Narayanan and Beskos [15], Cohen [10] and, perhaps the most comprehensive, Davies and Martin [16]. However, for the purposes of this investigation we apply our tests using “those algorithms that have passed the test of time” [1]. These fall into four groups,

1) Fourier series expansion.

2) Combination of Gaver Functionals.

3) Laguerre function expansion.

4) Deformation of the Bromwich contour.

4.1. The Fourier Series Method

In their survey of algorithms for inverting the Laplace transform, Davies and Martin [16] note that the Fourier series method without accelerated convergence gives good accuracy on a wide variety of functions. Since the Laplace Transform is closely related to the Fourier transform it is not surprising that inversion methods based on a Fourier series expansion would yield accurate results. In fact the two sided Laplace transform can be derived from the Fourier transform in the following way. We can define the Fourier transform as

$\mathcal{F}\left\{f\left(t\right)\right\}={\displaystyle {\int}_{-\infty}^{\infty}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}f\left(t\right){\text{e}}^{-2\pi i\nu t}\text{d}t$ (4)

Then letting $v=2\pi \nu $ we have

$\mathcal{F}\left\{f\left(t\right)\right\}={\displaystyle {\int}_{-\infty}^{\infty}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}f\left(t\right){\text{e}}^{-ivt}\text{d}t$ (5)

This Fourier transform exists provided
$f\left(t\right)$ is an absolutely integrable function, *i.e*.

${\displaystyle {\int}_{-\infty}^{\infty}}\left|f\left(t\right)\right|\text{d}t<\infty $ (6)

As many functions do not satisfy condition (6), $f\left(t\right)$ is multiplied by the exponential damping factor ${\text{e}}^{-ut}$ thus

$\mathcal{F}\left\{f\left(t\right){\text{e}}^{-ut}\right\}={\displaystyle {\int}_{-\infty}^{\infty}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}f\left(t\right){\text{e}}^{-ivt}{\text{e}}^{-ut}\text{d}t$ (7)

and letting $s=u+iv$ we obtain the two sided Laplace Transform of $f\left(t\right)$ as

$\mathcal{F}\left\{f\left(t\right){\text{e}}^{-ut}\right\}=\mathcal{L}\left\{f\left(t\right)\right\}={\displaystyle {\int}_{-\infty}^{\infty}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\text{e}}^{-st}f\left(t\right)\text{d}t$ (8)

Le Page [17] notes that the integral given by (8) can be written in two parts as follows:

${\displaystyle {\int}_{-\infty}^{\infty}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\text{e}}^{-st}f\left(t\right)\text{d}t={\displaystyle {\int}_{-\infty}^{0}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\text{e}}^{-st}f\left(t\right)\text{d}t+{\displaystyle {\int}_{0}^{\infty}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\text{e}}^{-st}f\left(t\right)\text{d}t$ (9)

The second term on the RHS in the above expression is referred to as the one sided Laplace transform or simply the Laplace transform. Thus, *s* is defined as a complex variable in the definition of the Laplace transform.

As before the inverse Laplace transform is given as:

$f\left(t\right)=\frac{1}{2\pi i}{\displaystyle {\int}_{u-i\infty}^{u+i\infty}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\text{e}}^{st}F\left(s\right)\text{d}s$ (10)

With $s=u+iv$ in (10) this leads to the result

$f\left(t\right)=\frac{2{\text{e}}^{ut}}{\pi}{\displaystyle {\int}_{0}^{\infty}}\left[\mathrm{Re}\left\{F\left(u+iv\right)\right\}\mathrm{cos}\left(vt\right)-\mathrm{Im}\left\{F\left(u+iv\right)\right\}\mathrm{sin}\left(vt\right)\right]\text{d}v$ (11)

[8], which can be replaced by the cosine transform pair

$\mathrm{Re}\left\{F\left(u+iv\right)\right\}={\displaystyle {\int}_{0}^{\infty}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\text{e}}^{-ut}f\left(t\right)\mathrm{cos}\left(vt\right)\text{d}t$ (12)

$f\left(t\right)=\frac{2{\text{e}}^{ut}}{\pi}{\displaystyle {\int}_{0}^{\infty}}\mathrm{Re}\left\{F\left(u+iv\right)\right\}\mathrm{cos}\left(vt\right)\text{d}v$ (13)

or by the sine transform pair

$\mathrm{Im}\left\{F\left(u+iv\right)\right\}=-{\displaystyle {\int}_{0}^{\infty}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\text{e}}^{-ut}f\left(t\right)\mathrm{sin}\left(vt\right)\text{d}t$ (14)

$f\left(t\right)=-\frac{2{\text{e}}^{ut}}{\pi}{\displaystyle {\int}_{0}^{\infty}}\mathrm{Im}\left\{F\left(u+iv\right)\right\}\mathrm{sin}\left(vt\right)\text{d}v$ (15)

Dubner and Abate [18] applied a trapezoid rule to (13) resulting in the Fourier series approximation,

$f\left(t\right)\approx \frac{2{\text{e}}^{ut}}{T}\left[\frac{1}{2}F\left(u\right)+{\displaystyle \underset{k=1}{\overset{\infty}{\sum}}}\mathrm{Re}\left\{F\left(u+\frac{k\pi i}{T}\right)\right\}\mathrm{cos}\left(\frac{k\pi t}{T}\right)\right]$ (16)

where
$f\left(t\right)$ is expanded in the interval
$0\le t<T$. For faster computation Simon *et al*. [19] proposed the following version of (16).

$f\left(t\right)\approx \frac{{\text{e}}^{ut}}{t}\left[\frac{1}{2}F\left(u\right)+{\displaystyle \underset{k=1}{\overset{\infty}{\sum}}}\mathrm{Re}\left\{F\left(u+\frac{k\pi i}{t}\right)\right\}{\left(-1\right)}^{k}\right]$ (17)

This series can be summed much faster than (16) as there are no cosines to compute [20]. This algorithm is relatively easy to implement with *u* being the only real varying parameter.
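As a concrete illustration, the series in (17) takes only a few lines to implement. The sketch below (Python, double precision) is our own minimal rendering, not code from the paper: the damping parameter $u=4/t$ and the truncation after $M$ terms are illustrative choices, and the test transform $F(s)=1/(s+1)$, whose inverse is $f(t)={\text{e}}^{-t}$, is ours.

```python
import math

def fourier_inverse(F, t, u=None, M=200):
    """Approximate f(t) from the unaccelerated series (17).

    F -- the Laplace transform, callable on complex s
    u -- damping parameter (illustrative default: 4/t)
    M -- number of terms kept from the infinite sum
    """
    if u is None:
        u = 4.0 / t                       # assumed choice, not prescribed by the paper
    total = 0.5 * F(complex(u, 0.0)).real # the (1/2) F(u) term
    for k in range(1, M + 1):
        s = complex(u, k * math.pi / t)   # s = u + k*pi*i/t
        total += ((-1) ** k) * F(s).real  # (-1)^k replaces the cosines of (16)
    return (math.exp(u * t) / t) * total

# Example: F(s) = 1/(s+1)  ->  f(t) = e^{-t}
approx = fourier_inverse(lambda s: 1.0 / (s + 1.0), t=1.0, M=2000)
```

Because the summand decays only like $1/{k}^{2}$, many terms are needed for even modest accuracy, which is why the text treats this scheme as a baseline rather than a competitor to the accelerated methods.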

However, as Crump [8] points out, for the expression in (17) the transform
$F\left(s\right)$ must now be computed for a different set of *s*-values for each distinct *t*. Since this type of application occurs often in practice in which the numerical computations of
$F\left(s\right)$ is itself quite time consuming this may not be an economical inversion algorithm to use. These drawbacks to some extent can be overcome by using the fast Fourier transform techniques in [20] [21].

Crump [8] also extends this method to one with faster convergence by making use of the already computed imaginary parts. There are several other acceleration schemes for example, those outlined by Cohen [10], however these acceleration methods in general require the introduction of new parameters which for the purposes of this investigation we wish to avoid.

4.2. The Stehfest Algorithm

Davies and Martin [16] cite the Stehfest [5] algorithm as providing accurate results on a wide variety of test functions. Since that time, this algorithm has become widely used for inverting the Laplace Transform, being favoured due its reported accuracy and ease of implementation.

Here we give a brief overview of the evolution of the algorithm from a probability distribution function to the Gaver functional whose asymptotic expansion leads to an acceleration scheme which yields the algorithm in its most widely used form.

Gaver [6] investigated a method for obtaining numerical information on the time dependent behavior of stochastic processes which often arise in queuing theory. The investigation involved examining the properties of the three parameter class of density functions namely

${p}_{n,m}\left(a,t\right)=\frac{\left(n+m\right)!}{n!\left(m-1\right)!}{\left(1-{\text{e}}^{-at}\right)}^{n}a{\text{e}}^{-mat}$ (18)

with $n\mathrm{,}m\in \mathbb{N}$.

After the binomial expansion of the term ${\left(1-{\text{e}}^{-at}\right)}^{n}$, Gaver went on to find the expectancy $E\left[f\left({T}_{n\mathrm{,}m}\right)\right]$, where ${T}_{n\mathrm{,}m}$ is the random variable with density (18). From this Gaver was able to express the inverse Laplace transform in terms of the functional

${f}_{n,m}\left(t\right)=\frac{\mathrm{ln}2}{t}\frac{\left(n+m\right)!}{n!\left(m-1\right)!}{\displaystyle \underset{k=0}{\overset{n}{\sum}}}\left(\begin{array}{c}n\\ k\end{array}\right){\left(-1\right)}^{k}F\left(\left(k+m\right)\frac{\mathrm{ln}2}{t}\right)$ (19)

with certain conditions on *n* and *m*. Gaver sets
$n=m$ and expresses (19) as

${f}_{n}\left(t\right)=\frac{\mathrm{ln}2}{t}\frac{\left(2n\right)!}{n!\left(n-1\right)!}{\displaystyle \underset{k=0}{\overset{n}{\sum}}}\left(\begin{array}{c}n\\ k\end{array}\right){\left(-1\right)}^{k}F\left(\left(k+n\right)\frac{\mathrm{ln}2}{t}\right)$ (20)

While the expression in (20) can be used to successfully invert the Laplace transform for a large class of functions its rate of convergence is slow [9] [10]. However Gaver [6] has shown that (20), with $t=\frac{\mathrm{ln}2}{a}$ has the asymptotic expansion

${f}_{n}\left(t\right)\approx f\left(\frac{\mathrm{ln}2}{a}\right)+\frac{{\alpha}_{1}}{n}+\frac{{\alpha}_{2}}{{n}^{2}}+\frac{{\alpha}_{3}}{{n}^{3}}+\cdots $ (21)

where the ${\alpha}_{j}$ ’s are constant coefficients in the asymptotic series. Hence (21) in the limit converges to

$\underset{n\to \infty}{\mathrm{lim}}{f}_{n}\left(t\right)=f\left(\frac{\mathrm{ln}2}{a}\right)$

For the conditions on *m* and *n* and justification for the substitution for *a* referred to above see Gaver [6]. This asymptotic expansion provides scope for applying various acceleration techniques enabling a more viable application of the basic algorithm.
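To see how slowly the raw functional converges, (20) can be evaluated directly. The sketch below (Python) is our illustration; the test transform $F(s)=1/(s+1)$, with inverse ${\text{e}}^{-t}$, is an assumption, not one of the paper's test functions specifically.

```python
import math

def gaver(F, t, n):
    """Unaccelerated Gaver approximant f_n(t) of eq. (20)."""
    a = math.log(2.0) / t
    # (2n)!/(n!(n-1)!) is an integer, so // is exact here
    c = math.factorial(2 * n) // (math.factorial(n) * math.factorial(n - 1))
    s = sum((-1) ** k * math.comb(n, k) * F((k + n) * a) for k in range(n + 1))
    return a * c * s

# Errors at t = 1 shrink only like O(1/n), in line with (21)
errs = [abs(gaver(lambda s: 1.0 / (s + 1.0), 1.0, n) - math.exp(-1.0))
        for n in (4, 8, 16)]
```

Doubling *n* roughly halves the error, exactly the $O\left(1/n\right)$ behaviour of the leading term in (21); this is what makes acceleration of the sequence worthwhile.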

Much of the literature alludes to the fact that a Salzer [7] acceleration scheme is used on the Gaver functional in (20) which results in the Stehfest algorithm. In fact Stehfest’s approach was a little more subtle than a direct application of the Salzer acceleration.

4.2.1. Using Salzer Acceleration

The Salzer acceleration scheme makes use of the “Toeplitz limit theorem” [7], “this concerns the convergence of a transformation of a sequence ${\zeta}_{s}$ where the $\left(n+1\right)$ th member of the transformed sequence is a weighted mean of the first $\left(n+1\right)$ terms”

${\stackrel{\xaf}{S}}_{n}={\displaystyle \underset{k=0}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\mu}_{nk}\cdot {S}_{k}$ (22)

Here ${\stackrel{\xaf}{S}}_{n}$ is the transformed sequence and ${S}_{k}$ the original sequence which for our purposes $={f}_{n}\left(t\right)$ in (20). The Salzer means are given by

${\mu}_{nk}={\left(-1\right)}^{n+k}\frac{{\left(1+k\right)}^{n}}{n\mathrm{!}}\left(\begin{array}{c}n\\ k\end{array}\right)$ (23)

For the sake of compatibility with (22) we make the change $k\to i$ and $n\to k$ in (20). With this change of variables we also write

$\frac{\left(2k\right)!}{k!\left(k-1\right)!}=\frac{k\left(2k\right)!}{k!k!}$

This allows the sum to be taken from
$k=0$ to *n* without
$\left(0-1\right)\mathrm{!}$ in the denominator of (20). So with Salzer acceleration we can express the Salzer-Gaver algorithm as

${f}_{n}\left(t\right)=\frac{\mathrm{ln}2}{t}{\displaystyle \underset{k=0}{\overset{n}{\sum}}}{\left(-1\right)}^{n+k}\frac{{\left(k+1\right)}^{n}}{k!\left(n-k\right)!}\frac{k\left(2k\right)!}{k!k!}{\displaystyle \underset{i=0}{\overset{k}{\sum}}}\frac{k!}{i!\left(k-i\right)!}{\left(-1\right)}^{i}F\left\{\frac{\left(k+i\right)\mathrm{ln}2}{t}\right\}$ (24)
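A direct transcription of (24) is straightforward. In the sketch below (Python), the standard-library `decimal` module stands in for Maple's multi-precision environment, with the `digits` argument playing the role of Maple's Digits setting; the test transform $F(s)=1/(s+1)$ is our choice. Running the same formula at 10 and at 50 digits illustrates the catastrophic cancellation discussed in Section 3.

```python
from decimal import Decimal, getcontext
from math import comb, factorial

def salzer_gaver(F, t, n, digits):
    """Salzer acceleration (22)-(23) applied to the Gaver functional, as in (24).

    digits -- working precision in decimal digits (analogue of Maple's Digits).
    """
    getcontext().prec = digits
    a = Decimal(2).ln() / Decimal(t)               # a = ln2 / t
    total = Decimal(0)
    for k in range(1, n + 1):                      # k = 0 term vanishes (factor k)
        mu = Decimal((-1) ** (n + k) * (k + 1) ** n) / (factorial(k) * factorial(n - k))
        gav = Decimal(k * factorial(2 * k)) / factorial(k) ** 2
        inner = sum(Decimal((-1) ** i * comb(k, i)) * F((k + i) * a)
                    for i in range(k + 1))
        total += mu * gav * inner
    return a * total

F = lambda s: 1 / (s + 1)                          # transform of f(t) = e^{-t}
hi = salzer_gaver(F, t=1, n=16, digits=50)         # multi-precision run
lo = salzer_gaver(F, t=1, n=16, digits=10)         # low precision: cancellation dominates
err_hi = abs(hi - Decimal(-1).exp())
err_lo = abs(lo - Decimal(-1).exp())
```

The Salzer weights ${\left(k+1\right)}^{n}$ grow very large, so the sum cancels over many orders of magnitude; at 10 digits the result is swamped by rounding error, while at 50 digits the acceleration works as intended.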

4.2.2. Stehfest’s Acceleration Scheme

For the purposes of following Stehfest’s derivation it will be convenient to rewrite (20) as

${f}_{n}\left(t\right)={F}_{n}=\frac{\left(2n\right)!a}{n!\left(n-1\right)!}{\displaystyle \underset{k=0}{\overset{n}{\sum}}}\left(\begin{array}{c}n\\ k\end{array}\right){\left(-1\right)}^{k}F\left(\left(k+n\right)a\right)$ (25)

Stehfest [5] begins by supposing we have *N* values for
$F\left[\left(k+n\right)a\right]$ with
$F\left(a\right)$,
$F\left(2a\right)$,
$F\left(3a\right)$,
$\cdots $
$F\left(Na\right)$ for *N* even. Using (25) we can then determine
$\frac{N}{2}$ values
${F}_{1}\mathrm{,}{F}_{2}\mathrm{,}\cdots \mathrm{,}{F}_{N/2}$. Now each of these
$\frac{N}{2}$ values satisfy the asymptotic series in (21) with the same coefficients.

As Stehfest [5] points out, the ${\alpha}_{j}$ ’s are the same for each of the above expansions, and by using a suitable linear combination the first (
$\frac{N}{2}-1$ ) error terms in (21) can be eliminated. That is

$f\left(\frac{\mathrm{ln}2}{a}\right)={\displaystyle \underset{n=1}{\overset{\frac{N}{2}}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{a}_{n}{F}_{\left(\frac{N}{2}+1-n\right)}+O\left(\frac{1}{{N}^{\frac{N}{2}}}\right)$ (26)

which may be achieved by selecting the coefficients to satisfy

${\displaystyle \underset{n=1}{\overset{\frac{N}{2}}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{a}_{n}\frac{1}{{\left(\frac{N}{2}+1-n\right)}^{k}}={\delta}_{k,0},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=0,1,\cdots ,N/2-1$ (27)

which produce the same coefficients as the Salzer acceleration scheme used in (22). In fact for any *n*, Stehfest generates the required coefficients using what is in effect a modified Salzer acceleration scheme giving

${a}_{n}=\frac{{\left(-1\right)}^{n-1}}{\left(\frac{N}{2}\right)!}\left(\begin{array}{c}\frac{N}{2}\\ n\end{array}\right)n\left\{{\left(\frac{N}{2}+1-n\right)}^{\frac{N}{2}-1}\right\}$ (28)

Finally, Stehfest substitutes these results into (26) and gets the inversion formula

$f\left(t\right)\approx \frac{\mathrm{ln}2}{t}{\displaystyle \underset{j=1}{\overset{N}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{A}_{j}F\left(\frac{j\mathrm{ln}2}{t}\right)$ (29)

for *N* even and

${A}_{j}={\left(-1\right)}^{\frac{N}{2}+j}{\displaystyle \underset{k=\lfloor \frac{j+1}{2}\rfloor}{\overset{\mathrm{min}\left(j,\frac{N}{2}\right)}{\sum}}}\frac{{k}^{\frac{N}{2}}\left(2k\right)!}{\left(\frac{N}{2}-k\right)!k!\left(k-1\right)!\left(j-k\right)!\left(2k-j\right)!}$ (30)
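For reference, (29) and (30) can be sketched in a few lines of Python. This is our illustration rather than Stehfest's own code: the weights are built with exact rational arithmetic (the standard `fractions` module) so that (30) itself contributes no rounding error, and the test transform $F(s)=1/(s+1)$ is an assumption.

```python
import math
from fractions import Fraction

def stehfest_weights(N):
    """Stehfest coefficients A_j of eq. (30); N must be even."""
    half = N // 2
    A = []
    for j in range(1, N + 1):
        s = Fraction(0)
        for k in range((j + 1) // 2, min(j, half) + 1):
            s += Fraction(k ** half * math.factorial(2 * k),
                          math.factorial(half - k) * math.factorial(k)
                          * math.factorial(k - 1) * math.factorial(j - k)
                          * math.factorial(2 * k - j))
        A.append((-1) ** (half + j) * s)
    return A

def stehfest(F, t, N=16):
    """Inversion formula (29): f(t) ~ (ln2/t) * sum_j A_j F(j ln2/t)."""
    a = math.log(2.0) / t
    return a * sum(float(A_j) * F(j * a)
                   for j, A_j in enumerate(stehfest_weights(N), start=1))

# Example: F(s) = 1/(s+1), so f(t) = e^{-t}
val = stehfest(lambda s: 1.0 / (s + 1.0), 1.0)
```

The default $N=16$ mirrors the best double-precision setting reported in Section 5; pushing *N* higher in double precision triggers the instability Stehfest himself observed.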

However, a direct application of the modified Salzer acceleration scheme in (28) onto the Gaver functional in (25) does not produce the same results for the expression in (30) so they are not exactly equal to each other.

To show this we consider the function $\mathrm{sin}\left(t\right)$ whose Laplace transform is

$\frac{1}{{s}^{2}+1}$

The eight weights produced by the Salzer acceleration for $n=8$ are exactly the same as those for $N=16$ in Stehfest’s modified Salzer acceleration scheme in (28).

However, Table 1 shows that for $\mathrm{sin}\left(t\right)$ with these same weights, the Salzer-Gaver scheme produces different results when compared to Stehfest’s scheme in (30). We believe this is due to Stehfest’s simplification of the Salzer-Gaver scheme to the expression in (30).

This simplification was necessary because, as we show in our results in Section 5, Stehfest’s final expression in (30) is faster and works better in standard double precision. As the algorithm was developed in 1970, this would have been far more efficient given the computing power available at the time. Again, as we show in Section 5, a direct application of a Salzer acceleration scheme to the Gaver functional is only advantageous in a multi-precision environment.

4.3. The Talbot Algorithm

Equations (4) to (8) showed that the Laplace transform can be seen as a Fourier transform of the function

${\text{e}}^{-ut}f\left(t\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}t>0$ (31)

*i.e. *

$\mathcal{F}\left\{{\text{e}}^{-ut}f\left(t\right)\right\}=\mathcal{L}\left\{f\left(t\right)\right\}=F\left(s\right)$ (32)

Table 1. Salzer-Gaver and Stehfest results for $\mathrm{sin}\left(t\right)$.

Hence the Fourier transform inversion formula can be applied to recover the function thus:

${\mathcal{F}}^{-1}\left\{F\left(s\right)\right\}={\text{e}}^{-ut}f\left(t\right)=\frac{1}{2\pi}{\displaystyle {\int}_{-\infty}^{\infty}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}F\left(s\right){\text{e}}^{ivt}\text{d}v$ (33)

as $s=u+iv$ we have that $\text{d}s=i\text{d}v$ and so

$f\left(t\right)=\frac{1}{2\pi i}{\displaystyle {\int}_{u-i\infty}^{u+i\infty}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}F\left(s\right){\text{e}}^{st}\text{d}s$ (34)

This result provides a direct means of obtaining the inverse Laplace transform. In practice the integral in (34) is evaluated using a contour

$\frac{1}{2\pi i}{\displaystyle {\int}_{B}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\text{e}}^{st}F\left(s\right)\text{d}s$ (35)

with *B* here denoting the Bromwich contour [3]. The contour is chosen so that it encloses all the possible singularities of
$F\left(s\right)$. The idea of the contour is introduced so that the Cauchy residue theorem can be used to evaluate the integral.

However, when
$f\left(t\right)$ is to be calculated using numerical quadrature it may be more appropriate to devise a new contour. To ensure that (35) is integrable we may wish to control the growth of the magnitude of the integrand
${\text{e}}^{st}$ by moving the contour to the left and so giving the real part of *s* a large negative component [1] [22].

However, the deformed contour must not be allowed to pass through any singularities of
$F\left(s\right)$. This is to ensure that the transform is analytic in the region to the right of *B*.

4.3.1. Derivation of the Fixed Talbot Contour

In the derivation that follows [1] and [22] are used as the primary basis for extending the explanation of the derivation of the Talbot algorithm for inverting the Laplace Transform.

Abate-Valko [1] began with the Bromwich inversion integral along the Bromwich contour *B* with the transform

$F\left(s\right)=\frac{1}{{s}^{\alpha}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\alpha >0$ (36)

So $f\left(t\right)$ can be expressed as

$f\left(t\right)=\frac{1}{2\pi i}{\displaystyle {\int}_{B}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\text{e}}^{t\left(s-a\mathrm{ln}s\right)}\text{d}s$ (37)

with $a=\frac{\alpha}{t}$ in (37). As Abate-Valko [1] indicated, numerically evaluating the integral in (37) is difficult due to the oscillatory nature of the integrand.

However, this evaluation can be achieved by deforming the contour *B* into a *path of constant phase*, thus eliminating the oscillations in the imaginary component. These paths of constant phase are also paths of steepest descent for the real part of the integrand [1] [22] [23].

There are in general a number of contours for which the imaginary component remains constant, so we choose one on which the real part attains a maximum in the interior (a saddle point); this occurs at ${g}^{\prime}\left(s\right)=0$ at some point on the contour. At these saddle points $\mathrm{Im}\left\{g\left(s\right)\right\}=0$ [17], where

$g\left(s\right)=s-a\mathrm{ln}s$ (38)

in (37). Thus we have

${g}^{\prime}\left(s\right)=1-\frac{a}{s}$ (39)

So the stationary point occurs when $s=a$.

With $s=u+iv$ we have

$\mathrm{Im}\left\{u+iv-a\mathrm{ln}\left(u+iv\right)\right\}=0$ (40)

Expressing $u+iv$ as $R{\text{e}}^{i\theta}$ we have

$\mathrm{Im}\left\{\left(u-a\mathrm{ln}R\right)+i\left(v-a\theta \right)\right\}=0$ (41)

then

$v=a\theta $ (42)

and as

$\mathrm{tan}\left(\theta \right)=\frac{v}{u}$ (43)

Then

$u=a\theta \mathrm{cot}\left(\theta \right)$ (44)

[1].

With
$v=a\theta $, *s* can then be parameterized using (44) to give Talbot’s contour

$s\left(\theta \right)=a\theta \left(\mathrm{cot}\left(\theta \right)+i\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}-\pi <\theta <+\pi $ (45)

[4].

4.3.2. Conformal Mapping of the Talbot Contour

While the above parametrization can be used as a basis for inverting the Laplace transform we proceed with the algorithm’s development via a convenient conformal mapping as follows.

$\mathrm{cot}\theta =\frac{i\left({\text{e}}^{i\theta}+{\text{e}}^{-i\theta}\right)}{{\text{e}}^{i\theta}-{\text{e}}^{-i\theta}}$ (46)

Then

$\theta \mathrm{cot}\theta +i\theta =\frac{2i\theta}{1-{\text{e}}^{-2i\theta}}$ (47)

with $z=2i\theta $, (47) is equal to

$\frac{z}{1-{\text{e}}^{-z}}$ (48)

Hence (45) becomes,

$\frac{az}{1-{\text{e}}^{-z}}$ (49)

The function

$s\left(z\right)=\frac{az}{1-{\text{e}}^{-z}}$ (50)

maps the closed interval
$M=\left[-2\pi i,2\pi i\right]$ on the imaginary axis of the *z*-plane onto the curve *L* in the *s*-plane, giving the integral,

$f\left(t\right)=\frac{1}{2\pi i}{\displaystyle {\int}_{L}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}F\left(s\right){\text{e}}^{st}\text{d}s$ (51)

For the details of this transformation one can refer to the study of Logan [2].

Next we follow the procedure as adopted by Logan [2] for numerically integrating (51). With the change of variable (51) becomes

$f\left(t\right)=\frac{1}{2\pi i}{\displaystyle {\int}_{M}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}F\left(s\left(z\right)\right){\text{e}}^{s\left(z\right)t}{s}^{\prime}\left(z\right)\text{d}z$ (52)

where

${s}^{\prime}\left(z\right)=\frac{-a\left(z{\text{e}}^{-z}+{\text{e}}^{-z}-1\right)}{{\left({\text{e}}^{-z}-1\right)}^{2}}$ (53)

For convenience we write,

$f\left(t\right)=\frac{1}{2\pi i}{\displaystyle {\int}_{M}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}I\left(z\mathrm{,}t\right)\text{d}z$ (54)

where

$I\left(z\mathrm{,}t\right)=F\left(s\left(z\right)\right){\text{e}}^{s\left(z\right)t}{s}^{\prime}\left(z\right)$ (55)

The integral in (54) is then rotated by
$\frac{\pi}{2}$ so that the interval of integration becomes the real interval
$\left[-2\pi ,2\pi \right]$; we then use the trapezoid rule with *n* odd and
$w=-iz$ to obtain

$f\left(t\right)\cong \frac{1}{n}\left\{I\left(2\pi i\right)+I\left(-2\pi i\right)+2{\displaystyle \underset{j=1}{\overset{n-1}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}I\left(i{w}_{j}\right)\right\}$ (56)

where

${w}_{j}=2\pi \left(\frac{2j}{n}-1\right)$ (57)

and we note that $I\left(2\pi i\right)=I\left(-2\pi i\right)=0$ [2].
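The quadrature (56)-(57) over the mapped contour (50) can be sketched as follows (Python, double precision). The node count $n=201$, the time $t=1$ and the test transform $F(s)=1/(s+1)$ are our illustrative choices; $a=1$ is the value the paper reports as best in standard double precision.

```python
import cmath

def logan_talbot(F, t, n=201, a=1.0):
    """Trapezoid rule (56)-(57) applied to the mapped Talbot contour (50).

    n must be odd, so no node lands on the removable singularity at z = 0.
    """
    total = 0.0
    for j in range(1, n):
        z = 1j * 2.0 * cmath.pi * (2.0 * j / n - 1.0)     # z = i*w_j, eq. (57)
        ez = cmath.exp(-z)
        s = a * z / (1.0 - ez)                            # s(z), eq. (50)
        ds = -a * (z * ez + ez - 1.0) / (1.0 - ez) ** 2   # s'(z), eq. (53)
        total += (F(s) * cmath.exp(s * t) * ds).real      # I(z, t), eq. (55)
    # Endpoint terms I(±2πi) vanish, as noted above
    return 2.0 * total / n

val = logan_talbot(lambda s: 1.0 / (s + 1.0), t=1.0)
```

Taking real parts term by term is legitimate because the nodes come in conjugate pairs, so the imaginary contributions cancel in the sum.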

4.3.3. Valko

Abate-Valko [1] deformed the Bromwich contour using the Talbot path which has the form,

$s\left(h\right)=rh\left(\mathrm{cot}\left(h\right)+i\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}-\pi <h<\pi $ (58)

which is the same as the Talbot contour in (45) with $a=r$ and $\theta =h$.

So we have

${s}^{\prime}\left(h\right)=ir\left(1+ir\left(h\right)\right)$ (59)

where,

$r\left(h\right)=h+\left(h\mathrm{cot}\left(h\right)-1\right)\mathrm{cot}\left(h\right)$ (60)

Then from (52) we find,

$f\left(t\right)=\frac{r}{\pi}{\displaystyle {\int}_{0}^{\pi}}\mathrm{Re}\left[{\text{e}}^{ts\left(h\right)}F\left(s\left(h\right)\right)\left(1+ir\left(h\right)\right)\right]\text{d}h$ (61)

They then approximate the value of the integral in (61) by using the trapezoidal rule with step size $\frac{\pi}{M}$ and ${h}_{k}=\frac{k\pi}{M}$,

$f\left(t,M\right)=\frac{r}{M}\left[\frac{1}{2}F\left(r\right)\mathrm{exp}\left(rt\right)+{\displaystyle \underset{k=1}{\overset{M-1}{\sum}}}\mathrm{Re}\left[{\text{e}}^{ts\left({h}_{k}\right)}F\left(s\left({h}_{k}\right)\right)\left(1+ir\left({h}_{k}\right)\right)\right]\right]$ (62)

Based on numerical experiments, Abate-Valko then fixed the parameter *r* to the value,

$r=\frac{2M}{5t}.$ (63)

[1]. We also use this value for *a* in Logan’s transformation.
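Putting (58)-(63) together gives the fixed-Talbot rule. The sketch below (Python, double precision) is a minimal rendering of (62); the term count $M=24$ and the test transform $F(s)=1/(s+1)$ are illustrative assumptions on our part.

```python
import cmath
import math

def talbot_fixed(F, t, M=24):
    """Fixed-Talbot rule (62) with r = 2M/(5t) as in (63)."""
    r = 2.0 * M / (5.0 * t)
    # h = 0 node: s = r, giving the half-weight first term of (62)
    total = 0.5 * math.exp(r * t) * F(complex(r, 0.0)).real
    for k in range(1, M):
        h = k * math.pi / M
        cot = math.cos(h) / math.sin(h)
        s = r * h * complex(cot, 1.0)           # s(h) of (58)
        sigma = h + (h * cot - 1.0) * cot       # r(h) of (60)
        total += (cmath.exp(t * s) * F(s) * complex(1.0, sigma)).real
    return total * r / M

val = talbot_fixed(lambda s: 1.0 / (s + 1.0), t=1.0)
```

In double precision the attainable accuracy saturates as *M* grows, because the factor ${\text{e}}^{rt}={\text{e}}^{2M/5}$ amplifies rounding error; this is precisely why Abate-Valko tie the working precision to *M* in the multi-precision runs of Section 5.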

5. Results

We tested the five algorithms on the functions listed in Table 2 and Table 3 below. Functions 1 - 11 and 18 are taken from the 16 functions tested by Davies and Martin [16]. The remaining functions are selected from those tested by Abate-Valko [1].

Table 2. Test functions.

Table 3. Test functions continued.

The first set of tests was carried out using 16-digit double precision; these results are shown in Table 4. The Fourier, Logan and Abate-Valko schemes were run with weights $M=50$, $M=100$ and $M=200$; however, for brevity we include only the results for $M=200$.

For the Stehfest and the Salzer-Gaver algorithms best results were obtained with weights of
$M=16$ and
$M=8$ respectively. This is in keeping with Stehfest’s observations on the instability of this method as *M* increases above an optimal level [5].

In multi-precision, the number of precision digits *N* for the Abate-Valko scheme was set equal to
$N=M$ [1]; for the Salzer-Gaver and Stehfest schemes, best results were obtained when the number of precision digits was set to
$N=2M$. The accuracy of each scheme was measured over 30 sample points ${t}_{i}$ using the error norms

$L=\sqrt{{\displaystyle \underset{i=1}{\overset{30}{\sum}}}\frac{{\left[f\left({t}_{i}\right)-\stackrel{\u02dc}{f}\left({t}_{i}\right)\right]}^{2}}{30}}$ (64)

and

${L}_{e}=\sqrt{\frac{{\displaystyle {\sum}_{i=1}^{30}}{\left[f\left({t}_{i}\right)-\stackrel{\u02dc}{f}\left({t}_{i}\right)\right]}^{2}{\text{e}}^{-{t}_{i}}}{{\displaystyle {\sum}_{i=1}^{30}}{\text{e}}^{-{t}_{i}}}}$ (65)

Table 4. Standard double precision.

where
$f\left(t\right)$ is the analytical solution and
$\stackrel{\u02dc}{f}\left(t\right)$ is the numerical solution. Hence *L* is the root-mean-square error and
${L}_{e}$ is the same as *L* but weighted by the factor
${\text{e}}^{-t}$ [14].
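For instance, both norms can be computed as below (Python). The 30 equally spaced sample points are an illustrative assumption, since the paper does not list its ${t}_{i}$ here, and the perturbed "numerical" solution is simulated by adding a constant offset.

```python
import math

def rms_errors(f, f_tilde, ts):
    """The norms L of (64) and L_e of (65) over the sample points ts."""
    diffs = [(f(t) - f_tilde(t)) ** 2 for t in ts]
    L = math.sqrt(sum(diffs) / len(ts))
    w = [math.exp(-t) for t in ts]                 # weights e^{-t_i}
    Le = math.sqrt(sum(d * wi for d, wi in zip(diffs, w)) / sum(w))
    return L, Le

ts = [0.5 * i for i in range(1, 31)]               # 30 illustrative sample points
# A "numerical" solution off by a uniform 1e-6 gives L = Le = 1e-6
L, Le = rms_errors(math.exp, lambda t: math.exp(t) + 1e-6, ts)
```

The exponential weighting in ${L}_{e}$ de-emphasises errors at large *t*, where the reconstructed inverse typically deteriorates.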

All computations were done using a 64-bit operating system with an Intel(R) Core(TM) i7-8550U CPU. The algorithms were implemented in Maple 2018, using Maple’s Digits environment variable to set the required precision.

5.1. Standard Double Precision

Table 4 and Table 5 show that when compared with the other four algorithms, the Fourier series method performs with the least accuracy on all the functions tested. It also fails to reconstruct functions 8, 15, 17 and 18, with poor results for functions 4, 5 and 12.

However, for the functions which it successfully reconstructs it does so with an RMS accuracy of between *L* = 3.6(−5) and 1.2(−2). We believe that this scheme will improve greatly when an acceleration scheme is applied, an issue we intend to investigate in future work.

With the exception of function 7, ${J}_{0}\left(t\right)$, Logan’s algorithm successfully inverts all the functions given in Table 2 and Table 3 with very good accuracy. We found that in SDP best results are obtained by setting $a=1$ in (50). Table 3 and Table 4 show that for these functions the RMS error varies between 3.6(−8) and 8.4(−12).

Except for function 7, ${J}_{0}\left(t\right)$, the Abate-Valko scheme successfully inverts all the functions in Table 3 and Table 4. Moreover, it does so with greater

Table 5. Standard double precision continued.

Table 6. Multi-Precision *N *= 200.

Table 7. Multi-Precision continued *N *= 200.

accuracy than the Logan scheme. The tables show that the RMS error varied between 6.5(−11) and 6.2(−12).
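The Abate-Valko scheme is the fixed-Talbot method of [1], which replaces the Bromwich line with a deformed contour depending on a single parameter. A double-precision Python sketch of that rule follows; this is our own illustration, not the authors' Maple code.

```python
import cmath
import math

def fixed_talbot(F, t, M=32):
    """Fixed-Talbot rule [1]: contour s(theta) = r*theta*(cot(theta) + i), r = 2M/(5t)."""
    r = 2.0 * M / (5.0 * t)
    # theta = 0 endpoint: s = r on the real axis, with trapezoidal weight 1/2
    total = 0.5 * math.exp(r * t) * F(complex(r, 0.0)).real
    for k in range(1, M):
        theta = k * math.pi / M
        cot = math.cos(theta) / math.sin(theta)
        s = r * theta * complex(cot, 1.0)
        sigma = theta + (theta * cot - 1.0) * cot   # derivative correction along the contour
        total += (cmath.exp(t * s) * F(s) * complex(1.0, sigma)).real
    return r / M * total

# F(s) = 1/(s+1) inverts to exp(-t)
approx = fixed_talbot(lambda s: 1.0 / (s + 1.0), 1.0)
```

Because the contour pushes the oscillatory part of $\text{e}^{st}$ into rapid decay, a modest *M* already gives near machine-precision accuracy for smooth transforms, consistent with the RMS errors reported above.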

Table 3 and Table 4 show that the Stehfest algorithm has poor accuracy when inverting functions 1, 7, 10 and 11; for these functions the RMS error varies between 9.2(−3) and 2.01(−2). This poor performance is due to the difficulty the Stehfest algorithm has in inverting functions of a cyclic nature [5]. However, it inverts the remaining functions with good accuracy, with an RMS error of between 0 and 2.9(−5).

The same tables show that the Salzer-Gaver algorithm also has poor accuracy for functions 1, 7, 10 and 11, the very functions that the Stehfest algorithm has problems inverting, again because of the difficulties it encounters with cyclic functions. It inverts the remaining functions with less accuracy than the Stehfest algorithm, with an RMS error varying between 10(−15) and 10(−5).

5.2. Multi Precision

With the exception of function 7, the Logan and Abate-Valko algorithms successfully inverted the remaining functions to a high degree of accuracy; see Table 6 and Table 7. Duffy [14] also remarks that when using the Talbot contour he had difficulties accurately inverting the Bessel function. This may be related to the combination of the singularity on the imaginary axis and the branching nature of the square root function.

Abate-Valko [1] stated that they were able to overcome this by increasing the weights and hence the precision as a function of *t*. However, we were unable to replicate their results for this function.

Overall, the Abate-Valko scheme showed far greater accuracy than Logan’s across all the functions tested. However, Logan’s algorithm was still able to produce highly accurate results, with RMS errors varying between 10(−63) and 10(−60). Moreover, Table 8 shows that Logan’s scheme was able to perform the inversion of these functions with shorter elapsed times.

The Stehfest and Salzer-Gaver algorithms were able to invert all the functions to a high degree of accuracy. The Salzer-Gaver scheme was in general about twice as accurate as the Stehfest algorithm, which was less accurate than Abate-Valko’s scheme. Nevertheless, the Stehfest scheme inverted the functions

Table 8. Elapsed time $\tau $ for $t=100$.

well within any generally desired accuracy, with the RMS error varying from 0.0 to 10(−41). Moreover, as Table 8 shows, in terms of elapsed time it was the fastest of all the algorithms, for the most part twice as fast as the Abate-Valko scheme, which in turn was at least twice as fast as the Salzer-Gaver scheme.

6. Conclusions

In standard double precision, the Abate-Valko algorithm provides the best results for the numerical reconstruction of the functions tested in this paper. The Fourier algorithm had the worst performance of the five algorithms tested. Both the Stehfest and Salzer-Gaver algorithms had difficulty reconstructing functions of a cyclic nature. None of the algorithms was able to invert the ${J}_{0}\left(t\right)$ function accurately.

In multi-precision, the Stehfest and Salzer-Gaver schemes inverted all the functions with high accuracy. The Logan and Abate-Valko schemes were only able to invert ${J}_{0}\left(t\right)$ with limited accuracy; however, they were both able to reconstruct all the other functions with a high degree of accuracy. The most accurate algorithm in multi-precision was the Salzer-Gaver scheme, but as Table 8 shows it also had the longest elapsed times, whereas the Stehfest algorithm had the shortest elapsed times for the selected functions in Table 8. Of the two Talbot-based schemes, Abate-Valko’s was the more accurate, but Logan’s could reconstruct the functions with shorter elapsed times. We therefore conclude that when working in standard precision the Abate-Valko algorithm performed best, while in multi-precision the Stehfest algorithm is best, as it inverted all the functions with a high degree of accuracy and the shortest elapsed times.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

[1] Abate, J. and Valkó, P.P. (2004) Multi-Precision Laplace Transform Inversion. International Journal for Numerical Methods in Engineering, 60, 979-993. https://doi.org/10.1002/nme.995

[2] Logan, J.D. (2013) Transport Modeling in Hydrogeochemical Systems, Volume 15. Springer Science & Business Media, Berlin.

[3] Spiegel, M.R. (1965) Schaum’s Outline of Theory and Problems of Laplace Transforms. McGraw-Hill, New York.

[4] Talbot, A. (1979) The Accurate Numerical Inversion of Laplace Transforms. IMA Journal of Applied Mathematics, 23, 97-120. https://doi.org/10.1093/imamat/23.1.97

[5] Stehfest, H. (1970) Algorithm 368: Numerical Inversion of Laplace Transforms [D5]. Communications of the ACM, 13, 47-49. https://doi.org/10.1145/361953.361969

[6] Gaver Jr., D.P. (1966) Observing Stochastic Processes, and Approximate Transform Inversion. Operations Research, 14, 444-459. https://doi.org/10.1287/opre.14.3.444

[7] Wimp, J. (1981) Sequence Transformations and Their Applications. Academic Press, Cambridge.

[8] Crump, K.S. (1976) Numerical Inversion of Laplace Transforms Using a Fourier Series Approximation. Journal of the ACM, 23, 89-96. https://doi.org/10.1145/321921.321931

[9] Davies, A. and Crann, D. (2004) A Handbook of Essential Mathematical Formulae. University of Hertfordshire Press, Hatfield.

[10] Cohen, A.M. (2007) Numerical Methods for Laplace Transform Inversion, Volume 5. Springer Science & Business Media, Berlin.

[11] Epstein, C.L. and Schotland, J. (2008) The Bad Truth about Laplace’s Transform. SIAM Review, 50, 504-520. https://doi.org/10.1137/060657273

[12] Kuhlman, K.L. (2012) Comparison of Inverse Laplace Transform Algorithms for Laplace-Space Numerical Approaches. Technical Report, Sandia National Laboratories, Albuquerque.

[13] Defreitas, C.L. and Kane, S.J. (2018) The Noise Handling Properties of the Talbot Algorithm for Numerically Inverting the Laplace Transform. Journal of Algorithms & Computational Technology, 13, 1-14. https://doi.org/10.1177/1748301818797069

[14] Duffy, D.G. (1993) On the Numerical Inversion of Laplace Transforms: Comparison of Three New Methods on Characteristic Problems from Applications. ACM Transactions on Mathematical Software, 19, 333-359. https://doi.org/10.1145/155743.155788

[15] Narayanan, G.V. and Beskos, D.E. (1982) Numerical Operational Methods for Time-Dependent Linear Problems. International Journal for Numerical Methods in Engineering, 18, 1829-1854. https://doi.org/10.1002/nme.1620181207

[16] Davies, B. and Martin, B. (1979) Numerical Inversion of the Laplace Transform: A Survey and Comparison of Methods. Journal of Computational Physics, 33, 1-32. https://doi.org/10.1016/0021-9991(79)90025-1

[17] Le Page, W.R. (1980) Complex Variables and the Laplace Transform for Engineers. Courier Corporation, North Chelmsford, MA.

[18] Dubner, H. and Abate, J. (1968) Numerical Inversion of Laplace Transforms by Relating Them to the Finite Fourier Cosine Transform. Journal of the ACM, 15, 115-123. https://doi.org/10.1145/321439.321446

[19] Simon, R.M., Stroot, M.T. and Weiss, G.H. (1972) Numerical Inversion of Laplace Transforms with Application to Percentage Labeled Mitoses Experiments. Computers and Biomedical Research, 5, 596-607. https://doi.org/10.1016/0010-4809(72)90039-0

[20] Cooley, J.W. and Tukey, J.W. (1965) An Algorithm for the Machine Calculation of Complex Fourier Series. Mathematics of Computation, 19, 297-301. https://doi.org/10.1090/S0025-5718-1965-0178586-1

[21] Cooley, J.W., Lewis, P.A.W. and Welch, P.D. (1970) The Fast Fourier Transform Algorithm: Programming Considerations in the Calculation of Sine, Cosine and Laplace Transforms. Journal of Sound and Vibration, 12, 315-337. https://doi.org/10.1016/0022-460X(70)90075-1

[22] Murli, A. and Rizzardi, M. (1990) Algorithm 682: Talbot’s Method for the Laplace Inversion Problem. ACM Transactions on Mathematical Software, 16, 158-168. https://doi.org/10.1145/78928.78932

[23] Bender, C.M. and Orszag, S.A. (2013) Advanced Mathematical Methods for Scientists and Engineers I: Asymptotic Methods and Perturbation Theory. Springer Science & Business Media, Berlin.

