Numerical Integration of Forced and Damped Oscillators through a New Multistep Method

Forced and damped oscillators appear in the mathematical modelling of many problems in pure and applied sciences, such as physics, engineering and celestial mechanics, among others. Although the accuracy of the T-functions series method is high, the calculation of its coefficients requires specific recurrences in each case. To avoid this inconvenience, the T-functions series method is transformed into a multistep method whose coefficients are calculated using recurrence procedures. These methods are convergent and have the same properties as the T-functions series method. Numerical examples already used by other authors are presented: a stiff problem, a Duffing oscillator and an equatorial satellite problem in which the perturbation comes from the zonal harmonic J2.

1. Introduction

The numerical solution of IVPs (Initial Value Problems) associated with perturbed and damped oscillators is a frequent problem in pure and applied sciences such as physics, engineering, celestial mechanics, structural mechanics and molecular biology, among others.

One of the approaches normally used to solve these problems numerically is the Runge-Kutta methods and their adaptations.

The first multistep codes, fixed-step and non-linear, for solving forced oscillators were published by Stiefel and Bettis, and by Bettis. Stiefel and Scheifele defined the so-called G-functions and, based on them, constructed a series method that constitutes a refinement of the Taylor series method. These methods have the important property of integrating harmonic oscillations at a fixed frequency without truncation error. A new method, TFSTS (Trigonometrically-Fitted Scheifele Two-Step), based on the Scheifele methods and satisfying this property, has also been published.

A multistep numerical method was later developed which, in contrast with other methods that use Scheifele functions, has the advantage of incorporating a simple algebraic procedure for calculating the coefficients by means of recurrence formulas.

During the same period, methods were designed using φ-function series instead of G-function series, which also integrate the homogeneous problem without truncation error.

A family of analytical functions known as T-functions, depending on three parameters, has also been introduced. The solution of the forced and damped oscillator is expressed as a series of T-functions. Furthermore, the T-functions series method is zero-stable and convergent. The series method is extremely precise; however, it has the disadvantage that it needs to be adapted to each specific problem.

In this paper, the transformation of the T-functions series method into a multistep scheme is explained. The coefficients of the multistep scheme are calculated through a recurrent procedure based on the relationship between divided differences and the elementary and complete symmetric functions. The recurrent calculation of the coefficients permits the multistep scheme to be considered a VSVO (Variable Step Variable Order) scheme.

The multistep numerical algorithm, based on T-function series, permits integration of the non-perturbed and damped problem without truncation error.

In this paper, predictor and corrector methods are designed for the integration of forced and damped oscillators of the type $x''(t)+\gamma x'(t)+\alpha x(t)=\epsilon\cdot f(t,x(t),x'(t))$, with $x(0)=x_0$ and $x'(0)=x'_0$, where $\epsilon$ is a perturbation parameter, usually small, $\alpha$ is a known constant frequency and $\gamma$ is the damping constant.

In order to cancel the perturbation and carry out the exact integration of the previous IVP, the linear differential operator $D^2+\beta^2$ is defined, with $\beta$ being a third frequency. In the case of a more complex perturbation, which the operator cannot cancel completely, simpler expressions of it are obtained, which facilitate the numerical integration of the IVP.

The resolution of three popular test problems is explained, illustrating the excellent performance of the multistep method. Firstly, we show the resolution of a stiff problem proposed by Lambert; secondly, a Duffing oscillator is treated; and finally, the application of our method to the calculation of an equatorial satellite orbit, when the perturbation comes from the zonal harmonic J2, in B-F (Burdet-Ferrándiz) variables is considered. Its accuracy is compared with the well-known LSODE (Livermore Solver for Ordinary Differential Equations), MGEAR (C. W. Gear's multistep method) and GEAR (C. W. Gear's extrapolation method) codes implemented in MAPLE.

2. T-Functions

This section presents a brief description of the T-functions, as well as the series method, based on them, for the integration of forced and damped oscillators.

2.1. Description of the T-Functions

Let us consider the following IVP:

$x''(t)+\gamma x'(t)+\alpha x(t)=\epsilon\cdot f(t,x(t),x'(t)),\quad x(0)=x_0 \text{ and } x'(0)=x'_0,$ (1)

which formulates an IVP corresponding to a forced and damped oscillator with frequency $\alpha\in[0,+\infty[$ and damping constant $\gamma\in[0,+\infty[$, where $\epsilon$ is a perturbation parameter, usually small.

The solution of (1), $x(t)$, obtained for the given initial conditions, is analytic in the interval $[0,T]\subset\mathbb{R}$, and the perturbation function

$g\left(t\right)=f\left(t,x\left(t;{x}_{0},{{x}^{\prime }}_{0},{t}_{0}\right),{x}^{\prime }\left(t;{x}_{0},{{x}^{\prime }}_{0},{t}_{0}\right)\right)$

admits in $[0,T]$ an absolutely convergent power series expansion of the following form:

$g\left(t\right)=f\left(t,x\left(t;{x}_{0},{{x}^{\prime }}_{0},{t}_{0}\right),{x}^{\prime }\left(t;{x}_{0},{{x}^{\prime }}_{0},{t}_{0}\right)\right)=\underset{n=0}{\overset{\infty }{\sum }}\text{ }{c}_{n}\frac{{t}^{n}}{n!}.$ (2)

By applying the differential operator $D^2+\beta^2$ to (1), in order to cancel the perturbation, the following higher-order equation is obtained:

$\begin{array}{l}{D}^{4}x\left(t\right)+\gamma {D}^{3}x\left(t\right)+\left(\alpha +{\beta }^{2}\right){D}^{2}x\left(t\right)+{\beta }^{2}\gamma Dx\left(t\right)+\alpha {\beta }^{2}x\left(t\right)\\ =\epsilon \cdot \left({D}^{2}+{\beta }^{2}\right)f\left(t,x\left(t\right),{x}^{\prime }\left(t\right)\right).\end{array}$ (3)

Taking into account the initial conditions $x(0)=x_0$ and $x'(0)=x'_0$, we obtain:

$\begin{array}{l}
x''(0)=-\alpha x_0-\gamma x'_0+\epsilon\cdot f(0,x_0,x'_0)=x''_0,\\
x'''(0)=-\alpha x'(0)-\gamma x''(0)+\epsilon\,\vec{\nabla}f(0,x_0,x'_0)\cdot(1,x'_0,x''_0)=x'''_0,
\end{array}$
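As a small illustration, the higher-order initial derivatives follow mechanically from the ODE and the chain rule. The following Python sketch uses a hypothetical perturbation $f(t,x,x')=\sin(t)\,x$ and made-up parameter values; it is not part of the method itself, only a worked instance of the formulas above:

```python
import math

# hypothetical data: f(t, x, x') = sin(t) * x, so grad f = (cos(t)*x, sin(t), 0)
alpha, gam, eps = 2.0, 0.3, 0.01
x0, v0 = 1.0, 0.5

# x''(0) = -alpha x0 - gamma x'0 + eps f(0, x0, x'0)
xpp0 = -alpha * x0 - gam * v0 + eps * math.sin(0.0) * x0
# chain rule: d/dt f = f_t + f_x x' + f_{x'} x'', evaluated at t = 0
df0 = math.cos(0.0) * x0 + math.sin(0.0) * v0 + 0.0 * xpp0
# x'''(0) = -alpha x'0 - gamma x''(0) + eps (d/dt f)(0)
xppp0 = -alpha * v0 - gam * xpp0 + eps * df0
print(xpp0, xppp0)  # approximately -2.15 and -0.345
```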

where $\stackrel{\to }{\nabla }f$ is the usual notation of the gradient vector $\left(\frac{\partial f}{\partial t},\frac{\partial f}{\partial x},\frac{\partial f}{\partial {x}^{\prime }}\right)$.

A more compact notation is introduced:

${L}_{4}\left(x\left(t\right)\right)=\left(\left({D}^{2}+{\beta }^{2}\right)\left({D}^{2}+\gamma D+\alpha \right)\right)x\left(t\right).$

The IVP (3) may be written in the following manner:

$\begin{array}{l}{L}_{4}\left(x\left(t\right)\right)=\epsilon \left({D}^{2}+{\beta }^{2}\right)g\left(t\right),\\ x\left(0\right)={x}_{0},\text{ }{x}^{\prime }\left(0\right)={{x}^{\prime }}_{0},\text{ }{x}^{″}\left(0\right)={{x}^{″}}_{0},\text{ }{x}^{‴}\left(0\right)={{x}^{‴}}_{0}.\end{array}$ (4)

With the help of the Taylor expansion of $g(t)$ given in (2), the IVP (4) may be formulated by the equations:

$\begin{array}{l}{L}_{4}\left(x\left(t\right)\right)=\epsilon \cdot \underset{n=0}{\overset{\infty }{\sum }}\left({c}_{n+2}+{\beta }^{2}{c}_{n}\right)\frac{{t}^{n}}{n!},\\ x\left(0\right)={x}_{0},\text{ }{x}^{\prime }\left(0\right)={{x}^{\prime }}_{0},\text{ }{x}^{″}\left(0\right)={{x}^{″}}_{0},\text{ }{x}^{‴}\left(0\right)={{x}^{‴}}_{0}.\end{array}$ (5)

Definition 1 The solutions of ${L}_{4}\left(x\left(t\right)\right)=\frac{{t}^{n}}{n!}$, $x\left(0\right)={x}^{\prime }\left(0\right)={x}^{″}\left(0\right)={x}^{‴}\left(0\right)=0$, are denoted by ${\Gamma }_{n}\left(t\right),\forall n\in ℕ$.

Proposition 1 ${{\Gamma }^{\prime }}_{n}\left(t\right)={\Gamma }_{n-1}\left(t\right)$, $\forall n\in ℕ$ with $n\ge 1$.

Proposition 2 The functions ${\Gamma }_{n}\left(t\right)$, $\forall n\in ℕ$ verify the following recurrence law:

${\Gamma }_{n}\left(t\right)+\gamma {\Gamma }_{n+1}\left(t\right)+\left(\alpha +{\beta }^{2}\right){\Gamma }_{n+2}\left(t\right)+{\beta }^{2}\gamma {\Gamma }_{n+3}\left(t\right)+\alpha {\beta }^{2}{\Gamma }_{n+4}\left(t\right)=\frac{{t}^{n+4}}{\left(n+4\right)!}$
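The recurrence of Proposition 2 can be checked numerically. Since $L_4(x)= t^n/n!$ with zero initial data gives, for $x(t)=\sum_k a_k t^k/k!$, the coefficient recurrence $a_{k+4}=\delta_{k,n}-\gamma a_{k+3}-(\alpha+\beta^2)a_{k+2}-\beta^2\gamma a_{k+1}-\alpha\beta^2 a_k$ with $a_0=\cdots=a_3=0$, each $\Gamma_n(t)$ can be evaluated from a truncated Taylor series. A minimal Python sketch; the function name `gamma_fn`, the truncation level and the parameter values are our own choices:

```python
from math import factorial

def gamma_fn(n, t, alpha, beta, gamma, terms=60):
    """Evaluate Gamma_n(t) from its Taylor series x = sum_k a_k t^k / k!.

    L4 x = t^n/n! with zero initial data gives
    a_{k+4} = delta_{k,n} - gamma a_{k+3} - (alpha + beta^2) a_{k+2}
              - beta^2 gamma a_{k+1} - alpha beta^2 a_k,   a_0 = ... = a_3 = 0.
    """
    a = [0.0] * 4
    for k in range(terms):
        a.append((1.0 if k == n else 0.0) - gamma * a[k + 3]
                 - (alpha + beta ** 2) * a[k + 2]
                 - beta ** 2 * gamma * a[k + 1] - alpha * beta ** 2 * a[k])
    return sum(ak * t ** k / factorial(k) for k, ak in enumerate(a))

# check Proposition 2 at one point, for arbitrary parameter values
alpha, beta, gam, t, n = 2.0, 1.5, 0.3, 0.5, 0
vals = [gamma_fn(n + m, t, alpha, beta, gam) for m in range(5)]
lhs = (vals[0] + gam * vals[1] + (alpha + beta ** 2) * vals[2]
       + beta ** 2 * gam * vals[3] + alpha * beta ** 2 * vals[4])
rhs = t ** (n + 4) / factorial(n + 4)
print(abs(lhs - rhs))  # difference at rounding level
```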

Definition 2 Let ${T}_{0},{T}_{1},{T}_{2},{T}_{3}$ be the solutions of ${L}_{4}\left(x\left(t\right)\right)=0$ with initial conditions ${T}_{i}^{\left(j\right)}\left(0\right)={\delta }_{i,j}$, $i,j=0,1,2,3$.

Theorem 1 The general solution of $L_4(x(t))=0$ with $x(0)=x_0$, $x'(0)=x'_0$, $x''(0)=x''_0$, $x'''(0)=x'''_0$ is $x_H(t)=x_0T_0(t)+x'_0T_1(t)+x''_0T_2(t)+x'''_0T_3(t)$.

Theorem 2 The general solution for (5) is $x\left(t\right)={x}_{H}\left(t\right)+\epsilon \cdot \underset{n=0}{\overset{\infty }{\sum }}\left({c}_{n+2}+{\beta }^{2}{c}_{n}\right){\Gamma }_{n}\left(t\right)$.

Using the functions ${\Gamma }_{n}\left(t\right)$, it is possible to extend the functions ${T}_{n}\left(t\right)$, in the following manner.

Definition 3 ${T}_{n+4}\left(t\right)={\Gamma }_{n}\left(t\right)$ with $n\ge 0$.

Thus the solution of (5) may be written as:

$x\left(t\right)={x}_{0}{T}_{0}\left(t\right)+{{x}^{\prime }}_{0}{T}_{1}\left(t\right)+{{x}^{″}}_{0}{T}_{2}\left(t\right)+{{x}^{‴}}_{0}\text{ }{T}_{3}\left(t\right)+\epsilon \cdot \underset{n=0}{\overset{\infty }{\sum }}\left({c}_{n+2}+{\beta }^{2}{c}_{n}\right){T}_{n+4}\left(t\right).$
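For $\epsilon=0$ the series above reduces to the four homogeneous terms, so the unperturbed damped oscillator is reproduced exactly (Theorem 1). A Python sketch checking this; the parameter values are made up and $T_i$ is evaluated from a truncated Taylor series of $L_4(x)=0$, which is our own implementation choice:

```python
import math

def t_fn(i, t, alpha, beta, gamma, terms=60):
    """T_i(t): solution of L4 x = 0 with x^(j)(0) = delta_{ij}, via Taylor series."""
    a = [1.0 if k == i else 0.0 for k in range(4)]
    for k in range(terms):
        a.append(-gamma * a[k + 3] - (alpha + beta ** 2) * a[k + 2]
                 - beta ** 2 * gamma * a[k + 1] - alpha * beta ** 2 * a[k])
    return sum(ak * t ** k / math.factorial(k) for k, ak in enumerate(a))

alpha, beta, gam = 2.0, 1.5, 0.3          # made-up frequencies and damping
x0, dx0 = 1.0, 0.0
ddx0 = -alpha * x0 - gam * dx0            # from the unperturbed ODE (eps = 0)
dddx0 = -alpha * dx0 - gam * ddx0
t = 1.2
x_h = (x0 * t_fn(0, t, alpha, beta, gam) + dx0 * t_fn(1, t, alpha, beta, gam)
       + ddx0 * t_fn(2, t, alpha, beta, gam) + dddx0 * t_fn(3, t, alpha, beta, gam))
# closed-form underdamped solution of x'' + gamma x' + alpha x = 0
w = math.sqrt(alpha - gam ** 2 / 4)
exact = math.exp(-gam * t / 2) * (math.cos(w * t) + (gam / (2 * w)) * math.sin(w * t))
print(abs(x_h - exact))  # difference at rounding level
```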

2.2. T-Functions Series Method

Consider the IVP (1), with solution $x(t)$ assumed analytic in the interval $[0,T]\subset\mathbb{R}$, that is, $x(t)=\sum_{n=0}^{\infty}a_n\frac{t^n}{n!}$.

Defining:

$\begin{array}{l}{b}_{0}={a}_{0},{b}_{1}={a}_{1},{b}_{2}={a}_{2},{b}_{3}={a}_{3},\\ {b}_{n}={a}_{n}+\gamma {a}_{n-1}+\left(\alpha +{\beta }^{2}\right){a}_{n-2}+\gamma {\beta }^{2}{a}_{n-3}+\alpha {\beta }^{2}{a}_{n-4},\text{ }\text{\hspace{0.17em}}\text{with}\text{\hspace{0.17em}}\text{ }n\ge 4,\end{array}$

therefore $x\left(t\right)=\underset{n=0}{\overset{\infty }{\sum }}\text{ }{b}_{n}{T}_{n}\left(t\right)$.

We assume that the approximations $x_k = x(kh)$ and $x'_k = x'(kh)$ to the solution have been calculated. The scheme then proceeds as follows:

$\begin{array}{l}
a_0=x_k,\quad a_1=x'_k,\quad a_2=x''_k,\quad a_3=x'''_k,\\
a_n=-\gamma a_{n-1}-\alpha a_{n-2}+\epsilon f^{(n-2)}(kh,x(kh),x'(kh)),\quad n\ge 4,\\
b_0=a_0,\quad b_1=a_1,\quad b_2=a_2,\quad b_3=a_3,\\
b_n=a_n+\gamma a_{n-1}+(\alpha+\beta^2)a_{n-2}+\gamma\beta^2 a_{n-3}+\alpha\beta^2 a_{n-4},\quad n\ge 4,\\
x^{(j)}_{k+1}=x_k T^{(j)}_0(h)+x'_k T^{(j)}_1(h)+x''_k T^{(j)}_2(h)+x'''_k T^{(j)}_3(h)\\
\qquad\qquad +\epsilon\cdot\displaystyle\sum_{m=0}^{\infty}\left(c_{m+2}+\beta^2 c_m\right)T_{m-j+4}(h),\quad 0\le j\le 3.
\end{array}$ (6)

The T-functions series method is zero-stable and convergent .
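To make the scheme concrete, the following Python sketch (our own illustration; the test problem, the parameter values and the Taylor-series evaluation of the $T_i$ are all assumptions, not the authors' code) integrates $x''+\alpha x=\epsilon\sin(\omega t)$ with the one-step formula above. Choosing $\beta=\omega$ gives $c_{m+2}+\beta^2 c_m=0$, so the perturbation series vanishes and each step reduces to the four homogeneous terms:

```python
import math

def t_taylor(i, alpha, beta, gamma, terms=80):
    """Taylor coefficients a_k (x = sum a_k t^k/k!) of T_i: L4 x = 0, x^(j)(0) = delta_ij."""
    a = [1.0 if k == i else 0.0 for k in range(4)]
    for k in range(terms):
        a.append(-gamma * a[k + 3] - (alpha + beta ** 2) * a[k + 2]
                 - beta ** 2 * gamma * a[k + 1] - alpha * beta ** 2 * a[k])
    return a

def t_eval(a, t, deriv=0):
    """Evaluate the deriv-th derivative of sum_k a_k t^k/k! (coefficient shift)."""
    return sum(a[k + deriv] * t ** k / math.factorial(k) for k in range(len(a) - deriv))

alpha, gam, eps, w = 2.0, 0.0, 0.1, 3.0   # x'' + 2x = 0.1 sin(3t), made-up values
beta = w                                   # beta = omega cancels the perturbation series
x, v, t = 1.0, 0.0, 0.0
T = [t_taylor(i, alpha, beta, gam) for i in range(4)]
h, steps = 0.1, 50
for _ in range(steps):
    xa = -alpha * x - gam * v + eps * math.sin(w * t)       # x''(t_k) from the ODE
    xb = -alpha * v - gam * xa + eps * w * math.cos(w * t)  # x'''(t_k)
    x, v = (sum(c * t_eval(T[i], h) for i, c in enumerate((x, v, xa, xb))),
            sum(c * t_eval(T[i], h, 1) for i, c in enumerate((x, v, xa, xb))))
    t += h
# closed-form solution for gamma = 0, x(0) = 1, x'(0) = 0
part = eps / (alpha - w ** 2)
exact = (math.cos(math.sqrt(alpha) * t)
         + (0.0 - part * w) / math.sqrt(alpha) * math.sin(math.sqrt(alpha) * t)
         + part * math.sin(w * t))
print(abs(x - exact))  # difference at rounding level
```

With $\beta$ chosen away from $\omega$ the perturbation series no longer vanishes and truncation of it would be needed, which is precisely what motivates the multistep reformulation of Section 3.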

3. Explicit Multistep Method for Perturbed and Damped Oscillators

To implement the explicit multistep method, we build on the T-functions series method, substituting the derivatives of the perturbation function that appear in (6) by expressions in terms of divided differences, with coefficients $d_{ij}$, the elements of a matrix $A_p^{-t}$, for which no direct recurrence relation is known. Once the matrix $A_p^{-t}$ is known, we set up a recurrent calculation, through the matrix $S_{p,n}$, for the explicit method. The study of symmetric polynomials and their relation to divided differences allows us to compute the matrix $S_{p,n}$.

To construct a variable-step explicit multistep method of p steps, we use divided differences of a function g up to order $(p-1)$, $g[t_n,\cdots,t_{n-k}]$, on the grid values $t_n,\cdots,t_{n-p+1}$.

The divided differences of the perturbation function g satisfy the identity:

$\begin{array}{l}g\left[{t}_{n},\cdots ,{t}_{n-i}\right]=\underset{j=0}{\overset{\infty }{\sum }}\text{ }{P}_{j}\left[0,-{H}_{1},\cdots ,-{H}_{i}\right]\cdot {g}^{\left(j\right)}\left({t}_{n}\right),\\ {P}_{k}\left(t\right)=\frac{{t}^{k}}{k!},\text{ }{H}_{i}={t}_{n}-{t}_{n-i}.\end{array}$

Denoting by $D_{p,n}$ the following $1\times p$ matrix:

${D}_{p,n}=\left(g\left[{t}_{n}\right]\text{ }1!g\left[{t}_{n},{t}_{n-1}\right]\text{ }\cdots \text{ }\left(p-1\right)!g\left[{t}_{n},\cdots ,{t}_{n-\left(p-1\right)}\right]\right),$

and choosing $H=\max\{H_1,\cdots,H_{p-1}\}$, the following identity holds:

${D}_{p,n}^{t}={A}_{p}\left(\begin{array}{c}g\left({t}_{n}\right)\\ {g}^{\prime }\left({t}_{n}\right)\\ ⋮\\ {g}^{\left(p-1\right)}\left({t}_{n}\right)\end{array}\right)+\left(\begin{array}{c}O\left({H}^{p}\right)\\ O\left({H}^{p-1}\right)\\ ⋮\\ O\left(H\right)\end{array}\right),$

where

${A}_{p}={\left(\begin{array}{ccccc}1& {P}_{1}\left[0\right]& {P}_{2}\left[0\right]& \cdots & {P}_{p-1}\left[0\right]\\ 0& 1& 1!{P}_{2}\left[0,-{H}_{1}\right]& \cdots & 1!{P}_{p-1}\left[0,-{H}_{1}\right]\\ 0& 0& 1& \cdots & 2!{P}_{p-1}\left[0,-{H}_{1},-{H}_{2}\right]\\ ⋮& ⋮& ⋮& \ddots & ⋮\\ 0& 0& 0& \cdots & 1\end{array}\right)}_{p×p},$

using a more compact notation, this can be written as

${D}_{p,n}^{t}={A}_{p}×{Z}_{p×1}+{O}_{p×1},\text{ }\text{\hspace{0.17em}}\text{with}\text{\hspace{0.17em}}\text{ }{Z}_{p×1}=\left(\begin{array}{c}g\left({t}_{n}\right)\\ {g}^{\prime }\left({t}_{n}\right)\\ ⋮\\ {g}^{\left(p-1\right)}\left({t}_{n}\right)\end{array}\right).$

Truncating and solving for the matrix $Z_{p\times 1}$, we get:

${Z}_{p×1}={A}_{p}^{-1}×{\left({D}_{p,n}^{t}\right)}_{p×1}.$

Denoting $(d_{ij})_{p\times p}=A_p^{-t}=(A_p^{-1})^t$, we can write:

${Z}_{p×1}=\left(\begin{array}{c}\underset{i=1}{\overset{p}{\sum }}\text{ }g\left[{t}_{n},\cdots ,{t}_{n-\left(i-1\right)}\right]{d}_{i1}\left(i-1\right)!\\ ⋮\\ \underset{i=1}{\overset{p}{\sum }}\text{ }g\left[{t}_{n},\cdots ,{t}_{n-\left(i-1\right)}\right]{d}_{ip}\left(i-1\right)!\end{array}\right).$ (7)

Substituting (7) in (6) and truncating, we obtain

$\begin{array}{c}{x}_{n+1}={x}_{n}{T}_{0}\left(h\right)+{{x}^{\prime }}_{n}{T}_{1}\left(h\right)+{{x}^{″}}_{n}{T}_{2}\left(h\right)+{{x}^{‴}}_{n}\text{ }{T}_{3}\left(h\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\epsilon \underset{i=1}{\overset{p}{\sum }}\text{ }g\left[{t}_{n},\cdots ,{t}_{n-\left(i-1\right)}\right]\left(\underset{j=0}{\overset{p-3}{\sum }}\left({d}_{ij+3}+{\beta }^{2}{d}_{ij+1}\right)\left(i-1\right)!{T}_{j+4}\left(h\right)\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\epsilon {\beta }^{2}\left(\underset{i=1}{\overset{p}{\sum }}\text{ }g\left[{t}_{n},\cdots ,{t}_{n-\left(i-1\right)}\right]\left({d}_{ip-1}{T}_{p+2}\left(h\right)+{d}_{ip}{T}_{p+3}\left(h\right)\right)\left(i-1\right)!\right).\end{array}$

With $i=1,\cdots ,p$, denoting:

$\begin{array}{l}{\Lambda }_{i}=\underset{j=0}{\overset{p-3}{\sum }}\left({d}_{ij+3}+{\beta }^{2}{d}_{ij+1}\right)\left(i-1\right)!{T}_{j+4}\left(h\right),\\ {{\Lambda }^{\prime }}_{i}=\underset{j=0}{\overset{p-3}{\sum }}\left({d}_{ij+3}+{\beta }^{2}{d}_{ij+1}\right)\left(i-1\right)!{T}_{j+3}\left(h\right),\end{array}$

we obtain the following formulas for the multistep explicit method:

$\begin{array}{l}{x}_{n+1}={x}_{n}{T}_{0}\left(h\right)+{{x}^{\prime }}_{n}{T}_{1}\left(h\right)+{{x}^{″}}_{n}{T}_{2}\left(h\right)+{{x}^{‴}}_{n}\text{ }{T}_{3}\left(h\right)+\epsilon \underset{i=1}{\overset{p}{\sum }}\text{ }{\Lambda }_{i}g\left[{t}_{n},\cdots ,{t}_{n-\left(i-1\right)}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\epsilon {\beta }^{2}\left(\underset{i=1}{\overset{p}{\sum }}\text{ }g\left[{t}_{n},\cdots ,{t}_{n-\left(i-1\right)}\right]\left({d}_{ip-1}{T}_{p+2}\left(h\right)+{d}_{ip}{T}_{p+3}\left(h\right)\right)\left(i-1\right)!\right),\\ {{x}^{\prime }}_{n+1}=-\alpha {\beta }^{2}{x}_{n}{T}_{3}\left(h\right)+{{x}^{\prime }}_{n}\left({T}_{0}\left(h\right)-\gamma {\beta }^{2}{T}_{3}\left(h\right)\right)+{{x}^{″}}_{n}\left({T}_{1}\left(h\right)-\left(\alpha +{\beta }^{2}\right){T}_{3}\left(h\right)\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+{{x}^{‴}}_{n}\left({T}_{2}\left(h\right)-\gamma {T}_{3}\left(h\right)\right)+\epsilon \underset{i=1}{\overset{p}{\sum }}\text{ }{{\Lambda }^{\prime }}_{i}g\left[{t}_{n},\cdots ,{t}_{n-\left(i-1\right)}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\epsilon {\beta }^{2}\left(\underset{i=1}{\overset{p}{\sum }}\text{ }g\left[{t}_{n},\cdots ,{t}_{n-\left(i-1\right)}\right]\left({d}_{ip-1}{T}_{p+1}\left(h\right)+{d}_{ip}{T}_{p+2}\left(h\right)\right)\left(i-1\right)!\right).\end{array}$ (8)

To construct the method, we obtain the elements of the matrix $A_p^{-t}=(d_{ij})_{p\times p}$ by a recurrent procedure based on the elementary symmetric functions, $e_{n,r}$, and the complete symmetric functions, $h_{n,r}$. For each $0\le r\le n$, the elementary symmetric function $e_{n,r}$ is the sum of all products of r different variables $t_i$; that is, $e_{n,0}=1$ and

$e_{n,r}=\displaystyle\sum_{i_1<\cdots<i_r}^{n} t_{i_1}\cdots t_{i_r}$. The complete symmetric function $h_{n,r}$ is defined as the sum of all monomials of total degree r in the variables $t_1,\cdots,t_n$, that is to say:

$h_{n,r}=\displaystyle\sum_{|\lambda|=r}\sum_{S_\lambda} t^\alpha$, where $\lambda=(\lambda_1\cdots\lambda_n)\in\mathbb{N}^n$ with $|\lambda|=\lambda_1+\cdots+\lambda_n$,

$S_\lambda=\{\text{all the different permutations } \alpha=(\alpha_1\cdots\alpha_n) \text{ of } \lambda\}$ and $t^\alpha=t_1^{\alpha_1}\cdots t_n^{\alpha_n}$.

In particular, $h_{n,0}=1$ and $h_{n,1}=e_{n,1}$.

Between the divided differences of $g(t)=t^m$, which we denote by $t^m[t_1,\cdots,t_n]$, and the complete symmetric polynomials, the following relation holds: $t^m[t_1,\cdots,t_n]=h_{n,m-n+1}$.
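This relation is easy to verify numerically. A small Python sketch with made-up nodes; the helper names and the choice of $m$ and $n$ are ours:

```python
from itertools import combinations_with_replacement
from math import prod

def divided_difference(f, pts):
    """Divided difference f[t_1, ..., t_n] by the standard recursion."""
    if len(pts) == 1:
        return f(pts[0])
    return ((divided_difference(f, pts[1:]) - divided_difference(f, pts[:-1]))
            / (pts[-1] - pts[0]))

def h_complete(vars_, r):
    """Complete symmetric function h_{n,r}: sum of all degree-r monomials."""
    if r == 0:
        return 1.0
    return sum(prod(c) for c in combinations_with_replacement(vars_, r))

ts = [0.4, 0.9, 1.3]                      # n = 3 made-up nodes
m = 5
lhs = divided_difference(lambda t: t ** m, ts)   # t^5[t_1, t_2, t_3]
rhs = h_complete(ts, m - len(ts) + 1)            # h_{3,3}
print(abs(lhs - rhs))  # difference at rounding level
```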

Given a point $t^*$, we define the complete symmetric functions:

$q_{i,j}(n)=t^{j-1}[H_n,\cdots,H_{n-(i-1)}]\quad\text{and}\quad \sigma_{i,j}(n)=(-1)^{j-i}e_{j-1,j-i},$

in the values $H_{n-k}=t_{n-k}-t^*$ with $k=0,\cdots,i-1$ and $t^*\in[a,b]$. The square matrices of order k, $P_{k,n}=(q_{i,j}(n))$ and $S_{k,n}=(\sigma_{i,j}(n))$, are inverses of each other.
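The inverse relationship between $P_{k,n}$ and $S_{k,n}$ can be checked directly. In the sketch below (Python, with made-up values $H_{n-k}$ and $t^*=t_n$, so the first value is 0), row i of P uses $h_{i,j-i}$ in the first i variables and column j of S uses $e_{j-1,j-i}$ in the first $j-1$ variables; these indexing conventions are our reading of the definitions above:

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def h_sym(v, r):
    """Complete symmetric function h_{n,r} in the variables v."""
    if r < 0:
        return 0.0
    return 1.0 if r == 0 else sum(prod(c) for c in combinations_with_replacement(v, r))

def e_sym(v, r):
    """Elementary symmetric function e_{n,r} in the variables v."""
    if r < 0 or r > len(v):
        return 0.0
    return 1.0 if r == 0 else sum(prod(c) for c in combinations(v, r))

H = [0.0, -0.3, -0.55, -0.9]   # H_n, H_{n-1}, H_{n-2}, H_{n-3} with t* = t_n (made up)
k = 4
# q_{i,j}(n) = t^{j-1}[H_n, ..., H_{n-(i-1)}] = h_{i,j-i} in the first i variables
P = [[h_sym(H[:i], j - i) for j in range(1, k + 1)] for i in range(1, k + 1)]
# sigma_{i,j}(n) = (-1)^{j-i} e_{j-1,j-i} in the first j-1 variables
S = [[(-1) ** (j - i) * e_sym(H[:j - 1], j - i) for j in range(1, k + 1)]
     for i in range(1, k + 1)]
PS = [[sum(P[i][l] * S[l][j] for l in range(k)) for j in range(k)] for i in range(k)]
print(PS)  # identity matrix, up to rounding
```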

Since $H_{n-j}=t_{n-j}-t^*$ and $H_j=t_n-t_{n-j}$, we can write $(t_n-t^*)-H_j=H_{n-j}$ with $j=0,\cdots,i-1$. In the particular case $t^*=t_n$, we get $H_{n-j}=-H_j$.

The divided differences of a function g satisfy the property:

$g\left[{t}_{n},\cdots ,{t}_{n-\left(i-1\right)}\right]=\underset{j=0}{\overset{\infty }{\sum }}\text{ }{q}_{i,j+1}\left(n\right)\frac{1}{j!}{g}^{\left(j\right)}\left({t}^{*}\right).$

If $H=\max\{|H_n|,\cdots,|H_{n-(i-1)}|\}$, then, since $q_{i,j}(n)$ has order $j-i$ in H, from the last result we can write:

$g\left[{t}_{n},\cdots ,{t}_{n-\left(i-1\right)}\right]=\underset{j=0}{\overset{p-1}{\sum }}\text{ }{q}_{i,j+1}\left(n\right)\frac{1}{j!}{g}^{\left(j\right)}\left({t}^{*}\right)+O\left({H}^{p-\left(i-1\right)}\right)\text{ }\text{ }\text{\hspace{0.17em}}\text{with}\text{\hspace{0.17em}}\text{ }i=1,\cdots ,p.$

Considering $t^*=t_n$ and expressing these equalities in matrix form, we have

$\left(\begin{array}{c}g\left[{t}_{n}\right]\\ g\left[{t}_{n},{t}_{n-1}\right]\\ ⋮\\ g\left[{t}_{n},\cdots ,{t}_{n-\left(p-1\right)}\right]\end{array}\right)=\left(\begin{array}{ccc}{q}_{1,1}\left(n\right)& \cdots & {q}_{1,p}\left(n\right)\\ {q}_{2,1}\left(n\right)& \cdots & {q}_{2,p}\left(n\right)\\ ⋮& \ddots & ⋮\\ {q}_{p,1}\left(n\right)& \cdots & {q}_{p,p}\left(n\right)\end{array}\right)\left(\begin{array}{c}g\left({t}_{n}\right)\\ \frac{{g}^{\prime }\left({t}_{n}\right)}{1!}\\ ⋮\\ \frac{{g}^{\left(p-1\right)}\left({t}_{n}\right)}{\left(p-1\right)!}\end{array}\right)+\left(\begin{array}{c}O\left({H}^{p}\right)\\ O\left({H}^{p-1}\right)\\ ⋮\\ O\left(H\right)\end{array}\right),$

and since $q_{i,j+1}(n)=h_{i,j}$ in the arguments $\{H_n,\cdots,H_{n-(i-1)}\}$, we can write

$\left(\begin{array}{c}g\left[{t}_{n}\right]\\ g\left[{t}_{n},{t}_{n-1}\right]\\ ⋮\\ g\left[{t}_{n},\cdots ,{t}_{n-\left(p-1\right)}\right]\end{array}\right)=\left(\begin{array}{cccc}1& {h}_{1,1}& \cdots & {h}_{1,p-1}\\ 0& 1& \cdots & {h}_{2,p-2}\\ ⋮& ⋮& \ddots & ⋮\\ 0& 0& \cdots & 1\end{array}\right)\left(\begin{array}{c}g\left({t}_{n}\right)\\ \frac{{g}^{\prime }\left({t}_{n}\right)}{1!}\\ ⋮\\ \frac{{g}^{\left(p-1\right)}\left({t}_{n}\right)}{\left(p-1\right)!}\end{array}\right)+\left(\begin{array}{c}O\left({H}^{p}\right)\\ O\left({H}^{p-1}\right)\\ ⋮\\ O\left(H\right)\end{array}\right).$

Since $\sigma_{i,j}(n)=\sigma_{i-1,j-1}(n)-H_{n-j+2}\,\sigma_{i,j-1}(n)$ for $i,j\ge 2$, if we consider $t^*=t_n$ then:

$S_{p,n}=(\sigma_{i,j}(n))_{p\times p}$, with

$\begin{array}{l}
\sigma_{1,1}(n)=1,\\
\sigma_{1,j}(n)=0,\quad 1<j\le p,\\
\sigma_{i,1}(n)=0,\quad 1<i\le p,\\
\sigma_{i,j}(n)=\sigma_{i-1,j-1}(n)-H_{n-j+2}\,\sigma_{i,j-1}(n),\quad i,j\ge 2.
\end{array}$ (9)

The recurrent form of the matrix ${A}_{p}^{-t}$ is obtained through:

${A}_{p}^{-t}={M}_{p}×{P}_{p,n}^{-t}×{N}_{p}={M}_{p}×{S}_{p,n}^{t}×{N}_{p},$ (10)

i.e.

${d}_{i,j}=\frac{\left(j-1\right)!{\sigma }_{j,i}\left(n\right)}{\left(i-1\right)!}\text{ }\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{with}\text{\hspace{0.17em}}\text{ }i,j=1,\cdots ,p,$ (11)

where $M_p=(m_{ij})_p$ is a diagonal matrix with $m_{ii}=\frac{1}{(i-1)!}$ for $i=1,\cdots,p$, and $N_p=M_p^{-1}$.

Expressions (9) and (10) allow us to compute the matrix $A_p^{-t}$ by recurrence from the matrix $S_{p,n}^t$.

Substituting (11) in (8), we obtain the multistep explicit method.
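As a sanity check of (11), the following Python sketch (made-up step history, $p=4$, with $t^*=t_n$ so the divided-difference nodes are $\{0,-H_1,\cdots\}$) builds $A_p$ from its definition and the candidate $A_p^{-t}$ from (11), then forms $A_p^t\,(d_{ij})$, which should be the identity; the helper functions and node values are our own:

```python
from itertools import combinations, combinations_with_replacement
from math import prod, factorial

def h_sym(v, r):
    """Complete symmetric function h_{n,r} in the variables v."""
    if r < 0:
        return 0.0
    return 1.0 if r == 0 else sum(prod(c) for c in combinations_with_replacement(v, r))

def e_sym(v, r):
    """Elementary symmetric function e_{n,r} in the variables v."""
    if r < 0 or r > len(v):
        return 0.0
    return 1.0 if r == 0 else sum(prod(c) for c in combinations(v, r))

p = 4
nodes = [0.0, -0.35, -0.6, -1.0]   # {0, -H_1, -H_2, -H_3}: made-up step history
# A_p[i][j] = (i-1)! P_{j-1}[0, -H_1, ..., -H_{i-1}] = (i-1)!/(j-1)! h_{i,j-i}
A = [[factorial(i - 1) / factorial(j - 1) * h_sym(nodes[:i], j - i)
      for j in range(1, p + 1)] for i in range(1, p + 1)]
# (11): d_{i,j} = (j-1)! sigma_{j,i}(n)/(i-1)!, with sigma_{j,i} = (-1)^{i-j} e_{i-1,i-j}
D = [[factorial(j - 1) / factorial(i - 1) * (-1) ** (i - j) * e_sym(nodes[:i - 1], i - j)
      for j in range(1, p + 1)] for i in range(1, p + 1)]
# if (11) holds, D = A_p^{-t}, i.e. A_p^t D = I
AtD = [[sum(A[l][i] * D[l][j] for l in range(p)) for j in range(p)] for i in range(p)]
print(AtD)  # identity matrix, up to rounding
```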

We take $x_n$ and $x'_n$ as the approximate values of the solution and its derivative at $t_n$, with starting values $x_0,\cdots,x_{p-1}$ and $x'_0,\cdots,x'_{p-1}$, respectively.

Definition 4

The formal expression of the explicit multistep method is

$\begin{array}{l}{x}_{n+1}={x}_{n}{T}_{0}\left(h\right)+{{x}^{\prime }}_{n}{T}_{1}\left(h\right)+{{x}^{″}}_{n}{T}_{2}\left(h\right)+{{x}^{‴}}_{n}\text{ }{T}_{3}\left(h\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\epsilon \underset{i=1}{\overset{p}{\sum }}\text{ }{\Lambda }_{i}g\left[{t}_{n},\cdots ,{t}_{n-\left(i-1\right)}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\epsilon {\beta }^{2}\left(\underset{i=1}{\overset{p}{\sum }}\text{ }g\left[{t}_{n},\cdots ,{t}_{n-\left(i-1\right)}\right]\left({\sigma }_{p-1,i}{T}_{p+2}\left(h\right)+\left(p-1\right){\sigma }_{pi}{T}_{p+3}\left(h\right)\right)\left(p-2\right)!\right),\\ \text{ }\text{with}\text{ }\text{\hspace{0.17em}}{\Lambda }_{i}=\underset{j=0}{\overset{p-3}{\sum }}\left(\left(j+1\right)\left(j+2\right){\sigma }_{j+3,i}+{\beta }^{2}{\sigma }_{j+1,i}\right)j!{T}_{j+4}\left(h\right),\text{ }i=1,\cdots ,p,\end{array}$

$\begin{array}{l}
x'_{n+1}=-\alpha\beta^2 x_n T_3(h)+x'_n\left(T_0(h)-\gamma\beta^2 T_3(h)\right)+x''_n\left(T_1(h)-(\alpha+\beta^2)T_3(h)\right)\\
\qquad\quad +x'''_n\left(T_2(h)-\gamma T_3(h)\right)+\epsilon\displaystyle\sum_{i=1}^{p}\Lambda'_i\, g[t_n,\cdots,t_{n-(i-1)}]\\
\qquad\quad +\epsilon\beta^2\left(\displaystyle\sum_{i=1}^{p} g[t_n,\cdots,t_{n-(i-1)}]\left(\sigma_{p-1,i}T_{p+1}(h)+(p-1)\sigma_{p,i}T_{p+2}(h)\right)(p-2)!\right),\\
\text{with }\Lambda'_i=\displaystyle\sum_{j=0}^{p-3}\left((j+1)(j+2)\sigma_{j+3,i}+\beta^2\sigma_{j+1,i}\right)j!\,T_{j+3}(h),\quad i=1,\cdots,p.
\end{array}$

3.1. Residue Calculation

In the explicit method, the parameter $\epsilon$ is a common factor of the truncation error in each step.

It is assumed that the values calculated for $x,x',x''$ and $x'''$ at $t_k=kh$ are exact, i.e. $x_k=x(kh)$, $x'_k=x'(kh)$, $x''_k=x''(kh)$ and $x'''_k=x'''(kh)$. The values $f_k,f_{k-1},\cdots,f_{k-n+1}$ are also exact, i.e. $f_{k-j}=f((k-j)h)$, with $j=0,\cdots,n-1$.

It is necessary to calculate the values of the residues $r_i=x^{(i-1)}((k+1)h)-x_{k+1}^{(i-1)}$, with $i=1,2,3,4$.

In the T-functions series methods:

${a}_{0}={x}_{k},\text{ }{a}_{1}={{x}^{\prime }}_{k},\text{ }{a}_{2}={{x}^{″}}_{k},\text{ }{a}_{3}={{x}^{‴}}_{k},$

${a}_{n}=-\gamma {a}_{n-1}-\alpha {a}_{n-2}+\epsilon {f}^{\left(n-2\right)}\left(kh,x\left(kh\right),{x}^{\prime }\left(kh\right)\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}n\ge 4,$

${b}_{i}={a}_{i},\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=0,1,2,3$

${b}_{n}={a}_{n}+\gamma {a}_{n-1}+\left(\alpha +{\beta }^{2}\right){a}_{n-2}+{\beta }^{2}\gamma {a}_{n-3}+\alpha {\beta }^{2}{a}_{n-4},\text{\hspace{0.17em}}\text{\hspace{0.17em}}n\ge 4,$

$x\left(\left(k+1\right)h\right)=\underset{n=0}{\overset{3}{\sum }}\text{ }{x}_{k}^{\left(n\right)}{T}_{n}\left(h\right)+\epsilon \underset{n=4}{\overset{\infty }{\sum }}\left({f}^{\left(n-2\right)}\left(kh\right)+{\beta }^{2}{f}^{\left(n-4\right)}\left(kh\right)\right){T}_{n}\left(h\right).$

In the multistep explicit method of T-functions:

$\begin{array}{l}{x}_{k+1}={x}_{k}{T}_{0}\left(h\right)+{{x}^{\prime }}_{k}{T}_{1}\left(h\right)+{{x}^{″}}_{k}{T}_{2}\left(h\right)+{{x}^{‴}}_{k}\text{ }{T}_{3}\left(h\right)+\epsilon \underset{i=1}{\overset{p}{\sum }}\text{ }{\Lambda }_{i}g\left[{t}_{k},\cdots ,{t}_{k-\left(i-1\right)}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\epsilon {\beta }^{2}\left(\underset{i=1}{\overset{p}{\sum }}\text{ }g\left[{t}_{k},\cdots ,{t}_{k-\left(i-1\right)}\right]\left({\sigma }_{p-1,i}{T}_{p+2}\left(h\right)+\left(p-1\right){\sigma }_{pi}{T}_{p+3}\left(h\right)\right)\left(p-2\right)!\right),\\ \text{ }\text{with}\text{ }\text{\hspace{0.17em}}{\Lambda }_{i}=\underset{j=0}{\overset{p-3}{\sum }}\left(\left(j+1\right)\left(j+2\right){\sigma }_{j+3,i}+{\beta }^{2}{\sigma }_{j+1,i}\right)j!{T}_{j+4}\left(h\right),\text{ }i=1,\cdots ,p,\end{array}$

Then

$\begin{array}{l}{r}_{1}=x\left(\left(k+1\right)h\right)-{x}_{k+1}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}=\epsilon \underset{n=4}{\overset{\infty }{\sum }}\left({f}^{\left(n-2\right)}\left(kh\right)+{\beta }^{2}{f}^{\left(n-4\right)}\left(kh\right)\right){T}_{n}\left(h\right)-\epsilon \left(\underset{i=1}{\overset{p}{\sum }}\text{ }{\Lambda }_{i}g\left[{t}_{k},\cdots ,{t}_{k-\left(i-1\right)}\right]\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}-\epsilon {\beta }^{2}\left(\underset{i=1}{\overset{p}{\sum }}\text{ }g\left[{t}_{k},\cdots ,{t}_{k-\left(i-1\right)}\right]\left({\sigma }_{p-1,i}{T}_{p+2}\left(h\right)+\left(p-1\right){\sigma }_{pi}{T}_{p+3}\left(h\right)\right)\left(p-2\right)!\right),\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}=\epsilon \left(\underset{n=4}{\overset{\infty }{\sum }}\left({f}^{\left(n-2\right)}\left(kh\right)+{\beta }^{2}{f}^{\left(n-4\right)}\left(kh\right)\right){T}_{n}\left(h\right)\right)-\epsilon \left(\underset{i=1}{\overset{p}{\sum }}\text{ }{\Lambda }_{i}g\left[{t}_{k},\cdots ,{t}_{k-\left(i-1\right)}\right]+{\beta }^{2}{M}_{0}\right),\\ \text{ }\text{with}\text{ }\text{\hspace{0.17em}}{M}_{0}=\underset{i=1}{\overset{p}{\sum }}\text{ }g\left[{t}_{k},\cdots ,{t}_{k-\left(i-1\right)}\right]\left({\sigma }_{p-1,i}{T}_{p+2}\left(h\right)+\left(p-1\right){\sigma }_{pi}{T}_{p+3}\left(h\right)\right)\left(p-2\right)!.\end{array}$

Similarly for $r_i=x^{(i-1)}((k+1)h)-x_{k+1}^{(i-1)}$ with $i=2,3,4$.

Thus, if $\epsilon=0$, the multistep explicit method of T-functions is exact, i.e. it integrates the non-perturbed problem exactly.

3.2. Consistency and Stability

If the perturbation function $f(u,v,t)$ is assumed to be Lipschitz continuous in the variables u and v, then, considering the definition of stability given by Stetter, the method is stable. It suffices to apply Theorems 1 and 2 of Grigorieff. According to Theorem 1 of Grigorieff, it is enough to verify that

$\underset{i=1}{\overset{p}{\sum }}\text{ }{\Lambda }_{i}g\left[{t}_{k},\cdots ,{t}_{k-\left(i-1\right)}\right]+{\beta }^{2}{M}_{0}$

satisfies the Lipschitz condition in the domain of its variables, and to verify the stability of the method in the unperturbed case.

The first part follows from the fact that the function f is Lipschitz continuous.

Similarly for $\underset{i=1}{\overset{p}{\sum }}\text{ }{{\Lambda }^{\prime }}_{i}g\left[{t}_{k},\cdots ,{t}_{k-\left(i-1\right)}\right]+{\beta }^{2}{M}_{1}$, $\underset{i=1}{\overset{p}{\sum }}\text{ }{{\Lambda }^{″}}_{i}g\left[{t}_{k},\cdots ,{t}_{k-\left(i-1\right)}\right]+{\beta }^{2}{M}_{2}$ and $\underset{i=1}{\overset{p}{\sum }}\text{ }{{\Lambda }^{‴}}_{i}g\left[{t}_{k},\cdots ,{t}_{k-\left(i-1\right)}\right]+{\beta }^{2}{M}_{3}$ with

${M}_{l}=\underset{i=1}{\overset{p}{\sum }}\text{ }g\left[{t}_{k},\cdots ,{t}_{k-\left(i-1\right)}\right]\left({\sigma }_{p-1,i}{T}_{p+2-l}\left(h\right)+\left(p-1\right){\sigma }_{pi}{T}_{p+3-l}\left(h\right)\right)\left(p-2\right)!$

with $l=1,2,3$.

To prove the second part, by Theorem 2 of Grigorieff it is enough to check that the set

$\left\{|{\left(\begin{array}{cccc}{T}_{0}\left(h\right)& {T}_{1}\left(h\right)& {T}_{2}\left(h\right)& {T}_{3}\left(h\right)\\ {{T}^{\prime }}_{0}\left(h\right)& {{T}^{\prime }}_{1}\left(h\right)& {{T}^{\prime }}_{2}\left(h\right)& {{T}^{\prime }}_{3}\left(h\right)\\ {{T}^{″}}_{0}\left(h\right)& {{T}^{″}}_{1}\left(h\right)& {{T}^{″}}_{2}\left(h\right)& {{T}^{″}}_{3}\left(h\right)\\ {{T}^{‴}}_{0}\left(h\right)& {{T}^{‴}}_{1}\left(h\right)& {{T}^{‴}}_{2}\left(h\right)& {{T}^{‴}}_{3}\left(h\right)\end{array}\right)}^{k}|,0\le k\le N,h=\frac{T}{N}\right\}$

is bounded.

This arises naturally from the expressions obtained in the construction of the T-functions, since they are continuous functions in $\left[0,T\right]$ and therefore bounded.

By the above results, the method is consistent and stable. Applying the result of Stetter , which states that consistency and stability imply convergence, the method is convergent.
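Both the explicit and implicit formulas above are driven by Newton divided differences $g\left[{t}_{k},\cdots ,{t}_{k-\left(i-1\right)}\right]$ of the perturbation values. As a minimal sketch (the nodes and data below are hypothetical, not taken from the paper), the standard recursive computation is:

```python
# Newton divided differences g[t0], g[t0,t1], ..., g[t0,...,tn],
# computed by the usual recursion
# g[t_i,...,t_{i+k}] = (g[t_{i+1},...,t_{i+k}] - g[t_i,...,t_{i+k-1}]) / (t_{i+k} - t_i).
def divided_differences(ts, gs):
    n = len(ts)
    table = list(gs)
    out = [table[0]]
    for order in range(1, n):
        table = [(table[i + 1] - table[i]) / (ts[i + order] - ts[i])
                 for i in range(n - order)]
        out.append(table[0])
    return out

# sanity check on g(t) = t^2: the second divided difference is the leading
# coefficient 1, and the third-order difference vanishes
ts = [0.0, 0.5, 1.0, 1.5]
dd = divided_differences(ts, [t * t for t in ts])
assert dd[0] == 0.0 and dd[1] == 0.5 and dd[2] == 1.0 and abs(dd[3]) < 1e-12
```

The same routine works for the non-equidistant grids used by the variable-step versions of the method, since the recursion only uses the node values $t_i$.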

4. Implicit Method for Perturbed and Damped Oscillators

Proceeding as in the explicit case, the matrix from which the coefficients ${d}_{ij}$ are extracted for the implicit method will be denoted by ${B}_{p}^{-t}$. The matrix ${B}_{p}$, as described in  , is

${B}_{p}={\left(\begin{array}{ccccc}1& {P}_{1}\left[h\right]& {P}_{2}\left[h\right]& \cdots & {P}_{p}\left[h\right]\\ 0& 1& 1!{P}_{2}\left[h,0\right]& \cdots & 1!{P}_{p}\left[h,0\right]\\ 0& 0& 1& \cdots & 2!{P}_{p}\left[h,0,-{H}_{1}\right]\\ ⋮& ⋮& ⋮& \ddots & ⋮\\ 0& 0& 0& \cdots & 1\end{array}\right)}_{\left(p+1\right)×\left(p+1\right)}$

Denoting ${\left({d}_{ij}\right)}_{\left(p+1\right)×\left(p+1\right)}={B}_{p}^{-t}={\left({B}_{p}^{-1}\right)}^{t}$, we can write

${Z}_{\left(p+1\right)×1}=\left(\begin{array}{c}\underset{i=1}{\overset{p+1}{\sum }}\text{ }g\left[{t}_{n+1},\cdots ,{t}_{n+1-\left(i-1\right)}\right]{d}_{i1}\left(i-1\right)!\\ ⋮\\ \underset{i=1}{\overset{p+1}{\sum }}\text{ }g\left[{t}_{n+1},\cdots ,{t}_{n+1-\left(i-1\right)}\right]{d}_{i\left(p+1\right)}\left(i-1\right)!\end{array}\right),$ (12)

Substituting (12) into (6) and truncating yields

$\begin{array}{rl}{x}_{n+1}=&{x}_{n}{T}_{0}\left(h\right)+{{x}^{\prime }}_{n}{T}_{1}\left(h\right)+{{x}^{″}}_{n}{T}_{2}\left(h\right)+{{x}^{‴}}_{n}{T}_{3}\left(h\right)\\ &+\epsilon \underset{i=1}{\overset{p+1}{\sum }}g\left[{t}_{n+1},\cdots ,{t}_{n+1-\left(i-1\right)}\right]\left(\underset{j=0}{\overset{p-2}{\sum }}\left({d}_{ij+3}+{\beta }^{2}{d}_{ij+1}\right)\left(i-1\right)!{T}_{j+4}\left(h\right)\right)\\ &+\epsilon {\beta }^{2}\left(\underset{i=1}{\overset{p+1}{\sum }}g\left[{t}_{n+1},\cdots ,{t}_{n+1-\left(i-1\right)}\right]\left({d}_{ip}{T}_{p+2}\left(h\right)+{d}_{ip+1}{T}_{p+3}\left(h\right)\right)\left(i-1\right)!\right).\end{array}$ (13)

Denoting, for $i=1,\cdots ,p+1$,

${\Gamma }_{i}=\underset{j=0}{\overset{p-2}{\sum }}\left({d}_{ij+3}+{\beta }^{2}{d}_{ij+1}\right)\left(i-1\right)!{T}_{j+4}\left(h\right),$

${{\Gamma }^{\prime }}_{i}=\underset{j=0}{\overset{p-2}{\sum }}\left({d}_{ij+3}+{\beta }^{2}{d}_{ij+1}\right)\left(i-1\right)!{T}_{j+3}\left(h\right),$

we obtain the following formulas for an implicit multistep method:

$\begin{array}{rl}{x}_{n+1}=&{x}_{n}{T}_{0}\left(h\right)+{{x}^{\prime }}_{n}{T}_{1}\left(h\right)+{{x}^{″}}_{n}{T}_{2}\left(h\right)+{{x}^{‴}}_{n}{T}_{3}\left(h\right)+\epsilon \underset{i=1}{\overset{p+1}{\sum }}{\Gamma }_{i}g\left[{t}_{n+1},\cdots ,{t}_{n+1-\left(i-1\right)}\right]\\ &+\epsilon {\beta }^{2}\left(\underset{i=1}{\overset{p+1}{\sum }}g\left[{t}_{n+1},\cdots ,{t}_{n+1-\left(i-1\right)}\right]\left({d}_{ip}{T}_{p+2}\left(h\right)+{d}_{ip+1}{T}_{p+3}\left(h\right)\right)\left(i-1\right)!\right),\end{array}$

$\begin{array}{rl}{{x}^{\prime }}_{n+1}=&-\alpha {\beta }^{2}{x}_{n}{T}_{3}\left(h\right)+{{x}^{\prime }}_{n}\left({T}_{0}\left(h\right)-\gamma {\beta }^{2}{T}_{3}\left(h\right)\right)+{{x}^{″}}_{n}\left({T}_{1}\left(h\right)-\left(\alpha +{\beta }^{2}\right){T}_{3}\left(h\right)\right)\\ &+{{x}^{‴}}_{n}\left({T}_{2}\left(h\right)-\gamma {T}_{3}\left(h\right)\right)+\epsilon \underset{i=1}{\overset{p+1}{\sum }}{{\Gamma }^{\prime }}_{i}g\left[{t}_{n+1},\cdots ,{t}_{n+1-\left(i-1\right)}\right]\\ &+\epsilon {\beta }^{2}\left(\underset{i=1}{\overset{p+1}{\sum }}g\left[{t}_{n+1},\cdots ,{t}_{n+1-\left(i-1\right)}\right]\left({d}_{ip}{T}_{p+1}\left(h\right)+{d}_{ip+1}{T}_{p+2}\left(h\right)\right)\left(i-1\right)!\right).\end{array}$

Once the matrix ${B}_{p}^{-t}$ is known, a recurrent calculation can be set up for the implicit method through the matrix ${S}_{p,n+1}$. The matrix ${S}_{p,n+1}$, as detailed in , is:

${S}_{p+1,n+1}={\left({\sigma }_{i,j}\left(n\right)\right)}_{\left(p+1\right)×\left(p+1\right)}$ with ${\sigma }_{1,1}\left(n\right)=1$, ${\sigma }_{1,2}\left(n\right)=-{h}_{n+1}$, ${\sigma }_{1,j}\left(n\right)=0$ for $2<j\le p+1$, the remaining entries being given by the recurrences detailed in the cited reference. (14)

The recurrent form of the matrix ${B}_{p}^{-t}$ is then obtained through:

${B}_{p}^{-t}={M}_{p+1}×{P}_{p,n+1}^{-t}×{N}_{p+1}={M}_{p+1}×{S}_{p,n+1}^{t}×{N}_{p+1},$ (15)

i.e.

${d}_{i,j}=\frac{\left(j-1\right)!{\sigma }_{j,i}\left(n\right)}{\left(i-1\right)!}\text{ }\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{with}\text{\hspace{0.17em}}\text{ }i,j=1,\cdots ,p+1,$ (16)

where ${M}_{p+1}={\left({m}_{ij}\right)}_{p+1}$ is a diagonal matrix with entries ${m}_{ii}=\frac{1}{i!}$ for $i=0,\cdots ,p$, and ${N}_{p+1}={M}_{p+1}^{-1}$.

The expressions (14) and (15) allow the matrix ${B}_{p}^{-t}$ to be computed by recurrence from the matrix ${S}_{p,n+1}^{t}$.
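The entrywise relation (16) is just the scaling produced by the diagonal matrices in (15), and it can be checked numerically. In the sketch below the $\sigma$ matrix is a hypothetical example (the real one comes from the recurrences of (14)); the identity holds for any such matrix:

```python
# Numerical check of (16): d_ij = (j-1)! * sigma_{j,i} / (i-1)!,
# equivalently B_p^{-t} = M_{p+1} * S^t * N_{p+1} from (15).
from math import factorial

def matmul(A, B):
    n, m, q = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(q)]
            for i in range(n)]

p = 3
# hypothetical sigma_{i,j}(n), size (p+1) x (p+1)
S = [[1, -0.1, 0, 0],
     [0, 1, -0.2, 0.05],
     [0, 0, 1, -0.3],
     [0, 0, 0, 1]]

# M diagonal with entries 1/0!, 1/1!, ..., 1/p!;  N = M^{-1}
M = [[1.0 / factorial(i) if i == j else 0.0 for j in range(p + 1)]
     for i in range(p + 1)]
N = [[float(factorial(i)) if i == j else 0.0 for j in range(p + 1)]
     for i in range(p + 1)]
St = [list(row) for row in zip(*S)]        # transpose of S

B_inv_t = matmul(matmul(M, St), N)         # right-hand side of (15)

# entrywise comparison with (16), indices 1-based as in the text
for i in range(1, p + 2):
    for j in range(1, p + 2):
        d_ij = factorial(j - 1) * S[j - 1][i - 1] / factorial(i - 1)
        assert abs(B_inv_t[i - 1][j - 1] - d_ij) < 1e-12
```

Since only factorials and entries of $S_{p,n+1}$ appear, the coefficients ${d}_{ij}$ can be updated step by step without inverting ${B}_{p}$ explicitly.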

Substituting (16) into (13), we obtain the implicit multistep method.

We take ${x}_{n}$ and ${{x}^{\prime }}_{n}$ as the approximate values of the solution and its derivative at ${t}_{n}$, with starting values ${x}_{0},\cdots ,{x}_{p-1}$ and ${{x}^{\prime }}_{0},\cdots ,{{x}^{\prime }}_{p-1}$, respectively.

Definition 5

The formal expression of the implicit multistep method is

$\begin{array}{rl}{x}_{n+1}=&{x}_{n}{T}_{0}\left(h\right)+{{x}^{\prime }}_{n}{T}_{1}\left(h\right)+{{x}^{″}}_{n}{T}_{2}\left(h\right)+{{x}^{‴}}_{n}{T}_{3}\left(h\right)+\epsilon \underset{i=1}{\overset{p+1}{\sum }}{\Gamma }_{i}g\left[{t}_{n+1},\cdots ,{t}_{n+1-\left(i-1\right)}\right]\\ &+\epsilon {\beta }^{2}\left(\underset{i=1}{\overset{p+1}{\sum }}g\left[{t}_{n+1},\cdots ,{t}_{n+1-\left(i-1\right)}\right]\left({\sigma }_{pi}{T}_{p+3}\left(h\right)+p{\sigma }_{p+1,i}{T}_{p+4}\left(h\right)\right)\left(p-1\right)!\right),\end{array}$

with ${\Gamma }_{i}=\underset{j=0}{\overset{p-2}{\sum }}\left(\left(j+1\right)\left(j+2\right){\sigma }_{j+3,i}+{\beta }^{2}{\sigma }_{j+1,i}\right)j!{T}_{j+4}\left(h\right)$, $i=1,\cdots ,p+1$,

$\begin{array}{rl}{{x}^{\prime }}_{n+1}=&-\alpha {\beta }^{2}{x}_{n}{T}_{3}\left(h\right)+{{x}^{\prime }}_{n}\left({T}_{0}\left(h\right)-\gamma {\beta }^{2}{T}_{3}\left(h\right)\right)+{{x}^{″}}_{n}{{T}^{\prime }}_{2}\left(h\right)\\ &+{{x}^{‴}}_{n}\left({T}_{2}\left(h\right)-\gamma {T}_{3}\left(h\right)\right)+\epsilon \underset{i=1}{\overset{p+1}{\sum }}{{\Gamma }^{\prime }}_{i}g\left[{t}_{n+1},\cdots ,{t}_{n+1-\left(i-1\right)}\right]\\ &+\epsilon {\beta }^{2}\left(\underset{i=1}{\overset{p+1}{\sum }}g\left[{t}_{n+1},\cdots ,{t}_{n+1-\left(i-1\right)}\right]\left({\sigma }_{p,i}{T}_{p+3}\left(h\right)+p{\sigma }_{p+1,i}{T}_{p+4}\left(h\right)\right)\left(p-1\right)!\right),\end{array}$

with ${{\Gamma }^{\prime }}_{i}=\underset{j=0}{\overset{p-2}{\sum }}\left(\left(j+1\right)\left(j+2\right){\sigma }_{j+3,i}+{\beta }^{2}{\sigma }_{j+1,i}\right)j!{T}_{j+3}\left(h\right)$, $i=1,\cdots ,p+1$.

Analogously to the explicit multistep method, the implicit multistep method of T-functions is convergent.

Predictor-Corrector Method for Perturbed and Damped Oscillators

The p-step predictor-corrector method with variable step size for perturbed oscillators is defined as the method that uses the explicit method as predictor and the implicit method as corrector, with the previous definitions.
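The predict-evaluate-correct-evaluate (PECE) pattern described here can be illustrated with a toy example. The sketch below uses the classical 2-step Adams-Bashforth predictor and 2-step Adams-Moulton corrector on the test problem $x' = -x$, not the T-function formulas themselves, whose coefficients require the recurrences of the previous sections:

```python
# PECE pattern: explicit 2-step predictor + one implicit corrector iteration,
# illustrated with Adams-Bashforth/Adams-Moulton on x' = -x, x(0) = 1.
import math

def f(t, x):
    return -x

h, n_steps = 0.01, 1000
xs = [1.0, math.exp(-h)]            # exact starting values x(0), x(h)

for n in range(1, n_steps):
    t_n = n * h
    # Predict: explicit 2-step Adams-Bashforth
    x_pred = xs[n] + h * (1.5 * f(t_n, xs[n]) - 0.5 * f(t_n - h, xs[n - 1]))
    # Evaluate + Correct: implicit 2-step Adams-Moulton, one fixed-point pass
    x_corr = xs[n] + h * (5 / 12 * f(t_n + h, x_pred)
                          + 8 / 12 * f(t_n, xs[n])
                          - 1 / 12 * f(t_n - h, xs[n - 1]))
    xs.append(x_corr)

err = abs(xs[-1] - math.exp(-n_steps * h))   # error at t = 10
assert err < 1e-6
```

In the method of the paper, the same loop is run with the explicit and implicit T-function formulas, so that the unperturbed oscillatory part is integrated exactly and only the perturbation is treated by the corrector.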

5. An Application for Highly Oscillatory Stiff Problems

In this section, we present an application of the multistep method based on T-functions to the solution of stiff and highly oscillatory problems. The good behaviour of the method is shown by comparison with other well-known codes implemented in the MAPLE package NUMERIC DSOLVE. The program has been run on an Intel(R) Xeon(R) processor with 12 GB of RAM.

This environment is useful because it permits easy changes to the number of digits used in the calculations and provides convenient graphic capabilities.

5.1. Problem 1

This stiff problem has been selected in order to demonstrate the accuracy of the multistep predictor-corrector method, when the perturbation function is cancelled.

Let us consider the following stiff IVP:

$\left\{\begin{array}{l}{{x}^{\prime }}_{1}\left(t\right)=2{x}_{1}\left(t\right)+{x}_{2}\left(t\right)+2\mathrm{sin}\left(t\right)\\ {{x}^{\prime }}_{2}\left(t\right)=-\left(\eta +2\right){x}_{1}\left(t\right)+\left(\eta +1\right)\left({x}_{2}\left(t\right)-\mathrm{cos}\left(t\right)+\mathrm{sin}\left(t\right)\right)\end{array}$

with initial conditions ${x}_{1}\left(0\right)=2$, ${x}_{2}\left(0\right)=3$ and solution independent of $\eta$ :

${x}_{1}\left(t\right)=2{\text{e}}^{-t}+\mathrm{sin}\left(t\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{x}_{2}\left(t\right)=2{\text{e}}^{-t}+\mathrm{cos}\left(t\right).$

The eigenvalues of the system are −1 and $\eta$, which enables its degree of stiffness to be regulated. For the case $\eta =-1000$ and $\epsilon =1$ proposed in  , the stiff problem can be expressed as an oscillator, and the IVP becomes:

${x}^{″}\left(t\right)+1001{x}^{\prime }\left(t\right)+1000x\left(t\right)=1001\mathrm{cos}\left(t\right)+999\mathrm{sin}\left(t\right),x\left(0\right)=2,{x}^{\prime }\left(0\right)=-1,$

with solution and derivative $x\left(t\right)=2{\text{e}}^{-t}+\mathrm{sin}\left(t\right)$, ${x}^{\prime }\left(t\right)=-2{\text{e}}^{-t}+\mathrm{cos}\left(t\right)$.

Applying the operator ${D}^{2}+{\beta }^{2}$, with $\beta =1$, in order to cancel the perturbation function, the following IVP is obtained:

${x}^{\left(iv\right)}\left(t\right)+1001{x}^{‴}\left(t\right)+1001{x}^{″}\left(t\right)+1001{x}^{\prime }\left(t\right)+1000x\left(t\right)=0,$

$x\left(0\right)=2,$

${x}^{\prime }\left(0\right)=-1,$

${x}^{″}\left(0\right)=2,$

${x}^{‴}\left(0\right)=-3.$
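The exact solution $x\left(t\right)=2{\text{e}}^{-t}+\mathrm{sin}\left(t\right)$ can be checked directly against this fourth-order IVP; a minimal sketch:

```python
# Check that x(t) = 2 e^{-t} + sin t satisfies
# x'''' + 1001 x''' + 1001 x'' + 1001 x' + 1000 x = 0
# together with the stated initial values.
import math

def derivs(t):
    e = 2 * math.exp(-t)
    x = e + math.sin(t)
    x1 = -e + math.cos(t)
    x2 = e - math.sin(t)
    x3 = -e - math.cos(t)
    x4 = e + math.sin(t)
    return x, x1, x2, x3, x4

# initial conditions x(0) = 2, x'(0) = -1, x''(0) = 2, x'''(0) = -3
x, x1, x2, x3, _ = derivs(0.0)
assert (x, x1, x2, x3) == (2.0, -1.0, 2.0, -3.0)

# residual of the homogeneous equation at sample points
for t in [0.0, 0.5, 1.0, 5.0, 10.0]:
    x, x1, x2, x3, x4 = derivs(t)
    residual = x4 + 1001 * x3 + 1001 * x2 + 1001 * x1 + 1000 * x
    assert abs(residual) < 1e-9
```

This confirms that the operator ${D}^{2}+1$ has cancelled the trigonometric forcing, which is the situation in which the multistep method of T-functions integrates without truncation error.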

Figure 1 and Figure 2 contrast the graphs of the decimal logarithm of the absolute value of the relative error of the solution $x\left(t\right)$ vs t, calculated using the Series method and the Predictor-Corrector method with four T-functions, 100 digits and $h=0.9$, with the numerical integration codes MGEAR [msteppart], errorper = Float(1, −13), LSODE, $tol={10}^{-16}$, GEAR, errorper = Float(1, −18).

Figure 1. Problem 1. $x\left(t\right)$ position with four T-functions (Series Method).

Figure 2. Problem 1. $x\left(t\right)$ position with four T-functions (Multistep Method).

An application is the analysis of an SDOF (Single Degree of Freedom) system subjected to an earthquake excitation.

The importance of an SDOF analysis lies in the fact that it best shows the interdependence between the structure, its properties and the duration of the earthquake.

5.2. Problem 2

This problem presents one of the Duffing type equations which model electrical and mechanical problems.

The simplest model of a nonlinear vibratory system is that of the free oscillations of a pendulum, which in the case of small oscillations can be studied using a Duffing differential equation; this is the basis for the study of many phenomena of nonlinear dynamics.

Micro-oscillators are an alternative to quartz crystals, as they are easier to miniaturize for integration into an electronic circuit, behaving like a Duffing oscillator with an internal resonance.

It is also worth highlighting the application of this chaotic oscillator in the analysis of harmonic signals with a low signal-to-noise ratio (SNR).

The problem proposed in  , a linear oscillator with a cubic perturbation function, will be considered:

${x}^{″}\left(t\right)+\alpha x\left(t\right)=\epsilon {x}^{3}\left(t\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}x\left(0\right)=1\text{ }\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{ }{x}^{\prime }\left(0\right)=0,$ (17)

which admits the first integral $H\left(x\left(t\right),{x}^{\prime }\left(t\right)\right)=\frac{1}{2}\left(\alpha {x}^{2}\left(t\right)+{\left({x}^{\prime }\left(t\right)\right)}^{2}\right)-\frac{\epsilon }{4}{x}^{4}\left(t\right)$.
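That $H$ is conserved follows from $\frac{\text{d}H}{\text{d}t}={x}^{\prime }\left(\alpha x+{x}^{″}-\epsilon {x}^{3}\right)=0$ along solutions of (17), and it is exactly this invariant that is monitored in the experiments below. As a minimal sketch, the oscillator can be integrated with a plain fixed-step RK4 (a generic scheme, not the multistep method of the paper) and the drift of $H$ checked:

```python
# Conservation of H(x, x') = (alpha x^2 + x'^2)/2 - (eps/4) x^4 along (17),
# integrated with classical RK4 for alpha = 1, eps = 1e-3, x(0)=1, x'(0)=0.
alpha, eps = 1.0, 1e-3

def f(state):
    x, v = state
    return (v, -alpha * x + eps * x ** 3)      # x'' = -alpha x + eps x^3

def rk4_step(state, h):
    x, v = state
    k1 = f(state)
    k2 = f((x + h / 2 * k1[0], v + h / 2 * k1[1]))
    k3 = f((x + h / 2 * k2[0], v + h / 2 * k2[1]))
    k4 = f((x + h * k3[0], v + h * k3[1]))
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            v + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def H(state):
    x, v = state
    return 0.5 * (alpha * x ** 2 + v ** 2) - eps / 4 * x ** 4

state, h = (1.0, 0.0), 0.01
H0 = H(state)
for _ in range(1000):                          # integrate up to t = 10
    state = rk4_step(state, h)
assert abs(H(state) - H0) < 1e-9               # H is (numerically) conserved
```

The figures below report the same quantity, $|H - H_0|$, for the T-function codes and the MAPLE integrators.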

This problem has been selected to test the behaviour of the multistep predictor-corrector method when the differential operator ${D}^{2}+{\beta }^{2}$ does not cancel the perturbation. For the case $\alpha =1$ and $\epsilon ={10}^{-3}$, applying the operator ${D}^{2}+{\beta }^{2}$ with $\beta =2$ to (17), the following IVP is obtained:

${x}^{\left(\text{iv}\right)}\left(t\right)+5{x}^{″}\left(t\right)+4x\left(t\right)=\epsilon \left({D}^{2}+4\right){x}^{3}\left(t\right),$

$x\left(0\right)=1,$

${x}^{\prime }\left(0\right)=0,$

${x}^{″}\left(0\right)=\epsilon -1={10}^{-3}-1,$

${x}^{‴}\left(0\right)=0.$

The T-functions series method has the disadvantage that it needs to be adapted to each specific problem. For the integration of the oscillator (17), it is necessary, prior to the design of the computational algorithm, to establish the recurrence:

${a}_{0}={x}_{i},\text{\hspace{0.17em}}\text{\hspace{0.17em}}{a}_{1}={{x}^{\prime }}_{i},$

${a}_{k+2}=-\alpha {a}_{k}+\epsilon \underset{j=0}{\overset{k}{\sum }}\left[\left(\begin{array}{c}k\\ j\end{array}\right){a}_{k-j}\left(\underset{i=0}{\overset{j}{\sum }}\left(\begin{array}{c}j\\ i\end{array}\right){a}_{i}{a}_{j-i}\right)\right],\text{\hspace{0.17em}}0\le k\le m-2.$
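This recurrence is the Leibniz rule applied twice to ${x}^{3}={x\cdot x}^{2}$ (note the inner product term reads ${a}_{i}{a}_{j-i}$). As a consistency check, it can be verified against hand-computed derivatives of the solution at $t=0$; a minimal sketch:

```python
# Leibniz-rule recurrence for a_k = x^{(k)}(t_i) with x'' = -alpha x + eps x^3,
# checked against hand-derived values at t = 0 for x(0) = 1, x'(0) = 0.
from math import comb

def derivative_coeffs(x0, dx0, alpha, eps, m):
    a = [x0, dx0]
    for k in range(m - 1):
        # (x^3)^{(k)} at t_i via two nested Leibniz expansions
        cube_k = sum(comb(k, j) * a[k - j]
                     * sum(comb(j, i) * a[i] * a[j - i] for i in range(j + 1))
                     for j in range(k + 1))
        a.append(-alpha * a[k] + eps * cube_k)
    return a

alpha, eps = 1.0, 1e-3
a = derivative_coeffs(1.0, 0.0, alpha, eps, 4)
# hand-derived: x''(0) = -1 + eps, x'''(0) = 0,
# x''''(0) = -x'' + 3 eps x'' = 1 - 4 eps + 3 eps^2
assert abs(a[2] - (-1 + eps)) < 1e-15
assert abs(a[3]) < 1e-15
assert abs(a[4] - (1 - 4 * eps + 3 * eps ** 2)) < 1e-12
```

This is the kind of problem-specific recurrence the series method requires; the multistep method replaces it by divided differences of the perturbation values.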

The multistep numerical method avoids this adaptation to each specific problem, which is required by the T-functions series method.

The accuracies of the T-functions series method and the multistep method are similar. Figure 3 and Figure 4 contrast the graphs of the decimal logarithm of the absolute value of the absolute error of $H\left(x\left(t\right),{x}^{\prime }\left(t\right)\right)$ vs t, calculated using the Series method and the Predictor-Corrector method with ten T-functions, digits 40 and $h=0.01$, with the numerical integration codes MGEAR [msteppart], errorper = Float(1, −15), LSODE, $tol={10}^{-18}$, GEAR, errorper = Float(1, −13).

Figure 3. Problem 2. $x\left(t\right)$ position with ten T-functions (Series Method).

Figure 4. Problem 2. $x\left(t\right)$ position with ten T-functions (Multistep Method).

5.3. Problem 3

In Astrodynamics, obtaining precise solutions for the long-term movement of a planetary system or a satellite that orbits a planet is of continuous interest.

In this problem, we apply the new multistep method to numerically calculate the position of an artificial Earth satellite   .

To facilitate the evaluation of the method, we have selected the problem of an equatorial satellite orbit when the perturbation comes from the zonal harmonic J2, in B-F variables  , expressed using four decoupled oscillators with unit frequency.

$\left\{\begin{array}{l}{{x}^{″}}_{1}\left(t\right)+{x}_{1}\left(t\right)=0\\ {{x}^{″}}_{2}\left(t\right)+{x}_{2}\left(t\right)=0\\ {{x}^{″}}_{3}\left(t\right)+{x}_{3}\left(t\right)=0\\ {u}^{″}\left(t\right)+u\left(t\right)-\frac{\mu }{{c}^{2}}-\frac{12{J}_{2}}{{c}^{2}}{u}^{2}\left(t\right)=0\end{array}$

with initial conditions:

$\left\{\begin{array}{l}{x}_{1}\left(\pi \right)=-1\\ {x}_{2}\left(\pi \right)=0\\ {x}_{3}\left(\pi \right)=0\\ u\left(\pi \right)=\frac{\mu \left(1-e\right)}{{c}^{2}}\end{array}$

where $\left({x}_{1}\left(t\right),{x}_{2}\left(t\right),{x}_{3}\left(t\right)\right)$ are the three direction cosines, u is the inverse of the satellite radius, the independent variable is the true anomaly, c is the angular momentum, which can be considered constant, $\mu$ is the reduced mass and e is the eccentricity of the orbit.

This problem is the main problem in the case of an Earth satellite orbiting at low or medium height, since the other acting perturbations are smaller.

It is also integrable, the qualitative behavior of the solution is known, and a closed solution is available in terms of special functions, which allows the numerical solution to be contrasted with the exact solution of the problem.

We consider orbits with null eccentricity, $e=0$, and with high eccentricity, $e=0.99$:

 $e=0,\frac{\mu }{{c}^{2}}=\frac{20}{21},\frac{{J}_{2}}{{c}^{2}}=\frac{10}{21000}.$

 $e=0.99,\frac{\mu }{{c}^{2}}=\frac{100}{20895},\frac{{J}_{2}}{{c}^{2}}=\frac{50}{20895000}.$

The first three equations can be integrated exactly, because they are not perturbed. The last equation will be integrated with the predictor-corrector multistep method.

The error has been calculated using the following first integral:

$H\left(u\left(t\right),{u}^{\prime }\left(t\right)\right)=\frac{1}{2}\left({u}^{2}\left(t\right)+{\left({u}^{\prime }\left(t\right)\right)}^{2}\right)-4\frac{{J}_{2}}{{c}^{2}}{u}^{3}\left(t\right)-\frac{\mu }{{c}^{2}}u\left(t\right).$
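This invariant is obtained by multiplying the u-equation by ${u}^{\prime }$ and integrating, so that the $12\frac{{J}_{2}}{{c}^{2}}{u}^{2}$ term in the equation produces a cubic term with coefficient $\frac{12}{3}=4$ in the integral. A minimal sketch, again using a generic fixed-step RK4 (not the method of the paper) with the circular-orbit data $e=0$:

```python
# Conservation check for u'' + u - mu/c^2 - (12 J2/c^2) u^2 = 0:
# multiplying by u' and integrating gives the invariant
# E = (u^2 + u'^2)/2 - (mu/c^2) u - 4 (J2/c^2) u^3.
mu_c2 = 20 / 21          # mu / c^2 for the e = 0 data
J2_c2 = 10 / 21000       # J2 / c^2 for the e = 0 data

def acc(u):
    return -u + mu_c2 + 12 * J2_c2 * u * u    # u'' from the equation above

def rk4_step(u, v, h):
    k1 = (v, acc(u))
    k2 = (v + h / 2 * k1[1], acc(u + h / 2 * k1[0]))
    k3 = (v + h / 2 * k2[1], acc(u + h / 2 * k2[0]))
    k4 = (v + h * k3[1], acc(u + h * k3[0]))
    return (u + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            v + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def E(u, v):
    return 0.5 * (u * u + v * v) - mu_c2 * u - 4 * J2_c2 * u ** 3

u, v, h = mu_c2, 0.0, 0.01                    # u(pi) = mu(1-e)/c^2 with e = 0
E0 = E(u, v)
for _ in range(2000):
    u, v = rk4_step(u, v, h)
assert abs(E(u, v) - E0) < 1e-9               # invariant is preserved
```

The figures below report the corresponding drift of this first integral for the T-function predictor-corrector and the MAPLE integrators.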

Figure 5 and Figure 6 contrast the graphs of the decimal logarithm of the absolute value of the absolute error of $H\left(u\left(t\right),{u}^{\prime }\left(t\right)\right)$ vs t, calculated using the Predictor-Corrector method with seventeen T-functions, digits 40 and $h=0.1$, with the numerical integration codes MGEAR [msteppart], errorper = Float(1, −15), LSODE, $tol={10}^{-18}$, GEAR, errorper = Float(1, −15).

Figure 5. Problem 3. Trajectory error of a circular equatorial J2-perturbed satellite orbit, integrated with seventeen T-functions.

Figure 6. Problem 3. Trajectory error of an equatorial J2-perturbed satellite orbit, integrated with seventeen T-functions and $e=0.99$.

6. Conclusion and Suggestions

A multistep algorithm, based on the T-functions series method, has been developed, which integrates exactly the non-homogeneous problem when the perturbation function can be cancelled by a differential operator. If the differential operator does not cancel the perturbation function, a simple expression can still be obtained which facilitates the numerical integration of the IVP. The coefficients of this algorithm are calculated by algebraic recurrences, obtained by replacing the derivatives of the perturbation function with divided differences, which allows the design of VSVO (Variable Step size, Variable Order) multistep codes. The multistep method is convergent. This method can successfully compete with well-known integrators implemented in MAPLE, as shown in the examples, where the gain in precision achieved has been demonstrated for different IVP's.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Cortés-Molina, M. , García-Alonso, F. and Reyes, J. (2019) Numerical Integration of Forced and Damped Oscillators through a New Multistep Method. Journal of Applied Mathematics and Physics, 7, 2440-2458. doi: 10.4236/jamp.2019.710165.

References

[1] Franco, J.M. (2006) New Methods for Oscillatory Systems Based on ARKN Methods. Applied Numerical Mathematics, 56, 1040-1053. https://doi.org/10.1016/j.apnum.2005.09.005
[2] Yang, H., Wu, X. and Xia, J. (2009) Extended RKN-Type Methods for Numerical Integration of Perturbed Oscillators. Computer Physics Communications, 180, 1777-1794. https://doi.org/10.1016/j.cpc.2009.05.010
[3] Wu, X., You, X. and Xia, J. (2009) Order Conditions for ARKN Methods Solving Oscillatory Systems. Computer Physics Communications, 180, 2250-2257. https://doi.org/10.1016/j.cpc.2009.07.011
[4] You, X., Zhao, J., Yang, H., Fang, Y. and Wu, X. (2014) Order Conditions for RKN Methods Solving General Second-Order Oscillatory Systems. Numerical Algorithms, 66, 147-176. https://doi.org/10.1007/s11075-013-9728-5
[5] Stiefel, E.L. and Bettis, D.G. (1969) Stabilization of Cowell’s Method. Numerische Mathematik, 13, 154-175. https://doi.org/10.1007/BF02163234
[6] Bettis, D.G. (1970) Numerical Integration of Products of Fourier and Ordinary Polynomials. Numerische Mathematik, 14, 421-434. https://doi.org/10.1007/BF02163028
[7] Bettis, D.G. (1970) Stabilization of Finite Difference Methods of Numerical Integration. Celestial Mechanics, 2, 282-295. https://doi.org/10.1007/BF01235122
[8] Stiefel, E.L. and Scheifele, G. (1971) Linear and Regular Celestial Mechanics. Springer, Berlin-Heidelberg, New York. https://doi.org/10.1007/978-3-642-65027-7
[9] You, X., Zhang, Y. and Zhao, J. (2011) Trigonometrically-Fitted Scheifele Two-Step Methods for Perturbed Oscillators. Computer Physics Communications, 182, 1481-1490. https://doi.org/10.1016/j.cpc.2011.04.001
[10] Reyes, J.A., García-Alonso, F., Ferrándiz, J.M. and Vigo-Aguiar, J. (2007) Numeric Multistep Variable Methods for Perturbed Linear System Integration. Applied Mathematics and Computation, 190, 63-79. https://doi.org/10.1016/j.amc.2007.01.017
[11] García-Alonso, F., Reyes, J.A., Ferrándiz, J.M. and Vigo-Aguiar, J. (2009) Accurate Numerical Integration of Perturbed Oscillatory Systems in Two Frequencies. ACM Transactions on Mathematical Software (TOMS), 36, Article 21. https://doi.org/10.1145/1555386.1555390
[12] García-Alonso, F., Cortés-Molina, M., Villacampa, Y. and Reyes, J.A. (2016) Accurate Integration of Forced and Damped Oscillators. University Politehnica of Bucharest Scientific Bulletin—Series A, 78, 193-204.
[13] Macdonald, I.G. (1995) Symmetric Functions and Hall Polynomials. Oxford Science Publications, Oxford.
[14] Lambert, J.D. (1991) Numerical Methods for Ordinary Differential Systems. John Wiley and Sons Ltd., New York.
[15] Martín, P. and Farto, J.M. (1999) Improved Numerical Integration of Perturbed Oscillators via Average. Applied Mathematics and Computation, 99, 129-139. https://doi.org/10.1016/S0096-3003(97)10181-3
[16] Vigo-Aguiar, J. and Ramos, H. (2015) On the Choice of the Frequency in Trigonometrically-Fitted Methods for Periodic Problems. Journal of Computational and Applied Mathematics, 277, 94-105. https://doi.org/10.1016/j.cam.2014.09.008
[17] Ferrándiz, J.M. (1988) A General Canonical Transformation Increasing the Number of Variables with Applications to the Two-Body Problem. Celestial Mechanics, 41, 343-357. https://doi.org/10.1007/BF01238770
[18] Ferrándiz, J.M., Sansaturio, M.E. and Pojman, J.R. (1992) Increased Accuracy of Computations in the Main Satellite Problem through Linearization Methods. Celestial Mechanics, 53, 347-364. https://doi.org/10.1007/BF00051816
[19] Janin, G. (1974) Accurate Computation of Highly Eccentric Satellite Orbits. Celestial Mechanics, 10, 541-567. https://doi.org/10.1007/BF01229121
[20] Martín, P. and Ferrándiz, J.M. (1997) Multistep Numerical Methods Based on Scheifele G-Functions with Application to Satellite Dynamics. SIAM Journal on Numerical Analysis, 34, 359-375. https://doi.org/10.1137/S003614299426505X
[21] Vigo-Aguiar, J. and Ferrándiz, J.M. (1998) Higher-Order Variable-Step Algorithms Adapted to the Accurate Numerical Integration of Perturbed Oscillators. Computers in Physics, 12, 467-470. https://doi.org/10.1063/1.168717
[22] Vigo-Aguiar, J. (1999) An Approach to Variable Coefficients Methods for Special Differential Equations. International Journal of Applied Mathematics, 1, 911-921.
[23] Stetter, H.J. (1973) Analysis of Discretization Methods for Ordinary Differential Equations. Springer, Berlin. https://doi.org/10.1007/978-3-642-65471-8
[24] Grigorieff, R.D. (1983) Stability of Multistep Methods on Variable Grids. Numerische Mathematik, 42, 359-377. https://doi.org/10.1007/BF01389580
[25] Van de Vyver, H. (2007) An Adapted Explicit Hybrid Method of Numerov-Type for the Numerical Integration of Perturbed Oscillators. Applied Mathematics and Computation, 186, 1385-1394. https://doi.org/10.1016/j.amc.2006.07.129