A Novel Method for Solving Ordinary Differential Equations with Artificial Neural Networks

Abstract

This research investigates the use of Artificial Neural Network (ANN) based models for solving first and second order linear constant coefficient ordinary differential equations with initial conditions. In particular, we employ a feed-forward Multilayer Perceptron Neural Network (MLPNN), but bypass the standard back-propagation algorithm for updating the intrinsic weights. A trial solution of the differential equation is written as the sum of two parts. The first part satisfies the initial or boundary conditions and contains no adjustable parameters. The second part involves a feed-forward neural network to be trained to satisfy the differential equation. Numerous works have appeared in recent times regarding the solution of differential equations using ANN; however, the majority of these employ a single hidden layer perceptron model with a back-propagation algorithm for weight updating. For the homogeneous case, we assume a solution in exponential form and compute a polynomial approximation using statistical regression. From this we pick the unknown coefficients as the weights from the input layer to the hidden layer of the associated neural network trial solution. To get the weights from the hidden layer to the output layer, we form algebraic equations incorporating the default signs of the differential equations. We then apply the Gaussian Radial Basis Function (GRBF) approximation model to achieve our objective. The weights obtained in this manner need not be adjusted. We proceed to develop a neural network algorithm using MathCAD software, which enables us to slightly adjust the intrinsic biases. We compare the convergence and accuracy of our results with analytic solutions, as well as well-known numerical methods, and obtain satisfactory results for our example ODE problems.

Share and Cite:

Okereke, R., Maliki, O. and Oruh, B. (2021) A Novel Method for Solving Ordinary Differential Equations with Artificial Neural Networks. Applied Mathematics, 12, 900-918. doi: 10.4236/am.2021.1210059.

1. Introduction

The beginning of neuro-computing is often taken to be the research article of McCulloch and Pitts [1], published in 1943, which showed that even simple types of neural networks could, in principle, compute any arithmetic or logical function; the paper was widely read and had great influence. Other researchers, principally von Neumann, wrote a book [2] suggesting that research into the design of brain-like or brain-inspired computers might be of great interest and benefit to scientific and technological knowledge.

We present a new perspective for obtaining solutions of initial value problems using Artificial Neural Networks (ANN). A neural network based model for the solution of ordinary differential equations (ODE) provides a number of advantages over standard numerical methods. Firstly, the neural network based solution is differentiable and in closed analytic form, whereas most other techniques offer a discretized solution or a solution with limited differentiability. Secondly, the neural network based method provides a solution with very good generalization properties. The major advantage of our approach is that it reduces considerably the computational complexity involved in weight updating, while maintaining satisfactory accuracy.

Neural Network Structure

A neural network is an inter-connection of processing elements, units or nodes, whose functionality resembles that of the human neuron [3]. The processing ability of the network is stored in the connection strengths, simply called weights, which are obtained by a process of adaptation to a set of training patterns. Neural network methods can solve both ordinary and partial differential equations. They rely on the function approximation property of feed-forward neural networks, which yields a solution written in closed analytic form, with a feed-forward neural network as the basic approximation element [4] [5]. Training of the neural network can be done by any optimization technique (which in turn requires the computation of the gradient of the error with respect to the network parameters), by a regression based model, or by basis function approximation. In any of these methods, a trial solution of the differential equation is written as a sum of two parts, as proposed by Lagaris [6]. The first part satisfies the initial or boundary conditions and contains no adjustable parameters. The second part contains the adjustable parameters of a feed-forward neural network and is constructed so as not to affect the initial or boundary conditions. By this construction the trial solution satisfies the initial or boundary conditions, and the network is trained to satisfy the differential equation. The general flowchart for neural network training (or learning) is given below in Figure 1.

2. Neural Networks as Universal Approximators

Artificial neural networks can realize a nonlinear mapping from the inputs to the outputs of the corresponding system of neurons, which makes them suitable for analyzing initial/boundary value problems that have no analytical solutions or whose solutions cannot be easily computed. One application of the multilayer feed-forward neural network is the global approximation of a real valued multivariable function in closed analytic form; such neural networks are universal approximators. It is known from the literature that multilayer feed-forward neural networks with one hidden layer, using arbitrary squashing functions, are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy. This is made precise in the following theorem.

Universal Approximation Theorem

The universal approximation theorem for MLP was proved by Cybenko [7] and Hornik et al. [8] in 1989. Let ${I}_{n}$ represent an n-dimensional unit cube containing all possible input samples $x=\left({x}_{1},{x}_{2},\cdots ,{x}_{n}\right)$ with ${x}_{i}\in \left[0,1\right]$, $i=1,2,\cdots ,n$. Let $C\left({I}_{n}\right)$ be the space of continuous functions on ${I}_{n}$, given a continuous sigmoid function $\phi \left(\cdot \right)$, then the universal approximation theorem states that the finite sums of the form

${y}_{k}={y}_{k}\left(x,w\right)=\underset{i=1}{\overset{{N}_{2}}{\sum }}{w}_{ki}^{3}\phi \left(\underset{j=0}{\overset{n}{\sum }}{w}_{ij}^{2}{x}_{j}\right),\text{ }k=1,2,\cdots ,m$ (1)

are dense in $C\left({I}_{n}\right)$. This simply means that given any function $f\in C\left({I}_{n}\right)$ and $\epsilon >0$, there is a sum $y\left(x,w\right)$ of the above form that satisfies

$|y\left(x,w\right)-f\left(x\right)|<\epsilon ,\text{}\forall x\in {I}_{n}$. (2)
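To make the theorem concrete, the following sketch (our illustration, not part of the original derivation) fixes random input-to-hidden weights, fits only the outer weights of a one-hidden-layer sigmoid network by least squares, and measures the uniform error against a target function. The target $\mathrm{sin}\left(2\pi x\right)$, the network width, and the random weight scale are all assumptions chosen purely for the demonstration.

```python
# Universal approximation in miniature: a finite sigmoid sum of the
# form (1) driven close to a continuous target on [0, 1].
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
f = np.sin(2 * np.pi * x)                  # target function in C(I_1)

m = 50                                     # number of hidden sigmoid units
w = rng.normal(scale=10.0, size=m)         # input-to-hidden weights (fixed)
u = rng.normal(scale=10.0, size=m)         # hidden biases (fixed)
phi = 1.0 / (1.0 + np.exp(-(np.outer(x, w) + u)))   # hidden activations

# Fit only the hidden-to-output weights v by linear least squares.
v, *_ = np.linalg.lstsq(phi, f, rcond=None)
err = np.max(np.abs(phi @ v - f))
print(err)   # small uniform error over the sample grid
```

Increasing the number of hidden units m drives the error down further, in line with the density statement of the theorem.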

Figure 1. Network training flowchart.

Minimization of the error function can also be considered as a procedure for training the neural network [9], where the error corresponding to each input vector $x$ is the value $f\left(x\right)$, which has to become zero. In computing this error value, we require the network output as well as the derivatives of the output with respect to the input vectors. Therefore, in computing the error with respect to the network parameters, we need to compute not only the gradient of the network but also the gradient of the network derivatives with respect to its inputs. This process can be computationally quite tedious, and we briefly outline it in what follows.

3.1. Gradient Computation with Respect to Network Inputs

The next step is to compute the gradient with respect to the input vectors. For this purpose, consider a multilayer perceptron (MLP) neural network with n input units, a hidden layer with m sigmoid units and a linear output unit. For a given input vector $x=\left({x}_{1},{x}_{2},\cdots ,{x}_{n}\right)$ the network output is written:

$N\left(x,p\right)=\underset{j=1}{\overset{m}{\sum }}{v}_{j}\phi \left({z}_{j}\right)$, ${z}_{j}=\underset{i=1}{\overset{n}{\sum }}{w}_{ji}{x}_{i}+{u}_{j}$ (3)

${w}_{ji}$ denotes the weight from input unit i to the hidden unit j, ${v}_{j}$ denotes the weight from the hidden unit j to the output unit, ${u}_{j}$ denotes the biases, and $\phi \left({z}_{j}\right)$ is the sigmoid activation function.

Now the derivative of networks output N with respect to input vector ${x}_{i}$ is:

$\frac{\partial }{\partial {x}_{i}}N\left(x,p\right)=\frac{\partial }{\partial {x}_{i}}\left(\underset{j=1}{\overset{m}{\sum }}{v}_{j}\phi \left({z}_{j}\right)\right)=\underset{j=1}{\overset{m}{\sum }}{v}_{j}{w}_{ji}{\phi }_{j}^{\left(1\right)}$ (4)

where ${\phi }^{\left(1\right)}\equiv \text{d}\phi \left(z\right)/\text{d}z$. Similarly, the kth derivative of N is computed as ${\partial }^{k}N/\partial {x}_{i}^{k}=\underset{j=1}{\overset{m}{\sum }}{v}_{j}{w}_{ji}^{k}{\phi }_{j}^{\left(k\right)}$,

where ${\phi }_{j}\equiv \phi \left({z}_{j}\right)$ and ${\phi }^{\left(k\right)}$ denotes the kth order derivative of the sigmoid activation function.
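The derivative formula can be checked numerically. In the sketch below (illustrative weights and biases of our own choosing, single input), the analytic first derivative of the network agrees with a central finite difference:

```python
# Check d N/dx = sum_j v_j * w_j * phi^(1)(z_j) against finite differences
# for a single-input, three-hidden-unit sigmoid network.
import math

v = [0.5, -1.2, 0.8]     # hidden-to-output weights (illustrative)
w = [1.0, 2.0, -0.5]     # input-to-hidden weights (illustrative)
u = [0.1, -0.3, 0.2]     # biases (illustrative)

phi = lambda z: 1.0 / (1.0 + math.exp(-z))
dphi = lambda z: phi(z) * (1.0 - phi(z))   # first derivative of sigmoid

def N(x):
    return sum(vj * phi(wj * x + uj) for vj, wj, uj in zip(v, w, u))

def dN(x):   # analytic derivative, per the formula above
    return sum(vj * wj * dphi(wj * x + uj) for vj, wj, uj in zip(v, w, u))

x, h = 0.7, 1e-5
fd = (N(x + h) - N(x - h)) / (2 * h)       # central finite difference
print(abs(dN(x) - fd))                     # agreement to high precision
```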

3.2. Gradient Computation with Respect to Network Parameters

The network’s derivative with respect to any of its inputs is equivalent to a feed-forward neural network ${N}_{k}\left(x\right)$ with one hidden layer, having the same values for the weights ${w}_{ji}$ and thresholds ${u}_{j}$, and with each weight ${v}_{j}$ replaced with ${v}_{j}{p}_{j}$. Moreover, the transfer function of each hidden unit is replaced with the kth order derivative of the sigmoid function. The gradient of ${N}_{k}$ with respect to the parameters of the original network can then be obtained easily; explicit expressions are given in Equation (30) below.

3.3. Network Parameter Updation

Once the derivative of the error with respect to the network parameters has been computed, the updating rule for the network parameters ${v}_{j}$, ${u}_{j}$ and ${w}_{ji}$ is given as

${\nu }_{j}\left(t+1\right)={\nu }_{j}\left(t\right)+\mu \frac{\partial {N}_{k}}{\partial {\nu }_{j}}$, ${u}_{j}\left(t+1\right)={u}_{j}\left(t\right)+\eta \frac{\partial {N}_{k}}{\partial {u}_{j}}$, ${w}_{ji}\left(t+1\right)={w}_{ji}\left(t\right)+\gamma \frac{\partial {N}_{k}}{\partial {w}_{ji}}$ (5)

where $\mu$, $\eta$ and $\gamma$ are the learning rates, $i=1,2,\cdots ,n$ and $j=1,2,\cdots ,m$.

Once a derivative of the error with respect to the network parameters has been defined it is then straightforward to employ any optimization technique to minimize error function.
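As a toy illustration of such an optimization, the sketch below (ours, not from the paper) minimizes a single-point squared error by gradient descent on the three parameters of a one-hidden-unit network, using numerical gradients and a single learning rate in place of the separate rates $\mu$, $\eta$, $\gamma$; the target value 0.6 and starting parameters are arbitrary.

```python
# Gradient descent on E(p) for a one-hidden-unit sigmoid network
# N(x, p) = v * sigmoid(w*x + u), fitting a single target value.
import math

phi = lambda z: 1.0 / (1.0 + math.exp(-z))

def N(x, p):
    v, w, u = p
    return v * phi(w * x + u)

def E(p):                                  # squared error at one point
    return 0.5 * (N(1.0, p) - 0.6) ** 2    # 0.6: illustrative target

p = [0.2, 0.5, 0.1]                        # initial (v, w, u)
h, lr = 1e-6, 0.5                          # FD step and learning rate
for _ in range(200):
    g = []
    for k in range(3):                     # forward-difference gradient
        q = p[:]
        q[k] += h
        g.append((E(q) - E(p)) / h)
    p = [pk - lr * gk for pk, gk in zip(p, g)]
print(E(p))                                # error driven near zero
```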

4. General Formulation for Differential Equations

Let us consider the following general differential equation, which represents both ordinary and partial differential equations (Majidzadeh [10]):

$G\left(x,\psi \left(x\right),\nabla \psi \left(x\right),{\nabla }^{2}\psi \left(x\right),\cdots \right)=0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\forall \text{\hspace{0.17em}}x\in D,$ (6)

subject to some initial or boundary conditions, where $x=\left({x}_{1},{x}_{2},\cdots ,{x}_{n}\right)\in {ℝ}^{n}$, $D\subset {ℝ}^{n}$ denotes the domain, and $\psi \left(x\right)$ is the unknown scalar-valued solution to be computed. Here, G is the function which defines the structure of the differential equation and $\nabla$ is a differential operator. Let ${\psi }_{t}\left(x,p\right)$ denote the trial solution with parameters (weights, biases) p. Tian Qi et al. [11] gave the following general formulation for the solution of the differential Equation (6) using ANN. Now, ${\psi }_{t}\left(x,p\right)$ may be written as the sum of two terms

${\psi }_{t}\left(x,p\right)=A\left(x\right)+F\left(x,N\left(x,p\right)\right),$ (7)

where $A\left(x\right)$ satisfies the initial or boundary condition and contains no adjustable parameters, whereas $N\left(x,p\right)$ is the output of a feed forward neural network with parameters p and input data x. The function $F\left(x,N\left(x,p\right)\right)$ is actually the operational model of the neural network. The feed forward neural network (FFNN) converts the differential equation problem into a function approximation problem. The neural network $N\left(x,p\right)$ is given by

$N\left(x,p\right)=\underset{j=1}{\overset{m}{\sum }}{v}_{j}\sigma \left({z}_{j}\right)$, ${z}_{j}=\underset{i=1}{\overset{n}{\sum }}{w}_{ji}{x}_{i}+{u}_{j}$ (8)

${w}_{ji}$ denotes the weight from input unit i to the hidden unit j, ${v}_{j}$ denotes weight from the hidden unit j to the output unit, ${u}_{j}$ denotes the biases, and $\sigma \left({z}_{j}\right)$ is the sigmoid activation function.

Neural Network Training

The neural network weights determine the closeness of the predicted outcome to the desired outcome. In our method, if the computed weights are not able to make the correct prediction, then only the biases need to be adjusted. The basis function we shall apply in training the neural network is the sigmoid activation function given by

$\sigma \left({z}_{j}\right)={\left(1+{\text{e}}^{-{z}_{j}}\right)}^{-1}$. (9)

5. Method for Solving First Order Ordinary Differential Equation

Let us consider the first order ordinary differential equation below

${\psi }^{\prime }\left(x\right)=f\left(x,\psi \right),\text{}x\in \left[a,b\right]$ (10)

with initial condition $\psi \left(a\right)=A$. In this case, the ANN trial solution may be written as

${\psi }_{t}\left(x,p\right)=A+xN\left(x,p\right),$ (11)

where $N\left(x,p\right)$ is the neural output of the feed forward network with one input data x with parameters p. The trial solution ${\psi }_{t}\left(x,p\right)$ satisfies the initial condition. Now let us consider a first order differential equation:

${\psi }^{\prime }\left(x\right)-\psi =0,\text{}x\in \left[0,1\right],\text{}\psi \left(0\right)=1$ (12)

with trial solution:

${\psi }_{t}\left(x,p\right)=A+xN\left(x,p\right),$ (13)

where x is the input to neural network model and p represents the parameters—weights and biases.

${\psi }_{t}\left(0,p\right)=A+\left(0\right)N\left(0,p\right)=A=1$ $⇒{\psi }_{t}\left(x,p\right)=1+xN\left(x,p\right).$ (14)

To solve this problem using neural network, we shall employ a neural network architecture with three layers. One input layer with one neuron; one hidden layer with three neurons and one output layer with one output unit, as depicted below in Figure 2.

Each neuron is connected to other neurons of the previous layer through adaptable synaptic weights ${w}_{j}$ and biases ${u}_{j}$. Now, ${\psi }_{t}\left({x}_{i},p\right)=1+{x}_{i}N\left({x}_{i},p\right)$, with $N\left({x}_{i},p\right)=\underset{j=1}{\overset{3}{\sum }}{v}_{j}\sigma \left({z}_{j}\right)$, and

$\underset{j=1}{\overset{3}{\sum }}{v}_{j}\sigma \left({z}_{j}\right)={v}_{1}\sigma \left({z}_{1}\right)+{v}_{2}\sigma \left({z}_{2}\right)+{v}_{3}\sigma \left({z}_{3}\right)$, (15)

where ${z}_{1}={x}_{1}{w}_{11}+{u}_{1},\text{\hspace{0.17em}}{z}_{2}={x}_{1}{w}_{12}+{u}_{2},\text{\hspace{0.17em}}{z}_{3}={x}_{1}{w}_{13}+{u}_{3}$.
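The construction above guarantees the initial condition for any parameter values. The sketch below (illustrative weights and biases of our own choosing) implements the three-hidden-unit network of Figure 2 and confirms that ${\psi }_{t}\left(0\right)=1$ by construction:

```python
# Trial solution psi_t(x) = 1 + x * N(x, p) with a three-hidden-unit
# network: the initial condition psi(0) = 1 holds for ANY weights.
import math

w = [1.5, -0.7, 2.0]    # input-to-hidden weights (illustrative)
v = [0.3, -0.2, 0.5]    # hidden-to-output weights (illustrative)
u = [0.1, 0.2, 0.3]     # biases (illustrative)

sigma = lambda z: 1.0 / (1.0 + math.exp(-z))

def N(x):
    return sum(vj * sigma(wj * x + uj) for vj, wj, uj in zip(v, w, u))

def psi_t(x):
    return 1.0 + x * N(x)

print(psi_t(0.0))   # the initial condition, satisfied by construction
```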

Figure 2. Schematic for $N\left(x,p\right)$.

Now, in solving ordinary differential equations, we assume a solution to the homogeneous part and approximate this function using the SPSS model, which estimates the regression coefficients in the multiple regression model. These coefficients are what we use as the weights from the input layer to the hidden layer. The condition placed on the assumed solution ${y}_{a}\left(x\right)=f\left(x\right)$ is that $f\left(x\right)\ne 0$.

An exponential function $y\left(x\right)={\text{e}}^{\alpha x}$, where $\alpha \in ℝ$, forms part of the solution of any first order linear constant coefficient ordinary differential equation. We regress this function using an Excel spreadsheet and the SPSS model as follows:

Assume we let ${y}_{a}\left(x\right)=f\left(x\right)={\text{e}}^{2x},x\in \left[0,1\right]$, be the assumed solution of the homogeneous first order ordinary differential equation ${y}^{\prime }-2y=0$ on the given interval. Then, dividing the interval into 10 equal subintervals, we use the Excel spreadsheet to find the values of ${y}_{a}\left(x\right)$ at all the points shown in Table 1, and then use SPSS to perform the regression and obtain the weights, which we designate as the weights from the input layer to the hidden layer.

This is followed by the SPSS 20 output of the data, displayed in Table 2, from which we pick the weights.

Looking at Table 2, we see that the cubic curve fits the assumed solution almost perfectly. We therefore pick the coefficients 2.413, 0.115 and 3.860 as the weights from the input layer to the hidden layer.
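The regression step can be reproduced with any least-squares fitter. Below is a sketch using NumPy's polyfit in place of SPSS; the fitted coefficients need not match the quoted values digit for digit, since SPSS's curve estimation may parameterize the model differently.

```python
# Cubic least-squares fit of y_a(x) = e^{2x} on 11 equidistant points
# in [0, 1]; the coefficients of x, x^2, x^3 play the role of the
# input-to-hidden weights.
import numpy as np

x = np.linspace(0.0, 1.0, 11)
ya = np.exp(2 * x)

# polyfit returns the highest degree first: [c3, c2, c1, c0]
c3, c2, c1, c0 = np.polyfit(x, ya, 3)
print(c1, c2, c3)                     # candidate input-to-hidden weights

fit = c0 + c1 * x + c2 * x**2 + c3 * x**3
print(np.max(np.abs(fit - ya)))       # the cubic fits e^{2x} closely
```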

The next task is to obtain the weights from the hidden layer to the output layer. We shall approximate $f\left(x\right)$, a real function of a real valued vector $x={\left({x}_{1},{x}_{2},\cdots ,{x}_{d}\right)}^{\text{T}}$, by a set of functions $\left\{{\phi }_{i}\left(x\right)\right\}$, called the elementary functions, such that

$\stackrel{^}{f}\left(x,v\right)=\underset{i=1}{\overset{N}{\sum }}{v}_{i}{\phi }_{i}\left(x\right)$ (16)

is satisfied, where ${v}_{i}$ are real valued constants such that $|f\left(x\right)-\stackrel{^}{f}\left(x,v\right)|<\epsilon$. When one can find coefficients ${v}_{i}$ that make $\epsilon$ arbitrarily small for any function $f\left(\cdot \right)$ over the domain of interest, we say that the elementary function set $\left\{{\phi }_{i}\left(\cdot \right)\right\}$ has the property of universal approximation over the class of functions $f\left(\cdot \right)$. There are many possible elementary functions to choose from, as we show later.

Table 1. Table of values for the relationship between ${y}_{a}$ and x.

Table 2. Model summary and parameter estimates.

Now if the number of input vectors ${x}_{i}$ is made equal to the number of elementary functions ${\phi }_{i}\left(\cdot \right)$, then the normal equations can be given as:

$\left[\begin{array}{ccc}{\phi }_{1}\left({x}_{1}\right)& \cdots & {\phi }_{N}\left({x}_{1}\right)\\ ⋮& \ddots & ⋮\\ {\phi }_{1}\left({x}_{N}\right)& \cdots & {\phi }_{N}\left({x}_{N}\right)\end{array}\right]\left[\begin{array}{c}{v}_{1}\\ ⋮\\ {v}_{N}\end{array}\right]=\left[\begin{array}{c}f\left({x}_{1}\right)\\ ⋮\\ f\left({x}_{N}\right)\end{array}\right]$ (17)

and the solution becomes $v={\varphi }^{-1}f$, where v is the vector of coefficients, f is the vector of values of the function at the N points, and $\varphi$ is the matrix whose entries are the values of the elementary functions at each of the N points in the domain. An important condition that must be placed on the elementary functions is that the inverse of $\varphi$ must exist. In general, there are many sets $\left\{{\phi }_{i}\left(\cdot \right)\right\}$ with the property of universal approximation for a class of functions. We would prefer a set $\left\{{\phi }_{i}\left(\cdot \right)\right\}$ over another $\left\{{\gamma }_{i}\left(\cdot \right)\right\}$ if $\left\{{\phi }_{i}\left(\cdot \right)\right\}$ provides a smaller error $\epsilon$ for a pre-set value of N. This means that the speed of convergence of the approximation is also an important factor in the selection of the basis. As mentioned earlier, there are many possible elementary basis functions to choose from. A typical example is the Gaussian basis specified by:

$G\left(z\right)=G\left(‖x-{c}_{i}‖\right)=\mathrm{exp}\left(-{z}^{2}\right);\text{\hspace{0.17em}}i=1,2,\cdots ,N$

However, in neuro-computing the most popular choice of elementary functions is the radial basis functions (RBFs), where ${\phi }_{i}\left(x\right)$ is given by:

${\phi }_{i}\left(x\right)=\mathrm{exp}\left(-\frac{{|x-{x}_{i}|}^{2}}{2{\sigma }^{2}}\right)$ (18)

where ${\sigma }^{2}$ is the variance of input vectors x. It is this last basis function that we shall adopt in generating our weights from hidden layer to output layer. At this point we shall divide a given interval into a certain number of points equidistant from each other and choose input vectors ${x}_{i}$ to conform with the number of elementary functions ${\phi }_{i}\left(.\right)$ to make the basis function implementable. Here N = 3 and the solution $v={\varphi }^{-1}f$ is given by;

$\left[\begin{array}{c}{v}_{1}\\ {v}_{2}\\ {v}_{3}\end{array}\right]={\left[\begin{array}{ccc}{\phi }_{1}\left({x}_{1}\right)& {\phi }_{2}\left({x}_{1}\right)& {\phi }_{3}\left({x}_{1}\right)\\ {\phi }_{1}\left({x}_{2}\right)& {\phi }_{2}\left({x}_{2}\right)& {\phi }_{3}\left({x}_{2}\right)\\ {\phi }_{1}\left({x}_{3}\right)& {\phi }_{2}\left({x}_{3}\right)& {\phi }_{3}\left({x}_{3}\right)\end{array}\right]}^{-1}\left[\begin{array}{c}{f}_{1}\\ {f}_{2}\\ {f}_{3}\end{array}\right]$ (19)

where $v,\varphi ,f$ are as defined before, and

${\phi }_{i}\left(x\right)=\mathrm{exp}\left(-\frac{{|x-{x}_{i}|}^{2}}{2{\sigma }^{2}}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\sigma }^{2}=\frac{1}{N}\underset{i=1}{\overset{N}{\sum }}{\left({x}_{i}-\stackrel{¯}{x}\right)}^{2},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\stackrel{¯}{x}=\frac{1}{N}\underset{i=1}{\overset{N}{\sum }}{x}_{i}$ (20)
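Equations (19)-(20) amount to solving a 3 × 3 linear system. The sketch below uses three equidistant centres and a right-hand side f chosen purely for illustration; the computed v then reproduces f exactly at the centres.

```python
# Hidden-to-output weights via Gaussian RBF interpolation, as in
# (19)-(20): build Phi from the centres, then solve v = Phi^{-1} f.
import numpy as np

xs = np.array([0.0, 0.5, 1.0])      # N = 3 equidistant points
f = np.array([1.0, 2.0, 3.0])       # illustrative right-hand side

xbar = xs.mean()
var = np.mean((xs - xbar) ** 2)     # sigma^2 as defined in (20)

# Phi[i, j] = exp(-|x_i - x_j|^2 / (2 sigma^2))
Phi = np.exp(-((xs[:, None] - xs[None, :]) ** 2) / (2 * var))
v = np.linalg.solve(Phi, f)         # v = Phi^{-1} f

print(v)
print(Phi @ v)                      # recovers f at the centres
```

The Gaussian kernel matrix is positive definite for distinct centres, so the inverse required by the method exists.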

The computation of $f\left(x\right)$ depends on the nature of the given differential equation. For first, second or higher order homogeneous ordinary differential equations, we form linear, quadratic or higher order polynomial equations incorporating the default signs of the terms in the differential equations. For non-homogeneous ordinary differential equations, we use the forcing functions. When the weights of the neural network are obtained in these systematic ways, there is no need to adjust all the parameters in the network, as postulated by previous researchers, in order to achieve convergence. All that is required is a slight adjustment of the biases, which are constrained to lie in a given interval, and convergence to a solution with an acceptably small error is achieved. Once the problem of obtaining the parameters is settled, it becomes easy to solve any first, second or higher order ordinary differential equation using the appropriate neural network model. We shall restrict this study to first and second order linear homogeneous ordinary differential equations. To measure the prediction error we use the squared error function defined as follows:

$E=0.5{\left({\psi }_{d}-{\psi }_{p}\right)}^{2}$ (21)

where ${\psi }_{d}$ represents the desired output and ${\psi }_{p}$ the predicted output. The same procedure can be applied to second and third order ODE. For the second order initial value problem (IVP):

${\psi }^{″}\left(x\right)=f\left(x,\psi ,{\psi }^{\prime }\left(x\right)\right),\text{}\psi \left(0\right)=A,\text{}{\psi }^{\prime }\left(0\right)=B$ (22)

The trial solution is written

${\psi }_{t}\left(x\right)=A+Bx+{x}^{2}N\left(x,p\right)$ (23)

where $A,B\in ℝ$ and $N\left({x}_{i},p\right)=\underset{j=1}{\overset{3}{\sum }}{v}_{j}\sigma \left({z}_{j}\right)$, and for two point BC: $\psi \left(0\right)=A,\text{\hspace{0.17em}}\psi \left(1\right)=B$ the trial solution in this case is written: ${\psi }_{t}\left(x\right)=A\left(1-x\right)+Bx+x\left(1-x\right)N\left(x,p\right)$.
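Both trial solutions satisfy their conditions identically, whatever the network output. A quick numerical check, with an arbitrary smooth stand-in for $N\left(x,p\right)$ and illustrative condition values of our own choosing:

```python
# The second-order trial solutions meet their conditions by
# construction, for ANY network output N.
import math

A, B = 1.0, 2.0                      # illustrative condition values
N = lambda x: math.sin(x) + 0.3      # smooth stand-in for the network

# IVP form: psi(0) = A, psi'(0) = B
psi_ivp = lambda x: A + B * x + x**2 * N(x)
h = 1e-6
d_psi0 = (psi_ivp(h) - psi_ivp(-h)) / (2 * h)   # psi'(0), numerically
print(psi_ivp(0.0), d_psi0)          # A and (approximately) B

# Two-point BC form: psi(0) = A, psi(1) = B
psi_bvp = lambda x: A * (1 - x) + B * x + x * (1 - x) * N(x)
print(psi_bvp(0.0), psi_bvp(1.0))    # A and B exactly
```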

We first demonstrate the computational complexity involved in adjusting all parameters in order to update the weights and obtain a close approximation to the desired result. Subsequently, we proceed to our main results and analysis, which display the ease of computation achieved by our novel method of adjusting only the biases. The former adjustment, or weight updating, is done using the backpropagation algorithm, so we need to train the network in order to apply it. The basis function we shall apply in training the neural network is the sigmoid activation function given by Equation (9). In a simple neural model, as depicted in Figure 3, where there is no hidden layer and $x=\left({x}_{1},{x}_{2}\right),w=\left({w}_{1},{w}_{2}\right)$, we shall assume some figures for illustration. Let the training data be given as $x=\left({x}_{1},{x}_{2}\right)=\left(0.1,0.2\right)$; desired output ${y}_{d}=0.02$; initial weights $w=\left({w}_{1},{w}_{2}\right)=\left(0.4,0.1\right)$; bias $b=1.78$; and predicted output ${y}_{p}$. The diagram shows the neural network training model for the sample data under consideration. We now proceed to train the network to get the predicted output.

5.1. Network Training

First we compute $z={x}_{1}\cdot {w}_{1}+{x}_{2}\cdot {w}_{2}+b$, the sum of the weighted inputs and the bias, i.e.:

$z={x}_{1}\left({w}_{1}\right)+{x}_{2}\left({w}_{2}\right)+b=0.1\left(0.4\right)+0.2\left(0.1\right)+1.78=1.84$ (24)

Figure 3. Schematic for $N\left(x,p\right)$.

Next we apply z as input to the activation function, which in this case is the sigmoid activation function:

$\sigma \left(z\right)={\left(1+{\text{e}}^{-z}\right)}^{-1}={\left(1+{\text{e}}^{-1.84}\right)}^{-1}=0.863$. Hence the predicted output ${y}_{p}=0.863$.

We see that the predicted output does not correspond to the desired output, so we must train the network to reduce the prediction error. To compute the prediction error, we use the squared error function (21). For the predicted output calculated above, the prediction error is:

$E=0.5{\left(0.02-0.863\right)}^{2}=0.355$. (25)

We observe that the prediction error is large, so we must attempt to minimize it. We noted previously that the weights determine how close a predicted output is to the desired output. Therefore, to minimize the error we adjust the weights using the formula

${w}_{n}={w}_{o}+\eta \left({y}_{d}-{y}_{p}\right)x$ (26)

where ${w}_{n}$ and ${w}_{o}$ represent new and old weights respectively. We update the weights using the following:

${w}_{o}$ : current weights, bias first = (1.78, 0.4, 0.1);

$\eta$ : network learning rate = 0.01;

${y}_{d}$ : desired output = 0.02;

x: current input vector = (+1, 0.1, 0.2).

$\therefore \text{}{w}_{n}=\left[1.78,0.4,0.1\right]+0.01\left[0.02-0.863\right]\left[+1,0.1,0.2\right]=\left[1.772,0.399,0.098\right]$.

With this information we adjust the model and retrain the neural network to get;

$z=0.1\left(0.399\right)+0.2\left(0.098\right)+1.772=1.83$ $⇒\text{}\sigma \left(z\right)={\left(1+{\text{e}}^{-1.83}\right)}^{-1}=0.862$

$\therefore \text{}E=0.5{\left(0.02-0.862\right)}^{2}=0.354$.
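The two training passes above can be reproduced in a few lines of code, carrying full precision throughout; rounded results may differ in the last digit from hand computations that round intermediate values.

```python
# Reproduce the Section 5.1 hand computation: forward pass, squared
# error, weight update by rule (26), and a second forward pass.
import math

sigma = lambda z: 1.0 / (1.0 + math.exp(-z))

x = (0.1, 0.2)
w = [1.78, 0.4, 0.1]          # [bias, w1, w2]
yd, eta = 0.02, 0.01          # desired output and learning rate

z = w[0] + w[1] * x[0] + w[2] * x[1]
yp = sigma(z)
E = 0.5 * (yd - yp) ** 2
print(round(z, 2), round(yp, 3), round(E, 3))

# Update rule (26), with input vector (+1, x1, x2)
xv = (1.0, x[0], x[1])
w = [wk + eta * (yd - yp) * xk for wk, xk in zip(w, xv)]

z2 = w[0] + w[1] * x[0] + w[2] * x[1]
yp2 = sigma(z2)
E2 = 0.5 * (yd - yp2) ** 2
print(round(z2, 2), round(yp2, 3), round(E2, 3))
```

One update step reduces the error only slightly; this slow, parameter-by-parameter descent is exactly the computational burden our method avoids.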

The error computation involves not only the outputs but also the derivatives of the network output with respect to its inputs, so it requires computing the gradient of the network derivatives with respect to its inputs. Let us now consider a multilayer perceptron with n input nodes, a hidden layer with m nodes, and one output unit. For the given inputs $x=\left({x}_{1},{x}_{2},\cdots ,{x}_{n}\right)$, the output is given by

$N\left(x,p\right)=\underset{j=1}{\overset{m}{\sum }}{v}_{j}\sigma \left({z}_{j}\right)$ (27)

where ${z}_{j}={\sum }_{i=1}^{n}{w}_{ji}{x}_{i}+{u}_{j}$, ${w}_{ji}$ denotes the weight from input unit i to the hidden unit j, ${v}_{j}$ denotes weight from the hidden unit j to the output unit, ${u}_{j}$ denotes the biases, and $\sigma \left({z}_{j}\right)$ is the sigmoid activation function. The derivatives of $N\left(x,p\right)$ with respect to input ${x}_{i}$ is

$\frac{{\partial }^{k}N}{\partial {x}_{i}^{k}}=\underset{j=1}{\overset{m}{\sum }}{v}_{j}{w}_{ji}^{k}{\sigma }_{j}^{\left(k\right)},$ (28)

where ${\sigma }_{j}\equiv \sigma \left({z}_{j}\right)$ and ${\sigma }^{\left(k\right)}$ denotes the kth order derivative of the sigmoid function.

Let ${N}_{\theta }$ denote a derivative of the network with respect to its inputs; then we have the following relation

${N}_{\theta }={D}^{k}N=\underset{j=1}{\overset{m}{\sum }}{v}_{j}{p}_{j}{\sigma }_{j}^{\left(k\right)}$ ; ${p}_{j}=\underset{i=1}{\overset{n}{\prod }}{w}_{ji}^{{\lambda }_{i}}$, $k=\underset{i=1}{\overset{n}{\sum }}{\lambda }_{i}$ (29)

where ${\lambda }_{i}$ is the order of differentiation with respect to input ${x}_{i}$.

The derivative of ${N}_{\theta }$ with respect to other parameters may be obtained as

$\frac{\partial {N}_{\theta }}{\partial {v}_{j}}={p}_{j}{\sigma }_{j}^{\left(k\right)}$, $\frac{\partial {N}_{\theta }}{\partial {u}_{j}}={v}_{j}{p}_{j}{\sigma }_{j}^{\left(k+1\right)}$,

$\frac{\partial {N}_{\theta }}{\partial {w}_{ji}}={x}_{i}{v}_{j}{p}_{j}{\sigma }_{j}^{\left(k+1\right)}+{v}_{j}{\lambda }_{i}{w}_{ji}^{{\lambda }_{i}-1}\left(\underset{k=1,k\ne i}{\prod }{w}_{jk}^{{\lambda }_{k}}\right){\sigma }_{j}^{\left(k\right)}$ (30)

Having obtained all the derivatives, we can find the gradient of the error. Using the general learning method for supervised training, we can minimize the error to the desired accuracy. We illustrate the above using the first order ordinary differential equation below

${\psi }^{\prime }\left(x\right)=f\left(x,\psi \right),\text{}x\in \left[a,b\right]$ (31)

with initial condition $\psi \left(a\right)=A$. In this case, the ANN trial solution may be written as

${\psi }_{t}\left(x,p\right)=A+xN\left(x,p\right),$ (32)

where $N\left(x,p\right)$ is the neural output of the feed forward network with one input data x with parameters p. The trial solution ${\psi }_{t}\left(x,p\right)$ satisfies the initial condition. We differentiate the trial solution ${\psi }_{t}\left(x,p\right)$ to get

$\frac{\text{d}{\psi }_{t}\left(x,p\right)}{\text{d}x}=N\left(x,p\right)+x\frac{\text{d}N\left(x,p\right)}{\text{d}x},$ (33)

For evaluating the derivative term on the right hand side of (33), we use Equations (27)-(30). The error function for this case may be formulated as

$E\left(p\right)=\underset{i=1}{\overset{n}{\sum }}{\left(\frac{\text{d}{\psi }_{t}\left({x}_{i},p\right)}{\text{d}{x}_{i}}-f\left({x}_{i},{\psi }_{t}\left({x}_{i},p\right)\right)\right)}^{2}$ (34)
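For the example ${\psi }^{\prime }=\psi$ this error functional can be evaluated directly. The sketch below uses illustrative parameters p of our own choosing and the collocation points ${x}_{i}=i/10$:

```python
# Evaluate the ODE residual error for psi' = psi with trial solution
# psi_t = 1 + x * N(x, p), summed over collocation points in [0, 1].
import math

sigma = lambda z: 1.0 / (1.0 + math.exp(-z))
dsigma = lambda z: sigma(z) * (1.0 - sigma(z))

v = [0.3, -0.1, 0.4]   # illustrative parameters p
w = [2.0, 0.5, 1.0]
u = [0.1, 0.2, 0.3]

def N(x):
    return sum(vj * sigma(wj * x + uj) for vj, wj, uj in zip(v, w, u))

def dN(x):
    return sum(vj * wj * dsigma(wj * x + uj) for vj, wj, uj in zip(v, w, u))

def E(pts):
    total = 0.0
    for x in pts:
        psi = 1.0 + x * N(x)
        dpsi = N(x) + x * dN(x)        # derivative of the trial solution
        total += (dpsi - psi) ** 2     # here f(x, psi) = psi
    return total

pts = [i / 10 for i in range(11)]
residual = E(pts)
print(residual)   # the quantity to be driven toward zero in training
```

Training (whether by backpropagation or by our bias adjustment) amounts to making this residual acceptably small.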

The weights from input to hidden are modified according to the following rule

${w}_{ji}^{r+1}={w}_{ji}^{r}-\eta \left(\frac{\partial E}{\partial {w}_{ji}^{r}}\right)$ (35)

where

$\frac{\partial E}{\partial {w}_{ji}^{r}}=\frac{\partial }{\partial {w}_{ji}^{r}}\left(\underset{i=1}{\overset{n}{\sum }}{\left(\frac{\text{d}{\psi }_{t}\left({x}_{i},p\right)}{\text{d}{x}_{i}}-f\left({x}_{i},{\psi }_{t}\left({x}_{i},p\right)\right)\right)}^{2}\right)$ (36)

Here, $\eta$ is the learning rate and r is the iteration step. The weights from the hidden layer to the output layer may be updated by a similar formulation. Now, going back to Equation (27), we recall that ${z}_{j}={\sum }_{i=1}^{n}{w}_{ji}{x}_{i}+{u}_{j}$

and $N\left(x,p\right)=\underset{j=1}{\overset{m}{\sum }}{v}_{j}\sigma \left({z}_{j}\right)$. This implies that:

$\begin{array}{c}\frac{\text{d}N\left(x,p\right)}{\text{d}x}=\frac{\text{d}}{\text{d}x}\underset{j=1}{\overset{m}{\sum }}{v}_{j}\sigma \left({w}_{ji}{x}_{i}+{u}_{j}\right)=\underset{j=1}{\overset{m}{\sum }}{v}_{j}\frac{\text{d}}{\text{d}x}\sigma \left({w}_{ji}{x}_{i}+{u}_{j}\right)\\ =\underset{j=1}{\overset{m}{\sum }}{v}_{j}\frac{\text{d}}{\text{d}x}\sigma \left({z}_{j}\right)=\underset{j=1}{\overset{m}{\sum }}{v}_{j}\frac{\text{d}}{\text{d}{z}_{j}}\sigma \left({z}_{j}\right)\cdot \frac{\text{d}{z}_{j}}{\text{d}x}\end{array}$ (37)

If the neural network model is a simple one, as in Figure 3, then,

$N\left(x,p\right)=\sigma \left(z\right)=\sigma \left({x}_{1}{w}_{1}+{x}_{2}{w}_{2}+u\right)$ $⇒\frac{\text{d}N}{\text{d}{x}_{1}}=\frac{\partial \sigma }{\partial z}\cdot \frac{\text{d}z}{\text{d}{x}_{1}};\text{}\frac{\text{d}N}{\text{d}{x}_{2}}=\frac{\partial \sigma }{\partial z}\cdot \frac{\text{d}z}{\text{d}{x}_{2}}$

Now let us consider a first order differential equation:

${\psi }^{\prime }\left(x\right)-\psi =0,\text{}x\in \left[0,1\right],\text{}\psi \left(0\right)=1$ (38)

with trial solution:

${\psi }_{t}\left(x,p\right)=A+xN\left(x,p\right)$ (39)

where x is the input to neural network model and p represents the parameters—weights and biases.

${\psi }_{t}\left(0,p\right)=A+\left(0\right)N\left(0,p\right)=A=1$ $⇒{\psi }_{t}\left(x,p\right)=1+xN\left(x,p\right)$ (40)

To solve this problem using neural network (NN), we shall employ a NN architecture given in Figure 3. Now, ${\psi }_{t}\left({x}_{i},p\right)=1+{x}_{i}N\left({x}_{i},p\right)$, with $N\left({x}_{i},p\right)={\sum }_{j=1}^{3}{v}_{j}\sigma \left({z}_{j}\right)$, and

$\underset{j=1}{\overset{3}{\sum }}{v}_{j}\sigma \left({z}_{j}\right)={v}_{1}\sigma \left({z}_{1}\right)+{v}_{2}\sigma \left({z}_{2}\right)+{v}_{3}\sigma \left({z}_{3}\right)$

where ${z}_{1}={x}_{1}{w}_{11}+{u}_{1}$, ${z}_{2}={x}_{1}{w}_{12}+{u}_{2}$ and ${z}_{3}={x}_{1}{w}_{13}+{u}_{3}$.

If the neural network model is not able to predict correctly the solution of the differential equation with the given initial parameters—weights and biases, we need to find the prediction error given by

$E\left(p\right)=\underset{i=1}{\overset{n}{\sum }}{\left(\frac{\text{d}{\psi }_{t}\left({x}_{i},p\right)}{\text{d}{x}_{i}}-f\left({x}_{i},{\psi }_{t}\left({x}_{i},p\right)\right)\right)}^{2}$ (41)

If the prediction error does not satisfy an acceptable threshold, then the parameters need to be adjusted using the equation,

${w}_{ji}^{r+1}={w}_{ji}^{r}-\eta \left(\frac{\partial E}{\partial {w}_{ji}^{r}}\right)$, where $\frac{\partial E}{\partial {w}_{ji}^{r}}=\frac{\partial }{\partial {w}_{ji}^{r}}\underset{i=1}{\overset{n}{\sum }}{\left(\frac{\text{d}{\psi }_{t}\left({x}_{i},p\right)}{\text{d}{x}_{i}}-f\left({x}_{i},{\psi }_{t}\left({x}_{i},p\right)\right)\right)}^{2}$ (42)

Recall that: ${\psi }_{t}\left({x}_{i},p\right)=1+{x}_{i}N\left({x}_{i},p\right)$, and

$\frac{\text{d}{\psi }_{t}}{\text{d}{x}_{i}}\left({x}_{i},p\right)=\frac{\text{d}}{\text{d}{x}_{i}}\left(1+{x}_{i}N\left({x}_{i},p\right)\right)=N\left({x}_{i},p\right)+{x}_{i}\frac{\text{d}}{\text{d}{x}_{i}}N\left({x}_{i},p\right)$. (43)

$\begin{array}{l}\therefore \text{}\frac{\text{d}}{\text{d}{x}_{1}}N\left({x}_{1},p\right)=\frac{\text{d}}{\text{d}{x}_{1}}\underset{j=1}{\overset{3}{\sum }}{v}_{j}\sigma \left({z}_{j}\right)=\underset{j=1}{\overset{3}{\sum }}{v}_{j}\frac{\text{d}}{\text{d}{x}_{1}}\sigma \left({z}_{j}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}={v}_{1}\frac{\text{d}\sigma }{\text{d}{z}_{1}}\frac{\text{d}{z}_{1}}{\text{d}{x}_{1}}+{v}_{2}\frac{\text{d}\sigma }{\text{d}{z}_{2}}\frac{\text{d}{z}_{2}}{\text{d}{x}_{1}}+{v}_{3}\frac{\text{d}\sigma }{\text{d}{z}_{3}}\frac{\text{d}{z}_{3}}{\text{d}{x}_{1}}\end{array}$

$\frac{\text{d}}{\text{d}{x}_{1}}N\left({x}_{1},p\right)={v}_{1}{\sigma }^{\prime }\left({z}_{1}\right){w}_{11}+{v}_{2}{\sigma }^{\prime }\left({z}_{2}\right){w}_{12}+{v}_{3}{\sigma }^{\prime }\left({z}_{3}\right){w}_{13}$ (44)
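Equation (44) can be checked numerically. For the sigmoid activation $\sigma \left(z\right)={\left(1+{\text{e}}^{-z}\right)}^{-1}$ we have ${\sigma }^{\prime }=\sigma \left(1-\sigma \right)$, so the analytic derivative of N should agree with a finite-difference estimate. A sketch with hypothetical parameter values (not those of the paper):

```python
import numpy as np

# Hypothetical parameter values, for illustration only (not the paper's).
w = np.array([1.0, 0.4, 0.3])              # w_11, w_12, w_13
u = np.array([0.5, 0.2, -0.1])             # biases u_1, u_2, u_3
v = np.array([1.5, -2.0, 0.7])             # v_1, v_2, v_3
sigma = lambda z: 1.0 / (1.0 + np.exp(-z))

def N(x):
    return v @ sigma(w * x + u)            # N(x, p) = sum_j v_j * sigma(z_j)

def dN_dx(x):
    s = sigma(w * x + u)
    return (v * s * (1 - s)) @ w           # Eq. (44), using sigma' = sigma*(1 - sigma)

x, h = 0.3, 1e-6
fd = (N(x + h) - N(x - h)) / (2 * h)       # central finite difference
print(dN_dx(x), fd)                        # the two values agree closely
```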

Putting Equations (43) and (44) into Equation (41) gives

$E\left(p\right)=\underset{i=1}{\overset{n}{\sum }}{\left(\left(N\left({x}_{i},p\right)+{x}_{i}\frac{\text{d}}{\text{d}{x}_{i}}N\left({x}_{i},p\right)\right)-\left(1+{x}_{i}N\left({x}_{i},p\right)\right)\right)}^{2}$

$\therefore \text{}E\left(p\right)={\left(\underset{j=1}{\overset{3}{\sum }}{v}_{j}\sigma \left({z}_{j}\right)+{x}_{1}\left(\frac{\text{d}}{\text{d}{x}_{1}}N\left({x}_{1},p\right)\right)-1-{x}_{1}\left(\underset{j=1}{\overset{3}{\sum }}{v}_{j}\sigma \left({z}_{j}\right)\right)\right)}^{2}$ (45)

$⇒\text{}E\left(p\right)={\left[{v}_{1}\sigma \left({z}_{1}\right)+{v}_{2}\sigma \left({z}_{2}\right)+{v}_{3}\sigma \left({z}_{3}\right)+{x}_{1}\left({v}_{1}{\sigma }^{\prime }\left({z}_{1}\right){w}_{11}+{v}_{2}{\sigma }^{\prime }\left({z}_{2}\right){w}_{12}+{v}_{3}{\sigma }^{\prime }\left({z}_{3}\right){w}_{13}\right)-1-{x}_{1}\left({v}_{1}\sigma \left({z}_{1}\right)+{v}_{2}\sigma \left({z}_{2}\right)+{v}_{3}\sigma \left({z}_{3}\right)\right)\right]}^{2}$ (46)

We noted earlier that when the neural network is unable to predict an acceptable solution, that is, one whose error is below the threshold, the weights and biases need to be adjusted. This involves complex derivatives with the multivariable chain rule [11] [12]. From the foregoing, we need to compute the derivative of Equation (46) with respect to the weights and biases. That is:

$\frac{\partial E}{\partial {w}_{ji}^{r}}=\frac{\partial }{\partial {w}_{ji}^{r}}{\left[{v}_{1}\sigma \left({z}_{1}\right)+{v}_{2}\sigma \left({z}_{2}\right)+{v}_{3}\sigma \left({z}_{3}\right)+{x}_{1}\left({v}_{1}{\sigma }^{\prime }\left({z}_{1}\right){w}_{11}+{v}_{2}{\sigma }^{\prime }\left({z}_{2}\right){w}_{12}+{v}_{3}{\sigma }^{\prime }\left({z}_{3}\right){w}_{13}\right)-1-{x}_{1}\left({v}_{1}\sigma \left({z}_{1}\right)+{v}_{2}\sigma \left({z}_{2}\right)+{v}_{3}\sigma \left({z}_{3}\right)\right)\right]}^{2}$ (47)

Similarly, to update the weights from the hidden layer to output layer, we compute $\partial E/\partial {v}_{j},j=1,2,3$. Finally, we update the biases by computing $\partial E/\partial {u}_{j},j=1,2,3$. Equation (47) together with $\partial E/\partial {v}_{j}$ and $\partial E/\partial {u}_{j}$ are used to update the weights and biases. The superscript (r) denotes the rth iteration.

It is important to note that the foregoing is necessary in order to achieve convergence; sometimes 30 or more iterations may be required before the solution falls within an acceptable threshold. The complex derivatives involved in solving ODEs with neural networks, especially of higher order, together with the many iterations required for convergence, motivated our search for a more efficient and accurate way of solving the given problem while avoiding the inherent computational complexity.
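For contrast, the iterative procedure just described can be sketched as follows, with numerical gradients standing in for the chain-rule derivatives of Equation (47); the initial weights, biases and learning rate here are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.normal(scale=0.5, size=9)           # parameters: w (3), u (3), v (3)
xs = np.linspace(0.1, 1.0, 10)              # training points in [0, 1]
sig = lambda z: 1 / (1 + np.exp(-z))

def E(p):
    """Prediction error of Eq. (41) for psi' - psi = 0, psi(0) = 1."""
    w, u, v = p[:3], p[3:6], p[6:]
    err = 0.0
    for x in xs:
        s = sig(w * x + u)
        N = v @ s                           # network output
        dN = (v * s * (1 - s)) @ w          # Eq. (44)
        psi = 1 + x * N                     # trial solution, Eq. (40)
        dpsi = N + x * dN                   # Eq. (43)
        err += (dpsi - psi) ** 2
    return err

eta, h = 0.01, 1e-6
E0 = E(p)
for step in range(150):                     # many sweeps are typically needed
    g = np.array([(E(p + h * np.eye(9)[i]) - E(p - h * np.eye(9)[i])) / (2 * h)
                  for i in range(9)])
    p -= eta * g                            # update rule, Eq. (42)
print(E0, E(p))                             # the error decreases over the iterations
```

Even for this tiny network, each sweep requires the error and its gradient at every training point; this is the cost our method avoids.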

5.3. Results

We begin with a couple of examples on first and second order differential equations.

5.4. Example

Consider the initial value problem;

${y}^{\prime }-\alpha y=0;\text{}y\left(0\right)=1,\text{}x\in \left[0,1\right],\text{}\alpha \equiv 1.$ (48)

This equation has been solved by Mall and Chakraverty [13]. As discussed in the previous section, the trial solution is given by;

${y}_{t}\left(x\right)=A+xN\left(x,p\right)$

Applying the initial condition gives $A=1$, therefore ${y}_{t}\left(x\right)=1+xN\left(x,p\right)$.

To obtain the weights from input to hidden layer, it is natural to assume ${y}_{a}\left(x\right)={\text{e}}^{x}$, the analytic solution. We approximate this function using regression in SPSS. We train the network for 10 equidistant points in [0, 1], employing an Excel spreadsheet to find the values of ${y}_{a}\left(x\right)={\text{e}}^{x}$ at all the points. This leads us to the data in Table 3.

We then perform a curve-fit polynomial regression built into IBM SPSS 23 [14], for the function ${y}_{a}\left(x\right)={\text{e}}^{x}$. The output is displayed below in Table 4.

Table 4 displays the linear, quadratic and cubic regression of ${y}_{a}\left(x\right)={\text{e}}^{x}$. The quadratic and cubic curves show perfect goodness of fit, ${R}^{2}=1$. Using the cubic curve, we pick our weights from input layer to hidden layer as:
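The curve-fitting step can be reproduced outside SPSS. A least-squares cubic fit (numpy's `polyfit` is used here as a stand-in for the SPSS procedure; the exact coefficients depend on the grid of points chosen) confirms that the residual is small enough for ${R}^{2}$ to round to 1:

```python
import numpy as np

x = np.linspace(0.1, 1.0, 10)              # 10 equidistant training points
c = np.polyfit(x, np.exp(x), 3)            # least-squares cubic, highest power first
resid = np.max(np.abs(np.polyval(c, x) - np.exp(x)))
print(c, resid)                            # residual is tiny, so R^2 rounds to 1
```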

Table 3. Values of $\left(x,{y}_{a}\left(x\right)={\text{e}}^{x}\right)$ for problem (48).

Table 4. Model summary and parameter estimates.

${w}_{11}=1.016,\text{\hspace{0.17em}}{w}_{12}=0.423,\text{\hspace{0.17em}}{w}_{13}=0.279$. Now to compute the weights from hidden layer to the output layer, we find a function $\vartheta \left(x\right)$ such that $v={\varphi }^{-1}f$; here $v,f$ and $\varphi$ are as defined in Section 3. In particular, $f\left(x\right)={\left(\vartheta \left({x}_{1}\right),\vartheta \left({x}_{2}\right),\vartheta \left({x}_{3}\right)\right)}^{\text{T}}$. We now form a linear function based on the default sign of the differential equation, i.e. $\vartheta \left(x\right)=ax-b$, where a is the coefficient of the derivative of y and b is the coefficient of y. Thus:

$\vartheta \left(x\right)=x+1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}f\left(x\right)={\left(\vartheta \left({x}_{1}\right),\vartheta \left({x}_{2}\right),\vartheta \left({x}_{3}\right)\right)}^{\text{T}}={\left(1.1,1.2,1.3\right)}^{\text{T}}$

The architecture of the network is shown in Figure 2, so we let $N=3$. We take $x={\left(0.1,0.2,0.3\right)}^{\text{T}}$ and $f\left(x\right)={\left(1.1,1.2,1.3\right)}^{\text{T}}$. It then follows that

$v={\varphi }^{-1}f$, $⇒\text{}\left[\begin{array}{c}{v}_{1}\\ {v}_{2}\\ {v}_{3}\end{array}\right]={\left[\begin{array}{ccc}{\phi }_{1}\left({x}_{1}\right)& {\phi }_{2}\left({x}_{1}\right)& {\phi }_{3}\left({x}_{1}\right)\\ {\phi }_{1}\left({x}_{2}\right)& {\phi }_{2}\left({x}_{2}\right)& {\phi }_{3}\left({x}_{2}\right)\\ {\phi }_{1}\left({x}_{3}\right)& {\phi }_{2}\left({x}_{3}\right)& {\phi }_{3}\left({x}_{3}\right)\end{array}\right]}^{-1}\left[\begin{array}{c}{\vartheta }_{1}\\ {\vartheta }_{2}\\ {\vartheta }_{3}\end{array}\right]$ (49)

where

${\phi }_{i}\left({x}_{j}\right)=\mathrm{exp}\left(-\frac{{\left(|{x}_{i}-{x}_{j}|\right)}^{2}}{2{\sigma }^{2}}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=1,2,3;\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}j=1,2,3.$ (50)

Substituting the given values of the vectors x and f, we obtain the weights from the hidden layer to the output layer,

$\left[\begin{array}{c}{v}_{1}\\ {v}_{2}\\ {v}_{3}\end{array}\right]={\left[\begin{array}{ccc}1& 0.94& 0.78\\ 0.94& 1& 0.94\\ 0.78& 0.94& 1\end{array}\right]}^{-1}\left[\begin{array}{c}1.1\\ 1.2\\ 1.3\end{array}\right]=\left[\begin{array}{c}5.17\\ -9.375\\ 6.08\end{array}\right]$ (51)

Therefore the weights from the hidden layer to the output layer are; ${v}_{1}=5.17,\text{\hspace{0.17em}}{v}_{2}=-9.375,\text{\hspace{0.17em}}{v}_{3}=6.08$.
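These weights can be checked by solving Equation (49) directly. The paper does not state the RBF width used in Equation (50); ${\sigma }^{2}=0.08$ is an assumption that reproduces the rounded matrix entries of Equation (51), since $\mathrm{exp}\left(-0.01/0.16\right)\approx 0.94$ and $\mathrm{exp}\left(-0.04/0.16\right)\approx 0.78$:

```python
import numpy as np

x = np.array([0.1, 0.2, 0.3])
sigma2 = 0.08                              # assumed RBF width (not stated in the paper)
Phi = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma2))   # Eq. (50)
f = np.array([1.1, 1.2, 1.3])              # f = (theta(x_1), theta(x_2), theta(x_3))
v = np.linalg.solve(np.round(Phi, 2), f)   # rounded entries match Eq. (51)
print(v)                                   # approximately [5.17, -9.375, 6.08]
```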

The biases are fixed between –1 and 1. We now train the network with the available parameters using MathCAD 14 software [15] as follows:

$\begin{array}{l}{w}_{1}:=1.016\text{}{w}_{2}:=0.423\text{}{w}_{3}:=0.279\text{}x:=1\text{}\\ {v}_{1}:=5.17\text{}{v}_{2}:=-9.375\text{}{v}_{3}:=6.08\text{}{u}_{1}:=1\text{}{u}_{2}:=0.2251\text{}{u}_{3}:=-0.1\\ {z}_{1}:={w}_{1}\cdot x+{u}_{1}=2.016\text{}{z}_{2}:={w}_{2}\cdot x+{u}_{2}=0.6481\text{}{z}_{3}:={w}_{3}\cdot x+{u}_{3}=0.179\\ \sigma \left({z}_{1}\right):={\left[1+\mathrm{exp}\left(-{z}_{1}\right)\right]}^{-1}=0.882467,\text{}\sigma \left({z}_{2}\right):={\left[1+\mathrm{exp}\left(-{z}_{2}\right)\right]}^{-1}=0.656582,\\ \sigma \left({z}_{3}\right):={\left[1+\mathrm{exp}\left(-{z}_{3}\right)\right]}^{-1}=0.544631\\ N:={v}_{1}\cdot \sigma \left({z}_{1}\right)+{v}_{2}\cdot \sigma \left({z}_{2}\right)+{v}_{3}\cdot \sigma \left({z}_{3}\right)=1.718251\\ {y}_{p}\left(x\right):=1+x\cdot N=2.718251,\text{}{y}_{d}\left(x\right):={\text{e}}^{x}=2.718282\\ E:=0.5\cdot {\left({y}_{d}\left(x\right)-{y}_{p}\left(x\right)\right)}^{2}=4.707964×{10}^{-10}\end{array}$.
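The MathCAD evaluation can be transcribed into Python as a sketch of the same computation, using the sigmoid $\sigma \left(z\right)={\left(1+{\text{e}}^{-z}\right)}^{-1}$ and the weights $v=\left(5.17,-9.375,6.08\right)$ from Equation (51):

```python
import math

w = [1.016, 0.423, 0.279]                  # input-to-hidden weights
v = [5.17, -9.375, 6.08]                   # hidden-to-output weights, Eq. (51)
u = [1.0, 0.2251, -0.1]                    # biases
sig = lambda z: 1 / (1 + math.exp(-z))     # sigmoid activation
x = 1.0
N = sum(vj * sig(wj * x + uj) for wj, vj, uj in zip(w, v, u))
y_p = 1 + x * N                            # predicted (trial) solution at x = 1
print(y_p, math.e)                         # close to the exact value e
```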

Here yd and yp are respectively the desired output (exact solution) and the predicted output (trial solution). From the indicated error value, this is an acceptable accuracy. We compare our results with the neural network results obtained by Otadi and Mosleh [16] and find them to be in reasonable agreement. This is depicted in Table 5, as well as the graphical profile in Figure 4 below.

The near-perfect accuracy is evident in the graphical profile depicted in Figure 4.

5.5. Remark

In what follows, we consider a non-homogeneous second order linear differential equation. It is important to recall that for any second order non-homogeneous differential equation of the form ${y}^{″}\left(x\right)+a\left(x\right){y}^{\prime }+b\left(x\right)y=f\left(x\right)$, the non-homogeneous term $f\left(x\right)$ is termed the forcing function. In this section, we shall employ the forcing function to compute the weights from hidden layer to the output layer. This is made clear in the following example.

5.6. Example

Consider the initial value problem;

${y}^{″}-4y=24\mathrm{cos}\left(2x\right);\text{}y\left(0\right)=3,\text{}{y}^{\prime }\left(0\right)=4,\text{}x\in \left[0,1\right]$.

The trial solution is ${y}_{t}\left(x\right)=A+Bx+{x}^{2}N\left(x,p\right)$. Applying the initial conditions gives $A=3,B=4$.

Therefore, ${y}_{t}\left(x\right)=3+4x+{x}^{2}N\left(x,p\right)$, and we take ${y}_{a}\left(x\right)={\text{e}}^{2x}+{\text{e}}^{-2x}$, built from the solution basis of the associated homogeneous equation.

We use an Excel spreadsheet to find values of ${y}_{a}\left(x\right)$ at all the x points, as displayed in Table 6.

Using regression in SPSS, we find the weights from input layer to hidden layer. From Table 7, using the cubic curve fit with ${R}^{2}=1$, we pick our weights from input layer to hidden layer as:

${w}_{11}=0.488,\text{\hspace{0.17em}}{w}_{12}=1.697,\text{\hspace{0.17em}}{w}_{13}=3.338$.

Now to compute the weights from hidden layer to the output layer, we use the function:

Table 5. Comparison of the results.

Figure 4. Plot of Y exact and Y predicted for Example 1.

$\vartheta \left(x\right)=24\mathrm{cos}\left(2x\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}f\left(x\right)={\left(\vartheta \left({x}_{1}\right),\vartheta \left({x}_{2}\right),\vartheta \left({x}_{3}\right)\right)}^{\text{T}}={\left(23.999854,23.999415,23.998684\right)}^{\text{T}}$

with $x={\left(0.1,0.2,0.3\right)}^{\text{T}}$. Hence, the weights from the hidden layer to the output layer, given by $v={\varphi }^{-1}f$, are

$\left[\begin{array}{c}{v}_{1}\\ {v}_{2}\\ {v}_{3}\end{array}\right]={\left[\begin{array}{ccc}1& 0.94& 0.78\\ 0.94& 1& 0.94\\ 0.78& 0.94& 1\end{array}\right]}^{-1}\left[\begin{array}{c}23.999854\\ 23.999415\\ 23.998684\end{array}\right]$ $⇒\text{}\left[\begin{array}{c}{v}_{1}\\ {v}_{2}\\ {v}_{3}\end{array}\right]=\left[\begin{array}{c}112.489\\ -187.474\\ 112.483\end{array}\right]$

The weights from the hidden layer to the output layer are; ${v}_{1}=112.489,\text{\hspace{0.17em}}{v}_{2}=-187.474,\text{\hspace{0.17em}}{v}_{3}=112.483$.

The biases are fixed between −1 and 1. We now train the network with the available parameters using our MathCAD 14 algorithm as follows:

$\begin{array}{l}{w}_{1}:=0.488\text{}{w}_{2}:=1.697\text{}{w}_{3}:=3.338\text{}x:=1\text{}\\ {v}_{1}:=112.489\text{}{v}_{2}:=-187.474\text{}{v}_{3}:=112.483\text{}{u}_{1}:=1\text{}{u}_{2}:=1\text{}{u}_{3}:=-0.1691\\ {z}_{1}:={w}_{1}\cdot x+{u}_{1}=1.488\text{}{z}_{2}:={w}_{2}\cdot x+{u}_{2}=2.697\text{}{z}_{3}:={w}_{3}\cdot x+{u}_{3}=3.1689\\ \sigma \left({z}_{1}\right):={\left[1+\mathrm{exp}\left(-{z}_{1}\right)\right]}^{-1}=0.815778,\text{}\sigma \left({z}_{2}\right):={\left[1+\mathrm{exp}\left(-{z}_{2}\right)\right]}^{-1}=0.936849,\\ \sigma \left({z}_{3}\right):={\left[1+\mathrm{exp}\left(-{z}_{3}\right)\right]}^{-1}=0.959647\\ N:={v}_{1}\cdot \sigma \left({z}_{1}\right)+{v}_{2}\cdot \sigma \left({z}_{2}\right)+{v}_{3}\cdot \sigma \left({z}_{3}\right)=24.075112\\ {y}_{p}\left(x\right):=3+4\cdot x+{x}^{2}\cdot N=31.075112,\\ {y}_{d}\left(x\right):=4\cdot {\text{e}}^{2\cdot x}+2\cdot {\text{e}}^{-2\cdot x}-3\cdot \mathrm{cos}\left(2\cdot x\right)=31.075335\\ E:=0.5\cdot {\left({y}_{d}\left(x\right)-{y}_{p}\left(x\right)\right)}^{2}=2.502862×{10}^{-8}\end{array}$.
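As before, the MathCAD evaluation can be transcribed into Python as a check of the trained network at $x=1$:

```python
import math

w = [0.488, 1.697, 3.338]                  # input-to-hidden weights
v = [112.489, -187.474, 112.483]           # hidden-to-output weights
u = [1.0, 1.0, -0.1691]                    # biases
sig = lambda z: 1 / (1 + math.exp(-z))     # sigmoid activation
x = 1.0
N = sum(vj * sig(wj * x + uj) for wj, vj, uj in zip(w, v, u))
y_p = 3 + 4 * x + x ** 2 * N               # predicted (trial) solution at x = 1
y_d = 4 * math.exp(2 * x) + 2 * math.exp(-2 * x) - 3 * math.cos(2 * x)
print(y_p, y_d)                            # agree to about three decimal places
```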

We compare the exact and approximate solution in Table 8. The accuracy is clearly depicted graphically in Figure 5.

Table 6. Values of $\left(x,{y}_{a}\left(x\right)={\text{e}}^{2x}+{\text{e}}^{-2x}\right)$.

Table 7. Model summary and parameter estimates.

Table 8. Comparison of the results.

Figure 5. Plot of Y exact and Y predicted for Example 2.

6. Conclusion

In this paper, we have presented a novel approach for solving first and second order linear ordinary differential equations with constant coefficients. Specifically, we employ a feed-forward Multilayer Perceptron Neural Network (MLPNN), but avoid the standard back-propagation algorithm for updating the intrinsic weights. This greatly reduces the computational complexity of the given problem. Our results are validated by the near-perfect approximations achieved in comparison with the exact solutions, and they demonstrate the function approximation capabilities of ANNs. This attests to the efficiency of our neural network procedure. We employed an Excel spreadsheet, IBM SPSS 23, and a MathCAD 14 algorithm to achieve this task.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

[1] McCulloch, W.S. and Pitts, W. (1943) A Logical Calculus of the Ideas Immanent in Nervous Activity. The Bulletin of Mathematical Biophysics, 5, 115-133. https://doi.org/10.1007/BF02478259
[2] Neumann, J.V. (1951) The General and Logical Theory of Automata. Wiley, New York.
[3] Graupe, D. (2007) Principles of Artificial Neural Networks. Vol. 6, 2nd Edition, World Scientific Publishing Co. Pte. Ltd., Singapore.
[4] Rumelhart, D.E. and McClelland, J.L. (1986) Parallel Distributed Processing: Explorations in the Microstructure of Cognition I and II. MIT Press, Cambridge. https://doi.org/10.7551/mitpress/5236.001.0001
[5] Werbos, P.J. (1974) Beyond Regression: New Tools for Prediction and Analysis in the Behavioural Sciences. Ph.D. Thesis, Harvard University, Cambridge.
[6] Lagaris, I.E., Likas, A.C. and Fotiadis, D.I. (1997) Artificial Neural Networks for Solving Ordinary and Partial Differential Equations. arXiv: physics/9705023v1.
[7] Cybenko, G. (1989) Approximation by Superpositions of a Sigmoidal Function. Mathematics of Control, Signals and Systems, 2, 303-314. https://doi.org/10.1007/BF02551274
[8] Hornik, K., Stinchcombe, M. and White, H. (1989) Multilayer Feedforward Networks Are Universal Approximators. Neural Networks, 2, 359-366. https://doi.org/10.1016/0893-6080(89)90020-8
[9] Lee, H. and Kang, I.S. (1990) Neural Algorithms for Solving Differential Equations. Journal of Computational Physics, 91, 110-131. https://doi.org/10.1016/0021-9991(90)90007-N
[10] Majidzadeh, K. (2011) Inverse Problem with Respect to Domain and Artificial Neural Network Algorithm for the Solution. Mathematical Problems in Engineering, 2011, Article ID: 145608. https://doi.org/10.1155/2011/145608
[11] Chen, R.T.Q., Rubanova, Y., Bettencourt, J. and Duvenaud, D. (2018) Neural Ordinary Differential Equations. arXiv: 1806.07366v1.
[12] Okereke, R.N. (2019) A New Perspective to the Solution of Ordinary Differential Equations Using Artificial Neural Networks. Ph.D. Dissertation, Mathematics Department, Michael Okpara University of Agriculture, Umudike.
[13] Mall, S. and Chakraverty, S. (2013) Comparison of Artificial Neural Network Architecture in Solving Ordinary Differential Equations. Advances in Artificial Neural Systems, 2013, Article ID: 181895. https://doi.org/10.1155/2013/181895
[14] IBM (2015) IBM SPSS Statistics 23. http://www.ibm.com
[15] PTC (Parametric Technology Corporation) (2007) Mathcad Version 14. http://www.ptc.com
[16] Otadi, M. and Mosleh, M. (2011) Numerical Solution of Quadratic Riccati Differential Equations by Neural Network. Mathematical Sciences, 5, 249-257.