Numerical Solution of Parabolic in Partial Differential Equations (PDEs) in One and Two Space Variable

Mariam Almahdi Mohammed Mu’lla^{1}, Amal Mohammed Ahmed Gaweash^{2}, Hayat Yousuf Ismail Bakur^{3}

^{1}Department of Mathematics, University of Hafr Al-Batin (UoHB), Hafar Al-Batin, KSA.

^{2}Department of Mathematics, University of Taif, Taif, KSA.

^{3}Department of Mathematics, University of Kordofan, El-Obeid, Sudan.

**DOI:** 10.4236/jamp.2022.102024

In this paper, we shall be concerned with the numerical solution of parabolic equations in one space variable and the time variable *t*. We expand Taylor series to derive a higher-order approximation for *U _{t}*. We begin with the simplest model problem, for heat conduction in a uniform medium. For this model problem, an explicit difference method is straightforward to use, and the analysis of its error is easily accomplished by the use of a maximum principle. As we shall show, however, the numerical solution becomes unstable unless the time step is severely restricted, so we shall go on to consider other, more elaborate, numerical methods which can avoid such a restriction. The additional complication in the numerical calculation is more than offset by the smaller number of time steps needed. We then extend the methods to problems with more general boundary conditions, and then to more general linear parabolic equations. Finally, we discuss the more difficult problem of the solution of nonlinear equations.

Keywords

Partial Differential Equations (PDEs), Homentropic, Spatial Derivatives with Finite Differences, Central Differences, Finite Differences

Share and Cite:

Mu’lla, M. , Gaweash, A. and Bakur, H. (2022) Numerical Solution of Parabolic in Partial Differential Equations (PDEs) in One and Two Space Variable. *Journal of Applied Mathematics and Physics*, **10**, 311-321. doi: 10.4236/jamp.2022.102024.

1. Introduction

Partial differential equations (PDEs) form the basis of very many mathematical models of physical, chemical and biological phenomena, and more recently their use has spread into economics, financial forecasting, image processing and other fields. There are also other explicit numerical methods that can be applied to the 1-D heat or diffusion equation, such as the Method of Lines, which is used by Matlab and Mathematica [1] [2]. In aerodynamic applications, for instance, the quantities of interest can be calculated to a good approximation by noting that there is a thin boundary layer near the wing surface where viscous forces are important, and that outside this layer an inviscid flow can be assumed. Near the surface, which we will assume is locally flat, we can model the flow by

$u\frac{\partial u}{\partial x}-\nu \frac{{\partial}^{2}u}{\partial {y}^{2}}=-\frac{1}{\rho}\frac{\partial p}{\partial x},$ (1)

where *u* is the flow velocity in the direction of the tangential co-ordinate *x*, *y* is the normal co-ordinate, $\nu$ is the viscosity, $\rho$ is the density and *p* the pressure; we have here neglected the normal velocity. This is a typical parabolic equation for *u* with $\frac{1}{\rho}\frac{\partial p}{\partial x}$ treated as a forcing term [3] [4]. Away from the wing, considered just as a two-dimensional cross section, we can suppose the flow to be inviscid and of the form $\left({u}_{\infty}+u,v\right)$, where *u* and *v* are small compared with the flow speed at infinity, ${u}_{\infty}$, in the *x*-direction. One can often assume that the flow is irrotational, so that we have

$\frac{\partial v}{\partial x}-\frac{\partial u}{\partial y}=0,$ (2)

then combining the conservation laws for mass and the *x*-component of momentum, and retaining only first-order quantities while assuming homentropic flow, we can deduce the simple model

$\left(1-{M}_{\infty}^{2}\right)\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}=0$ (3)

where ${M}_{\infty}$ is the Mach number at infinity, ${M}_{\infty}=\frac{{u}_{\infty}}{{a}_{\infty}}$, and ${a}_{\infty}$ is the sound speed [5] [6].

2. Explicit Methods for 1-D Heat or Diffusion Equation

2.1. Difference Approximations for Derivative Terms in PDEs

We consider $U\left(x,t\right)$ for $0\le x\le a$, $0\le t\le T$.

Discretize the time variable *t* and the spatial variable *x*:

$\Delta t=\frac{T}{m},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\Delta x=\frac{a}{n+1},$

${t}_{k}=k\Delta t,\text{\hspace{0.17em}}\text{\hspace{0.17em}}0\le k\le m,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{x}_{j}=j\Delta x,\text{\hspace{0.17em}}\text{\hspace{0.17em}}0\le j\le n+1$

Let ${U}_{j}^{k}=U\left({x}_{j},{t}_{k}\right)$

Consider the Taylor series expansion for ${U}_{j}^{k+1}$ :

${U}_{j}^{k+1}={U}_{j}^{k}+\Delta t\frac{\partial {U}_{j}^{k}}{\partial t}+\frac{\Delta {t}^{2}}{2}\frac{{\partial}^{2}{U}_{j}^{k}}{\partial {t}^{2}}+O\left(\Delta {t}^{3}\right)$ (4)

Retaining only the $O\left(\Delta t\right)$ term in Equation (4), we arrive at the forward difference in time approximation for ${U}_{t}$ :

$\frac{\partial {U}_{j}^{k}}{\partial t}=\frac{{U}_{j}^{k+1}-{U}_{j}^{k}}{\Delta t}+O\left(\Delta t\right)$

We can also derive a higher-order approximation for ${U}_{t}$ if we consider the Taylor series expansion for ${U}_{j}^{k-1}$ as well:

${U}_{j}^{k-1}={U}_{j}^{k}-\Delta t\frac{\partial {U}_{j}^{k}}{\partial t}+\frac{\Delta {t}^{2}}{2}\frac{{\partial}^{2}{U}_{j}^{k}}{\partial {t}^{2}}+O\left(\Delta {t}^{3}\right)$ (5)

(4) – (5) $\Rightarrow $ $\frac{\partial {U}_{j}^{k}}{\partial t}=\frac{{U}_{j}^{k+1}-{U}_{j}^{k-1}}{2\Delta t}+O\left(\Delta {t}^{2}\right)$ $\Rightarrow $ leap-frog

(or central difference) in time. This gives higher-order accuracy than the forward difference. We can perform similar manipulations to arrive at approximations for the second derivative ${U}_{tt}$ :

(4) + (5) $-2{U}_{j}^{k}$ $\Rightarrow $ $\frac{{\partial}^{2}{U}_{j}^{k}}{\partial {t}^{2}}=\frac{{U}_{j}^{k+1}-2{U}_{j}^{k}+{U}_{j}^{k-1}}{\Delta {t}^{2}}+O\left(\Delta {t}^{2}\right)$ $\Rightarrow $ central difference

The finite difference method makes use of the above approximations to solve Partial Differential Equations, PDEs numerically [7] [8] [9].
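These approximations are easy to check numerically. The following sketch (illustrative only: the test function $u(t)=\sin t$, the point $t=1$ and the step sizes are our own choices, not from the text) confirms the stated orders of accuracy:

```python
import math

def forward_diff(u, t, dt):
    # forward difference: approximates u'(t) with O(dt) error
    return (u(t + dt) - u(t)) / dt

def central_diff(u, t, dt):
    # leap-frog / central difference: approximates u'(t) with O(dt^2) error
    return (u(t + dt) - u(t - dt)) / (2 * dt)

def second_central_diff(u, t, dt):
    # central second difference: approximates u''(t) with O(dt^2) error
    return (u(t + dt) - 2 * u(t) + u(t - dt)) / dt**2

t = 1.0
for dt in (1e-2, 1e-3):
    print(dt,
          abs(forward_diff(math.sin, t, dt) - math.cos(t)),         # shrinks like dt
          abs(central_diff(math.sin, t, dt) - math.cos(t)),         # shrinks like dt^2
          abs(second_central_diff(math.sin, t, dt) + math.sin(t)))  # shrinks like dt^2
```

Halving $\Delta t$ roughly halves the forward-difference error but quarters the central-difference errors, as the expansions (4) and (5) predict.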

2.2. Central Differences

The central difference operators in time and in space are defined by:

${\delta}_{t}v\left(x,t\right)=v\left(x,t+\frac{1}{2}\Delta t\right)-v\left(x,t-\frac{1}{2}\Delta t\right),$ (6-a)

${\delta}_{x}v\left(x,t\right)=v\left(x+\frac{1}{2}\Delta x,t\right)-v\left(x-\frac{1}{2}\Delta x,t\right),$ (6-b)

When the central difference operator is applied twice we obtain the very useful second-order central difference

${\delta}_{x}^{2}v\left(x,t\right)=v\left(x+\Delta x,t\right)-2v\left(x,t\right)+v\left(x-\Delta x,t\right)$ (7)

For first differences, it is often convenient to use the double interval central difference

${\Delta}_{0x}v\left(x,t\right)=\frac{1}{2}\left({\Delta}_{+x}+{\Delta}_{-x}\right)v\left(x,t\right)=\frac{1}{2}\left[v\left(x+\Delta x,t\right)-v\left(x-\Delta x,t\right)\right]$ (8)

For the solution of

${u}_{t}={u}_{xx}$ for $t>0,\text{\hspace{0.17em}}0<x<1,$ (9)

a Taylor series expansion of the forward difference in *t* gives:

$\begin{array}{c}{\Delta}_{+t}u\left(x,t\right)=u\left(x,t+\Delta t\right)-u\left(x,t\right)\\ ={u}_{t}\Delta t+\frac{1}{2}{u}_{tt}{\left(\Delta t\right)}^{2}+\frac{1}{6}{u}_{ttt}{\left(\Delta t\right)}^{3}+\cdots \end{array}$ (10)

By adding together the Taylor series expansions in the *x* variable for ${\Delta}_{+x}u$ and ${\Delta}_{-x}u$, we see that all the odd powers of $\Delta x$ cancel, giving

${\delta}_{x}^{2}u\left(x,t\right)={u}_{xx}{\left(\Delta x\right)}^{2}+\frac{1}{12}{u}_{xxxx}{\left(\Delta x\right)}^{4}+\cdots $ (11)

We can now define the truncation error of the scheme

${U}_{j}^{n+1}={U}_{j}^{n}+\mu \left({U}_{j+1}^{n}-2{U}_{j}^{n}+{U}_{j-1}^{n}\right)$ (12)

We first multiply the difference equation throughout by a factor, if necessary, so that each term is an approximation to the corresponding derivative in the differential equation. Here this step is unnecessary, provided that we use the form

$\frac{{U}_{j}^{n+1}-{U}_{j}^{n}}{\Delta t}=\frac{{U}_{j+1}^{n}-2{U}_{j}^{n}+{U}_{j-1}^{n}}{{\left(\Delta x\right)}^{2}}$ (13)

rather than (12) [10] [11] [12]. The truncation error is then the difference between the two sides of the equation, when the approximation ${U}_{j}^{n}$ is replaced throughout by the exact solution $u\left({x}_{j},{t}_{n}\right)$ of the differential equation. Indeed, at any point away from the boundary we can define the truncation error $T\left(x,t\right)$ [13] [14].
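A minimal implementation of the scheme in the form (13) can make this concrete. The sketch below (our own illustrative setup: grid sizes, $\mu =\Delta t/{\left(\Delta x\right)}^{2}=0.4$, and the separable test solution $u={\text{e}}^{-{\pi}^{2}t}\mathrm{sin}\pi x$ with homogeneous Dirichlet data) marches the explicit scheme and compares against the exact solution:

```python
import math

def explicit_heat(J, mu, n_steps):
    """March the explicit scheme U^{n+1}_j = U^n_j + mu*(U^n_{j+1} - 2U^n_j + U^n_{j-1}),
    mu = dt/dx^2, on [0,1] with U = 0 at both ends and U(x,0) = sin(pi x)."""
    dx = 1.0 / J
    dt = mu * dx * dx
    U = [math.sin(math.pi * j * dx) for j in range(J + 1)]
    for _ in range(n_steps):
        U = ([0.0]
             + [U[j] + mu * (U[j + 1] - 2 * U[j] + U[j - 1]) for j in range(1, J)]
             + [0.0])
    return U, n_steps * dt

J = 20
U, t = explicit_heat(J, mu=0.4, n_steps=250)
# exact solution of u_t = u_xx with this data: e^{-pi^2 t} sin(pi x)
exact = [math.exp(-math.pi ** 2 * t) * math.sin(math.pi * j / J) for j in range(J + 1)]
print(max(abs(a - b) for a, b in zip(U, exact)))
```

With $\mu =0.4$ the stability restriction $\mu \le 1/2$ discussed below is respected and the computed solution tracks the exact one closely.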

2.3. Definition

The truncation error $T\left(x,t\right)$ is defined by:

$T\left(x,t\right)=\frac{{\Delta}_{+t}u\left(x,t\right)}{\Delta t}-\frac{{\delta}_{x}^{2}u\left(x,t\right)}{{\left(\Delta x\right)}^{2}}$ (14)

so that:

$\begin{array}{c}T\left(x,t\right)=\left({u}_{t}-{u}_{xx}\right)+\left(\frac{1}{2}{u}_{tt}\Delta t-\frac{1}{12}{u}_{xxxx}{\left(\Delta x\right)}^{2}\right)+\cdots \\ =\frac{1}{2}{u}_{tt}\Delta t-\frac{1}{12}{u}_{xxxx}{\left(\Delta x\right)}^{2}+\cdots \end{array}$ (15)

where these leading terms are called the principal part of the truncation error, and we have used the fact that u satisfies the differential equation [5] [15].

We have used Taylor series expansions to express the truncation error as an infinite series [16]. It is often convenient to truncate the infinite Taylor series, introducing a remainder term, for instance:

$\begin{array}{c}u\left(x,t+\Delta t\right)=u\left(x,t\right)+{u}_{t}\Delta t+\frac{1}{2}{u}_{tt}{\left(\Delta t\right)}^{2}+\frac{1}{6}{u}_{ttt}{\left(\Delta t\right)}^{3}+\cdots \\ =u\left(x,t\right)+{u}_{t}\Delta t+\frac{1}{2}{u}_{tt}\left(x,\eta \right){\left(\Delta t\right)}^{2},\end{array}$ (16)

where
$\eta $ lies somewhere between *t *and
$t+\Delta t$. If we do the same thing for the *x *expansion the truncation error becomes

$T\left(x,t\right)=\frac{1}{2}{u}_{tt}\left(x,\eta \right)\Delta t-\frac{1}{12}{u}_{xxxx}\left(\xi ,t\right){\left(\Delta x\right)}^{2}$ (17)

where $\xi \in \left(x-\Delta x,x+\Delta x\right)$, from which it follows that:

$\left|T\left(x,t\right)\right|\le \frac{1}{2}{M}_{tt}\Delta t+\frac{1}{12}{M}_{xxxx}{\left(\Delta x\right)}^{2}$ (18)

$=\frac{1}{2}\Delta t\left({M}_{tt}+\frac{1}{6\mu}{M}_{xxxx}\right),$ (19)

where $\mu =\Delta t/{\left(\Delta x\right)}^{2}$ is the mesh ratio appearing in (12), ${M}_{tt}$ is a bound for $\left|{u}_{tt}\right|$ and ${M}_{xxxx}$ is a bound for $\left|{u}_{xxxx}\right|$. It is now clear why we assumed that the initial and boundary data for *u* were consistent, and why it is helpful if we can also assume that the initial data are sufficiently smooth.

For then we can assume that the bounds ${M}_{tt}$ and ${M}_{xxxx}$ hold uniformly over the closed domain $\left[0,1\right]\times \left[0,{t}_{F}\right]$ [9] [17].
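The bound (18) is easy to check numerically for a smooth solution. In the sketch below (our own illustrative test: $u={\text{e}}^{-{\pi}^{2}t}\mathrm{sin}\pi x$, for which $\left|{u}_{tt}\right|,\left|{u}_{xxxx}\right|\le {\pi}^{4}$ on the whole domain), the truncation error (14) evaluated at one interior point indeed lies below the bound:

```python
import math

pi = math.pi

def u(x, t):
    # smooth exact solution of u_t = u_xx with u(x,0) = sin(pi x), zero boundary data
    return math.exp(-pi * pi * t) * math.sin(pi * x)

dt, dx = 1e-4, 1e-2
x0, t0 = 0.3, 0.1
# truncation error (14): forward difference in t minus central second difference in x
T = ((u(x0, t0 + dt) - u(x0, t0)) / dt
     - (u(x0 + dx, t0) - 2 * u(x0, t0) + u(x0 - dx, t0)) / dx ** 2)
# bound (18) with M_tt = M_xxxx = pi^4 (since |u| <= 1 everywhere)
bound = 0.5 * pi ** 4 * dt + (1.0 / 12.0) * pi ** 4 * dx ** 2
print(abs(T), bound)
```

The measured $\left|T\right|$ is a few times smaller than the bound, as expected, since the bound uses the worst-case derivative values over the whole domain.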

For example, suppose the boundary conditions specify that *u* must vanish on the boundaries $x=0$ and $x=1$, and that *u* must take the value 1 on the initial line, where $t=0$. Then the solution $u\left(x,t\right)$ is obviously discontinuous at the corners, and in the full domain defined by $0<x<1,t>0$ all its derivatives are unbounded, so our bound for the truncation error is useless over the full domain [18] [19].

3. Implicit Backward Euler Method for 1-D Heat Equation

The backward Euler method is unconditionally stable (but usually slower per step than explicit methods). It is implicit because it evaluates the difference approximations to the derivatives at the next time step ${t}_{k+1}$, the one we are solving for, rather than at the current time step ${t}_{k}$.

${U}_{xx}\left({t}_{k+1},{x}_{j}\right)=\frac{{U}_{j+1}^{k+1}-2{U}_{j}^{k+1}+{U}_{j-1}^{k+1}}{\Delta {x}^{2}}$ (20)

${U}_{t}\left({t}_{k+1},{x}_{j}\right)=\frac{{U}_{j}^{k+1}-{U}_{j}^{k}}{\Delta t}$ (21)

${U}_{t}=\beta {U}_{xx}$ becomes:

$\begin{array}{c}{U}_{j}^{k}={U}_{j}^{k+1}-\frac{\beta \Delta t}{\Delta {x}^{2}}\left({U}_{j+1}^{k+1}-2{U}_{j}^{k+1}+{U}_{j-1}^{k+1}\right)\\ =\left(1+2s\right){U}_{j}^{k+1}-s\left({U}_{j+1}^{k+1}+{U}_{j-1}^{k+1}\right)\end{array}$ (22)

where $s=\frac{\beta \Delta t}{\Delta {x}^{2}}$ as before. We still need to solve for ${U}_{j}^{k+1}$ given ${U}_{j}^{k}$ is known. This requires solving a tridiagonal linear system of n equations [1] [20].
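To see why this extra work is worthwhile, compare the two methods at $s=1$, twice the explicit stability limit $s\le 1/2$. The sketch below is a hypothetical setup of ours (zero boundary values, initial data $x\left(1-x\right)$, 100 steps); the implicit tridiagonal system is solved by the standard Thomas algorithm:

```python
def step_explicit(U, s):
    # explicit counterpart: U^{k+1}_j = U^k_j + s*(U^k_{j+1} - 2U^k_j + U^k_{j-1});
    # boundary values held fixed
    return ([U[0]]
            + [U[j] + s * (U[j + 1] - 2 * U[j] + U[j - 1]) for j in range(1, len(U) - 1)]
            + [U[-1]])

def step_backward_euler(U, s):
    # solve (1+2s)*U^{k+1}_j - s*(U^{k+1}_{j+1} + U^{k+1}_{j-1}) = U^k_j, as in (22),
    # for the interior unknowns by the Thomas (tridiagonal) algorithm
    n = len(U) - 2
    a, b, c = [-s] * n, [1 + 2 * s] * n, [-s] * n
    d = U[1:-1]
    d[0] += s * U[0]              # known boundary values move to the right-hand side
    d[-1] += s * U[-1]
    for i in range(1, n):         # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n                 # back substitution
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return [U[0]] + x + [U[-1]]

J = 20
U0 = [(j / J) * (1 - j / J) for j in range(J + 1)]   # zero at both ends
Ue = Ui = U0
for _ in range(100):
    Ue = step_explicit(Ue, s=1.0)        # violates s <= 1/2: grows without bound
    Ui = step_backward_euler(Ui, s=1.0)  # unconditionally stable: decays smoothly
print(max(abs(v) for v in Ue), max(abs(v) for v in Ui))
```

The explicit iterates grow by many orders of magnitude, while the backward Euler iterates decay monotonically, at the modest cost of one tridiagonal solve per step.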

Again we let ${U}_{j}^{k}=U\left({x}_{j},{t}_{k}\right),\text{\hspace{0.17em}}{x}_{j}=j\Delta x,\text{\hspace{0.17em}}j=0,\cdots ,n+1$,

$\Delta x=\frac{a}{n+1},\text{\hspace{0.17em}}{t}_{k}=k\Delta t,\text{\hspace{0.17em}}k=0,\cdots ,m$ and $\Delta t=\frac{T}{m}$.

3.1. Numerical Implementation of the Implicit Backward Euler Method

We are solving the same problem: ${U}_{t}=\beta {U}_{xx},\text{\hspace{0.17em}}U\left(x,0\right)={U}_{j}^{0}=f\left({x}_{j}\right)$

3.2. Example

We are solving the same system again with the method of lines: ${U}_{t}=\beta {U}_{xx}$ where the initial conditions are $U\left(x,0\right)=\mathrm{sin}\left(2\pi x\right)+2x+1$

$0\le x\le 1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\beta ={10}^{-5},\text{\hspace{0.17em}}\text{\hspace{0.17em}}0\le t\le 12000$

boundary conditions are $U\left(0,t\right)=1$ and ${U}_{x}\left(1,t\right)=2$.

Again we have:

$\frac{\partial U}{\partial t}=AU+b$

we replace

${U}_{xx}=\frac{{U}_{j+1}-2{U}_{j}+{U}_{j-1}}{\Delta {x}^{2}}$

where $U\left({x}_{j},t\right)={U}_{j}\left(t\right),{x}_{j}=j\Delta x,0\le j\le n+1$

$\Delta x=\frac{a}{n+1}=\frac{1}{n+1},\text{\hspace{0.17em}}\text{\hspace{0.17em}}a=1$

with boundary conditions: $U\left(0,t\right)={U}_{0}\left(t\right)=1$

$\frac{\partial U}{\partial x}\left(1,t\right)\simeq \frac{{U}_{n+1}-{U}_{n}}{\Delta x}=2\Rightarrow {U}_{n+1}={U}_{n}+2\Delta x$ (23)

In matrix form for $n=3$ elements:

$\frac{\partial}{\partial t}\left(\begin{array}{c}{U}_{1}\\ {U}_{2}\\ {U}_{3}\end{array}\right)=\frac{\beta}{\Delta {x}^{2}}\left(\begin{array}{ccc}-2& 1& 0\\ 1& -2& 1\\ 0& 1& -1\end{array}\right)\left(\begin{array}{c}{U}_{1}\\ {U}_{2}\\ {U}_{3}\end{array}\right)+\frac{\beta}{\Delta {x}^{2}}\left(\begin{array}{c}1\\ 0\\ 2\Delta x\end{array}\right)$ (24)

${U}_{t}=AU+b$
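The behaviour of this semi-discrete system is easy to check. The sketch below integrates it with forward Euler for a general number of interior points (a hypothetical rescaling of the example: we take $\beta =1$ and run to $t=5$ rather than $\beta ={10}^{-5}$, so the steady state is reached quickly). With $U\left(0,t\right)=1$ and ${U}_{x}\left(1,t\right)=2$ the steady state is $u=1+2x$, and the discrete system reproduces it at the nodes:

```python
def mol_heat_mixed(n=19, beta=1.0, t_end=5.0):
    # u_t = beta*u_xx on [0,1]; u(0,t) = 1 (Dirichlet); u_x(1,t) = 2 imposed through
    # the one-sided relation U_{n+1} = U_n + 2*dx, as in (23)
    dx = 1.0 / (n + 1)
    dt = 0.4 * dx * dx / beta        # respect the explicit (forward Euler) limit
    r = beta * dt / (dx * dx)
    U = [1.0] + [0.0] * n            # U[j] ~ u(j*dx); U[0] = 1 is the Dirichlet value
    for _ in range(int(round(t_end / dt))):
        ghost = U[n] + 2 * dx        # eliminated Neumann ghost value at x = 1
        V = U[:]
        for j in range(1, n):
            V[j] = U[j] + r * (U[j + 1] - 2 * U[j] + U[j - 1])
        V[n] = U[n] + r * (ghost - 2 * U[n] + U[n - 1])
        U = V
    return U, dx

U, dx = mol_heat_mixed()
# the steady state u = 1 + 2x satisfies the discrete equations exactly
print(max(abs(U[j] - (1 + 2 * j * dx)) for j in range(len(U))))
```

Because a linear function has zero discrete Laplacian and satisfies both boundary rows exactly, the discrete steady state coincides with $1+2x$ at every node; the printed residual only measures how completely the transient has decayed.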

The Dirichlet boundary conditions are:

$U\left(0,t\right)=1={U}_{0}^{k},\text{\hspace{0.17em}}\text{\hspace{0.17em}}U\left(a,t\right)=U\left(1,t\right)=3={U}_{n+1}^{k}={U}_{4}^{k}$

For simplicity we consider only four elements in *x* in this example, to find the matrix system we need to solve.

Writing $-s\left[{U}_{j+1}^{k+1}+{U}_{j-1}^{k+1}\right]+\left(1+2s\right){U}_{j}^{k+1}={U}_{j}^{k}$ as a matrix equation:

$\underset{\text{Tridiagonal}\text{\hspace{0.17em}}\text{matrix}}{\underset{\ufe38}{\left(\begin{array}{ccc}1+2s& -s& 0\\ -s& 1+2s& -s\\ 0& -s& 1+2s\end{array}\right)}}\underset{\text{Solution}\text{\hspace{0.17em}}U\text{\hspace{0.17em}}\text{at}\text{\hspace{0.17em}}\text{next}\text{\hspace{0.17em}}\text{time}\text{\hspace{0.17em}}\text{step}}{\underset{\ufe38}{\left(\begin{array}{c}{U}_{1}^{k+1}\\ {U}_{2}^{k+1}\\ {U}_{3}^{k+1}\end{array}\right)}}=\left(\begin{array}{c}{U}_{1}^{k}\\ {U}_{2}^{k}\\ {U}_{3}^{k}\end{array}\right)+s\underset{\text{given}\text{\hspace{0.17em}}\text{from}\text{\hspace{0.17em}}\text{b}\text{.c}\text{.}}{\underset{\ufe38}{\left(\begin{array}{c}{U}_{0}^{k+1}\\ 0\\ {U}_{4}^{k+1}\end{array}\right)}}$ (25)

$A{U}^{k+1}={U}^{k}+b\Rightarrow {U}^{k+1}={A}^{-1}\left[{U}^{k}+b\right]$

4. Heat Conservation Properties

Assume that in our model heat flow problem ${u}_{t}={u}_{xx}$ we define the total heat in the system at time *t* by:

$h\left(t\right)={\displaystyle {\int}_{0}^{1}u\left(x,t\right)\text{d}x}$ (26)

Then from the differential equation we have:

$\frac{\text{d}h}{\text{d}t}={\displaystyle {\int}_{0}^{1}{u}_{t}\text{d}x}={\displaystyle {\int}_{0}^{1}{u}_{xx}\text{d}x}={\left[{u}_{x}\right]}_{0}^{1}.$ (27)

This is not very helpful if we have Dirichlet boundary conditions: but suppose we are given Neumann boundary conditions at each end, say, ${u}_{x}\left(0,t\right)={g}_{0}\left(t\right)$ and ${u}_{x}\left(1,t\right)={g}_{1}\left(t\right)$. Then we have:

$\frac{\text{d}h}{\text{d}t}={g}_{1}\left(t\right)-{g}_{0}\left(t\right),$ (28)

so that *h* is given by integrating an ordinary differential equation [21] [22]. Now suppose we carry out a similar manipulation for the $\theta $ -method equations:

${U}_{j}^{n+1}-{U}_{j}^{n}=\mu \left[\theta {\delta}_{x}^{2}{U}_{j}^{n+1}+\left(1-\theta \right){\delta}_{x}^{2}{U}_{j}^{n}\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}j=1,2,\cdots ,J-1$ (29)

introducing the total heat by means of a summation over the points for which (29) holds:

${H}^{n}={\displaystyle {\sum}_{j=1}^{J-1}\Delta x{U}_{j}^{n}}.$ (30)

Then, recalling from the definitions of the finite difference notation that

${\delta}_{x}^{2}{U}_{j}={\Delta}_{+x}{U}_{j}-{\Delta}_{+x}{U}_{j-1},$ (31)

we have:

${H}^{n+1}-{H}^{n}=\frac{\Delta t}{\Delta x}{\displaystyle {\sum}_{j=1}^{J-1}{\delta}_{x}^{2}}\left[\theta {U}_{j}^{n+1}+\left(1-\theta \right){U}_{j}^{n}\right]$ (32)

The sum on the right telescopes, so that

${H}^{n+1}-{H}^{n}=\frac{\Delta t}{\Delta x}\left\{{\Delta}_{+x}\left[\theta {U}_{J-1}^{n+1}+\left(1-\theta \right){U}_{J-1}^{n}\right]-{\Delta}_{+x}\left[\theta {U}_{0}^{n+1}+\left(1-\theta \right){U}_{0}^{n}\right]\right\}.$ (33)

The rest of the analysis will depend on how the boundary condition is approximated. Consider the simplest case as in

$\frac{{U}_{1}^{n}-{U}_{0}^{n}}{\Delta x}={\alpha}^{n}{U}_{0}^{n}+{g}^{n}$ (34)

namely (taking $\alpha \equiv 0$) we set

${U}_{1}^{n}-{U}_{0}^{n}=\Delta x{g}_{0}^{n},\text{\hspace{0.17em}}\text{\hspace{0.17em}}{U}_{J}^{n}-{U}_{J-1}^{n}=\Delta x{g}_{1}^{n}.$

${H}^{n+1}-{H}^{n}=\Delta t\left[\theta \left({g}_{1}^{n+1}-{g}_{0}^{n+1}\right)+\left(1-\theta \right)\left({g}_{1}^{n}-{g}_{0}^{n}\right)\right]$ (35)

As an approximation to (28) this may be very accurate, even though we have seen that ${U}^{n}$ may not give a good pointwise approximation to *u*. In particular, if ${g}_{0}$ and ${g}_{1}$ are independent of *t*, the change in *H* in one time step exactly equals that in $h\left(t\right)$ [5] [17] [23].
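This exact discrete heat balance is easy to verify. The sketch below uses the $\theta =0$ (explicit) case of (29) with the simple boundary treatment (34); the values ${g}_{0}=-1$, ${g}_{1}=2$ and the initial data are arbitrary illustrative choices of ours, with the data first made consistent with the boundary rows:

```python
import math

def theta0_step(U, mu, g0, g1, dx):
    # theta = 0 (explicit) case of (29) for u_t = u_xx, with the Neumann conditions
    # imposed as in (34): U_1 - U_0 = dx*g0 and U_J - U_{J-1} = dx*g1
    J = len(U) - 1
    V = U[:]
    for j in range(1, J):
        V[j] = U[j] + mu * (U[j + 1] - 2 * U[j] + U[j - 1])
    V[0] = V[1] - dx * g0
    V[J] = V[J - 1] + dx * g1
    return V

def total_heat(U, dx):
    # H^n = dx * sum_{j=1}^{J-1} U^n_j, as in (30)
    return dx * sum(U[1:-1])

J, mu, g0, g1 = 20, 0.4, -1.0, 2.0
dx = 1.0 / J
dt = mu * dx * dx
U = [math.sin(3 * j * dx) for j in range(J + 1)]   # arbitrary smooth data
U[0] = U[1] - dx * g0                              # make it satisfy the b.c. rows
U[-1] = U[-2] + dx * g1
H0 = total_heat(U, dx)
for _ in range(50):
    U = theta0_step(U, mu, g0, g1, dx)
H1 = total_heat(U, dx)
print(H1 - H0, 50 * dt * (g1 - g0))   # agree to rounding error
```

For time-independent ${g}_{0},{g}_{1}$ the change in *H* per step is exactly $\Delta t\left({g}_{1}-{g}_{0}\right)$, as (35) predicts, whatever the interior data.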

To make the most of this matching we should relate (30) as closely as possible to (27) [13].

If *u* and *U* were constants that would suggest we take $\left(J-1\right)\Delta x=1$, rather than $J\Delta x=1$ as we have been assuming; and we should compare ${U}_{j}^{n}$ with

${u}_{j}^{n}=\frac{1}{\Delta x}{\displaystyle {\int}_{\left(j-1\right)\Delta x}^{j\Delta x}u}\left(x,{t}_{n}\right)\text{d}x,\text{\hspace{0.17em}}\text{\hspace{0.17em}}j=1,2,\cdots ,J-1$ (36)

Note that ${u}_{j}^{n}$ is the average of *u* over an interval of length $\Delta x$ centered at $\left(j-\frac{1}{2}\right)\Delta x$, and we have

$h\left({t}_{n}\right)={\displaystyle {\sum}_{1}^{J-1}\Delta x{u}_{j}^{n}}$. (37)

Note that this interpretation matches very closely the scheme that we were led to in

$\frac{{U}_{1}^{n}-{U}_{0}^{n}}{\Delta x}=\frac{1}{2}{\alpha}^{n}\left({U}_{0}^{n}+{U}_{1}^{n}\right)+{g}^{n}$

by analyzing the truncation error. It would also mean that for the initial condition we should take ${U}_{j}^{0}={u}_{j}^{0}$ as defined by (36) [10] [12]. Then for time-independent boundary conditions we have ${H}^{n}=h\left({t}_{n}\right)$ for all *n*. Moreover, it is easy to see that the function:

$\stackrel{^}{u}\left(x,t\right)=\left({g}_{1}-{g}_{0}\right)t+\frac{1}{2}\left({g}_{1}-{g}_{0}\right){x}^{2}+{g}_{0}x+C$ (38)

with any constant *C* satisfies the differential equation, and the two boundary conditions [7] [11] [17].
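This is easily confirmed by differentiation, or numerically as in the sketch below (the values ${g}_{0}=-1$, ${g}_{1}=2$, $C=0.3$ are arbitrary illustrative choices of ours): finite differences of $\stackrel{^}{u}$ recover ${u}_{t}={u}_{xx}={g}_{1}-{g}_{0}$ and the two Neumann boundary values.

```python
def u_hat(x, t, g0=-1.0, g1=2.0, C=0.3):
    # (38): u_hat = (g1 - g0)*t + (g1 - g0)*x^2/2 + g0*x + C
    return (g1 - g0) * t + 0.5 * (g1 - g0) * x * x + g0 * x + C

h = 1e-5
ut  = (u_hat(0.3, 1.0 + h) - u_hat(0.3, 1.0 - h)) / (2 * h)            # u_t
uxx = (u_hat(0.3 + h, 1.0) - 2 * u_hat(0.3, 1.0) + u_hat(0.3 - h, 1.0)) / h ** 2
ux0 = (u_hat(0.0 + h, 1.0) - u_hat(0.0 - h, 1.0)) / (2 * h)            # u_x(0)
ux1 = (u_hat(1.0 + h, 1.0) - u_hat(1.0 - h, 1.0)) / (2 * h)            # u_x(1)
print(ut, uxx, ux0, ux1)   # u_t = u_xx = g1 - g0 = 3; u_x(0) = g0; u_x(1) = g1
```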

5. Numerical Test Problems

This section concerns the numerical test problems and their visualization.

Problem. Consider the coupled system of FPDEs (fractional-order PDEs) given as

${\Delta}_{x}^{1.8}U\left(x,y\right)-{\Delta}_{xy}^{2}V\left(x,y\right)-4{\Delta}_{y}^{1.8}U\left(x,y\right)=\varphi \left(x,y\right)$

${\Delta}_{x}^{1.8}V\left(x,y\right)-6{\Delta}_{xy}^{2}U\left(x,y\right)+3{\Delta}_{y}^{1.8}{\Delta}_{x}^{1.8}V\left(x,y\right)=\theta \left(x,y\right)$

$U\left(0,y\right)={U}^{\prime}\left(0,y\right)=0,$

$V\left(0,y\right)={V}^{\prime}\left(0,y\right)=0,$

such that the external functions $\varphi \left(x,y\right)$ and $\theta \left(x,y\right)$ are given as

$\begin{array}{c}\varphi \left(x,y\right)=27{x}^{2}{y}^{2}9\left(y-1\right){\left(x-1\right)}^{2}\left(7xy-2y-3\right)-4{x}^{4}{y}^{3}\left(4+3y\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.05em}}-0.016{x}^{2.5}{y}^{4}{\left(y-1\right)}^{3}\left(125{x}^{2}-175x+56\right),\end{array}$

$\begin{array}{c}\theta \left(x,y\right)=-36{x}^{2}{y}^{3}\left(y-1\right){\left(x-1\right)}^{3}[1-2{x}^{2}{\left(\frac{y-1}{x-1}\right)}^{2}-\frac{3{x}^{2}\left(y-1\right)}{2{\left(x-1\right)}^{2}}\\ \text{\hspace{0.05em}}\text{\hspace{0.17em}}+\frac{3\left(y-1\right)}{2y}-\frac{x\left(y-1\right)}{x-1}-\frac{y-1}{4x}-4x{\left(\frac{y-1}{x-1}\right)}^{2}-3xy\left(\frac{y-1}{x-1}\right)]\\ \text{\hspace{0.05em}}\text{\hspace{0.17em}}-0.071{x}^{1.2}{y}^{3}{\left(y-1\right)}^{2}\left[1250{x}^{3}-2625{x}^{2}+1680x-308\right].\end{array}$

The exact solution of the above system is

$U\left(x,y\right)={\left(xy\left(1-x\right)\right)}^{2}{\left(1-y\right)}^{3},$

$V\left(x,y\right)=xy\left(1-y\right){\left(xy-{x}^{2}y-x{y}^{2}\right)}^{2}.$

We evaluate the approximate solution of the Problem with our proposed method [8] [18]. The comparison between the exact and approximate solutions gives the absolute error at scale level $K=10$. We have also computed the absolute error at various scale levels and at different points of the space, as given in Table 1.

Table 1. Absolute error at various values of $\left(x,y\right)$ for $K=10,12$ in $U\left(x,y\right)$ and $V\left(x,y\right)$ of the problem.

6. Conclusion

In this paper, we have developed an efficient numerical technique for parabolic equations in one space variable and the time variable *t*. We began with the simplest model problem, heat conduction in a uniform medium. For this model problem an explicit difference method is straightforward to use, and the analysis of its error is easily accomplished by the use of a maximum principle; however, the numerical solution becomes unstable unless the time step is severely restricted, so we went on to consider other, more elaborate, numerical methods which avoid such a restriction. The additional complication in the numerical calculation is more than offset by the smaller number of time steps needed. We then extended the methods to problems with more general boundary conditions, and to more general linear parabolic equations. Finally, we discussed the more difficult problem of the solution of nonlinear equations.

Acknowledgements

I would like to thank my supervisor, Dr. Muhsin Hassan Abdallah, who was a great help to me. I also thank my husband, Bashir Alfadol Albdawi, without whose help I could not have written this paper.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Douglas, J. and Rachford, H.H. (1956) On the Numerical Solution of the Heat Conduction Problems in Two and Three Space Variables. Transactions of the American Mathematical Society, 82, 421-439. https://doi.org/10.1090/S0002-9947-1956-0084194-4

[2] Peaceman, D.W. and Rachford, H.H. (1955) The Numerical Solution of Parabolic and Elliptic Differential Equations. Journal of the Society for Industrial and Applied Mathematics, 3, 28-41. https://doi.org/10.1137/0103003

[3] Young, D.M. (1971) Iterative Solutions of Large Linear Systems. Academic Press, New York.

[4] Whitham, G.B. (1974) Linear and Nonlinear Waves. Wiley-Interscience, New York.

[5] Godunov, S.K. (1959) A Finite Difference Method for the Numerical Computation of Discontinuous Solutions of the Equations of Fluid Dynamics. Matematicheskii Sbornik, 47, 271-306.

[6] LeVeque, R.J. (1992) Numerical Methods for Conservation Laws. Lectures in Mathematics ETH Zürich, 2nd Edition, Birkhäuser Verlag, Basel.

[7] Braess, D. (2001) Finite Elements: Theory, Fast Solvers, and Applications in Solid Mechanics. 2nd Edition, Cambridge University Press, Cambridge.

[8] Adams, R.A. and Fournier, J.J.F. (2003) Sobolev Spaces. Vol. 140, Pure and Applied Mathematics, 2nd Edition, Elsevier, Amsterdam.

[9] Ern, A. and Guermond, J.L. (2004) Theory and Practice of Finite Elements. Vol. 159, Applied Mathematical Sciences, Springer-Verlag, New York. https://doi.org/10.1007/978-1-4757-4355-5

[10] Buchanan, M.L. (1963) A Necessary and Sufficient Condition for Stability of Difference Schemes for Initial Value Problems. Journal of the Society for Industrial and Applied Mathematics, 11, 919-935. https://doi.org/10.1137/0111067

[11] Ames, W.F. (1965) Nonlinear Partial Differential Equations in Engineering, Vol. I. Academic Press, New York.

[12] Haroske, D.D. and Triebel, H. (2008) Distributions, Sobolev Spaces, Elliptic Equations (EMS Textbooks in Mathematics). European Mathematical Society (EMS), Zürich.

[13] Ciarlet, P.G. (1978) The Finite Element Method for Elliptic Problems. Studies in Mathematics and its Applications. North Holland Publishing Co., Amsterdam.

[14] Brandt, A. (1977) Multi-Level Adaptive Solutions to Boundary-Value Problems. Mathematics of Computation, 31, 333-390. https://doi.org/10.1090/S0025-5718-1977-0431719-X

[15] John, F. (1952) On the Integration of Parabolic Equations by Difference Methods. Communications on Pure and Applied Mathematics, 5, 155-211. https://doi.org/10.1002/cpa.3160050203

[16] Crank, J. and Nicolson, P. (1947) A Practical Method for Numerical Evaluation of Solutions of Partial Differential Equations of the Heat-Conduction Type. Mathematical Proceedings of the Cambridge Philosophical Society, 43, 50-67. https://doi.org/10.1017/S0305004100023197

[17] Carrier, G.F. and Pearson, C.E. (1976) Partial Differential Equations. Academic Press, New York.

[18] Courant, R. and Hilbert, D. (1962) Methods of Mathematical Physics. Vol. 2, Partial Differential Equations. Wiley-Interscience, New York.

[19] Kreiss, H.O. and Lorenz, J. (1989) Initial-Boundary Value Problems and the Navier-Stokes Equations. Academic Press, San Diego.

[20] Colella, P. and Woodward, P.R. (1984) The Piecewise Parabolic Method (PPM) for Gas-Dynamical Simulations. Journal of Computational Physics, 54, 174-201. https://doi.org/10.1016/0021-9991(84)90143-8

[21] Roos, H.G., Stynes, M. and Tobiska, L. (1996) Numerical Methods for Singularly Perturbed Differential Equations. Springer, Berlin. https://doi.org/10.1007/978-3-662-03206-0

[22] Elman, H.C., Silvester, D.J. and Wathen, A.J. (2004) Finite Elements and Fast Iterative Solvers. Oxford University Press, Oxford.

[23] LeVeque, R.J. and Trefethen, L.N. (1988) Fourier Analysis of the SOR Iteration. IMA Journal of Numerical Analysis, 8, 273-279.


Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.