A Comparative Study of Adomian Decomposition Method with Variational Iteration Method for Solving Linear and Nonlinear Differential Equations
1. Introduction
Differential equations are crucial in various engineering disciplines for modeling structures, describing phenomena, and simulating dynamic systems [1]. They serve as powerful mathematical tools for various applications, including biological processes, celestial motion, hydraulic flow, heat transfer, vibration isolation, and electric circuits. Engineering research has advanced significantly, necessitating the use of diverse differential equations (DEs) to model various processes [2].
For instance, differential equations are essential for modeling continuum beams used in flexible space structures, such as aircraft wings and satellite antennas, to estimate their natural and forced frequencies. In heating, ventilation, and air conditioning (HVAC) systems, heat transfer and cooling processes are governed by thermal system equations, which are differential equations. These equations are also used for thermal-based damage detection in porous materials. Additionally, feedback control systems in automotive, aerospace, and ocean vehicles rely on differential equations for their numerical models. They are also employed in modeling population growth, chemical reactions, and mixtures.
Due to their importance, numerous analytical and numerical methods have been developed to accurately solve linear and nonlinear differential equations. Notable methods include the Adomian Decomposition Method (ADM), extensively used for various equations such as the free vibration of continuum beams, fractional differential equations, heat equations, and Blasius equations [3]. Other methods include homotopy perturbation, differential transform, differential quadrature, and collocation methods [4].
Among these, He’s Variational Iteration Method (HVIM) stands out for its effectiveness and convenience in solving complex higher-order linear and nonlinear differential equations [5]. It is versatile and rapidly converges with fewer iterations, making it highly suitable for engineering research.
Constrained optimization problems are prevalent across various scientific and engineering disciplines, necessitating efficient and effective methods for finding their extrema. The Lagrange multipliers method has traditionally been used for this purpose, providing a systematic approach to incorporate constraints directly into the optimization problem. This method, while powerful, has inherent limitations, particularly when dealing with complex or non-linear constraints. The traditional Lagrange multipliers method often requires solving a system of equations that can become increasingly intricate as the number of constraints and the complexity of the functions involved grow [6].
The motivation for extending the Lagrange multipliers method to include functional derivatives stems from the need to address these limitations and improve the efficiency and applicability of the method. By employing functional derivatives, the method can be generalized to handle a wider range of problems, including those involving functional spaces. This extension not only broadens the scope of problems that can be addressed but also enhances the computational efficiency and accuracy of finding constrained extrema [7].
The traditional Lagrange multipliers method faces several challenges. When dealing with non-linear constraints, the system of equations resulting from the Lagrange multipliers can become highly non-linear and difficult to solve [8]. As the number of constraints increases, the number of equations to be solved simultaneously increases, which can significantly complicate the solution process. The method often requires iterative numerical techniques to find solutions, which can be computationally intensive and time-consuming [9]. Furthermore, this method may find local optima rather than global ones, depending on the initial conditions and the nature of the constraint equations [10].
Extending the Lagrange multipliers method to include functional derivatives offers several advantages. Functional derivatives allow the method to be applied to problems involving function spaces, making it suitable for a wider range of optimization problems [11]. Functional derivatives can simplify the formulation of the optimization problem, reducing the computational burden associated with solving the system of equations [12]. By providing a more precise mathematical framework, functional derivatives can improve the accuracy of the solutions obtained, particularly in complex or highly non-linear scenarios [13]. Moreover, the extended method has the potential to more effectively identify global extrema, as it can incorporate more sophisticated mathematical tools and techniques [14].
The introduction of functional derivatives into the Lagrange multipliers method represents a significant improvement over the traditional approach. This extension allows for the simplification of complex problems by leveraging functional derivatives, enabling complex non-linear constraints to be handled more systematically [15]. This method becomes more scalable, as it can manage a larger number of constraints without a corresponding exponential increase in computational complexity. The enhanced method is particularly beneficial in fields such as engineering, physics, and economics, where constrained optimization problems are frequently encountered and often involve complex, non-linear relationships [16].
Finding constrained extrema is a fundamental problem in optimization, and various methods have been developed over the years to address it. While Lagrange multipliers are a well-known approach, several alternative methods exist, each with its own advantages and limitations. Penalty methods involve adding a penalty term to the objective function, imposing a cost for violating the constraints, and then minimizing the augmented objective function using unconstrained optimization techniques.
Sequential Quadratic Programming (SQP) iteratively solves a sequence of quadratic programming subproblems that approximate the original nonlinear problem. Evolutionary algorithms, such as Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO), also find constrained extrema by using mechanisms inspired by natural evolution and social behavior to explore the search space. They are particularly useful for problems with complex, non-differentiable, or noisy objective functions.
Problem Statement:
Differential equations are essential tools in engineering for modeling, describing, and simulating various dynamic systems and phenomena. Despite the development of numerous analytical and numerical methods for solving linear and nonlinear differential equations, there remains a need to evaluate and compare these methods to identify the most effective approaches for different engineering applications [17]. This study focuses on comparing the Adomian Decomposition Method (ADM) and the Variational Iteration Method (VIM) to determine their efficacy in solving complex differential equations used in various engineering disciplines.
Extending the Lagrange multipliers method to include functional derivatives addresses many of the limitations inherent in the traditional approach. This advancement not only broadens the range of problems that can be effectively tackled but also enhances the efficiency and accuracy of finding constrained extrema, making it a valuable tool for researchers and practitioners in various fields.
Although significant research has been conducted on various methods for solving differential equations, there is a lack of comprehensive comparative studies specifically evaluating ADM and VIM across a range of engineering applications [18]. Existing literature highlights the effectiveness of both methods individually but does not provide a detailed comparative analysis of their performance using a consistent computational framework [4] [19]. Addressing this gap is crucial for advancing the application of differential equations in engineering research and practice.
Objectives:
The following are the main objectives of this study:
1) To solve first-, second-, and third-order nonlinear differential equations using the ADM and the VIM.
2) To compare the results obtained from ADM and VIM using Mathematica programming.
3) To evaluate the performance, efficiency, and accuracy of ADM and VIM in modeling various engineering systems and processes.
4) To identify the strengths and limitations of each method in handling diverse types of differential equations and engineering applications.
5) To provide recommendations for applying ADM and VIM in engineering research and system design.
Adomian Decomposition Method
The ADM is a semi-analytical technique developed by George Adomian from the 1970s to the 1990s for solving ordinary and partial nonlinear differential equations [20]. It can also be extended to stochastic systems using the Ito integral. ADM aims to provide a unified theory for solving partial differential equations (PDEs), although it has been superseded by the homotopy analysis method [21].
A key feature of ADM is the use of “Adomian polynomials,” which enable solution convergence for the nonlinear parts of equations without linearizing the system [22]. These polynomials generalize to a Maclaurin series about an arbitrary external parameter, offering greater flexibility than direct Taylor series expansion. ADM is advantageous over standard numerical methods as it is computationally inexpensive and free from rounding-off errors, avoiding discretization [21] [22].
Consider the following equation:

Lu + Ru + Nu = g, (1.1)

where L is a linear operator, N represents a nonlinear operator, and R is the remaining linear part. By defining the inverse operator of L as L⁻¹, assuming that it exists, and applying it to both sides of (1.1), we get

u = φ + L⁻¹(g) − L⁻¹(Ru) − L⁻¹(Nu), (1.2)

where φ satisfies Lφ = 0 and is determined by the initial conditions.
The ADM assumes that the unknown function u can be expressed by an infinite series of the form

u = Σ_{n=0}^∞ u_n, or u(x) = Σ_{n=0}^∞ u_n(x), (1.3)

where the components u_n will be determined recursively. Moreover, the method defines the nonlinear term using Adomian polynomials. More precisely, the ADM assumes that the nonlinear operator Nu can be decomposed by an infinite series of polynomials given by

Nu = Σ_{n=0}^∞ A_n, (1.4)

where the A_n are the Adomian polynomials, defined as

A_n = (1/n!) dⁿ/dλⁿ [N(Σ_{i=0}^∞ λ^i u_i)]_{λ=0}, n = 0, 1, 2, …
Substituting (1.3) and (1.4) into (1.2) and using the fact that R is a linear operator, we obtain

Σ_{n=0}^∞ u_n = φ + L⁻¹(g) − L⁻¹(R Σ_{n=0}^∞ u_n) − L⁻¹(Σ_{n=0}^∞ A_n). (1.5)

Now, the recurrence algorithm can be defined by

u_0 = φ + L⁻¹(g), (1.6)

or equivalently, for the subsequent components,

u_{k+1} = −L⁻¹(R u_k) − L⁻¹(A_k), k ≥ 0. (1.7)
Consider the nonlinear function f(u). The infinite series generated by applying the Taylor series expansion of f about the initial function u_0 is given by

f(u) = f(u_0) + f′(u_0)(u − u_0) + (1/2!) f″(u_0)(u − u_0)² + (1/3!) f‴(u_0)(u − u_0)³ + ⋯ (1.8)

By substituting (1.3) into Equation (1.8), i.e. u − u_0 = u_1 + u_2 + u_3 + ⋯, we have

f(u) = f(u_0) + f′(u_0)(u_1 + u_2 + ⋯) + (1/2!) f″(u_0)(u_1 + u_2 + ⋯)² + ⋯ (1.9)

Now, expand Equation (1.9) and arrange the terms to obtain the Adomian polynomials. The order of each term in Equation (1.9) depends on both the subscripts and the exponents of the u_n: u_1 is of order 1, u_2 is of order 2, and in general u_k^n is of order kn (for instance, u_2³ is of order 6). To determine the order of a term, sum the orders of the u_i factors in each term; for example, the order of u_1 u_2² is 1 + 2 + 2 = 5.

As a result, by rearranging the terms in the expansion (1.9) according to the order, we have

f(u) = f(u_0) + u_1 f′(u_0) + [u_2 f′(u_0) + (1/2!) u_1² f″(u_0)] + [u_3 f′(u_0) + u_1u_2 f″(u_0) + (1/3!) u_1³ f‴(u_0)] + ⋯ (1.10)

The Adomian polynomials are arranged in such a way that the polynomial A_1 consists of all terms in the expansion (1.10) of order 1, A_2 consists of all terms of order 2 and, in general, A_n consists of all terms of order n. Therefore, the first Adomian polynomials are given below:

A_0 = f(u_0),
A_1 = u_1 f′(u_0),
A_2 = u_2 f′(u_0) + (1/2!) u_1² f″(u_0),
A_3 = u_3 f′(u_0) + u_1u_2 f″(u_0) + (1/3!) u_1³ f‴(u_0),
A_4 = u_4 f′(u_0) + [(1/2!) u_2² + u_1u_3] f″(u_0) + (1/2!) u_1²u_2 f‴(u_0) + (1/4!) u_1⁴ f^{(4)}(u_0).

In general, the Adomian polynomials A_n for the nonlinear term f(u) can be evaluated by using the following expression:

A_n = (1/n!) [dⁿ/dλⁿ f(Σ_{i=0}^∞ λ^i u_i)]_{λ=0}, n = 0, 1, 2, … (1.11)
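Formula (1.11) can be checked mechanically. The following Python sketch (an illustrative stand-in for the Mathematica code used in this paper) evaluates the Adomian polynomials of the quadratic nonlinearity N(u) = u², for which the λ-derivative definition reduces to a polynomial convolution; the sample values u0–u3 are arbitrary and chosen only for illustration.

```python
from itertools import product

def adomian_polys_square(u, order):
    """Adomian polynomials A_0..A_order for the quadratic nonlinearity N(u) = u^2.

    By (1.11), A_n is (1/n!) times the n-th lambda-derivative of
    N(u0 + u1*lam + u2*lam^2 + ...) at lam = 0 -- i.e. simply the coefficient
    of lam^n in the square of that polynomial, so a discrete convolution."""
    A = [0] * (order + 1)
    for i, j in product(range(len(u)), repeat=2):
        if i + j <= order:
            A[i + j] += u[i] * u[j]
    return A

# Numeric samples of the components u0..u3 at some fixed point x:
u = [2, 3, 5, 7]
print(adomian_polys_square(u, 3))
# Matches the closed forms A0 = u0^2, A1 = 2*u0*u1,
# A2 = 2*u0*u2 + u1^2, A3 = 2*u0*u3 + 2*u1*u2: [4, 12, 29, 58]
```

The same convolution idea extends to any polynomial nonlinearity; for a general f(u) one would instead expand f of the truncated λ-polynomial symbolically.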
Linear Ordinary Differential Equations:
We apply the ADM to solve linear ordinary differential equations. The following general equation is written in the operator form:

Lu + Ru = g(x), (1.12)

where L is the linear operator, taken as the highest-order derivative, R is the remainder of the differential operator, and g(x) is an inhomogeneous term.

If L is the first-order operator defined by L = d/dx, and assuming that L is invertible, then the inverse operator L⁻¹ is given by L⁻¹(·) = ∫_0^x (·) dt, so that

L⁻¹Lu = u(x) − u(0). (1.13)

If L is a second-order differential operator, L = d²/dx², the inverse operator L⁻¹ is a two-fold integration operator given by L⁻¹(·) = ∫_0^x ∫_0^t (·) ds dt, where

L⁻¹Lu = u(x) − u(0) − x u′(0). (1.14)

In this manner, if L is a third-order differential operator, L = d³/dx³, the inverse operator L⁻¹ is a three-fold integration operator, where

L⁻¹Lu = u(x) − u(0) − x u′(0) − (x²/2!) u″(0). (1.15)

Higher-order ordinary differential equations are generalized in the same manner.
Now, we apply the inverse operator L⁻¹ to both sides of Equation (1.12) and, after manipulating the terms, we get

u = φ + L⁻¹(g) − L⁻¹(Ru), (1.16)

where φ satisfies Lφ = 0 and collects the initial-condition terms, i.e.

φ = u(0), φ = u(0) + x u′(0), or φ = u(0) + x u′(0) + (x²/2!) u″(0) (1.17)

for first-, second-, and third-order L, respectively. The ADM admits the decomposition of u in the form of an infinite series of components

u = Σ_{n=0}^∞ u_n, (1.18)

where the u_n are the components of u that will be determined recursively. Substituting Equation (1.18) into Equation (1.16) gives

Σ_{n=0}^∞ u_n = φ + L⁻¹(g) − L⁻¹(R Σ_{n=0}^∞ u_n). (1.19)

The terms u_n of the solution u can be easily determined by using the recursive relation

u_0 = φ + L⁻¹(g), u_{k+1} = −L⁻¹(R u_k), k ≥ 0. (1.20)

The determination of u_0 depends on the specified initial conditions.
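As an illustration of the recursion (1.20), the following Python sketch (an illustrative stand-in for the Mathematica code used in this paper) applies the ADM to the simple linear problem u′ + u = 0, u(0) = 1, representing each component u_k by its polynomial coefficients with exact rational arithmetic; the partial sum reproduces the Taylor series of the exact solution e^{−x}.

```python
import math
from fractions import Fraction

def integ(p):
    # \int_0^x of the polynomial with coefficient list p[k] = coeff of x^k
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(p)]

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else Fraction(0)) +
            (q[k] if k < len(q) else Fraction(0)) for k in range(n)]

def peval(p, x):
    return sum(float(c) * x**k for k, c in enumerate(p))

# ADM for u' + u = 0, u(0) = 1: here L = d/dx, Ru = u, g = 0,
# so by (1.20) u0 = u(0) = 1 and u_{k+1} = -L^{-1}(u_k).
u_k = [Fraction(1)]
u = [Fraction(1)]
for _ in range(10):
    u_k = [-c for c in integ(u_k)]
    u = padd(u, u_k)

# The partial sum is the truncated Taylor series of the exact solution e^{-x}.
print(abs(peval(u, 1.0) - math.exp(-1.0)) < 1e-7)
```

Each component here is exactly one Taylor term of e^{−x}, so ten iterations already give roughly eight correct digits at x = 1.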
Nonlinear ordinary differential equations:
Here, we apply the ADM to solve nonlinear ordinary differential equations. The following general equation is written in the operator form:

Lu + Ru + Nu = g(x), (1.21)

where L is the linear operator, taken as the highest-order derivative, R is the remainder of the differential operator, Nu is the nonlinear term, and g(x) is an inhomogeneous term.

If L is the first-order operator defined by L = d/dx, and assuming that L is invertible, then the inverse operator L⁻¹ is given by L⁻¹(·) = ∫_0^x (·) dt, so that

L⁻¹Lu = u(x) − u(0). (1.22)

If L is a second-order differential operator, L = d²/dx², the inverse operator L⁻¹ is a two-fold integration operator given by L⁻¹(·) = ∫_0^x ∫_0^t (·) ds dt, where

L⁻¹Lu = u(x) − u(0) − x u′(0). (1.23)

In this manner, if L is a third-order differential operator, L = d³/dx³, the inverse operator L⁻¹ is a three-fold integration operator, where

L⁻¹Lu = u(x) − u(0) − x u′(0) − (x²/2!) u″(0). (1.24)

In general, if L is a differential operator of order n, it can be shown that

L⁻¹Lu = u(x) − Σ_{k=0}^{n−1} (x^k/k!) u^{(k)}(0). (1.25)
Now, apply the inverse operator L⁻¹ to both sides of Equation (1.21); after manipulating the terms, we get

u = φ + L⁻¹(g) − L⁻¹(Ru) − L⁻¹(Nu), (1.26)

where, consistent with (1.25),

φ = Σ_{k=0}^{n−1} (x^k/k!) u^{(k)}(0). (1.27)

The ADM consists of the decomposition of u in the form of an infinite number of components defined by the decomposition series

u = Σ_{n=0}^∞ u_n, (1.28)

while the nonlinear term Nu is to be expressed by an infinite series of polynomials

Nu = Σ_{n=0}^∞ A_n, (1.29)

where the A_n are the Adomian polynomials. Substituting Equation (1.28) and Equation (1.29) into Equation (1.26) gives

Σ_{n=0}^∞ u_n = φ + L⁻¹(g) − L⁻¹(R Σ_{n=0}^∞ u_n) − L⁻¹(Σ_{n=0}^∞ A_n). (1.30)

To construct the iterative scheme, we match both sides so that each term is expressed in terms of the previously determined terms. The ADM gives the following iterative scheme:

u_0 = φ + L⁻¹(g), u_{k+1} = −L⁻¹(R u_k) − L⁻¹(A_k), k ≥ 0, (1.31)

where each A_k is computed from the previously determined components u_0, …, u_k.
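The scheme (1.31) can be illustrated on a small nonlinear test problem. The Python sketch below is not from the paper; the equation u′ = 1 + u², u(0) = 0, with exact solution tan x, is chosen here only for illustration. It computes the components u_k with exact rational coefficients and builds the Adomian polynomials A_k = Σ_{i+j=k} u_i u_j of N(u) = u² by polynomial convolution.

```python
import math
from fractions import Fraction

def integ(p):  # \int_0^x of coefficient list p[k] = coeff of x^k
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(p)]

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else Fraction(0)) +
            (q[k] if k < len(q) else Fraction(0)) for k in range(n)]

def pmul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def peval(p, x):
    return sum(float(c) * x**k for k, c in enumerate(p))

# ADM for u' = 1 + u^2, u(0) = 0 (exact solution tan x):
# u0 = L^{-1}(1) = x, and by the scheme u_{k+1} = L^{-1}(A_k),
# where A_k = sum_{i+j=k} u_i u_j are the Adomian polynomials of u^2.
comps = [integ([Fraction(1)])]        # u0 = x
for k in range(6):
    A_k = [Fraction(0)]
    for i in range(k + 1):
        A_k = padd(A_k, pmul(comps[i], comps[k - i]))
    comps.append(integ(A_k))

u = [Fraction(0)]
for c in comps:
    u = padd(u, c)
print(abs(peval(u, 0.5) - math.tan(0.5)) < 1e-6)
```

For this problem each component u_k turns out to be exactly one Taylor term of tan x (u_1 = x³/3, u_2 = 2x⁵/15, …), so the decomposition series converges rapidly inside the interval of analyticity.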
The Variational Iteration Method:
Consider the nonlinear differential equation

Lu + Nu = g(x), (1.32)

where L and N are linear and nonlinear operators, respectively, and g(x) is an analytical function. We can construct a correction functional according to the variational iteration method for Equation (1.32) in the form

u_{n+1}(x) = u_n(x) + ∫_0^x λ(t) [L u_n(t) + N ũ_n(t) − g(t)] dt, (1.33)

where λ is a general Lagrange multiplier, which can be identified optimally via variational theory, u_n is the nth approximate solution, and ũ_n is a restricted variation, which means δũ_n = 0.

It is evident that the main step of He's variational iteration method is to determine the Lagrange multiplier λ(t). Integration by parts is used to find λ(t); for example,

∫_0^x λ(t) u_n′(t) dt = λ(t) u_n(t)|_{t=x} − ∫_0^x λ′(t) u_n(t) dt. (1.34)

The successive approximations u_{n+1} of the solution u will be readily obtained upon using a selective initial function u_0. However, for fast convergence, the function u_0 should be selected by using the initial conditions as follows:

u_0 = u(0) for the first order,
u_0 = u(0) + x u′(0) for the second order, (1.35)
u_0 = u(0) + x u′(0) + (x²/2!) u″(0) for the third order.

At last, the solution is given by u(x) = lim_{n→∞} u_n(x).
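A minimal sketch of the correction functional with λ = −1 follows, in Python rather than the Mathematica used in this paper, applied to the illustrative first-order problem u′ = 1 − u², u(0) = 0 (exact solution tanh x); each iterate is a polynomial with exact rational coefficients.

```python
import math
from fractions import Fraction

def deriv(p):   # d/dx of coefficient list p[k] = coeff of x^k
    return [k * c for k, c in enumerate(p)][1:] or [Fraction(0)]

def integ(p):   # \int_0^x
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(p)]

def padd(p, q, sign=1):
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else Fraction(0)) +
            sign * (q[k] if k < len(q) else Fraction(0)) for k in range(n)]

def pmul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def peval(p, x):
    return sum(float(c) * x**k for k, c in enumerate(p))

# VIM for u' = 1 - u^2, u(0) = 0 (exact solution tanh x).
# For a first-order equation the Lagrange multiplier is lambda = -1, so the
# correction functional (1.33) becomes
#   u_{n+1}(x) = u_n(x) - \int_0^x [u_n'(t) - 1 + u_n(t)^2] dt,  u_0 = u(0) = 0.
u_n = [Fraction(0)]
for _ in range(4):
    residual = padd(padd(deriv(u_n), [Fraction(1)], sign=-1), pmul(u_n, u_n))
    u_n = padd(u_n, integ(residual), sign=-1)

print(abs(peval(u_n, 0.1) - math.tanh(0.1)) < 1e-6)
```

The first iterates come out as u_1 = x, u_2 = x − x³/3, u_3 = x − x³/3 + 2x⁵/15 − x⁷/63, reproducing the leading Taylor terms of tanh x without any Adomian polynomial bookkeeping.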
The Derivation of the Iteration Scheme:
We examine how to derive the iteration formulas for first- and higher-order differential equations and determine the Lagrange multiplier λ(t). For a differential equation whose highest derivative is of order m, the stationary conditions of the correction functional yield

λ(t) = (−1)^m (t − x)^{m−1}/(m − 1)!,

so that λ = −1 for the first order, λ = t − x for the second order, and λ = −(t − x)²/2! for the third order.
Outlook of Iteration Formulas:
Here, we present the iteration formulas for a specific class of differential equations.
2. Methods
The method section addresses the solution of first-, second-, and third-order nonlinear differential equations using the ADM and the VIM. A comparative analysis of the results obtained from these two methods was conducted using Mathematica programming, employing built-in functions such as Integrate, Series, List Plot, Differentiate, and Do-loop.
The VIM, developed by He [5] [19] [23]-[26], is widely adopted in various scientific fields due to its reliability and efficiency in handling both linear and nonlinear applications [27]-[29]. Unlike the Adomian Decomposition Method, which relies on computational algorithms to manage nonlinear terms, VIM directly addresses differential equations without restrictive assumptions, preserving the physical integrity of the solutions [4] [30]. The VIM provides solutions through rapidly convergent successive approximations, potentially yielding exact solutions when they exist or practical numerical approximations with few iterations when exact solutions are not attainable [1] [31].
While ADM requires cumbersome derivation of Adomian polynomials for nonlinear terms and perturbation methods become computationally intensive with increased nonlinearity, VIM offers an advantage with no specific requirements for nonlinear operators [4]. Numerical techniques like the Galerkin method also demand substantial computational resources.
Extensive research has been conducted on linear and nonlinear ordinary differential equations (ODEs), with standard methods tailored to specific equations based on their order. The goal has been to develop a unified method capable of addressing most ODEs. VIM is proposed as an effective method to achieve this objective [32]. However, some complex issues, such as Troesch’s problem, may still necessitate numerous iterations for acceptable accuracy. The results section will briefly outline the main points of each method, with detailed discussions available in the referenced literature.
3. Results
3.1. Example 1
Solve the first-order nonlinear differential equation u′ − 3u² = 3, u(0) = 0.
Solution using ADM:
The given equation

u′ − 3u² = 3, u(0) = 0, (3.1)

in operator form is

Lu = 3 + 3u², (3.2)

where L = d/dx is a first-order differential operator. It is clear that L is invertible, and L⁻¹ is given by

L⁻¹(·) = ∫_0^x (·) dt. (3.3)

Operating with L⁻¹ on both sides of Equation (3.2) and using the initial condition gives

u(x) = 3x + 3L⁻¹(u²). (3.4)

Representing the linear term u(x) by an infinite series of components Σ_{n=0}^∞ u_n, and the nonlinear term u² by an infinite series of Adomian polynomials Σ_{n=0}^∞ A_n, we get

Σ_{n=0}^∞ u_n = 3x + 3L⁻¹(Σ_{n=0}^∞ A_n). (3.5)

The Adomian polynomials A_n for u² are given by

A_0 = u_0², A_1 = 2u_0u_1, A_2 = 2u_0u_2 + u_1², A_3 = 2u_0u_3 + 2u_1u_2, … (3.6)

Choosing u_0 = 3x and applying the decomposition method gives the recursive relation

u_{k+1} = 3L⁻¹(A_k), k ≥ 0,

which gives

u_1 = 9x³, u_2 = (162/5)x⁵, u_3 = (4131/35)x⁷, …

The solution in a series form is given by

u(x) = 3x + 9x³ + (162/5)x⁵ + (4131/35)x⁷ + ⋯ (3.7)

In closed form, it is given as u(x) = tan 3x, which is the exact solution.
Solution using VIM:
We can use the iteration formula for the VIM as given by

u_{n+1}(x) = u_n(x) + ∫_0^x λ(t) [u_n′(t) − 3 − 3u_n²(t)] dt. (3.8)

The stationary conditions are 1 + λ(t)|_{t=x} = 0 and λ′(t) = 0, and we find that the Lagrange multiplier is λ = −1, where u_0 = u(0) = 0. The iteration formula

u_{n+1}(x) = u_n(x) − ∫_0^x [u_n′(t) − 3 − 3u_n²(t)] dt

in turn gives the successive approximations

u_1 = 3x, (3.9)
u_2 = 3x + 9x³, (3.10)
u_3 = 3x + 9x³ + (162/5)x⁵ + (243/7)x⁷, (3.11)
u_4 = 3x + 9x³ + (162/5)x⁵ + (4131/35)x⁷ + ⋯, (3.12)

where the Taylor series is used for integration. The exact solution is given by u(x) = lim_{n→∞} u_n(x), which in turn gives the exact solution u(x) = tan 3x.
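The VIM iterates above can be cross-checked mechanically. The Python sketch below (an illustrative stand-in for the paper's Mathematica code) runs the iteration u_{n+1} = u_n − ∫_0^x [u_n′ − 3 − 3u_n²] dt with exact rational coefficients and evaluates the first four iterates at x = 0.1; the rounded values agree with the VIM columns of Table 1.

```python
from fractions import Fraction

def deriv(p):   # d/dx of coefficient list p[k] = coeff of x^k
    return [k * c for k, c in enumerate(p)][1:] or [Fraction(0)]

def integ(p):   # \int_0^x
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(p)]

def padd(p, q, sign=1):
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else Fraction(0)) +
            sign * (q[k] if k < len(q) else Fraction(0)) for k in range(n)]

def pmul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def peval(p, x):
    return sum(float(c) * x**k for k, c in enumerate(p))

# VIM for u' - 3u^2 = 3, u(0) = 0, with Lagrange multiplier lambda = -1:
# u_{n+1}(x) = u_n(x) - \int_0^x [u_n'(t) - 3 - 3 u_n(t)^2] dt
u, iterates = [Fraction(0)], []
for _ in range(4):
    residual = padd(padd(deriv(u), [Fraction(3)], sign=-1),
                    [3 * c for c in pmul(u, u)], sign=-1)
    u = padd(u, integ(residual), sign=-1)
    iterates.append(u)

print([round(peval(p, 0.1), 6) for p in iterates])
# [0.3, 0.309, 0.309327, 0.309336] -- the x = 0.1 row of Table 1
```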
The numeric values of ADM and VIM for five iterates are depicted in Table 1 and Figure 1. We considered x-values from 0 to 0.5 because, beyond x = 0.5, the values become excessively large (the exact solution tan 3x blows up near x = π/6 ≈ 0.52). Up to about x = 0.3, the ADM and higher VIM values essentially coincided with the exact values. Beyond this point, the VIM values did not match the exact values, indicating that many more VIM iterations would be required for accuracy.
Table 1. Comparison between the ADM solutions with VIM solutions using five iterations.
| x | EXACT | ADM | VIM(u1(x)) | VIM(u2(x)) | VIM(u3(x)) | VIM(u4(x)) |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 0.1 | 0.309336 | 0.309336 | 0.3 | 0.309 | 0.309327 | 0.309336 |
| 0.2 | 0.684137 | 0.684099 | 0.6 | 0.672 | 0.682812 | 0.684024 |
| 0.3 | 1.26016 | 1.25602 | 0.9 | 1.143 | 1.22932 | 1.25363 |
| 0.4 | 2.57215 | 2.414 | 1.2 | 1.776 | 2.16465 | 2.39321 |
| 0.5 | 12.726 | 5.40033 | 1.5 | 2.625 | 3.90871 | 5.36862 |
Figure 1. Comparison between the ADM solutions with VIM solutions using five iterations.
3.2. Example 2
Solve the second-order nonlinear differential equation u″ − 2uu′ = 0, u(0) = 0, u′(0) = 1.
Solution using ADM:
In operator form, the given equation

u″ − 2uu′ = 0, u(0) = 0, u′(0) = 1, (3.13)

can be written as

Lu = 2uu′, (3.14)

where L is the second-order differential operator L = d²/dx². It is clear that L is invertible, and L⁻¹ is the two-fold integration operator given by

L⁻¹(·) = ∫_0^x ∫_0^t (·) ds dt. (3.15)

Operating with L⁻¹ on both sides of Equation (3.14) and using the initial conditions gives

u(x) = x + 2L⁻¹(uu′). (3.16)

Representing the linear term u(x) by an infinite series of components Σ_{n=0}^∞ u_n, and the nonlinear term Nu = uu′ by an infinite series of Adomian polynomials Σ_{n=0}^∞ A_n, we get

Σ_{n=0}^∞ u_n = x + 2L⁻¹(Σ_{n=0}^∞ A_n). (3.17)

The Adomian polynomials A_n for uu′ are given by

A_0 = u_0u_0′, A_1 = u_0u_1′ + u_1u_0′, A_2 = u_0u_2′ + u_1u_1′ + u_2u_0′, … (3.18)

Choosing u_0 = x and applying the decomposition method gives the recursive relation

u_{k+1} = 2L⁻¹(A_k), k ≥ 0,

which gives

u_1 = x³/3, u_2 = 2x⁵/15, u_3 = 17x⁷/315, …

The solution in a series form is given by

u(x) = x + x³/3 + 2x⁵/15 + 17x⁷/315 + ⋯

In closed form, it is given as u(x) = tan x, which is the exact solution.
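The ADM recursion for this example can be reproduced with a short Python sketch (illustrative; the paper itself uses Mathematica): u0 = x and u_{k+1} = 2L⁻¹(A_k), with L⁻¹ the two-fold integral and A_k = Σ_{i+j=k} u_i u_j′ the Adomian polynomials of uu′; the partial sum approaches tan x.

```python
import math
from fractions import Fraction

def deriv(p):   # d/dx of coefficient list p[k] = coeff of x^k
    return [k * c for k, c in enumerate(p)][1:] or [Fraction(0)]

def integ(p):   # \int_0^x
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(p)]

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else Fraction(0)) +
            (q[k] if k < len(q) else Fraction(0)) for k in range(n)]

def pmul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def peval(p, x):
    return sum(float(c) * x**k for k, c in enumerate(p))

# ADM for u'' = 2 u u', u(0) = 0, u'(0) = 1 (exact solution tan x):
# u0 = x, u_{k+1} = 2 L^{-1}(A_k) with L^{-1} the two-fold integral and
# A_k = sum_{i+j=k} u_i u_j' the Adomian polynomials of N(u) = u u'.
comps = [[Fraction(0), Fraction(1)]]          # u0 = x
for k in range(5):
    A_k = [Fraction(0)]
    for i in range(k + 1):
        A_k = padd(A_k, pmul(comps[i], deriv(comps[k - i])))
    comps.append([2 * c for c in integ(integ(A_k))])

u = [Fraction(0)]
for c in comps:
    u = padd(u, c)
print(abs(peval(u, 0.5) - math.tan(0.5)) < 1e-5)
```

The first computed components are u_1 = x³/3 and u_2 = 2x⁵/15, matching the series stated above.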
Solution using VIM:
To use the VIM, we can use the iteration formula as given by

u_{n+1}(x) = u_n(x) + ∫_0^x λ(t) [u_n″(t) − 2u_n(t)u_n′(t)] dt. (3.19)

The stationary conditions are 1 − λ′(t)|_{t=x} = 0, λ(t)|_{t=x} = 0, and λ″(t) = 0, which give the Lagrange multiplier λ = t − x, where u_0 = u(0) + x u′(0) = x. The iteration formula is given as

u_{n+1}(x) = u_n(x) + ∫_0^x (t − x) [u_n″(t) − 2u_n(t)u_n′(t)] dt,

which in turn gives the successive approximations

u_1 = x + x³/3, u_2 = x + x³/3 + 2x⁵/15 + x⁷/63, … (3.20)

The exact solution is given by u(x) = lim_{n→∞} u_n(x), which in turn gives the exact solution u(x) = tan x.
The numerical values of ADM and VIM for five iterations are depicted in Figure 2 and Table 2. The error between the exact, ADM, and VIM values is negligible; even the lower VIM iterates (VIM(u1(x)), VIM(u2(x))) are already close to the exact values, and the higher iterates reduce the remaining error further.
Table 2. Comparison between the ADM solutions with VIM solutions using five iterations for u″ − 2uu′ = 0.

| x | EXACT | ADM | VIM(u1(x)) | VIM(u2(x)) | VIM(u3(x)) | VIM(u4(x)) |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 0.1 | 0.100335 | 0.100335 | 0.100333 | 0.100335 | 0.100335 | 0.100335 |
| 0.2 | 0.20271 | 0.20271 | 0.202667 | 0.20271 | 0.20271 | 0.20271 |
| 0.3 | 0.309336 | 0.309336 | 0.309 | 0.309327 | 0.309336 | 0.309336 |
| 0.4 | 0.422793 | 0.422793 | 0.421333 | 0.422725 | 0.422791 | 0.422793 |
| 0.5 | 0.546302 | 0.546298 | 0.541667 | 0.545957 | 0.546282 | 0.546302 |
| 0.6 | 0.684137 | 0.684099 | 0.672 | 0.682812 | 0.684024 | 0.684129 |
| 0.7 | 0.842288 | 0.84207 | 0.814333 | 0.83805 | 0.841782 | 0.842239 |
| 0.8 | 1.02964 | 1.02861 | 0.970667 | 1.01769 | 1.02771 | 1.02938 |
| 0.9 | 1.26016 | 1.25602 | 1.143 | 1.22932 | 1.25363 | 1.2590 |
| 1.0 | 1.55741 | 1.5425 | 1.33333 | 1.48254 | 1.53696 | 1.55266 |
Figure 2. Comparison between the ADM solutions with VIM solutions using five iterations.
3.3. Example 3
Solve the third-order linear non-homogeneous differential equation.
Solution using ADM:
In operator form, the given equation (3.21) can be written as (3.22), where L is the third-order differential operator L = d³/dx³. It is clear that L is invertible, and L⁻¹ is the three-fold integration operator given by

L⁻¹(·) = ∫_0^x ∫_0^t ∫_0^s (·) dr ds dt. (3.23)

Operating with L⁻¹ on both sides of Equation (3.22) and using the initial conditions gives Equation (3.24). Representing the linear term u(x) by an infinite series of components Σ_{n=0}^∞ u_n, we obtain the zeroth component u_0 = φ + L⁻¹(g), and the recursive relation is given by

u_{k+1} = −L⁻¹(R u_k), k ≥ 0. (3.25)

After a series of approximations, the solution is obtained in a series form; in closed form, it gives the exact solution.
Solution using VIM:
To use the VIM, we can use the iteration formula as given by Equation (3.26). The stationary conditions are 1 + λ″(t)|_{t=x} = 0, λ′(t)|_{t=x} = 0, λ(t)|_{t=x} = 0, and λ‴(t) = 0, which give the Lagrange multiplier

λ(t) = −(t − x)²/2!,

where u_0 = u(0) + x u′(0) + (x²/2!) u″(0). The iteration formula is given as Equation (3.27), which in turn gives the successive approximations. The exact solution is given by

u(x) = lim_{n→∞} u_n(x). (3.28)
The numerical values of ADM and VIM for five iterations are depicted in Figure 3 and Table 3. It was noted that the ADM values coincide with the exact values, whereas the lower VIM iterates (VIM(u0(x)), VIM(u1(x))) deviate slightly from the exact and ADM values. The error between VIM and the exact values was more significant for the lower iterates, while for the higher iterates (VIM(u3(x)), VIM(u4(x))) the values coincided with the ADM and exact values. This indicates that calculating higher iterations results in plots overlapping with the ADM and exact curves.
Table 3. Comparison between the ADM solutions with VIM solutions using three iterations.

| x | EXACT | ADM | VIM(u0(x)) | VIM(u1(x)) | VIM(u2(x)) | VIM(u3(x)) | VIM(u4(x)) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 0.1 | 0.210517 | 0.210517 | 0.21 | 0.209508 | 0.210516 | 0.210517 | 0.210517 |
| 0.2 | 0.444281 | 0.44428 | 0.44 | 0.436133 | 0.444248 | 0.444278 | 0.44428 |
| 0.3 | 0.704958 | 0.704956 | 0.69 | 0.677175 | 0.70471 | 0.704933 | 0.704956 |
| 0.4 | 0.99673 | 0.996719 | 0.96 | 0.930133 | 0.995681 | 0.996594 | 0.996715 |
| 0.5 | 1.32436 | 1.32431 | 1.25 | 1.19271 | 1.32114 | 1.32384 | 1.32429 |
| 0.6 | 1.69327 | 1.69309 | 1.56 | 1.4628 | 1.68519 | 1.69173 | 1.69302 |
| 0.7 | 2.10963 | 2.1091 | 1.89 | 1.73851 | 2.09204 | 2.10576 | 2.10889 |
| 0.8 | 2.58043 | 2.5791 | 2.24 | 2.01813 | 2.54588 | 2.57182 | 2.57856 |
| 0.9 | 3.11364 | 3.1106 | 2.61 | 2.30018 | 3.05088 | 3.09619 | 3.1094 |
| 1.0 | 3.71828 | 3.71193 | 3.0 | 2.58333 | 3.61111 | 3.68547 | 3.70948 |
Figure 3. Comparison between the ADM solutions with VIM solutions using three iterations.
3.4. Riccati Differential Equation
We present here the analytical solution of the Riccati differential equation of the form

u′(x) = p(x) + q(x) u(x) + α u²(x), (3.29)

where p(x) and q(x) are scalar functions and α is a constant.
3.5. Example 1
Consider the following problem:

u′ = 1 − u², u(0) = 0. (3.30)

Comparing Equation (3.30) with Equation (3.29), we find p(x) = 1, q(x) = 0, and α = −1.
Solution using Adomian Decomposition Method (ADM):
In operator form, the given equation can be written as

Lu = 1 − u², (3.31)

where L = d/dx is a first-order differential operator. It is clear that L is invertible, and L⁻¹ is given by L⁻¹(·) = ∫_0^x (·) dt. Operating with L⁻¹ on both sides of Equation (3.31) and using the initial condition gives

u(x) = x − L⁻¹(u²), where the zeroth approximation is u_0 = x. (3.32)

The recursive term of ADM is written as u_{k+1} = −L⁻¹(A_k), where the A_k are the Adomian polynomials of the nonlinear term N(u) = u². The Adomian polynomials can be given as

A_0 = u_0², A_1 = 2u_0u_1, A_2 = 2u_0u_2 + u_1², A_3 = 2u_0u_3 + 2u_1u_2, … (3.33)

Choosing u_0 = x and applying the decomposition method gives the recursive relation

u_0 = x, u_{k+1} = −L⁻¹(A_k), k ≥ 0, (3.34)

which gives

u_1 = −x³/3, u_2 = 2x⁵/15, u_3 = −17x⁷/315, u_4 = 62x⁹/2835, …

The solution in a series form is given by

u(x) = x − x³/3 + 2x⁵/15 − 17x⁷/315 + 62x⁹/2835 − ⋯,

which in closed form is u(x) = tanh x, the exact solution.
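The ADM series above can be verified numerically. The Python sketch below (illustrative; the paper itself uses Mathematica) generates the five components u0–u4 for u′ = 1 − u², u(0) = 0, by the recursion u_{k+1} = −L⁻¹(A_k); evaluating their sum reproduces the ADM column of Table 4 (e.g. 0.462121 at x = 0.5 and 0.767901 at x = 1.0).

```python
from fractions import Fraction

def integ(p):   # \int_0^x of coefficient list p[k] = coeff of x^k
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(p)]

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else Fraction(0)) +
            (q[k] if k < len(q) else Fraction(0)) for k in range(n)]

def pmul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def peval(p, x):
    return sum(float(c) * x**k for k, c in enumerate(p))

# ADM for the Riccati equation u' = 1 - u^2, u(0) = 0:
# u0 = x, u_{k+1} = -L^{-1}(A_k), A_k = sum_{i+j=k} u_i u_j (polynomials of u^2).
comps = [[Fraction(0), Fraction(1)]]               # u0 = x
for k in range(4):
    A_k = [Fraction(0)]
    for i in range(k + 1):
        A_k = padd(A_k, pmul(comps[i], comps[k - i]))
    comps.append([-c for c in integ(A_k)])

u = [Fraction(0)]
for c in comps:
    u = padd(u, c)
# Five components u0..u4 give x - x^3/3 + 2x^5/15 - 17x^7/315 + 62x^9/2835.
print(round(peval(u, 0.5), 6), round(peval(u, 1.0), 6))   # 0.462121 0.767901
```

This also shows that the ADM column of Table 4 is the five-term truncation of the tanh x series, which explains its growing deviation from the exact values near x = 1.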
Solution using VIM:
To use the VIM, we can use the iteration formula as given by

u_{n+1}(x) = u_n(x) + ∫_0^x λ(t) [u_n′(t) − 1 + u_n²(t)] dt. (3.35)

The stationary conditions are 1 + λ(t)|_{t=x} = 0 and λ′(t) = 0, which give the Lagrange multiplier λ = −1, i.e.,

u_{n+1}(x) = u_n(x) − ∫_0^x [u_n′(t) − 1 + u_n²(t)] dt.

After a series of approximations, the solution in a series form is given by

u_1 = x, u_2 = x − x³/3, u_3 = x − x³/3 + 2x⁵/15 − x⁷/63, … (3.36)

The exact solution is given by u(x) = lim_{n→∞} u_n(x), which in turn gives the exact solution u(x) = tanh x.
Table 4. Comparison between the ADM solutions with VIM solutions using five iterations for the Riccati equation u′ = 1 − u², u(0) = 0.
| x | EXACT | ADM | VIM(u1(x)) | VIM(u2(x)) | VIM(u3(x)) | VIM(u4(x)) |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 0.1 | 0.099668 | 0.099668 | 0.1 | 0.0996667 | 0.099668 | 0.0996667 |
| 0.2 | 0.197375 | 0.197375 | 0.2 | 0.197333 | 0.197376 | 0.197333 |
| 0.3 | 0.291313 | 0.291313 | 0.3 | 0.291 | 0.291321 | 0.290992 |
| 0.4 | 0.379949 | 0.379949 | 0.4 | 0.378667 | 0.380006 | 0.378608 |
| 0.5 | 0.462117 | 0.462121 | 0.5 | 0.458333 | 0.462376 | 0.458061 |
| 0.6 | 0.53705 | 0.537078 | 0.6 | 0.528 | 0.537924 | 0.52706 |
| 0.7 | 0.604368 | 0.604514 | 0.7 | 0.585667 | 0.606769 | 0.583024 |
| 0.8 | 0.664037 | 0.664641 | 0.8 | 0.629333 | 0.669695 | 0.622944 |
| 0.9 | 0.716298 | 0.718392 | 0.9 | 0.657 | 0.72814 | 0.643252 |
| 1.0 | 0.761594 | 0.767901 | 1.0 | 0.666667 | 0.784127 | 0.639706 |
Figure 4. The numerical values of the Riccati differential equation using ADM and VIM for five iterations.
The numeric values of the Riccati differential equation using ADM and VIM for five iterates are depicted in Figure 4 and Table 4. It was noted that the lower VIM iterates deviated noticeably from the exact values (the first iterate overshooting them). For small x, the values of the higher iterations coincided with the exact and ADM values, indicating that the error between the exact, ADM, and VIM values was negligible there; near x = 1, further iterations would be required.
4. Discussion
The results of this study provide a detailed comparison between ADM and VIM for solving nonlinear differential equations of various orders. These findings highlight the strengths and limitations of each method in different engineering applications and provide valuable insights for researchers and practitioners.
The ADM, first envisioned by Adomian [33], is a simple but powerful method for solving a wide range of nonlinear problems. The ADM has been successfully applied in many situations. For instance, Dehghan et al. [3] solved variational problems using the ADM. Saha et al. [34] used the ADM to provide approximate solutions for extraordinary differential equations. Afrouzi et al. [22] applied the ADM to solve the Reaction-Diffusion equation. The ADM has been applied to solve the fully nonlinear sine-Gordon equations by Wang et al. [35].
In this study, the Riccati differential equation was solved using ADM and VIM [36], and the numerical results of the two methods were compared. One of the most notable findings of this study is the efficiency and rapid convergence of VIM compared to ADM. For instance, when solving the first-order nonlinear differential equation u′ − 3u² = 3, VIM provided accurate solutions with fewer iterations. This efficiency is attributed to the VIM's ability to handle nonlinear terms directly without requiring the cumbersome derivation of Adomian polynomials. The rapid convergence of VIM, as seen in the results, suggests that this method is particularly suitable for problems where rapid and accurate solutions are needed.
In contrast, ADM requires the computation of Adomian polynomials [37], which can be labor-intensive and time-consuming, especially for higher-order nonlinear terms. The results show that, while ADM is capable of producing accurate solutions, the computational effort involved is significantly higher. This is evident in the examples provided, where ADM requires more iterations and complex polynomial calculations to achieve the same level of accuracy as VIM, as shown in Figures 1-4.
Both ADM and VIM demonstrated high accuracy in solving the differential equations tested. For the second-order nonlinear differential equation u″ − 2uu′ = 0, both methods produced solutions that closely matched the exact solutions. However, VIM solutions were obtained more directly and with fewer computational steps, as shown in Tables 1-4. This directness makes VIM a more practical choice for engineering applications where computational resources and time are critical factors [2].
This study also highlights the versatility of VIM in handling a wide range of differential equations [38]. The method’s lack of restrictive assumptions allows it to be applied to various problems without altering the physical behavior of the system being modeled [39]. This was particularly evident in the examples involving HVAC and feedback control systems, where the VIM provided accurate thermal process and control dynamics models.
The findings of this study align with those of previous studies on the effectiveness of VIM [40]. For instance, He (2006) demonstrated the method’s efficiency in solving strongly nonlinear equations, a conclusion supported by the rapid convergence observed in this study [23]. Additionally, Wazwaz (2007) highlighted VIM’s superiority over ADM in solving Blasius equations, consistent with our comparative results [29] [32] [41].
In contrast, the ADM has been praised for its robustness in solving specific types of nonlinear problems. Studies by Dehghan and Tatari (2006) and Afrouzi (2006) showcased ADM’s capability in handling variational problems and reaction-diffusion equations, respectively [3] [22]. However, these studies also noted the method’s computational intensity, which is corroborated by the findings of this research. As observed in the results, the need for extensive polynomial calculations in ADM confirms the method’s higher computational demands compared to VIM [42].
The practical implications of these findings are significant for engineering research and practice. The choice between ADM and VIM should be guided by the specific requirements of the problem. VIM has emerged as the preferred method for scenarios in which rapid convergence and computational efficiency are paramount [43]. Its straightforward approach and minimal computational overhead make it ideal for real-time applications and complex system modeling [37].
However, ADM remains a valuable tool for problems where its specific strengths, such as handling variational issues or certain types of nonlinearities, are required [43]. The detailed polynomial expansions of ADM can provide deeper insights into the solution structure, which may be beneficial in specific research contexts [38].
Despite the clear advantages of the VIM, it is essential to acknowledge that this study was limited to specific types of nonlinear differential equations. Future research should explore the application of the VIM and ADM to a broader range of problems, including those with higher degrees of nonlinearity and more complex boundary conditions. Additionally, further studies could investigate hybrid approaches that combine the strengths of both methods to enhance the solution’s accuracy and efficiency.
5. Conclusions
The VIM offers rapid convergence through successive approximations without any restrictive assumptions or transformations that could alter the problem’s physical behavior. He’s VIM achieves this by iterating the correction function, unlike the ADM, which requires the derivation of Adomian polynomials. Consequently, the VIM simplifies the calculations and provides a more direct and straightforward approach. For nonlinear equations, which are common in expressing nonlinear phenomena, He’s VIM facilitates computational work and delivers solutions more quickly compared to the Adomian method. Even when exact solutions are unattainable, a few approximations in VIM are sufficient for numerical purposes, eliminating the need to calculate polynomials, as in the ADM.
This study provides a comprehensive comparison of the ADM and VIM, highlighting the superior efficiency and rapid convergence of the VIM in solving nonlinear differential equations. While both methods are capable of producing accurate solutions, the VIM’s direct approach and minimal computational requirements make it a more practical choice for many engineering applications. These findings contribute to the development of effective numerical methods for differential equations, offering valuable guidance for researchers and engineers in selecting the appropriate method for their specific requirements.