Optimal Control and Bifurcation Issues for Lorenz-Rössler Model

Abstract

Optimal control is one of the most popular decision-making tools in much recent research and across many areas. The Lorenz-Rössler model is of particular interest because it consolidates two well-known models: the Lorenz and Rössler systems. This paper examines the Lorenz-Rössler model from the viewpoints of bifurcation phenomena and the optimal control problem (OCP). The bifurcation behavior at the system equilibrium is studied, and it is found that saddle-node and Hopf bifurcations can occur under some conditions on the parameters. The optimal control problem for the Lorenz-Rössler model is also discussed, and Pontryagin's Maximum Principle (PMP) is used to derive the optimal control inputs that achieve the optimal trajectory. Numerical examples and solutions for the bifurcation cases and for the optimally controlled system are carried out and shown graphically to demonstrate the effectiveness of the procedure.


1. Introduction

Prediction of any system's development is a very important goal, especially for chaotic systems, which arise frequently in real life and in various fields.

These systems are very important for the service of mankind; they appear in psychology [1], secure communications [2], economic system modeling [3], medicine [4], etc. Because of the importance of these models, they deserve to be studied. Despite the recent interest in forecasting, the task is often not easy. Chaotic models usually require the study of optimization or of the OCP. The principle of optimality means taking the best choice (procedure) to minimize the cost (maximize the profit) at the current time (stage) and at all subsequent times (stages). OCPs in general do not have an exact solution, so solutions are often approximate, which adds another layer of difficulty and sensitivity.

The Lorenz system is a reduced version of a larger system studied earlier by Barry Saltzman [5]. This model is a system of three non-linear ordinary differential equations; it is extremely sensitive to perturbations of the initial conditions and possesses strange attractors, so it exhibits chaotic behavior for various parameter values and initial conditions, as in the famous butterfly attractor produced for the special parameter values (10, 28, 8/3), listed in the order in which they appear in the model in [6]. The Rössler attractor is another famous attractor, one of the products of the work of the German biochemist Otto Eberhard Rössler. There is a similarity between the Lorenz and Rössler attractors, where the latter follows an outward spiral around two fixed points. Although each variable in the system oscillates around specified values, the oscillations are chaotic [7]. The two models of Lorenz and Rössler have been merged in a non-additive form; because the purely additive system loses the chaotic behavior property, some switching between variables was performed and some terms were adjusted [6] [8]. Although chaotic systems are difficult to predict over the long term, this paper discusses the optimal control problem of the Lorenz-Rössler system through some external inputs. It is important to mention that the Lorenz-Rössler model studied here is the one presented in [6] and [8]. We use optimal control procedures to determine sets of parameter values and initial conditions that achieve the optimal behavior toward the goal state.

The qualitative changes in the trajectories in the phase space due to a change in one or more control parameters are called bifurcations. A bifurcation study is straightforward for a one-dimensional system with one parameter, but it becomes difficult in higher-dimensional cases, especially with several parameters, so there is little research in this area [9].

In the next section, we provide the necessary mathematical background. The mathematical formulation of the Lorenz-Rössler model and a brief discussion of the stability of the system are given in Section 3. In Section 4, an analytical investigation of some bifurcation cases is presented, together with several bifurcation diagrams. In Section 5, the optimal control problem is discussed, followed by several numerical examples obtained through simulation. The conclusion is presented in Section 6.

2. Mathematical Introduction

It is known that an optimal control problem requires: a mathematical model of the system to be controlled, a description of the constraints, a statement of the goal to be accomplished (usually expressed as additional boundary conditions), and a specification of the performance measure [10] [11] [12]. First, we consider the simplest example: finding the shortest path between two specified locations $A=(t_0,x_1)$ and $B=(t_f,x_2)$ in $\mathbb{R}^2$. Consider the case of no constraints on the variables and, for simplicity, assume that the minimal curve is given as the graph of a smooth function $x(t)$, $t\in\mathbb{R}$. The problem can then be stated as minimizing the integral

$$J=\int_{A}^{B}\sqrt{1+\left(\frac{dx}{dt}\right)^{2}}\,dt=\int_{A}^{B}\sqrt{1+\dot{x}^{2}}\,dt=\int_{A}^{B}g\left(t,x(t),\dot{x}(t)\right)dt \quad (2.1)$$

where $J$ is called a functional or the objective function, which must be minimized over the admissible curves $x(t)$. In the simple case of one dependent variable $x(t)$ with no constraints, and assuming that the optimal curve $\bar{x}(t)$ exists and is unique, the general problem is to find the optimal curve that minimizes the functional

$$J=\int_{t_0}^{t_f}g\left(t,x(t),\dot{x}(t)\right)dt \quad (2.2)$$

where $g$ is continuous in all its variables and has continuous first- and second-order partial derivatives with respect to all of them. Moreover, $t_0$ and $t_f$ are fixed. If we consider a small variation in the curve, then

$$x(t)=\bar{x}(t)+\epsilon\,\delta(t),\qquad \forall t \quad (2.3)$$

Therefore

$$\dot{x}(t)=\dot{\bar{x}}(t)+\epsilon\,\dot{\delta}(t) \quad (2.4)$$

where $\epsilon$ is a small parameter and $\delta(t)$ is an arbitrary real function of $t$ such that $\delta(t_0)=\delta(t_f)=0$. It is clear that the optimal curve $\bar{x}(t)$ is the member of the family (2.3) corresponding to $\epsilon=0$. See Figure 1.

Thus, the functional in (2.2) can be written as

$$J=\int_{t_0}^{t_f}g\left(t,\bar{x}(t)+\epsilon\delta(t),\dot{\bar{x}}(t)+\epsilon\dot{\delta}(t)\right)dt \quad (2.5)$$

The necessary condition for $J$ to have an extremum is [13]

$$\left.\frac{dJ}{d\epsilon}\right|_{\epsilon=0}=0 \quad (2.6)$$

Under the assumption that $x$ and all its derivatives are continuous, after some manipulation, and noting that $x(t)=\bar{x}(t)$ and $\dot{x}(t)=\dot{\bar{x}}(t)$ at $\epsilon=0$, the condition (2.6) becomes

$$\delta(t_f)\left.\frac{\partial g}{\partial\dot{x}}\right|_{t_f}-\delta(t_0)\left.\frac{\partial g}{\partial\dot{x}}\right|_{t_0}+\int_{t_0}^{t_f}\delta(t)\left(\frac{\partial g}{\partial x}-\frac{d}{dt}\left(\frac{\partial g}{\partial\dot{x}}\right)\right)dt=0 \quad (2.7)$$

but $\delta(t_f)=\delta(t_0)=0$ and $\delta(t)$ is arbitrary (nonzero) for $t\in\,]t_0,t_f[$, so it follows that

$$\frac{\partial g}{\partial x}-\frac{d}{dt}\left(\frac{\partial g}{\partial\dot{x}}\right)=0,\quad t\in\,]t_0,t_f[\,,\quad \text{s.t. } x(t_0)=x_0 \text{ and } x(t_f)=x_f \quad (2.8)$$

Equation (2.8) gives the necessary condition for minimizing $J$ and is known as the Euler-Lagrange (E-L) equation, which is associated with the variational problem (2.2).

Figure 1. The best path between two points A and B that minimizes the distance as a goal.
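As a quick worked illustration of (2.8) (a standard calculation added here for clarity), consider the arclength integrand of (2.1), $g(t,x,\dot{x})=\sqrt{1+\dot{x}^{2}}$. Then

$$\frac{\partial g}{\partial x}=0,\qquad \frac{\partial g}{\partial\dot{x}}=\frac{\dot{x}}{\sqrt{1+\dot{x}^{2}}},\qquad \frac{d}{dt}\left(\frac{\dot{x}}{\sqrt{1+\dot{x}^{2}}}\right)=0\ \Longrightarrow\ \dot{x}=\text{constant},$$

so the extremal curve is the straight line joining $A=(t_0,x_1)$ and $B=(t_f,x_2)$, as expected.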

Let us now consider a system of $n$ first-order ordinary differential equations (ODEs) with derivative vector $\dot{x}(t)=(\dot{x}_1,\dots,\dot{x}_n)^{c}$, and suppose that the optimal state $\bar{x}(t)=(\bar{x}_1,\dots,\bar{x}_n)^{c}$ exists and is unique, i.e. it is a vector of $n$ twice-differentiable functions. In this case, the functional (2.2) takes the following form

$$J(x)=\int_{t_0}^{t_f}g\left(x(t),\dot{x}(t),t\right)dt,\qquad x(t_0)=x_0,\quad x(t_f)=x_f \quad (2.9)$$

Under the same conditions as in the case of one dependent variable, the vector function $x(t)$ that makes the integral (2.9) an extremum must satisfy the $n$ simultaneous E-L equations, which are given by

$$\frac{\partial g}{\partial x_i}-\frac{d}{dt}\left(\frac{\partial g}{\partial\dot{x}_i}\right)=0,\quad t\in\,]t_0,t_f[\,, \quad (2.10)$$

with the boundary conditions

$$x_i(t_0)=x_{i0}\ \text{ and }\ x_i(t_f)=x_{if},\qquad i=1,2,\dots,n \quad (2.11)$$

See [11] and [13] for more details. Now consider the case in which there are constraints on the state and control variables. The functional to be maximized (minimized) takes the following form

$$J^{*}(x,u)=G(x_f,t_f)+\int_{t_0}^{t_f}\left[g_0(x,u,t)+\lambda^{c}\left(g-\dot{x}\right)\right]dt \quad (2.12)$$

where $G:\mathbb{R}^{n}\times\mathbb{R}\to\mathbb{R}$ and $g_0:\mathbb{R}^{n}\times\mathbb{R}^{m}\times\mathbb{R}\to\mathbb{R}$ are real-valued functions that can be selected to weight the terminal and transient performance, respectively. $G(x_f,t_f)$ is called the terminal cost, and $g_0$ is the instantaneous loss per unit of time. $\lambda^{c}=(\lambda_1,\dots,\lambda_n)$ is called the Lagrange multiplier (L-m) vector. Integrating the term $\lambda^{c}\dot{x}$ in (2.12) by parts, we get

$$J^{*}=G(x_f,t_f)-\lambda^{c}x\Big|_{t_0}^{t_f}+\int_{t_0}^{t_f}\left[H+\dot{\lambda}^{c}x\right]dt \quad (2.13)$$

where

$$H=g_0+\lambda^{c}g \quad (2.14)$$

is called the Hamiltonian function (H.f). Sometimes $H$ takes the form

$$H(x,u,\lambda,\lambda_0,t)=\lambda_0\,g_0(x,u,t)+\lambda^{c}g(x,u,t) \quad (2.15)$$

where $\lambda_0\geq 0$, and one can take $\lambda_0=1$ for maximization [12].

Theorem: Assume $u^{*}(\cdot)$ is the optimal control that maximizes the objective functional $J^{*}$ and $x^{*}(\cdot)$ is the corresponding trajectory. Then $u^{*}$ must satisfy the following conditions [11] [14]

$$H(x^{*},u^{*},\lambda^{*},\lambda_0,t)\geq H(x^{*},u,\lambda^{*},\lambda_0,t) \quad (2.16.1)$$

$$\dot{\lambda}_j(t)=-\frac{\partial H}{\partial x_j},\qquad \lambda_j(t_f)=\left.\frac{\partial G}{\partial x_j}\right|_{t=t_f},\qquad \left.\frac{\partial H}{\partial u_j}\right|_{u^{*}}=0,\qquad \left(u\in U,\ t\in[t_0,t_f],\ j=1,2,\dots,n\right) \quad (2.16.2)$$

This system consists of $2n$ nonlinear differential equations with $n$ initial conditions $x_j(t_0)$ and $n$ terminal conditions $\lambda_j(t_f)$. For more details about this theorem and its proof, see [11] and [14]. Note that an additional condition is required if $t_f$ is free.
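To illustrate how conditions (2.16.1)-(2.16.2) are applied (an illustrative scalar example, not taken from the references), consider maximizing $J^{*}=\int_{0}^{t_f}-\left(x^{2}+u^{2}\right)dt$ subject to $\dot{x}=u$, $x(0)=x_0$, with $G\equiv 0$ and $\lambda_0=1$. Then

$$H=-\left(x^{2}+u^{2}\right)+\lambda u,\qquad \frac{\partial H}{\partial u}=-2u+\lambda=0\ \Rightarrow\ u^{*}=\frac{\lambda}{2},\qquad \dot{\lambda}=-\frac{\partial H}{\partial x}=2x,\qquad \lambda(t_f)=0,$$

and solving the resulting two-point boundary value problem for $x$ and $\lambda$ yields the optimal trajectory, exactly as is done for the Lorenz-Rössler model in Section 5.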

3. Lorenz-Rössler Mathematical Model

The Lorenz-Rössler system is a three-dimensional system with six parameters. This system is described by the following equations, as presented in [6] and [8]

$$\begin{aligned}
\dot{x}_1&=a_1(x_2-x_1)-x_2-x_1\\
\dot{x}_2&=a_2x_1-x_2-20x_1x_3+x_1+a_3x_2\\
\dot{x}_3&=5x_1x_2-b_1x_3+b_2+x_1(x_3-b_3)
\end{aligned} \quad (3.1)$$

where $x_1$, $x_2$ and $x_3$ are the state variables of the system and $a_1$, $a_2$, $a_3$, $b_1$, $b_2$ and $b_3$ are the system parameters. Clearly, the zero state is not a solution of the system (3.1) because the system is not homogeneous, and with a few calculations one can verify that this system has the following possible equilibrium states

$$E_1=\left(0,\ 0,\ b_2/b_1\right) \quad (3.2)$$

$$E_2=\left(0,\ x_2,\ b_2/b_1\right),\qquad a_1=a_3=1 \quad (3.3)$$

$$E_3=\left(x_1,\ f(x_1),\ B\right) \quad (3.4)$$

where

$$f(x_1)=Ax_1,\qquad A=\frac{a_1+1}{a_1-1},\ a_1\neq 1,\qquad B=\frac{1}{20}\left(a_2+1+A(a_3-1)\right), \quad (3.5)$$

$$x_1=\frac{-(B-b_3)\pm\left[(B-b_3)^{2}-20A\left(b_2-Bb_1\right)\right]^{1/2}}{10A} \quad (3.6)$$
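As a quick numerical sanity check (a sketch in Python, not part of the original derivation; the parameter values are those used later in the Figure 6 example), the equilibria $E_1$ and $E_3$ can be verified to annihilate the right-hand side of (3.1):

```python
import numpy as np

def lorenz_rossler(x, a1, a2, a3, b1, b2, b3):
    """Right-hand side of system (3.1)."""
    x1, x2, x3 = x
    return np.array([
        a1 * (x2 - x1) - x2 - x1,
        a2 * x1 - x2 - 20 * x1 * x3 + x1 + a3 * x2,
        5 * x1 * x2 - b1 * x3 + b2 + x1 * (x3 - b3),
    ])

# Parameter values taken from the Figure 6 example below.
a1, a2, a3, b1, b2, b3 = 5.0, 1.0, 4.0, 1.0, 0.01, 3.0

E1 = np.array([0.0, 0.0, b2 / b1])                       # Eq. (3.2)

A = (a1 + 1) / (a1 - 1)                                  # Eq. (3.5)
B = (a2 + 1 + A * (a3 - 1)) / 20
x1 = (-(B - b3) + np.sqrt((B - b3)**2 - 20 * A * (b2 - B * b1))) / (10 * A)  # '+' branch of (3.6)
E3 = np.array([x1, A * x1, B])                           # Eq. (3.4)

print(E3)                                                # approx. [0.45, 0.675, 0.325]
print(lorenz_rossler(E1, a1, a2, a3, b1, b2, b3))        # approx. [0, 0, 0]
print(lorenz_rossler(E3, a1, a2, a3, b1, b2, b3))        # approx. [0, 0, 0]
```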

It is easy to show that, under some conditions on the parameters, the system (3.1) is unstable at least at one of its steady states, say $E_1$. The Jacobian matrix $W$ of the model (3.1) is given by

$$W=\left(\frac{\partial\dot{x}_i}{\partial x_j}\right)=\begin{pmatrix}-a_1-1 & a_1-1 & 0\\ a_2+1-20x_3 & a_3-1 & -20x_1\\ 5x_2+x_3-b_3 & 5x_1 & x_1-b_1\end{pmatrix},\qquad i,j=1,2,3$$

Evaluating $W$ at the stationary state $E_1$ gives

$$W_1=\begin{pmatrix}-(a_1+1) & a_1-1 & 0\\ a_2+1-20b_2/b_1 & a_3-1 & 0\\ -b_3+b_2/b_1 & 0 & -b_1\end{pmatrix} \quad (3.7)$$

According to linear stability analysis and the theory of linear differential equations, we need to find the eigenvalues of $W_1$. The characteristic (determinant) equation of $W_1$ is:

$$\left|\lambda I-W_1\right|=(\lambda+b_1)\left[\lambda^{2}+\theta_1\lambda+\theta_2\right]=0 \quad (3.8)$$

where

$$\theta_1=2+a_1-a_3 \quad (3.9)$$

$$\theta_2=(1+a_1)(1-a_3)-(a_1-1)\left(a_2+1-20b_2/b_1\right) \quad (3.10)$$

In general, the eigenvalues of $W_1$ are complex numbers. Here we are not concerned with the exact values of the roots of (3.8) but with their signs. According to linear stability theory, if at least one of the eigenvalues in (3.8) is positive, the equilibrium point $E_1$ is unstable. For the linear factor in (3.8), the eigenvalue is $\lambda_1=-b_1<0$ (for $b_1>0$), while for the quadratic factor, Descartes' rule of signs for the number of positive real roots of a polynomial implies that the quadratic polynomial in (3.8) has at least one positive root if $\theta_1<0$, i.e. $a_3>a_1+2$. This proves that, for suitable values of the parameters, the Lorenz-Rössler system is unstable at least at $E_1$.
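This instability criterion is easy to check numerically. The following sketch (illustrative Python, with the Figure 2 parameter values and $a_3=5>a_1+2$ chosen here as an assumed test value) evaluates the eigenvalues of $W_1$ in (3.7):

```python
import numpy as np

def jacobian_at_E1(a1, a2, a3, b1, b2, b3):
    """Jacobian W1 of system (3.1) evaluated at E1 = (0, 0, b2/b1), Eq. (3.7)."""
    return np.array([
        [-(a1 + 1),             a1 - 1,  0.0],
        [a2 + 1 - 20 * b2 / b1, a3 - 1,  0.0],
        [-b3 + b2 / b1,         0.0,    -b1],
    ])

# Figure 2 parameter values, with a3 = 5 chosen above the threshold a1 + 2 = 4.
a1, a2, a3, b1, b2, b3 = 2.0, 1.0, 5.0, 20.0, 5.0, 1.0
eigs = np.linalg.eigvals(jacobian_at_E1(a1, a2, a3, b1, b2, b3))
print(eigs)                                 # one eigenvalue is positive
print(any(e.real > 0 for e in eigs))        # True -> E1 is unstable
```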

4. Bifurcation of Lorenz-Rössler System

In this section, we discuss the bifurcation phenomena of the considered system at the first equilibrium point $E_1$. The analysis depends on the characteristic Equation (3.8), which can be rewritten as follows:

$$(\lambda+b_1)\left[\lambda^{2}+(2+a_1-a_3)\lambda+2-a_1a_3-a_3-a_1a_2+a_2+20b_2(a_1-1)/b_1\right]=0 \quad (4.1)$$

then the eigenvalues are $\lambda_1=-b_1$, and

$$\lambda_{2,3}=\frac{-(2+a_1-a_3)\pm\left[(2+a_1-a_3)^{2}-4\left\{\left(2-a_1a_3-a_3-a_2a_1+a_2\right)+20b_2(a_1-1)/b_1\right\}\right]^{1/2}}{2} \quad (4.2)$$

Bifurcation phenomena arise when one or more of the eigenvalues become zero (or cross the imaginary axis). By analyzing the last two eigenvalues, the following cases hold:

Case 1: when $\left(2-a_1a_3-a_3-a_2a_1+a_2\right)+20b_2(a_1-1)/b_1=0$, then $\lambda_2=-(2+a_1-a_3)$ and $\lambda_3=0$; this case is a saddle-node bifurcation (SNB).

We choose the parameter $a_3$ as the bifurcation parameter; with fixed values of the other parameters, the bifurcation diagrams can be drawn as in Figure 2.
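For instance, the critical value of $a_3$ at which the zero eigenvalue of Case 1 appears can be computed symbolically. The following SymPy sketch (added for illustration, using the Figure 2 parameter values) gives $a_3=2$ for those values:

```python
import sympy as sp

a1, a2, a3, b1, b2 = sp.symbols('a1 a2 a3 b1 b2', real=True)

# Zero-eigenvalue (saddle-node) condition of Case 1.
condition = 2 - a1*a3 - a3 - a1*a2 + a2 + 20*b2*(a1 - 1)/b1

# Critical value of the bifurcation parameter a3.
a3_crit = sp.solve(condition, a3)[0]
print(sp.simplify(a3_crit))   # (2 - a1*a2 + a2 + 20*b2*(a1 - 1)/b1) / (a1 + 1)

# Figure 2 parameter values (a1 = 2, a2 = 1, b1 = 20, b2 = 5) -> a3_crit = 2.
print(a3_crit.subs({a1: 2, a2: 1, b1: 20, b2: 5}))
```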

Case 2: when $\left(2-a_1a_3-a_3-a_2a_1+a_2\right)+20b_2(a_1-1)/b_1>0$ and $(2+a_1-a_3)=0$, then

Figure 2. Bifurcation diagrams where $a_1=2$, $a_2=1$, $b_1=20$, $b_2=5$, $b_3=1$ and $a_3$ is the bifurcation parameter.

$$\lambda_{2,3}=\pm\frac{i}{2}\left(4\left\{\left(2-a_1a_3-a_3-a_2a_1+a_2\right)+20b_2(a_1-1)/b_1\right\}\right)^{1/2}$$

where a Hopf bifurcation (HB) holds. Choosing $b_1$ as the bifurcation parameter, with fixed values of the other parameters, gives a picture of this case in the bifurcation diagram of Figure 3.
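As a numerical illustration (a Python sketch using the Figure 3 parameter values, which satisfy $2+a_1-a_3=0$), the quadratic factor of (3.8) indeed yields a purely imaginary pair of eigenvalues for several values of $b_1$:

```python
import numpy as np

# Figure 3 parameter values (note 2 + a1 - a3 = 0), with b1 left free.
a1, a2, a3, b2 = 2.0, 1.0, 4.0, 10.0

def theta(b1):
    """Coefficients (3.9)-(3.10) of the quadratic factor of (3.8)."""
    t1 = 2 + a1 - a3
    t2 = (1 + a1) * (1 - a3) - (a1 - 1) * (a2 + 1 - 20 * b2 / b1)
    return t1, t2

for b1 in (5.0, 10.0, 15.0):
    t1, t2 = theta(b1)
    lam = np.roots([1.0, t1, t2])       # roots of lambda^2 + t1*lambda + t2
    print(b1, t1, t2, lam)              # t1 = 0, t2 > 0 -> purely imaginary pair
```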

Next, we choose the parameter $a_3$ as the bifurcation parameter; with fixed values of the other parameters, the bifurcation diagrams can be drawn as in Figure 4.

5. Optimal Control Problem in a Period of Time

In the case of constraints on the control variables, the Pontryagin Maximum Principle is a design tool for obtaining the best possible trajectory of a dynamical system: it provides necessary conditions that must hold at an optimum, but not (in general) sufficient conditions [11]. The basic problem is thus to find the controllers that maximize/minimize the functional $J^{*}$ in (2.12), in which $\lambda$ is called the co-state vector and is regarded as a type of Lagrange multiplier. We use the PMP to find the best possible controllers with respect to a chosen performance measure.

Figure 3. Bifurcation diagrams where $a_1=2$, $a_2=1$, $a_3=4$, $b_2=10$, $b_3=4.1$ and $b_1$ is the bifurcation parameter.

The selected performance measure can be written in the following form:

$$\text{minimize}\quad \frac{1}{2}\int_{t_0}^{T}\sum_{i=1}^{3}\left(\alpha_iw_i^{2}+\beta_iu_i^{2}\right)dt \quad (5.1)$$

Subject to:

The controlled system corresponding to (3.1), given by

$$\begin{aligned}
\dot{x}_1&=a_1(x_2-x_1)-x_2-x_1+e_1\\
\dot{x}_2&=a_2x_1-x_2-20x_1x_3+x_1+a_3x_2+e_2\\
\dot{x}_3&=5x_1x_2-b_1x_3+b_2+x_1(x_3-b_3)+e_3
\end{aligned} \quad (5.2)$$

And the initial and terminal conditions

$$x_i\big|_{t_0}=x_{i0},\qquad x_i\big|_{T}=\bar{x}_i,\qquad i=1,2,3 \quad (5.3)$$

where:

$$w_i=\left(x_i-\bar{x}_i\right)\ \text{ and }\ u_i=\left(e_i-\bar{e}_i\right) \quad (5.4)$$

- $\alpha_i,\beta_i$, $i=1,2,3$, are positive control constants.

- $\bar{x}$ is any of the steady states of the system, $E_1$, $E_2$ or $E_3$, defined in Equations (3.2)-(3.4).

Figure 4. Bifurcation diagrams where $a_1=2$, $a_2=1$, $a_3=4$, $b_1=10$, $b_2=11$, $b_3=4.1$ and $a_3$ is the bifurcation parameter.

- $e_i$ are the control inputs to be determined by the PMP with respect to the optimality measure for the system (3.1) near its steady states.

- $\bar{e}_i$ are the optimal control inputs.

The selected measure, or objective function, (5.1) represents the sum of squares of the deviations of $x_i$ from their goal levels $\bar{x}_i$ and of the deviations of the control inputs $e_i$ from their goal levels $\bar{e}_i$, $i=1,2,3$.

Now, our aim is to keep the system states $x_i$, $i=1,2,3$, as close as possible over time to their goal levels $\bar{x}_i$, and the control inputs $e_i$ as close as possible to their goal levels (optimal controllers) $\bar{e}_i$. Let us introduce the following additional state variable in place of the cost functional (5.1)

$$\dot{x}^{*}(t)=\frac{1}{2}\sum_{i=1}^{3}\left(\alpha_iw_i^{2}+\beta_iu_i^{2}\right) \quad (5.5)$$

with the initial condition $x^{*}\big|_{t_0}=0$ and the terminal value $x^{*}\big|_{T}$ free.

Then we introduce the co-state variables $\gamma=(\gamma_1,\gamma_2,\gamma_3,\gamma^{*})^{c}$, associated with the state variables of the controlled system (5.2) and with the additional state variable (5.5), respectively. The H.f then takes the following form

$$\begin{aligned}
H&=\gamma^{*}\dot{x}^{*}+\sum_{i=1}^{3}\gamma_i\dot{x}_i\\
&=\frac{\gamma^{*}}{2}\sum_{i=1}^{3}\left(\alpha_iw_i^{2}+\beta_iu_i^{2}\right)+\gamma_1\left[a_1(x_2-x_1)-x_2-x_1+e_1\right]\\
&\quad+\gamma_2\left[a_2x_1-x_2-20x_1x_3+x_1+a_3x_2+e_2\right]+\gamma_3\left[5x_1x_2-b_1x_3+b_2+x_1(x_3-b_3)+e_3\right]
\end{aligned} \quad (5.6)$$

The Hamiltonian equations are given by:

$$\frac{\partial\gamma^{*}}{\partial t}=-\frac{\partial H}{\partial x^{*}}=0 \quad (5.7)$$

$$\frac{\partial\gamma_i}{\partial t}=-\frac{\partial H}{\partial x_i},\qquad i=1,2,3$$

From Equation (5.7), $\gamma^{*}$ is clearly a constant, so for minimization we can choose $\gamma^{*}=-1$ [15] [16]. Using Equations (5.6) and (5.7) (with $m=2$ for simplicity), we obtain the co-state differential equations as

$$\dot{\gamma}_1=\alpha_1w_1+\gamma_1(a_1+1)-\gamma_2\left(1+a_2-20x_3\right)-\gamma_3\left(5x_2+x_3-b_3\right) \quad (5.8)$$

$$\dot{\gamma}_2=\alpha_2w_2-\gamma_1(a_1-1)-\gamma_2(a_3-1)-5\gamma_3x_1 \quad (5.9)$$

$$\dot{\gamma}_3=\alpha_3w_3+20\gamma_2x_1-\gamma_3(x_1-b_1) \quad (5.10)$$

Optimizing the H.f with respect to $e_i$, for every $i$, through the conditions $\partial H/\partial e_i=0$, we get

$$e_i=\bar{e}_i+\frac{\gamma_i}{\beta_i},\qquad i=1,2,3 \quad (5.11)$$
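Before substituting, the co-state equations and the control law can be verified symbolically. The following SymPy sketch (illustrative only, with $\gamma^{*}=-1$ as above) rebuilds the Hamiltonian (5.6) and reproduces the right-hand side of (5.8) and the expression (5.11):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
g1, g2, g3 = sp.symbols('gamma1 gamma2 gamma3')
e1, e2, e3 = sp.symbols('e1 e2 e3')
a1, a2, a3, b1, b2, b3 = sp.symbols('a1 a2 a3 b1 b2 b3')
al = sp.symbols('alpha1:4')     # alpha_i
be = sp.symbols('beta1:4')      # beta_i
xb = sp.symbols('xbar1:4')      # goal states
eb = sp.symbols('ebar1:4')      # goal controls

x, g, e = (x1, x2, x3), (g1, g2, g3), (e1, e2, e3)
f = (a1*(x2 - x1) - x2 - x1 + e1,
     a2*x1 - x2 - 20*x1*x3 + x1 + a3*x2 + e2,
     5*x1*x2 - b1*x3 + b2 + x1*(x3 - b3) + e3)

# Hamiltonian (5.6) with gamma* = -1.
H = -sp.Rational(1, 2)*sum(al[i]*(x[i] - xb[i])**2 + be[i]*(e[i] - eb[i])**2
                           for i in range(3)) \
    + sum(g[i]*f[i] for i in range(3))

print(sp.expand(-sp.diff(H, x1)))                               # right side of (5.8)
print([sp.solve(sp.diff(H, e[i]), e[i])[0] for i in range(3)])  # Eq. (5.11)
```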

By substituting (5.11) into the controlled system (5.2), together with Equations (5.5) and (5.8)-(5.10), we get the following system of seven nonlinear differential equations

$$\begin{aligned}
\dot{x}_1&=a_1(x_2-x_1)-x_2-x_1+\bar{e}_1+\gamma_1/\beta_1\\
\dot{x}_2&=a_2x_1-x_2-20x_1x_3+x_1+a_3x_2+\bar{e}_2+\gamma_2/\beta_2\\
\dot{x}_3&=5x_1x_2-b_1x_3+b_2+x_1(x_3-b_3)+\bar{e}_3+\gamma_3/\beta_3\\
\dot{x}^{*}&=\frac{1}{2}\sum_{i=1}^{3}\left(\alpha_i\left(x_i-\bar{x}_i\right)^{2}+\gamma_i^{2}/\beta_i\right)\\
\dot{\gamma}_1&=\alpha_1\left(x_1-\bar{x}_1\right)+\gamma_1(a_1+1)-\gamma_2\left(1+a_2-20x_3\right)-\gamma_3\left(5x_2+x_3-b_3\right)\\
\dot{\gamma}_2&=\alpha_2\left(x_2-\bar{x}_2\right)-\gamma_1(a_1-1)-\gamma_2(a_3-1)-5\gamma_3x_1\\
\dot{\gamma}_3&=\alpha_3\left(x_3-\bar{x}_3\right)+20\gamma_2x_1-\gamma_3(x_1-b_1)
\end{aligned} \quad (5.12)$$

with the following boundary conditions: $x_i\big|_{t_0}=x_{i0}$, $x_i\big|_{T}=\bar{x}_i$, $\gamma_i\big|_{T}=0$, $i=1,2,3$.
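The system (5.12) with its boundary conditions forms a two-point boundary value problem. The paper's solutions were obtained with Maple; as an illustrative alternative, the following SciPy sketch solves the six coupled state/co-state equations (dropping the decoupled $x^{*}$ equation and using only the initial conditions together with the transversality conditions $\gamma_i(T)=0$; the goal controls $\bar{e}_i$ are assumed to be zero, and the parameter values are those of Figure 5). Convergence may require a finer mesh or a better initial guess:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Figure 5 parameter values, weights and goal state E1 = (0, 0, b2/b1).
a1, a2, a3, b1, b2, b3 = 3.0, 4.0, 4.0, 2.0, 10.0, 3.0
alpha = np.array([2.0, 3.0, 4.0])
beta = np.array([1.0, 0.5, 4.0])
xbar = np.array([0.0, 0.0, b2 / b1])
ebar = np.zeros(3)          # assumed goal controls (not specified in the figures)
x0 = np.array([0.3, 0.6, 0.5])
T = 10.0

def rhs(t, y):
    """Right-hand side of the state/co-state system from (5.12), x* omitted."""
    x1, x2, x3, g1, g2, g3 = y
    e = ebar[:, None] + np.vstack([g1, g2, g3]) / beta[:, None]   # Eq. (5.11)
    dx1 = a1 * (x2 - x1) - x2 - x1 + e[0]
    dx2 = a2 * x1 - x2 - 20 * x1 * x3 + x1 + a3 * x2 + e[1]
    dx3 = 5 * x1 * x2 - b1 * x3 + b2 + x1 * (x3 - b3) + e[2]
    dg1 = alpha[0] * (x1 - xbar[0]) + g1 * (a1 + 1) - g2 * (1 + a2 - 20 * x3) \
          - g3 * (5 * x2 + x3 - b3)
    dg2 = alpha[1] * (x2 - xbar[1]) - g1 * (a1 - 1) - g2 * (a3 - 1) - 5 * g3 * x1
    dg3 = alpha[2] * (x3 - xbar[2]) + 20 * g2 * x1 - g3 * (x1 - b1)
    return np.vstack([dx1, dx2, dx3, dg1, dg2, dg3])

def bc(ya, yb):
    """Initial states fixed, co-states vanish at T (transversality)."""
    return np.concatenate([ya[:3] - x0, yb[3:]])

t = np.linspace(0.0, T, 400)
sol = solve_bvp(rhs, bc, t, np.zeros((6, t.size)), max_nodes=50000)
print(sol.message)
print(sol.y[:3, -1])   # states at t = T, expected to be close to xbar
```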

6. Numerical Simulation

In the following, we present some numerical solutions of the system in Equations (5.12), which display how the system states converge to the goal state in different cases, and how the co-state variables vanish at the final time $T$.

- The optimal control of the system to the stationary state $E_1=\left(0,\,0,\,b_2/b_1=5\right)$ is shown in Figure 5.

- The optimal control of the system to the stationary state $E_3=\left(x_1,\,Ax_1,\,B\right)$ is shown in Figure 6, where

Figure 5. Optimal control of the system to $E_1=(0,0,b_2/b_1=5)$ at the parameters ($a_1=3$, $a_2=4$, $a_3=4$, $b_1=2$, $b_2=10$, $b_3=3$) and the constants ($\alpha_1=2$, $\alpha_2=3$, $\alpha_3=4$, $\beta_1=1$, $\beta_2=0.5$, $\beta_3=4$) with the initial and terminal conditions: $x_{10}=0.3$, $x_{20}=0.6$, $x_{30}=0.5$, $x^{*}_{0}=1$, $\gamma_{1T}=\gamma_{2T}=\gamma_{3T}=0$, $T=10$.

Figure 6. Optimal control of the system to $E_3=(0.450,0.675,0.325)$ at the parameters ($a_1=5$, $a_2=1$, $a_3=4$, $b_1=1$, $b_2=0.01$, $b_3=3$) and the constants ($\alpha_1=2$, $\alpha_2=6$, $\alpha_3=4$, $\beta_1=1$, $\beta_2=2$, $\beta_3=3$) with the initial and terminal conditions: $x_{10}=0.3$, $x_{20}=0.6$, $x_{30}=0.5$, $x^{*}_{0}=1$, $\gamma_{1T}=\gamma_{2T}=\gamma_{3T}=0$, $T=10$.

$A=\dfrac{a_1+1}{a_1-1}$, $a_1\neq 1$, $B=\dfrac{a_2+1+A(a_3-1)}{20}$, and

$$x_1=\frac{-(B-b_3)\pm\left[(B-b_3)^{2}-20A\left(b_2-Bb_1\right)\right]^{1/2}}{10A}$$

Figure 5 and Figure 6 indicate that, for the assumed parameter values, the optimally controlled states $x_1$, $x_2$, $x_3$ converge with time to the assumed goal levels 0, 0, 5, respectively, in Figure 5 and to 0.450, 0.675, 0.325, respectively, in Figure 6. Also, in each case the co-state variables $\gamma_1$, $\gamma_2$, $\gamma_3$ vanish with time. The assumed goal levels are represented by the dotted lines. All results indicate the possibility of the optimal control of the Lorenz-Rössler system, and the PMP has shown excellent results in achieving the optimal behavior of the system.

7. Conclusion

Many studies can be carried out on the Lorenz-Rössler model, but in this paper we have focused on the bifurcations and the optimal control problem of the system. The bifurcation analysis of the system at the equilibrium state $(0,0,b_2/b_1)$ was discussed, and it was found that a saddle-node bifurcation and a Hopf bifurcation can occur under some conditions. Several bifurcation diagrams have verified those cases, using examples shown graphically for some chosen parameters. The Pontryagin Maximum Principle is used to solve the optimal control problem. The optimal control inputs were derived analytically, and it was found that they are functions of the co-state variables, which vanish when the system reaches the goal state. Analytical methods are used to derive the necessary conditions, while the nonlinear differential equations of the optimally controlled system are solved numerically with the math software Maple, and some illustrative solutions are shown graphically.


Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Robertson, R. and Combs, A. (1995) Chaos Theory in Psychology and the Life Sciences. Lawrence Erlbaum Associates, Mahwah.
[2] Guan, X.P., Fan, Z.P. and Chen, C.L. (2002) Chaos Control and Application in Secure Communication. National Defense Industry Press, Beijing.
[3] Guegan, D. (2009) Chaos in Economics and Finance. Annual Reviews in Control, 33, 89-93.
https://doi.org/10.1016/j.arcontrol.2009.01.002
[4] Kumar, A. and Hegde, B.M. (2012) Chaos Theory: Impact on and Applications in Medicine. Nitte University Journal of Health Science, 2, 93-99.
https://doi.org/10.1055/s-0040-1703623
[5] Saltzman, B. (1962) Finite Amplitude Free Convection as an Initial Value Problem. Journal of the Atmospheric Sciences, 19, 329-341.
https://doi.org/10.1175/1520-0469(1962)019<0329:FAFCAA>2.0.CO;2
[6] Edwin, A.U. (2013) Fuzzy Stabilization of a Coupled Lorenz-Rössler Chaotic System. Part-I. Academic Research International, 4, 185-194.
[7] Rössler, O.E. (1976) An Equation for Continuous Chaos. Physics Letters A, 57, 397-398.
https://doi.org/10.1016/0375-9601(76)90101-8
[8] Alsafasfeh, Q.H. and Al-Arni, M.S. (2011) A New Chaotic Behavior from Lorenz and Rössler Systems. Circuits and Systems, 2, 101-105.
https://doi.org/10.4236/cs.2011.22015
[9] Argyris, J., Faust, G. and Haase, M. (1994) An Exploration of Chaos: An Introduction for Natural Scientists and Engineers. North-Holland Publishing Company, Amsterdam.
[10] Brogan, W.L. (1982) Modern Control Theory. Prentice-Hall, Inc., Upper Saddle River.
[11] Kirk, D.E. (1970) Optimal Control Theory: An Introduction. Prentice-Hall, Inc., Upper Saddle River.
[12] Sethi, S.P. (2000) Optimal Control Theory: Applications to Management Science and Economics. Kluwer Academic Publishers, Berlin.
[13] Weinstock, R. (1974) Calculus of Variations: With Application to Physics and Engineering. Dover Publications, Inc., Mineola.
[14] Barnett, S. and Cameron, R.G. (1985) Introduction to Mathematical Control Theory. 2nd Edition, Clarendon Press, Oxford.
[15] El-Gohary, A. and Alwan, S. (2011) Estimation of Parameters and Optimal Control of the Genital Herpes Epidemic. Canadian Journal on Science and Engineering Mathematics, 2, 31-41.
[16] Alwan, S. and El-Gohary, A. (2011) Chaos, Estimation and Optimal Control of Habitat Destruction Model with Uncertain Parameters. Computers & Mathematics with Applications, 62, 4089-4099.
https://doi.org/10.1016/j.camwa.2011.09.058
