Matrix Riccati Equations in Optimal Control

Abstract

In this paper, the matrix Riccati equation is considered. Despite the many fields to which it applies, there is no general method for solving the matrix Riccati equation. While the scalar Riccati equation has been studied thoroughly, the matrix Riccati equation, of which the scalar Riccati equation is a particular case, is much less investigated. This article proposes a change of variable that allows us to find an explicit solution of the matrix Riccati equation. We then apply this solution to optimal control.

Ndiaye, M. (2024) Matrix Riccati Equations in Optimal Control. Applied Mathematics, 15, 199-213. doi: 10.4236/am.2024.153011.

1. Introduction

The matrix Riccati equation, named after the mathematician Jacopo Francesco Riccati [1], can be written as

$$\frac{dY}{dt} = YA(t)Y + YB(t) + C(t)Y + D(t) \tag{1}$$

where $Y \in \mathbb{R}^{n \times m}$, $A(t) \in \mathbb{R}^{m \times n}$, $B(t) \in \mathbb{R}^{m \times m}$, $C(t) \in \mathbb{R}^{n \times n}$ and $D(t) \in \mathbb{R}^{n \times m}$.

The matrix Riccati equation arises in many branches of applied mathematics [2], including optimal control [3], the theory of stabilization, transport theory, quantum mechanics [4] [5], physics, filtering of control systems, differential games [6] [7], financial mathematics [8], random processes, diffusion problems, non-uniform transmission lines and stochastic control. The Riccati equation can serve as a unifying link between linear quantum mechanics and other fields of physics, such as thermodynamics and cosmology, and new uses keep appearing as new applications are discovered. Thus there is a need to extend previous findings to more complicated applications.

The scalar Riccati equation [9], defined as
$$\frac{dy}{dt} = a(t)y^2 + b(t)y + c(t),$$
is a particular case of the matrix Riccati equation.

It is well known that the change of variable $y = -\dfrac{u'}{a(t)u}$, where $u' = \dfrac{du}{dt}$, leads to the second-order linear differential equation [10]:
$$\frac{d^2u}{dt^2} - \left(b(t) + \frac{a'(t)}{a(t)}\right)\frac{du}{dt} + a(t)c(t)u = 0$$

If $b(t) + \dfrac{a'(t)}{a(t)}$ and $a(t)c(t)$ are constant functions, then the scalar Riccati equation can be solved analytically.
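As a quick illustration, the sketch below numerically checks this change of variable; the constant coefficients $a = 1$, $b = 0$, $c = -1$ are an assumption chosen so that the linear equation is solvable in closed form (it becomes $u'' - u = 0$, giving $y = -\tanh t$).

```python
# A minimal numerical check of the change of variable y = -u'/(a(t)u):
# integrate the scalar Riccati equation y' = a y^2 + b y + c directly and
# compare with y reconstructed from the linear equation
# u'' - (b + a'/a) u' + a c u = 0.  Constant a, b, c (so a' = 0) are an
# assumption for this illustration.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 1.0, 0.0, -1.0
t_span = (0.0, 2.0)
t_eval = np.linspace(*t_span, 50)

# direct integration of the Riccati equation, y(0) = 0
ric = solve_ivp(lambda t, y: a * y**2 + b * y + c, t_span, [0.0], t_eval=t_eval)

# linear equation as a first-order system (u, u'); taking u(0) = 1, u'(0) = 0
# reproduces y(0) = -u'(0)/(a u(0)) = 0
lin = solve_ivp(lambda t, z: [z[1], b * z[1] - a * c * z[0]],
                t_span, [1.0, 0.0], t_eval=t_eval)
y_from_u = -lin.y[1] / (a * lin.y[0])

print(np.max(np.abs(ric.y[0] - y_from_u)))   # small: the two solutions agree
```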

We can use the scalar Riccati equation as a blueprint for solving the matrix Riccati equation analytically. We adapt the method above to the matrix Riccati equation, which requires some changes to accommodate the properties of matrices.

Control systems and optimal control [3] [11] [12] form a fairly new field of mathematics, dating from the 1950s.

In a nutshell, optimal control extends the maximization/minimization process we learn in calculus from functions to functionals. Where calculus gives us a method of finding points $(x_1, x_2, \ldots, x_n)$ that maximize or minimize some function $f(x_1, x_2, \ldots, x_n)$, optimal control theory deals with ways of finding a control for a dynamical system over a period of time such that an objective function is optimized.

A rough but helpful metaphor: if we have an equation that approximates the shape of a mountain, calculus can tell us the location of its highest peak and deepest crag, whereas optimal control tells us which ridge to follow in order to reach the peak the fastest while expending the least energy.

Matrix Riccati equations can be used to solve some optimal control problems, for instance the Linear Quadratic Regulator (LQR) problem. Recall that general analytical solutions for the matrix Riccati equation and the algebraic matrix Riccati equation are not available; only special cases are treated, particularly for the scalar Riccati equation. This explains why it is very hard, in the LQR problem, to find explicit formulas for the controls; approximations of the controls are therefore more frequent, which leads to errors. The particular case where the solution of the matrix Riccati equation approaches a constant has been studied [13]. In this case, we deal with the steady-state version, referred to as the algebraic Riccati equation. Analytical solutions of the algebraic Riccati equation are also intractable. In this paper, a change of variable is proposed that turns the matrix Riccati equation into a second-order linear matrix equation. This change of variable was mainly inspired by work done on the scalar Riccati equation. It allows us to find exact values for the controls.

This article is divided into six sections.

In Section 2, a change of variable is proposed that turns the matrix Riccati equation into a second-order linear matrix differential equation.

In Section 3, we look at the field of optimal control, the branch of mathematics that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized.

In Section 4, we use the results of the two previous sections to treat the Linear-Quadratic Regulator, and in Section 5 we solve an optimal control example analytically.

Section 6 is dedicated to the conclusion.

2. A Solution to the Matrix Riccati Equation

Consider the matrix Riccati equation

$$\frac{dY}{dt} = YA(t)Y + YB(t) + C(t)Y + D(t)$$

where $Y \in \mathbb{R}^{n \times m}$, $A(t) \in \mathbb{R}^{m \times n}$, $B(t) \in \mathbb{R}^{m \times m}$, $C(t) \in \mathbb{R}^{n \times n}$ and $D(t) \in \mathbb{R}^{n \times m}$.

For the sake of brevity, we will assume that $m = n$, so that $Y, A, B, C, D \in \mathbb{R}^{n \times n}$.

Proposition 1. If $A$ is invertible, then
$$(A^{-1})' = -A^{-1}A'A^{-1} \tag{2}$$
where $'$ denotes the derivative with respect to $t$, that is, $A' = \dfrac{dA}{dt}$.

Proof. Since $A^{-1}A = I$, where $I$ is the identity matrix,
$$(A^{-1}A)' = (A^{-1})'A + A^{-1}A' = 0,$$
therefore $(A^{-1})' = -A^{-1}A'A^{-1}$.

□
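As a sanity check of Proposition 1 (an illustration, not part of the paper's argument), one can compare the closed form with a finite-difference derivative; the matrix $A(t)$ below is an arbitrary assumed example.

```python
# Numerical check of (A^{-1})' = -A^{-1} A' A^{-1} by central finite differences
import numpy as np

A  = lambda t: np.array([[np.exp(t), 1.0], [0.0, 1.0 + t]])   # invertible near t = 0.3
Ap = lambda t: np.array([[np.exp(t), 0.0], [0.0, 1.0]])       # its derivative A'(t)

t, h = 0.3, 1e-6
fd = (np.linalg.inv(A(t + h)) - np.linalg.inv(A(t - h))) / (2 * h)
closed = -np.linalg.inv(A(t)) @ Ap(t) @ np.linalg.inv(A(t))
print(np.max(np.abs(fd - closed)))   # ~1e-10: the identity holds
```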

Theorem 1. If $B = 0$ and $A$ is invertible, then the matrix Riccati Equation (1) can be turned into the second-order linear matrix differential equation
$$U'' - (ACA^{-1} + A'A^{-1})U' + ADU = 0 \tag{3}$$
using the change of variable
$$Y = -A^{-1}U'U^{-1} \tag{4}$$
where $U$ is invertible.

Proof. Differentiating (4),
$$Y' = -(A^{-1})'U'U^{-1} - A^{-1}U''U^{-1} - A^{-1}U'(U^{-1})'.$$
Using (2) on $U$,
$$Y' = -(A^{-1})'U'U^{-1} - A^{-1}U''U^{-1} + A^{-1}U'U^{-1}U'U^{-1}.$$
Since, substituting (4) into (1) with $B = 0$,
$$Y' = A^{-1}U'U^{-1}U'U^{-1} - CA^{-1}U'U^{-1} + D,$$
then
$$-(A^{-1})'U'U^{-1} - A^{-1}U''U^{-1} + A^{-1}U'U^{-1}U'U^{-1} = A^{-1}U'U^{-1}U'U^{-1} - CA^{-1}U'U^{-1} + D.$$
Using (2) on $A$ and simplifying (multiplying by $-A$ on the left and by $U$ on the right), we obtain
$$U'' - (ACA^{-1} + A'A^{-1})U' + ADU = 0.$$

□

Example 1. Choose $A = \begin{pmatrix} e^t & 0 \\ 0 & e^t \end{pmatrix}$, $B = 0$, $C = I_{2 \times 2}$, $D = \begin{pmatrix} e^{-t} & 0 \\ 0 & e^{-t} \end{pmatrix}$.

After plugging into Equation (3), this gives us:
$$U'' - 2U' + U = 0$$

whose general solution is $U = e^t\begin{pmatrix} c_1t + c_2 & c_3t + c_4 \\ c_5t + c_6 & c_7t + c_8 \end{pmatrix}$. Using the change of variable (4), we obtain
$$Y = -\frac{1}{e^t\left[(c_1t + c_2)(c_7t + c_8) - (c_3t + c_4)(c_5t + c_6)\right]}\begin{pmatrix} (c_1 + c_2 + c_1t)(c_7t + c_8) - (c_3 + c_4 + c_3t)(c_5t + c_6) & (c_3 + c_4 + c_3t)(c_1t + c_2) - (c_1 + c_2 + c_1t)(c_3t + c_4) \\ (c_5 + c_6 + c_5t)(c_7t + c_8) - (c_7 + c_8 + c_7t)(c_5t + c_6) & (c_7 + c_8 + c_7t)(c_1t + c_2) - (c_5 + c_6 + c_5t)(c_3t + c_4) \end{pmatrix}$$

where $c_1, c_2, c_3, c_4, c_5, c_6, c_7, c_8$ are constants given by the initial conditions.
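The sketch below checks Theorem 1 on this example numerically: it builds $Y = -A^{-1}U'U^{-1}$ from a particular solution $U(t) = e^t(K_1t + K_2)$ of $U'' - 2U' + U = 0$ and compares it with a direct integration of the Riccati equation. The constant matrices $K_1, K_2$ are an assumption, chosen so that $U$ stays invertible on $[0, 1]$.

```python
# Check of Theorem 1 on Example 1: A = e^t I, C = I, D = e^{-t} I, B = 0
import numpy as np
from scipy.integrate import solve_ivp

K1 = np.array([[1.0, 0.0], [1.0, 1.0]])
K2 = np.array([[2.0, 1.0], [0.0, 2.0]])   # det(K1*t + K2) > 0 on [0, 1]

def Y(t):
    # change of variable (4): Y = -A^{-1} U' U^{-1}, with U = e^t (K1 t + K2),
    # U' = e^t (K1 t + K2 + K1) and A^{-1} = e^{-t} I
    return -np.exp(-t) * (K1 * t + K2 + K1) @ np.linalg.inv(K1 * t + K2)

def riccati(t, y):
    # right-hand side of (1) with B = 0: Y' = Y A Y + C Y + D
    Ymat = y.reshape(2, 2)
    A, C, D = np.exp(t) * np.eye(2), np.eye(2), np.exp(-t) * np.eye(2)
    return (Ymat @ A @ Ymat + C @ Ymat + D).ravel()

sol = solve_ivp(riccati, (0.0, 1.0), Y(0.0).ravel(), rtol=1e-10, atol=1e-12)
print(np.max(np.abs(sol.y[:, -1].reshape(2, 2) - Y(1.0))))   # small residual
```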

The next theorem generalizes Theorem 1.

Theorem 2. If $BW = WB$ for all $W \in \mathbb{R}^{n \times n}$ and $Y = -A^{-1}U'U^{-1}$, then (1) can be turned into the equation:
$$U'' - (B + ACA^{-1} + A'A^{-1})U' + ADU = 0 \tag{5}$$

The proof of Theorem 2 is very similar to the proof of Theorem 1.

3. Optimal Control Theory

The field of optimal control, as the name suggests, is a branch of mathematics that deals with analyzing a system to find solutions that cause it to behave optimally for the cost we are willing to pay. If a system is controllable, then, given an initial state and some assumptions, we can reach a desired state of the system by finding the appropriate control with minimum cost. Control is applied through a feedback $u$ that depends on the state of the system. The basic optimal control problem can be stated as follows: we are given a system of differential equations along with an initial condition,

$$(S) \quad \frac{dx}{dt} = f(x(t), u(t)), \quad x(t_0) = x_0 \tag{6}$$

where $x(t) \in \mathbb{R}^n$ is the state of the system and $u(t) \in \mathbb{R}^m$ is the control.

The goal is to find a control $u(t)$ over $[t_0, t_f]$ which, for any $x_0$, minimizes the cost functional
$$J(x, u) = \int_{t_0}^{t_f} L(x(t), u(t))\,dt.$$

To better frame the optimal control problem, let’s consider a simple example.

Example 2. Consider the circuit below, which consists of a resistor, an inductor and a source (an RL circuit).

The circuit in Figure 1 is very common in electronic devices for filtering signals.

The behavior of the resistor is specified by a constant R called resistance.

The behavior of the inductor is specified by a constant L called inductance.

$i$ is the current in the circuit; $u$ is the control and represents the voltage across the source.

According to Kirchhoff's voltage law, the sum of the voltage drops across the circuit in Figure 1 equals the voltage across the source; therefore,
$$V_R + V_L = u$$
$$Ri + L\frac{di}{dt} = u$$

Let $i_0$ be the initial value of the current, that is, $i(0) = i_0$. We deal with the system:
$$Ri + L\frac{di}{dt} = u, \quad i(0) = i_0$$

Suppose that we want to switch the current $i$ from $i_0$ to another value $i_1$ at $t = T$, that is, $i(T) = i_1$, with a minimum cost.

This goal is expressed by the cost functional
$$J(i, u) = \frac{1}{2}\int_0^T c\,(i(t) - i_1)^2\,dt + \frac{1}{2}\int_0^T c_u\,(u(t))^2\,dt$$

where $c$ and $c_u$ are positive constants and $T > 0$ is the fixed final time.

The first integral is the state cost and the second integral is the control cost.

We can also assume that $u$ belongs to a set of admissible controls
$$U_{ad} = \{u \in L^2([0, T]) \mid k_1 \le u \le k_2, \ t \in [0, T]\}$$
where $k_1$ and $k_2$ are constant real numbers.

Figure 1. RL circuit.

Question: What values of u ( t ) allow this switch with minimum cost?
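Before developing the general machinery, a crude numerical answer can be obtained by direct transcription: discretize $u$ on a grid, integrate the circuit equation, and minimize the discretized cost with a generic optimizer. All numerical values below ($R = L = 1$, $i_0 = 0$, $i_1 = 1$, $T = 1$, $c = c_u = 1$, unconstrained $u$) are assumptions for illustration; as in the functional above, the switch to $i_1$ is encouraged by the state cost rather than enforced as a hard constraint.

```python
# Direct-transcription sketch of the RL-circuit problem (assumed data)
import numpy as np
from scipy.optimize import minimize

R, L, i0, i1, T, c, cu, N = 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 50
dt = T / N

def cost(u):
    i, J = i0, 0.0
    for k in range(N):
        J += 0.5 * dt * (c * (i - i1)**2 + cu * u[k]**2)   # discretized J(i, u)
        i += dt * (u[k] - R * i) / L                        # forward Euler step
    return J

res = minimize(cost, np.zeros(N), method="L-BFGS-B")
print(res.fun)   # minimal discretized cost; res.x is the discretized control
```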

General Statement of the Optimal Control Problem: The Pontryagin Principle

The basic optimal control problem $(P)$ can be stated as follows. Given the system of differential equations along with an initial condition
$$\frac{dx}{dt} = f(x(t), u(t)), \quad x(t_0) = x_0 \tag{7}$$

where $x(t) \in \mathbb{R}^n$ is the state of the system and $u(t) \in \mathbb{R}^m$ is the input of the system,

the goal is to find a control $u(t)$ over $[t_0, t_f]$ that minimizes the cost functional
$$J(x, u) = \int_{t_0}^{t_f} L(x(t), u(t))\,dt.$$

To solve an optimal control problem, we can use the Pontryagin principle, which gives necessary conditions that the optimal control $u^*(t)$ and the optimal state $x^*(t)$ must satisfy.

Theorem 3 (Pontryagin's maximum principle). If $x^*(t)$ and $u^*(t)$ are optimal for the problem $(P)$, then there exist a function $\lambda(t) = (\lambda_1(t), \lambda_2(t), \ldots, \lambda_n(t))^T$ and a function $H$ defined as
$$H(x(t), u(t), \lambda) = \lambda^T f(x, u) - L(x, u)$$
that satisfy the three properties:

a) $H(x^*(t), u^*(t), \lambda(t)) \ge H(x^*(t), u(t), \lambda(t))$ for all controls $u$ at each time $t$;

b) $\dfrac{d\lambda}{dt} = -\nabla_x H(x^*, u^*, \lambda)$;

c) $\lambda(t_f) = 0$.

$\lambda^T$ represents the transpose of $\lambda$, and $H$ is called the Hamiltonian.

Pontryagin's maximum principle yields controls that are candidates for the optimal control; those candidates then need to be tested. The following theorem gives a sufficient condition for a candidate to be optimal.

Theorem 4. Let $U(x_0)$ be the set of admissible controls $u$ and let $X$ be an open subset of $\mathbb{R}^n$. Suppose there exists a function $J_1 : X \to \mathbb{R}$ of class $C^1$ such that the three statements below are true.

i) If $u \in U$ generates the solution $x(t)$ of (7) and $x(t) \in X$ for all $t \in [t_0, t_1^*)$, then $\lim_{t \to t_1} J_1(x(t)) - \lim_{t \to t_1^*} J_1(x^*(t)) = 0$, for some $t_1^* \le t_1$.

ii) $L(x^*(t), u^*(t)) + \mathrm{grad}^T J_1(x^*(t))\, f(x^*(t), u^*(t)) = 0$ for all $t \in [t_0, t_1^*)$, for some $t_1^* \le t_1$.

iii) $L(x, u) + \mathrm{grad}^T J_1(x)\, f(x, u) \ge 0$ for all $x \in X$ and $u \in U$.

Then the control $u^*(t)$ generating the solution $x^*(t)$ for all $t \in [t_0, t_1^*]$, with $x^*(t_0) = x_0$, is optimal with respect to $X$.

The proofs of Theorems 3 and 4 can be found in [3].

Remark 1. The proof of Theorem 4 suggests that the test function $J_1(x(t))$ can be chosen so that $J_1(x_0) = \int_{t_0}^{t_f} L(x(t), u(t))\,dt$.

Remark 2. In case we deal with a nonautonomous system, that is, a system of the form
$$\frac{dx}{dt} = f(x, t, u), \quad x(t_0) = x_0,$$
we can always turn such a system into an autonomous one.

We can define
$$\hat{x} = (x_1(t), x_2(t), \ldots, x_n(t), x_{n+1}(t))^T$$
where $x_{n+1}(t) = t$; then we deal with the following autonomous system:
$$\frac{d\hat{x}}{dt} = \begin{pmatrix} f(\hat{x}, u) \\ 1 \end{pmatrix} = \hat{f}(\hat{x}, u), \quad \hat{x}(t_0) = \hat{x}_0 = (x_0, t_0)$$

Also, if the cost integrand depends on $t$, that is, $J(x, u) = \int_{t_0}^{t_f} L(x, t, u)\,dt$, then we can rewrite the cost functional as
$$\hat{J}(\hat{x}, u) = \int_{t_0}^{t_f} \hat{L}(\hat{x}, u)\,dt$$

Now we can apply Pontryagin's maximum principle to the autonomous system
$$\frac{d\hat{x}}{dt} = \hat{f}(\hat{x}, u), \quad \hat{x}(t_0) = \hat{x}_0$$
with cost functional given by
$$\hat{J}(\hat{x}, u) = \int_{t_0}^{t_f} \hat{L}(\hat{x}, u)\,dt,$$
and then $\hat{\lambda} = (\lambda_1(t), \lambda_2(t), \ldots, \lambda_n(t), \lambda_{n+1}(t))^T \in \mathbb{R}^{n+1}$.

4. The Linear-Quadratic Regulator (LQR)-Riccati Equation

We suppose that
$$f(x(t), u(t)) = A(t)x(t) + B(t)u(t)$$
then
$$\hat{f}(\hat{x}(t), u(t)) = \begin{pmatrix} A(t)x(t) + B(t)u(t) \\ 1 \end{pmatrix}, \qquad \hat{L}(\hat{x}(t), u(t)) = \frac{1}{2}\left(x^T(t)Q(t)x(t) + u^T(t)R(t)u(t)\right)$$

so that
$$\frac{d\hat{x}}{dt} = \begin{pmatrix} A(t)x(t) + B(t)u(t) \\ 1 \end{pmatrix} \tag{8}$$
with initial state $x(t_0) = x_0$; the interval $[t_0, t_f]$ is specified, and
$$x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T, \qquad u(t) = (u_1(t), u_2(t), \ldots, u_m(t))^T.$$

The cost to be minimized is:
$$\hat{J}(\hat{x}, u) = \frac{1}{2}\int_{t_0}^{t_f}\left(x^T(t)Q(t)x(t) + u^T(t)R(t)u(t)\right)dt$$
where $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $Q \in \mathbb{R}^{n \times n}$, $R \in \mathbb{R}^{m \times m}$.

The matrix $Q$ is symmetric, that is, $Q^T = Q$. The matrix $R$ is symmetric and positive definite, that is, $x^TRx > 0$ if $x \ne 0$. The functions $A(t)$, $B(t)$, $Q(t)$ and $R(t)$ are of class $C^1$.

We can use Pontryagin's maximum principle to find $u(t)$.

The Hamiltonian $H$ of the problem is given by:
$$H(\hat{x}, u, \hat{\lambda}) = \hat{\lambda}^T\begin{pmatrix} A(t)x(t) + B(t)u(t) \\ 1 \end{pmatrix} - \frac{1}{2}\left(x^TQx + u^TRu\right) \tag{9}$$
$$H(\hat{x}, u, \hat{\lambda}) = \lambda^TAx + \lambda^TBu + \lambda_{n+1} - \frac{1}{2}x^TQx - \frac{1}{2}u^TRu \tag{10}$$

The adjoint equations are:
$$\frac{d\hat{\lambda}}{dt} = -\nabla_{\hat{x}}H$$
which implies
$$\frac{d\lambda}{dt} = -\nabla_xH$$

Since $Q$ and $R$ are symmetric,
$$\nabla_x(x^TQx) = 2Qx, \qquad \nabla_x(x^TRx) = 2Rx,$$
therefore
$$\frac{d\lambda}{dt} = -A^T(t)\lambda(t) + Q(t)x(t)$$

Moreover, since there is no constraint on $u$,
$$\nabla_uH = 0 \tag{11}$$
so from (10)
$$B^T\lambda - Ru = 0 \tag{12}$$
therefore
$$u = R^{-1}(t)B^T(t)\lambda(t) \tag{13}$$

The goal is to express $\lambda$ in terms of $x(t)$. Replacing $u(t)$ in the system of Equation (8), we obtain
$$\frac{dx}{dt} = A(t)x(t) + B(t)R^{-1}(t)B^T(t)\lambda(t) \tag{14}$$

therefore we get the following system with $2n$ variables:
$$\begin{cases} \dfrac{dx}{dt} = A(t)x(t) + B(t)R^{-1}(t)B^T(t)\lambda(t) \\[2mm] \dfrac{d\lambda}{dt} = Q(t)x(t) - A^T(t)\lambda(t) \end{cases} \tag{15}$$
which has a unique solution $(x(t), \lambda(t))$ given an initial condition.

Using the matrix representation,
$$\frac{d}{dt}\begin{pmatrix} x(t) \\ \lambda(t) \end{pmatrix} = H(t)\begin{pmatrix} x(t) \\ \lambda(t) \end{pmatrix} \tag{16}$$
where
$$H(t) = \begin{pmatrix} A(t) & B(t)R^{-1}B^T(t) \\ Q(t) & -A^T(t) \end{pmatrix} \tag{17}$$

therefore
$$\begin{pmatrix} x(t) \\ \lambda(t) \end{pmatrix} = M(t, t_0)\begin{pmatrix} x(t_0) \\ \lambda(t_0) \end{pmatrix}$$
where
$$M(t, t_0) = e^{\int_{t_0}^{t}H(\tau)\,d\tau}$$

In particular,
$$\begin{pmatrix} x(t_f) \\ \lambda(t_f) \end{pmatrix} = M(t_f, t)\begin{pmatrix} x(t) \\ \lambda(t) \end{pmatrix}$$
for all $t \in [t_0, t_f]$.

Dividing $M(t_f, t)$ into blocks of $n \times n$ matrices,
$$\begin{pmatrix} x(t_f) \\ \lambda(t_f) \end{pmatrix} = \begin{pmatrix} M_{11}(t_f, t) & M_{12}(t_f, t) \\ M_{21}(t_f, t) & M_{22}(t_f, t) \end{pmatrix}\begin{pmatrix} x(t) \\ \lambda(t) \end{pmatrix} \tag{18}$$
where the $M_{ij}$ are $n \times n$ matrices, $1 \le i, j \le 2$.

Therefore
$$x(t_f) = M_{11}(t_f, t)x(t) + M_{12}(t_f, t)\lambda(t)$$
$$\lambda(t_f) = 0 = M_{21}(t_f, t)x(t) + M_{22}(t_f, t)\lambda(t).$$

Since $\lambda(t)$ is unique, $M_{22}(t_f, t)$ must be invertible; therefore
$$\lambda(t) = -M_{22}^{-1}(t_f, t)M_{21}(t_f, t)x(t)$$

Let $P(t) = M_{22}^{-1}(t_f, t)M_{21}(t_f, t)$. So
$$\lambda(t) = -P(t)x(t) \tag{19}$$
From (13), $u(t) = -R^{-1}(t)B^T(t)P(t)x(t)$.
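When $A$, $B$, $Q$ and $R$ are constant, $M(t_f, t)$ reduces to the matrix exponential $e^{H(t_f - t)}$, and $P(t)$ can be read off its blocks directly. The sketch below does this for assumed double-integrator data (an illustration, not from the paper).

```python
# P(t) from the blocks of M(t_f, t), assuming constant A, B, Q, R so that
# M(t_f, t) = expm(H (t_f - t)) with H as in (17)
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # assumed double-integrator data
B = np.array([[0.0], [1.0]])
Q, Rm = np.eye(2), np.eye(1)
H = np.block([[A, B @ np.linalg.inv(Rm) @ B.T],
              [Q, -A.T]])

def P(t, tf=1.0):
    M = expm(H * (tf - t))
    M21, M22 = M[2:, :2], M[2:, 2:]
    return np.linalg.inv(M22) @ M21      # P(t) = M22^{-1} M21, so lambda = -P x

print(P(0.0))    # P at t = 0; P(1.0) returns the zero matrix, matching P(t_f) = 0
```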

Finding P(t)

Taking the derivative on both sides of Equation (19) and using (14) and (19), we obtain
$$\frac{d\lambda(t)}{dt} = -\frac{dP(t)}{dt}x(t) - P(t)A(t)x(t) + P(t)B(t)R^{-1}(t)B^T(t)P(t)x(t). \tag{20}$$

Using (15) and (20), we get the following equation:
$$\left(\frac{dP(t)}{dt} + P(t)A(t) + A^T(t)P(t) - P(t)B(t)R^{-1}(t)B^T(t)P(t) + Q(t)\right)x(t) = 0. \tag{21}$$

This equation holds for all $t_0 \le t \le t_f$.

So $P(t)$ is a solution of the matrix Riccati equation:
$$\frac{dP(t)}{dt} = P(t)B(t)R^{-1}(t)B^T(t)P(t) - A^T(t)P(t) - P(t)A(t) - Q(t) \tag{22}$$
satisfying the terminal condition $P(t_f) = 0$.
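In practice, (22) can be integrated backward in time from the terminal condition. The sketch below does so for the same assumed double-integrator data as above and forms the feedback of (13) and (19); its output can be cross-checked against the block formula $P(t) = M_{22}^{-1}M_{21}$.

```python
# Backward integration of the Riccati equation (22) from P(t_f) = 0
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # assumed data, as in the sketch above
B = np.array([[0.0], [1.0]])
Q, Rm = np.eye(2), np.eye(1)
t0, tf = 0.0, 1.0

def riccati(t, p):
    P = p.reshape(2, 2)
    dP = P @ B @ np.linalg.inv(Rm) @ B.T @ P - A.T @ P - P @ A - Q
    return dP.ravel()

# solve_ivp accepts a reversed time span, so we integrate from tf down to t0
sol = solve_ivp(riccati, (tf, t0), np.zeros(4), dense_output=True,
                rtol=1e-10, atol=1e-12)

def u_opt(t, x):
    P = sol.sol(t).reshape(2, 2)
    return -np.linalg.inv(Rm) @ B.T @ P @ x    # feedback u = -R^{-1} B^T P x

print(u_opt(0.0, np.array([1.0, 0.0])))
```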

Theorem 5. The function $u^*(t) = -R^{-1}(t)B^T(t)P(t)x^*(t)$ is the optimal control at $x_0$, and the minimum value of $J$ is given by:
$$J_{min} = \frac{1}{2}(x^*)^T(t_0)P(t_0)x^*(t_0)$$
where $x^*$ is the corresponding optimal solution of (7).

Proof. First notice that $P(t)$ is symmetric, that is, $P(t)^T = P(t)$. Taking the transpose on both sides of Equation (22), we obtain:
$$\frac{dP^T(t)}{dt} = P^T(t)B(t)(R^{-1})^T(t)B^T(t)P^T(t) - P^T(t)A(t) - A^T(t)P^T(t) - Q^T(t).$$

Since $R$ and $Q$ are symmetric,
$$\frac{dP^T(t)}{dt} = P^T(t)B(t)R^{-1}(t)B^T(t)P^T(t) - P^T(t)A(t) - A^T(t)P^T(t) - Q(t),$$
which shows that $P^T(t)$ is also a solution of (22) satisfying $P^T(t_f) = 0$.

Since the solution of (22) with the condition $P(t_f) = 0$ is unique, it follows that $P^T(t) = P(t)$.

Now, to show that $J_{min} = \frac{1}{2}x^T(t_0)P(t_0)x(t_0)$, we first show that
$$\frac{d}{dt}(x^TPx) = -\left(x^TQ(t)x + u^TR(t)u\right).$$

We have
$$\frac{d}{dt}(x^TPx) = \frac{dx^T}{dt}Px + x^T\frac{dP}{dt}x + x^TP\frac{dx}{dt}.$$
Since $P$ is symmetric, we can easily verify that $\left(\frac{dx}{dt}\right)^TPx = x^TP\frac{dx}{dt}$. Therefore
$$\frac{d}{dt}(x^TPx) = 2x^TP\frac{dx}{dt} + x^T\frac{dP}{dt}x$$

Using (15), (19) and (22),
$$\frac{d}{dt}(x^TPx) = 2x^TP\left(Ax - BR^{-1}B^TPx\right) + x^T\left(PBR^{-1}B^TP - A^TP - PA - Q\right)x$$

Since $A^TP = (PA)^T$, we have $x^T(A^TP + PA)x = 2x^TPAx$. After cancellation,
$$\frac{d}{dt}(x^TPx) = -x^TPBR^{-1}B^TPx - x^TQx$$

Since $u = -R^{-1}(t)B^T(t)P(t)x$,
$$\frac{d}{dt}(x^TPx) = -\left(u^TRu + x^TQx\right).$$

Integrating from $t_0$ to $t_f$ on both sides, using $P(t_f) = 0$, and multiplying by $\frac{1}{2}$, we obtain
$$\frac{1}{2}x^T(t_0)P(t_0)x(t_0) = \frac{1}{2}\int_{t_0}^{t_f}\left(u^TR(t)u + x^TQ(t)x\right)dt \tag{23}$$
which shows that $J_{min} = \frac{1}{2}(x^*)^T(t_0)P(t_0)x^*(t_0)$.

To show that $u^* = -R^{-1}(t)B^T(t)P(t)x^*$ is the optimal control, we verify the three conditions of Theorem 4.

From (23), we can choose a test function defined on $\mathbb{R}^{n+1}$ as
$$J_1(x, t) = \frac{1}{2}x^TP(t)x, \quad (x, t) \in \{(x, t) \mid t < t_f\}.$$

We can show that the test function satisfies the three conditions in Theorem 4.

Since $P(t_f) = 0$, we have $\lim_{t \to t_f}J_1(x, t) = \lim_{t \to t_f}J_1(x^*, t) = 0$, so condition (i) is satisfied.

For (ii), let
$$g(u) = \hat{L}(x, u) + \mathrm{grad}^TJ_1(x, t)\,\hat{f}(x, u)$$
so
$$g(u) = \frac{1}{2}\left(x^TQ(t)x + u^TRu\right) + \frac{1}{2}\,\mathrm{grad}^T(x^TPx)\begin{pmatrix} A(t)x + B(t)u \\ 1 \end{pmatrix} = \frac{1}{2}\left(x^TQ(t)x + u^TRu\right) + \frac{1}{2}\left(2(P(t)x)^T \;\; x^T\frac{dP(t)}{dt}x\right)\begin{pmatrix} A(t)x + B(t)u \\ 1 \end{pmatrix} = \frac{1}{2}x^TQ(t)x + \frac{1}{2}u^TRu + x^TP(t)A(t)x + x^TP(t)B(t)u + \frac{1}{2}x^T\frac{dP(t)}{dt}x$$

Using (22),
$$g(u) = \frac{1}{2}x^TQ(t)x + \frac{1}{2}u^TRu + x^TP(t)A(t)x + x^TP(t)B(t)u + \frac{1}{2}\left(x^TP(t)B(t)R^{-1}(t)B^T(t)P(t)x - x^TA^T(t)Px - x^TP(t)A(t)x - x^TQ(t)x\right)$$

After simplification,
$$g(u) = \frac{1}{2}u^TRu + x^TP(t)B(t)u + \frac{1}{2}x^TP(t)B(t)R^{-1}(t)B^T(t)P(t)x$$

The gradient of $g(u)$ is $\nabla_ug(u) = Ru + B^T(t)P(t)x$; setting $\nabla_ug(u) = 0$ gives $u = -R^{-1}B^T(t)P(t)x$, so $u = -R^{-1}B^T(t)P(t)x$ is a critical point of $g(u)$. The Hessian $\nabla_u^2g = R$ is positive definite; therefore $g(u)$ has a global minimum at $u^* = -R^{-1}B^T(t)P(t)x^*$. It can easily be shown that $g(u^*) = 0$. This shows (ii).

Since $g(u)$ has a global minimum at $u^*$, we have $g(u) \ge g(u^*) = 0$ for all $u$ and $x$; this shows (iii).

□
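The identity (23) behind Theorem 5 can also be checked numerically: simulate the closed-loop system with $u = -R^{-1}B^TPx$, accumulate the cost, and compare with $\frac{1}{2}x_0^TP(t_0)x_0$. The data below are the same assumed double-integrator example used earlier.

```python
# Numerical check of J_min = (1/2) x0^T P(t0) x0 (assumed data)
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, Rm = np.eye(2), np.eye(1)
t0, tf, x0 = 0.0, 1.0, np.array([1.0, -0.5])

def riccati(t, p):
    P = p.reshape(2, 2)
    return (P @ B @ np.linalg.inv(Rm) @ B.T @ P - A.T @ P - P @ A - Q).ravel()

Psol = solve_ivp(riccati, (tf, t0), np.zeros(4), dense_output=True,
                 rtol=1e-10, atol=1e-12)
P = lambda t: Psol.sol(t).reshape(2, 2)

def closed_loop(t, z):
    x = z[:2]
    u = -np.linalg.inv(Rm) @ B.T @ P(t) @ x        # optimal feedback
    running = 0.5 * (x @ Q @ x + u @ Rm @ u)       # integrand of J
    return np.concatenate([A @ x + B @ u, [running]])

z = solve_ivp(closed_loop, (t0, tf), np.concatenate([x0, [0.0]]),
              rtol=1e-10, atol=1e-12)
print(z.y[2, -1], 0.5 * x0 @ P(t0) @ x0)   # the two values agree
```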

5. Example

Consider the optimal control problem:
$$\frac{dx}{dt} = Ax + Bu, \quad t \in [0, 1]$$
$$J(x, u) = \frac{1}{2}\int_0^1\left(x^TQx + u^TRu\right)dt$$

where
$$A(t) = 0, \quad B(t) = \begin{pmatrix} e^{t/2} & e^{t/2} \\ e^{t/2} & -e^{t/2} \end{pmatrix}, \quad R = I, \quad Q = \begin{pmatrix} \frac{1}{2}e^{-t} & 0 \\ 0 & \frac{1}{2}e^{-t} \end{pmatrix}$$

Find u that minimizes J ( x , u ) .

$P(t)$ is a solution of the Riccati equation:
$$\frac{dP(t)}{dt} = PBB^TP - Q$$

Let’s make the change P = ( B B T ) 1 W W 1 . So

P ( t ) = 1 2 e t W W 1 (24)

then
$$W'' - W' - W = 0$$

The solution of the equation is given by:
$$W = \begin{pmatrix} k_1e^{r_1t} + k_2e^{r_2t} & k_3e^{r_1t} + k_4e^{r_2t} \\ k_5e^{r_1t} + k_6e^{r_2t} & k_7e^{r_1t} + k_8e^{r_2t} \end{pmatrix} \tag{25}$$
where $r_1 = \dfrac{1 + \sqrt{5}}{2}$ and $r_2 = \dfrac{1 - \sqrt{5}}{2}$.

$$W^{-1} = \frac{1}{\Delta}\begin{pmatrix} k_7e^{r_1t} + k_8e^{r_2t} & -(k_3e^{r_1t} + k_4e^{r_2t}) \\ -(k_5e^{r_1t} + k_6e^{r_2t}) & k_1e^{r_1t} + k_2e^{r_2t} \end{pmatrix}$$
where $\Delta = (k_1k_7 - k_3k_5)(e^{r_1t} + e^{r_2t})^2$.

$$W'W^{-1} = \frac{1}{\Delta}\begin{pmatrix} (k_1k_7 - k_3k_5)r_1e^{2r_1t} + (k_2k_8 - k_4k_6)r_2e^{2r_2t} + \left[(k_1k_8 - k_3k_6)r_1 + (k_2k_7 - k_4k_5)r_2\right]e^t & -(k_1k_4 - k_2k_3)\sqrt{5}\,e^t \\ -(k_6k_7 - k_5k_8)\sqrt{5}\,e^t & (k_1k_7 - k_3k_5)r_1e^{2r_1t} + (k_2k_8 - k_4k_6)r_2e^{2r_2t} + \left[(k_2k_7 - k_4k_5)r_1 + (k_1k_8 - k_3k_6)r_2\right]e^t \end{pmatrix}$$

$$P(t) = -\frac{1}{2\Delta}\begin{pmatrix} (k_1k_7 - k_3k_5)r_1e^{\sqrt{5}t} + (k_2k_8 - k_4k_6)r_2e^{-\sqrt{5}t} + (k_1k_8 - k_3k_6)r_1 + (k_2k_7 - k_4k_5)r_2 & -(k_1k_4 - k_2k_3)\sqrt{5} \\ -(k_6k_7 - k_5k_8)\sqrt{5} & (k_1k_7 - k_3k_5)r_1e^{\sqrt{5}t} + (k_2k_8 - k_4k_6)r_2e^{-\sqrt{5}t} + (k_2k_7 - k_4k_5)r_1 + (k_1k_8 - k_3k_6)r_2 \end{pmatrix}$$

Since $P(t_f) = 0$, that is, $P(1) = 0$, we conclude that
$$k_1k_4 - k_2k_3 = 0 \tag{26}$$
$$k_6k_7 - k_5k_8 = 0 \tag{27}$$

Combining the entries in the diagonal of $P(t)$, we obtain
$$k_2k_7 - k_4k_5 - k_1k_8 + k_3k_6 = 0 \tag{28}$$
and
$$(k_1k_7 - k_3k_5)r_1e^{\sqrt{5}} + (k_2k_8 - k_4k_6)r_2e^{-\sqrt{5}} + k_1k_8 - k_3k_6 = 0 \tag{29}$$

Since we have four equations with eight variables, we can choose $k_1, k_3, k_5, k_7$ as the free variables and solve for $k_2, k_4, k_6, k_8$.

From (26) and (27),
$$k_4 = \frac{k_2k_3}{k_1}, \qquad k_6 = \frac{k_5k_8}{k_7}$$

From (28), using $k_1k_7 - k_3k_5 \ne 0$ (since $\Delta \ne 0$),
$$k_2 = \frac{k_1k_8}{k_7}$$

From (29),
$$k_8^2\,r_2e^{-\sqrt{5}} + k_7k_8 + k_7^2\,r_1e^{\sqrt{5}} = 0$$
Therefore $k_8 = mk_7$, where $m = -e^{\sqrt{5}}$ or $m = \dfrac{3 + \sqrt{5}}{2}e^{\sqrt{5}}$.

$$P(t) = -\frac{k_1k_7 - k_3k_5}{2\Delta}\begin{pmatrix} r_1e^{\sqrt{5}t} + m^2r_2e^{-\sqrt{5}t} + m(r_1 + r_2) & 0 \\ 0 & r_1e^{\sqrt{5}t} + m^2r_2e^{-\sqrt{5}t} + m(r_1 + r_2) \end{pmatrix}$$

The final expression is given by:
$$P(t) = -\frac{r_1e^{\sqrt{5}t} + m^2r_2e^{-\sqrt{5}t} + m}{2\left(e^{r_1t} + e^{r_2t}\right)^2}\,I$$
where $I$ is the $2 \times 2$ identity matrix (here we used $r_1 + r_2 = 1$).

This leads to two controls that satisfy the problem $(P)$:
$$u(t) = \frac{r_1e^{\sqrt{5}t} + m^2r_2e^{-\sqrt{5}t} + m}{2\left(e^{r_1t} + e^{r_2t}\right)^2}\,e^{t/2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}x(t)$$
where $m = -e^{\sqrt{5}}$ or $m = \dfrac{3 + \sqrt{5}}{2}e^{\sqrt{5}}$.
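Since several signs in this section had to be reconstructed, an independent reference value is useful: the sketch below integrates $\frac{dP}{dt} = PBB^TP - Q$ backward from $P(1) = 0$ with the data of this example (as reconstructed above, with $R = I$) and prints $P(0)$, which can be compared with the closed-form expression evaluated at $t = 0$.

```python
# Backward integration of dP/dt = P B B^T P - Q for the data of this example
import numpy as np
from scipy.integrate import solve_ivp

def riccati(t, p):
    P = p.reshape(2, 2)
    B = np.exp(t / 2) * np.array([[1.0, 1.0], [1.0, -1.0]])   # B(t) as above
    Q = 0.5 * np.exp(-t) * np.eye(2)
    return (P @ B @ B.T @ P - Q).ravel()

sol = solve_ivp(riccati, (1.0, 0.0), np.zeros(4), rtol=1e-10, atol=1e-12)
print(sol.y[:, -1].reshape(2, 2))   # P(0), for comparison with the formula
```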

6. Conclusions

In this paper, we applied the results we obtained for the matrix Riccati equation to optimal control. We provided an explicit control for a specific example in optimal control, the Linear-Quadratic Regulator. Note that more complicated examples could be considered.

One extension of the results in this paper would be to apply them in other branches of mathematics, for instance to non-uniform transmission lines, stochastic control and mathematical finance.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] https://mathshistory.st-andrews.ac.uk/Biographies/Riccati
[2] Reid, W.T. (1980) Riccati Differential Equations. Academic Press, New York.
[3] Leitmann, G. (1981) The Calculus of Variations and Optimal Control. Plenum Press, New York & London.
https://doi.org/10.1007/978-1-4899-0333-4
[4] Fraga, S., García de la Vega, J.M. and Fraga, E.S. (1999) The Schrödinger and Riccati Equations. In: Lecture Notes in Chemistry, Vol. 70, Springer-Verlag, Berlin.
[5] Schwabl, F. (1992) Quantum Mechanics. Springer, Berlin.
[6] Abou-Kandil, H., Freiling, G., Ionescu, V. and Jank, G. (2003) Matrix Riccati Equations in Control and Systems Theory. In: Systems & Control: Foundations & Applications, Birkhäuser, Basel, 299-410.
https://doi.org/10.1007/978-3-0348-8081-7
[7] Kalman, R.E. and Bucy, R.S. (1961) New Results in Linear Filtering and Prediction Theory. Journal of Basic Engineering, 83, 95-108.
https://doi.org/10.1115/1.3658902
[8] Boyle, P.P., Tian, W. and Guan, F. (2002) The Riccati Equation in Mathematical Finance. Journal of Symbolic Computation, 33, 343-355.
https://doi.org/10.1006/jsco.2001.0508
[9] Ndiaye, M. (2022) The Riccati Equation, Differential Transform, Rational Solutions and Applications. Applied Mathematics, 13, 774-792.
https://doi.org/10.4236/am.2022.139049
[10] Ince, E.L. (1956) Ordinary Differential Equations. Dover Publications, New York.
[11] Macki, J. and Strauss, A. (1981) Introduction to Optimal Control Theory. Springer Verlag, New York.
[12] Kirk, D.E. (2004) Optimal Control Theory: An Introduction. Dover Publications, New York.
[13] Kucera, V. (1973) A Review of the Matrix Riccati Equation. Kybernetika, 9, 42-61.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.