Matrix Riccati Equations in Optimal Control

Abstract

In this paper, the matrix Riccati equation is considered. Despite the many fields to which it applies, there is no general method for solving the matrix Riccati equation. While the scalar Riccati equation has been studied thoroughly, the matrix Riccati equation, of which the scalar Riccati equation is a particular case, is much less investigated. This article proposes a change of variable that allows us to find explicit solutions of the matrix Riccati equation. We then apply this solution to optimal control.

Ndiaye, M. (2024) Matrix Riccati Equations in Optimal Control. Applied Mathematics, 15, 199-213. doi: 10.4236/am.2024.153011.

1. Introduction

The matrix Riccati equation, named after the mathematician Jacopo Francesco Riccati [1], can be written as

$\frac{\text{d}Y}{\text{d}t}=YA\left(t\right)Y+YB\left(t\right)+C\left(t\right)Y+D\left(t\right)$ (1)

where $Y\in {ℝ}^{n×m}$ , $A\left(t\right)\in {ℝ}^{m×n}$ , $B\left(t\right)\in {ℝ}^{m×m}$ , $C\left(t\right)\in {ℝ}^{n×m}$ and $D\left(t\right)\in {ℝ}^{n×m}$ .

The matrix Riccati equation arises in many branches of applied mathematics [2] including optimal control [3], stabilization theory, transport theory, quantum mechanics [4] [5], physics, filtering of control systems, differential games [6] [7], financial mathematics [8], random processes, diffusion problems, non-uniform transmission lines and stochastic control. The Riccati equation can serve as a unifying link between linear quantum mechanics and other fields of physics, such as thermodynamics and cosmology, and new uses keep appearing as new applications are discovered. There is thus a need to extend previous findings to more complicated applications.

The scalar Riccati equation [9] defined as

$\frac{\text{d}y}{\text{d}t}=a\left(t\right){y}^{2}+b\left(t\right)y+c\left( t \right)$

is a particular case of the Matrix Riccati equation.

It is well-known that the change of variable $y=-\frac{\frac{\text{d}u}{\text{d}t}}{a\left(t\right)u}$ leads to the second order linear differential equation [10]:

$\frac{{\text{d}}^{2}u}{\text{d}{t}^{2}}-\left(b\left(t\right)+\frac{\frac{\text{d}a\left(t\right)}{\text{d}t}}{a\left(t\right)}\right)\frac{\text{d}u}{\text{d}t}+a\left(t\right)c\left(t\right)u=0$

If $b\left(t\right)+\frac{\frac{\text{d}a\left(t\right)}{\text{d}t}}{a\left(t\right)}$ and $a\left(t\right)c\left(t\right)$ are constant functions, then the scalar Riccati equation can be solved analytically.
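As a numerical sanity check of this substitution (a sketch; the constant coefficients $a=1$, $b=0$, $c=-1$ are an illustrative choice, not taken from the text above): the linear equation becomes $u''-u=0$, and taking $u=\cosh t$ gives $y=-u'/u=-\tanh t$ with $y\left(0\right)=0$. The code integrates the Riccati equation directly and compares against this closed form.

```python
import math

# Scalar Riccati y' = a y^2 + b y + c with the illustrative constants
# a = 1, b = 0, c = -1, so y' = y^2 - 1. The substitution y = -u'/(a u)
# gives u'' - u = 0; with y(0) = 0 one may take u = cosh(t), so y = -tanh(t).

def riccati_rhs(t, y):
    return y * y - 1.0  # a = 1, b = 0, c = -1

def rk4(f, t0, y0, t1, n=1000):
    """Classical 4th-order Runge-Kutta integrator."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

y_numeric = rk4(riccati_rhs, 0.0, 0.0, 1.0)
y_exact = -math.tanh(1.0)   # from the substitution y = -u'/u with u = cosh t
print(abs(y_numeric - y_exact) < 1e-9)
```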

We can use the scalar Riccati equation as a blueprint for solving the matrix Riccati equation analytically. We adapt the method above to the matrix Riccati equation, which requires some changes to accommodate matrix properties such as non-commutativity.

Control systems and optimal control [3] [11] [12] form a fairly new field of mathematics, which started in the 1950s.

In a nutshell, optimal control extends the maximization/minimization process we learn in calculus from points to functions, via functionals. Where calculus gives us a method of finding points $\left({x}_{1},{x}_{2},\cdots ,{x}_{n}\right)$ that maximize or minimize some function $f\left({x}_{1},{x}_{2},\cdots ,{x}_{n}\right)$, optimal control theory deals with ways of finding a control for a dynamical system over a period of time such that an objective function is optimized.

A crude but helpful metaphor: if we have an equation that approximates the shape of a mountain, calculus methods can tell us the location of its highest peak and deepest crag, whereas optimal control tells us which ridge to take in order to get to the peak the fastest while expending the least energy.

Matrix Riccati equations can be used to solve some optimal control problems, for instance the Linear Quadratic Regulator (LQR) problem. Recall that general analytical solutions for the matrix Riccati equation and the algebraic matrix Riccati equation are not available; only special cases have been treated, particularly for the scalar Riccati equation. This is why explicit formulas for the LQR controls are hard to obtain, so approximations of the controls are more frequent, which introduces errors. The particular case where the solution of the matrix Riccati equation approaches a constant has been studied [13]. In this case, we deal with the steady-state version, referred to as the algebraic Riccati equation. Analytical solutions of the algebraic Riccati equation are also intractable in general. In this paper, a change of variable is proposed that turns the matrix Riccati equation into a second order linear matrix equation. This change of variable was mainly inspired by work done on the scalar Riccati equation. It will allow us to find exact values for the controls.

In Chapter 2, a change of variable is proposed that turns the matrix Riccati equation into a second order linear matrix differential equation.

In Chapter 3, we review optimal control, the branch of mathematics that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized.

In Chapter 4, we use the results from the two previous chapters to solve analytically an optimal control example.

Chapter 5 is dedicated to the conclusion.

2. A Solution to the Matrix Riccati Equation

Consider the matrix Riccati equation

$\frac{\text{d}Y}{\text{d}t}=YA\left(t\right)Y+YB\left(t\right)+C\left(t\right)Y+D\left( t \right)$

where $Y\in {ℝ}^{n×m}$ , $A\left(t\right)\in {ℝ}^{m×n}$ , $B\left(t\right)\in {ℝ}^{m×m}$ , $C\left(t\right)\in {ℝ}^{n×m}$ and $D\left(t\right)\in {ℝ}^{n×m}$ .

For the sake of brevity, we will assume that $m=n$ so that $Y,A,B,C$ and $D\in {ℝ}^{n×n}$ .

Proposition 1. If A is invertible, then

${\left({A}^{-1}\right)}^{\prime }=-{A}^{-1}{A}^{\prime }{A}^{-1}$ (2)

where ${A}^{\prime }$ represents the derivative with respect to t that is ${A}^{\prime }=\frac{\text{d}A}{\text{d}t}$ .

Proof. Since ${A}^{-1}A=I$ where I is the identity matrix

${\left({A}^{-1}A\right)}^{\prime }={\left({A}^{-1}\right)}^{\prime }A+{A}^{-1}{A}^{\prime }=0$

therefore ${\left({A}^{-1}\right)}^{\prime }=-{A}^{-1}{A}^{\prime }{A}^{-1}$ .

□
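Identity (2) is easy to check numerically. The sketch below uses the hypothetical matrix $A\left(t\right)=\left(\begin{array}{cc}{e}^{t}& t\\ 0& 1\end{array}\right)$ (an arbitrary invertible choice, not from the text) and compares a central-difference derivative of ${A}^{-1}$ against $-{A}^{-1}{A}^{\prime }{A}^{-1}$.

```python
import math

# Numerical check of identity (2): (A^{-1})' = -A^{-1} A' A^{-1},
# for the hypothetical invertible matrix A(t) = [[e^t, t], [0, 1]].

def A(t):
    return [[math.exp(t), t], [0.0, 1.0]]

def Aprime(t):
    return [[math.exp(t), 1.0], [0.0, 0.0]]

def inv(X):
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t, h = 0.7, 1e-6
# (A^{-1})' approximated by a central difference
lhs = [[(inv(A(t + h))[i][j] - inv(A(t - h))[i][j]) / (2 * h) for j in range(2)]
       for i in range(2)]
rhs = [[-m for m in row] for row in mul(mul(inv(A(t)), Aprime(t)), inv(A(t)))]
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err < 1e-6)
```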

Theorem 1. If $B=0$ and if A is invertible, then the matrix Riccati Equation (1) can be turned into the second order linear matrix differential equation

${U}^{″}-\left(AC{A}^{-1}+{A}^{\prime }{A}^{-1}\right){U}^{\prime }+ADU=0$ (3)

using the change of variable

$Y=-{A}^{-1}{U}^{\prime }{U}^{-1}$ (4)

where U is invertible.

Proof.

${Y}^{\prime }=-{\left({A}^{-1}\right)}^{\prime }{U}^{\prime }{U}^{-1}-{A}^{-1}{U}^{″}{U}^{-1}-{A}^{-1}{U}^{\prime }{\left({U}^{-1}\right)}^{\prime }$

Using (2) on U,

${Y}^{\prime }=-{\left({A}^{-1}\right)}^{\prime }{U}^{\prime }{U}^{-1}-{A}^{-1}{U}^{″}{U}^{-1}+{A}^{-1}{U}^{\prime }{U}^{-1}{U}^{\prime }{U}^{-1}$

Since

${Y}^{\prime }={A}^{-1}{U}^{\prime }{U}^{-1}{U}^{\prime }{U}^{-1}-C{A}^{-1}{U}^{\prime }{U}^{-1}+D$

then

$\begin{array}{l}-{\left({A}^{-1}\right)}^{\prime }{U}^{\prime }{U}^{-1}-{A}^{-1}{U}^{″}{U}^{-1}+{A}^{-1}{U}^{\prime }{U}^{-1}{U}^{\prime }{U}^{-1}\\ ={A}^{-1}{U}^{\prime }{U}^{-1}{U}^{\prime }{U}^{-1}-C{A}^{-1}{U}^{\prime }{U}^{-1}+D\end{array}$

Using (2) on A and after simplification, we obtain

${U}^{″}-\left(AC{A}^{-1}+{A}^{\prime }{A}^{-1}\right){U}^{\prime }+ADU=0$

□

Example 1. Choosing $A=\left(\begin{array}{cc}{e}^{t}& 0\\ 0& {e}^{t}\end{array}\right)$ , $B=0$ , $C={I}_{2×2}$ , $D=\left(\begin{array}{cc}{e}^{-t}& 0\\ 0& {e}^{-t}\end{array}\right)$

After plugging into the equation, this gives us:

${U}^{″}-2{U}^{\prime }+U=0$

After solving for U and using the change of variable (4) we obtain

$Y=-\left(\begin{array}{cc}\frac{\left({c}_{1}+{c}_{2}+{c}_{1}t\right)\left({c}_{7}t+{c}_{8}\right)-\left({c}_{3}+{c}_{4}+{c}_{3}t\right)\left({c}_{5}t+{c}_{6}\right)}{{e}^{t}\left(\left({c}_{1}t+{c}_{2}\right)\left({c}_{7}t+{c}_{8}\right)-\left({c}_{3}t+{c}_{4}\right)\left({c}_{5}t+{c}_{6}\right)\right)}& \frac{\left({c}_{3}+{c}_{4}+{c}_{3}t\right)\left({c}_{1}t+{c}_{2}\right)-\left({c}_{1}+{c}_{2}+{c}_{1}t\right)\left({c}_{3}t+{c}_{4}\right)}{{e}^{t}\left(\left({c}_{1}t+{c}_{2}\right)\left({c}_{7}t+{c}_{8}\right)-\left({c}_{3}t+{c}_{4}\right)\left({c}_{5}t+{c}_{6}\right)\right)}\\ \frac{\left({c}_{5}+{c}_{6}+{c}_{5}t\right)\left({c}_{7}t+{c}_{8}\right)-\left({c}_{7}+{c}_{8}+{c}_{7}t\right)\left({c}_{5}t+{c}_{6}\right)}{{e}^{t}\left(\left({c}_{1}t+{c}_{2}\right)\left({c}_{7}t+{c}_{8}\right)-\left({c}_{3}t+{c}_{4}\right)\left({c}_{5}t+{c}_{6}\right)\right)}& \frac{\left({c}_{7}+{c}_{8}+{c}_{7}t\right)\left({c}_{1}t+{c}_{2}\right)-\left({c}_{5}+{c}_{6}+{c}_{5}t\right)\left({c}_{3}t+{c}_{4}\right)}{{e}^{t}\left(\left({c}_{1}t+{c}_{2}\right)\left({c}_{7}t+{c}_{8}\right)-\left({c}_{3}t+{c}_{4}\right)\left({c}_{5}t+{c}_{6}\right)\right)}\end{array}\right)$

where ${c}_{1},{c}_{2},{c}_{3},{c}_{4},{c}_{5},{c}_{6},{c}_{7},{c}_{8}$ are constants given by the initial conditions.
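The change of variable (4) can be verified numerically on Example 1. The sketch below uses arbitrary illustrative constants (not the ${c}_{i}$ above): it builds a matrix $U$ whose entries are solutions $\left(at+b\right){e}^{t}$ of ${U}^{″}-2{U}^{\prime }+U=0$, forms $Y=-{A}^{-1}{U}^{\prime }{U}^{-1}$ with $A={e}^{t}I$, and checks the Riccati equation ${Y}^{\prime }={e}^{t}{Y}^{2}+Y+{e}^{-t}I$ by a central-difference derivative.

```python
import math

# Numerical check of Theorem 1 on Example 1; the (a, b) pairs below are
# arbitrary illustrative constants, not taken from the paper.

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv(X):
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def scale(s, X):
    return [[s * X[i][j] for j in range(2)] for i in range(2)]

# U(t) solves U'' - 2U' + U = 0 entrywise: each entry is (a t + b) e^t
coeffs = [[(1.0, 2.0), (0.5, -1.0)], [(0.3, 0.7), (2.0, 1.0)]]

def U(t):
    return [[(a * t + b) * math.exp(t) for (a, b) in row] for row in coeffs]

def Uprime(t):
    return [[(a * t + b + a) * math.exp(t) for (a, b) in row] for row in coeffs]

def Y(t):
    # change of variable (4): Y = -A^{-1} U' U^{-1}, with A = e^t I
    return scale(-math.exp(-t), mul(Uprime(t), inv(U(t))))

t, h = 0.3, 1e-6
Ydot = scale(1.0 / (2 * h), add(Y(t + h), scale(-1.0, Y(t - h))))  # central difference
Yt = Y(t)
# Riccati right-hand side with A = e^t I, B = 0, C = I, D = e^{-t} I
rhs = add(add(scale(math.exp(t), mul(Yt, Yt)), Yt),
          [[math.exp(-t), 0.0], [0.0, math.exp(-t)]])
err = max(abs(Ydot[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err < 1e-5)
```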

The next theorem generalizes Theorem 1.

Theorem 2. If $BW=WB$ for all $W\in {ℝ}^{n×n}$ and $Y=-{A}^{-1}{U}^{\prime }{U}^{-1}$, then (1) can be turned into the equation:

${U}^{″}-\left(B+AC{A}^{-1}+{A}^{\prime }{A}^{-1}\right){U}^{\prime }+ADU=0$ (5)

The proof of Theorem 2 is very similar to the proof of Theorem 1.

3. Optimal Control Theory

The field of Optimal Control, as the name suggests, is a branch of mathematics that deals with analyzing a system to find solutions that cause it to behave optimally for the cost we are willing to pay. If a system is controllable, then given an initial state and some assumptions, we can reach a desired state of the system by finding the appropriate control with minimum cost. Control is applied through feedback: the control u depends on the state of the system. The basic optimal control problem can be stated as follows: Given the system of differential equations along with an initial condition,

$\left(S\right)\text{\hspace{0.17em}}\frac{\text{d}x}{\text{d}t}=f\left(x\left(t\right),u\left(t\right)\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}x\left({t}_{0}\right)={x}_{0}$ (6)

where $x\left(t\right)$ is the state of the system, $x\left(t\right)\in {ℝ}^{n}$ , and $u\left(t\right)\in {ℝ}^{m}$ is the control.

The goal is to find a control $u\left(t\right)$ over $\left[{t}_{0},{t}_{f}\right]$ which, for any ${x}_{0}$, minimizes the cost functional

$J\left(x,u\right)={\int }_{{t}_{0}}^{{t}_{f}}\text{ }\text{ }L\left(x\left(t\right),u\left(t\right)\right)\text{d}t$

To better frame the optimal control problem, let’s consider a simple example.

Example 2. Consider the circuit below, which consists of a resistor, an inductor and a source (RL circuit).

The circuit in Figure 1 is very common in electronic devices for filtering signals.

The behavior of the resistor is specified by a constant R called the resistance.

The behavior of the inductor is specified by a constant L called the inductance.

i is the current through the circuit. u is the control and represents the voltage across the source.

According to Kirchhoff's voltage law, the sum of the voltage drops across the circuit in Figure 1 equals the voltage across the source; therefore,

${V}_{R}+{V}_{L}=u$

$Ri+L\frac{\text{d}i}{\text{d}t}=u$

Let ${i}_{0}$ be the initial value of the current that is $i\left(0\right)={i}_{0}$ .

We deal with the system:

$Ri+L\frac{\text{d}i}{\text{d}t}=u$

$i\left(0\right)={i}_{0}$

Suppose that we want to switch the current $i$ from ${i}_{0}$ to another value ${i}_{1}$ at $t=T$ that is $i\left(T\right)={i}_{1}$ with a minimum cost.

The goal is expressed by the cost functional:

$J\left(i,u\right)=\frac{1}{2}{\int }_{0}^{T}\text{ }\text{ }c{\left(i\left(t\right)-{i}_{1}\right)}^{2}\text{d}t+\frac{1}{2}{\int }_{0}^{T}\text{ }\text{ }{c}_{u}{\left(u\left(t\right)\right)}^{2}\text{d}t$

where c and ${c}_{u}$ are positive constants and T is the fixed final time $T>0$ .

The first integral is the state cost and the second integral is the control cost.

We can also assume that u belongs to the set of admissible controls

${U}_{ad}=\left\{u\in {L}^{2}\left(\left[0,T\right]\right)|{k}_{1}\le u\left(t\right)\le {k}_{2},t\in \left[0,T\right]\right\}$

where ${k}_{1}$ and ${k}_{2}$ are constant real numbers.

Figure 1. RL circuit.

Question: What values of $u\left(t\right)$ allow this switch with minimum cost?
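To make the question concrete, the sketch below evaluates the cost functional for constant controls, with illustrative values $R=L=c={c}_{u}=1$, ${i}_{0}=0$, ${i}_{1}=1$, $T=1$ (our choices, not from the text). For a constant u the state equation has the closed-form solution $i\left(t\right)=\frac{u}{R}+\left({i}_{0}-\frac{u}{R}\right){e}^{-Rt/L}$.

```python
import math

# Concrete instance of the RL problem with illustrative values (not from the
# paper): R = L = c = cu = 1, i0 = 0, i1 = 1, T = 1, and a constant control u.
# For constant u, the ODE R i + L di/dt = u has the exact solution
# i(t) = u/R + (i0 - u/R) e^{-R t / L}.

R, L, i0, i1, c, cu, T = 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0

def current(t, u):
    return u / R + (i0 - u / R) * math.exp(-R * t / L)

def cost(u, n=10000):
    """Trapezoidal approximation of J(i, u) for constant u."""
    h = T / n
    total = 0.0
    for k in range(n + 1):
        t = k * h
        integrand = 0.5 * c * (current(t, u) - i1) ** 2 + 0.5 * cu * u ** 2
        total += integrand * (0.5 if k in (0, n) else 1.0)
    return total * h

# Scan a grid of constant controls to see the state/control cost trade-off.
best_u = min((cost(u), u) for u in [0.1 * k for k in range(16)])[1]
print(best_u)
```

Even this crude grid scan shows the trade-off: pushing i toward ${i}_{1}$ quickly raises the control cost, so the minimizing constant control stays well below the value $u=R{i}_{1}$ that would hold $i={i}_{1}$ in steady state.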

General Statement of the Optimal Control Problem: The Pontryagin Principle

The basic optimal control problem $\left(\mathcal{P}\right)$ can be stated as follows: Given the system of differential equations along with an initial condition:

$\frac{\text{d}x}{\text{d}t}=f\left(x\left(t\right),u\left(t\right)\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}x\left({t}_{0}\right)={x}_{0}$ (7)

where $x\left(t\right)$ is the state of the system $x\left(t\right)\in {ℝ}^{n}$ and $u\left(t\right)$ is the input of the system $u\left(t\right)\in {ℝ}^{m}$ .

The goal is to find a control $u\left(t\right)$ over $\left[{t}_{0},{t}_{f}\right]$ that minimizes the cost functional

$J\left(x,u\right)={\int }_{{t}_{0}}^{{t}_{f}}\text{ }\text{ }L\left(x\left(t\right),u\left(t\right)\right)\text{d}t.$

To solve an optimal control problem, we can use the Pontryagin principle, which gives necessary conditions that the optimal control ${u}^{*}\left(t\right)$ and the optimal state ${x}^{*}\left(t\right)$ must satisfy.

Theorem 3 (Pontryagin’s maximum principle) If ${x}^{*}\left(t\right)$ and ${u}^{*}\left(t\right)$ are optimal for the problem $\left(\mathcal{P}\right)$, then there exist a function $\lambda \left(t\right)=\left(\begin{array}{c}{\lambda }_{1}\left(t\right)\\ {\lambda }_{2}\left(t\right)\\ ⋮\\ {\lambda }_{n}\left(t\right)\end{array}\right)$ and a function H defined as

$H\left(x\left(t\right),u\left(t\right),\lambda \right)={\lambda }^{\text{T}}f\left(x,u\right)-L\left(x,u\right)$

that satisfy the following three properties:

a) $H\left({x}^{*}\left(t\right),{u}^{*}\left(t\right),\lambda \left(t\right)\right)\ge H\left({x}^{*}\left(t\right),u\left(t\right),\lambda \left( t \right)\right)$

for all control u at each time t.

b) $\frac{\text{d}\lambda }{\text{d}t}=-{\nabla }_{x}H\left({x}^{*},{u}^{*},\lambda \right)$

c) $\lambda \left({t}_{f}\right)=0$

${\lambda }^{\text{T}}$ represents the transpose of $\lambda$ and H is called the Hamiltonian.

Pontryagin’s maximum principle yields controls that are candidates for the optimal controls. Those candidates need to be tested. The following theorem gives a sufficient condition for a candidate to be optimal.

Theorem 4. Let $U\left({x}_{0}\right)$ be the set of admissible controls of u and X an open subset of ${ℝ}^{n}$ . If there exists a function ${J}_{1}:X\to ℝ$ of class ${C}^{1}$ such that the three statements below are true.

i) If $u\in U$ generates the solution $x\left(t\right)$ of (7) and $x\left(t\right)\in X$ for all $t\in \left[{t}_{0},{t}_{1}^{*}\right)$, then ${\mathrm{lim}}_{t\to {t}_{1}}{J}_{1}\left(x\left(t\right)\right)\le {\mathrm{lim}}_{t\to {t}_{1}^{*}}{J}_{1}\left({x}^{*}\left(t\right)\right)=0$, for some ${t}_{1}^{*}\ge {t}_{1}$.

ii) $L\left({x}^{*}\left(t\right),{u}^{*}\left(t\right)\right)+gra{d}^{\text{T}}{J}_{1}\left({x}^{*}\left(t\right)\right)f\left({x}^{*}\left(t\right),{u}^{*}\left(t\right)\right)=0$ for all $t\in \left[{t}_{0},{t}_{1}^{*}\right)$ for some ${t}_{1}^{*}\ge {t}_{1}$ .

iii) $L\left(x,u\right)+gra{d}^{\text{T}}{J}_{1}\left(x\right)f\left(x,u\right)\ge 0$ for all $x\in X$ and $u\in U$ .

Then the control ${u}^{*}\left(t\right)$ generating the solution ${x}^{*}\left(t\right)$ for all $t\in \left[{t}_{0},{t}_{1}^{*}\right]$ with ${x}^{*}\left({t}_{0}\right)={x}_{0}$ , is optimal with respect to X.

The proofs of Theorems 3 and 4 can be found in [3].

Remark 1. The proof of Theorem 4 suggests that the test function ${J}_{1}\left(x\left(t\right)\right)$ can be chosen so that ${J}_{1}\left({x}_{0}\right)={\int }_{{t}_{0}}^{{t}_{f}}L\left(x\left(t\right),u\left(t\right)\right)\text{d}t$.

Remark 2. In case we deal with a nonautonomous system, that is, a system of the form

$\frac{\text{d}x}{\text{d}t}=f\left(x,t,u\right)$ $x\left({t}_{0}\right)={x}_{0}$

then we can always turn such a system into an autonomous one.

We can define

$\stackrel{^}{x}=\left(\begin{array}{c}{x}_{1}\left(t\right)\\ {x}_{2}\left(t\right)\\ ⋮\\ {x}_{n}\left(t\right)\\ {x}_{n+1}\left( t \right)\end{array}\right)$

where ${x}_{n+1}\left(t\right)=t$ then we deal with the following autonomous system

$\frac{\text{d}\stackrel{^}{x}}{\text{d}t}=\left(\begin{array}{c}f\left(\stackrel{^}{x},u\right)\\ 1\end{array}\right)=\stackrel{^}{f}\left(\stackrel{^}{x},u\right)$ $\stackrel{^}{x}\left({t}_{0}\right)={\stackrel{^}{x}}_{0}=\left({x}_{0},{t}_{0}\right)$

Also, if the cost integrand depends on t, that is $J\left(x,u\right)={\int }_{{t}_{0}}^{{t}_{f}}\text{ }\text{ }L\left(x,t,u\right)\text{d}t$, then we can rewrite the cost functional as

$\stackrel{^}{J}\left(\stackrel{^}{x},u\right)={\int }_{{t}_{0}}^{{t}_{f}}\text{ }\text{ }\stackrel{^}{L}\left(\stackrel{^}{x},u\right)\text{d}t$

Now we can apply Pontryagin’s maximum principle to the autonomous system

$\frac{\text{d}\stackrel{^}{x}}{\text{d}t}=\stackrel{^}{f}\left(\stackrel{^}{x},u\right)$ $\stackrel{^}{x}\left({t}_{0}\right)={\stackrel{^}{x}}_{0}$

with cost function given by:

$\stackrel{^}{J}\left(\stackrel{^}{x},u\right)={\int }_{{t}_{0}}^{{t}_{f}}\text{ }\text{ }\stackrel{^}{L}\left(\stackrel{^}{x},u\right)\text{d}t$

and then $\stackrel{^}{\lambda }=\left(\begin{array}{c}{\lambda }_{1}\left(t\right)\\ {\lambda }_{2}\left(t\right)\\ ⋮\\ {\lambda }_{n}\left(t\right)\\ {\lambda }_{n+1}\left(t\right)\end{array}\right)\in {ℝ}^{n+1}$ .
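The augmentation in Remark 2 is mechanical and easy to test. The sketch below uses a hypothetical scalar system $\frac{\text{d}x}{\text{d}t}=tx+u$ (chosen purely for illustration): it integrates the original nonautonomous system and its autonomous extension with the same Euler scheme and confirms they produce the same trajectory.

```python
# Sketch of the augmentation in Remark 2, on the hypothetical nonautonomous
# system dx/dt = t*x + u: appending x_{n+1}(t) = t yields an autonomous
# system of one extra dimension with the same x-trajectory.

def f(x, t, u):
    return t * x + u  # hypothetical nonautonomous dynamics

def f_hat(xh, u):
    x, s = xh          # s plays the role of x_{n+1} = t
    return (s * x + u, 1.0)

def euler_nonauto(x0, t0, t1, u, n=20000):
    h = (t1 - t0) / n
    x, t = x0, t0
    for _ in range(n):
        x += h * f(x, t, u)
        t += h
    return x

def euler_auto(x0, t0, t1, u, n=20000):
    h = (t1 - t0) / n
    xh = (x0, t0)
    for _ in range(n):
        dx, ds = f_hat(xh, u)
        xh = (xh[0] + h * dx, xh[1] + h * ds)
    return xh[0]

a = euler_nonauto(1.0, 0.0, 1.0, u=0.5)
b = euler_auto(1.0, 0.0, 1.0, u=0.5)
print(abs(a - b) < 1e-9)
```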

4. The Linear-Quadratic Regulator (LQR)-Riccati Equation

We suppose that

$f\left(x\left(t\right),u\left(t\right)\right)=A\left(t\right)x\left(t\right)+B\left(t\right)u\left( t \right)$

then

$\stackrel{^}{f}\left(\stackrel{^}{x}\left(t\right),u\left(t\right)\right)=\left(\begin{array}{c}A\left(t\right)x\left(t\right)+B\left(t\right)u\left(t\right)\\ 1\end{array}\right)$

$\stackrel{^}{L}\left(\stackrel{^}{x}\left(t\right),u\left(t\right)\right)=\frac{1}{2}\left({x}^{\text{T}}\left(t\right)Q\left(t\right)x\left(t\right)+{u}^{\text{T}}\left(t\right)R\left(t\right)u\left( t \right)\right)$

then

$\frac{\text{d}\stackrel{^}{x}}{\text{d}t}=\left(\begin{array}{c}A\left(t\right)x\left(t\right)+B\left(t\right)u\left(t\right)\\ 1\end{array}\right)$ (8)

with initial state $x\left({t}_{0}\right)={x}_{0}$ and the interval $\left[{t}_{0},{t}_{f}\right]$ specified, and

$x\left(t\right)=\left(\begin{array}{c}{x}_{1}\left(t\right)\\ {x}_{2}\left(t\right)\\ ⋮\\ {x}_{n}\left(t\right)\end{array}\right)$

$u\left(t\right)=\left(\begin{array}{c}{u}_{1}\left(t\right)\\ {u}_{2}\left(t\right)\\ ⋮\\ {u}_{m}\left(t\right)\end{array}\right)$

The cost to be minimized is:

$\stackrel{^}{J}\left(\stackrel{^}{x},u\right)=\frac{1}{2}{\int }_{{t}_{0}}^{{t}_{f}}\left({x}^{\text{T}}\left(t\right)Q\left(t\right)x\left(t\right)+{u}^{\text{T}}\left(t\right)R\left(t\right)u\left(t\right)\right)\text{d}t$

where $A\in {ℝ}^{n×n}$, $B\in {ℝ}^{n×m}$, $Q\in {ℝ}^{n×n}$ and $R\in {ℝ}^{m×m}$.

The matrix Q is symmetric that is ${Q}^{\text{T}}=Q$ .

The matrix R is symmetric and positive definite that is ${x}^{\text{T}}Rx>0$ if $x\ne 0$ .

The functions $A\left(t\right),B\left(t\right),Q\left(t\right)$ and $R\left(t\right)$ are of class ${C}^{1}$ .

We can use Pontryagin’s maximum principle to find $u\left(t\right)$.

The Hamiltonian H of the problem is given by:

$H\left(\stackrel{^}{x},u,\stackrel{^}{\lambda }\right)={\stackrel{^}{\lambda }}^{\text{T}}\left(\begin{array}{c}A\left(t\right)x\left(t\right)+B\left(t\right)u\left(t\right)\\ 1\end{array}\right)-\frac{1}{2}\left({x}^{\text{T}}Qx+{u}^{\text{T}}Ru\right)$ (9)

$H\left(\stackrel{^}{x},u,\stackrel{^}{\lambda }\right)={\lambda }^{\text{T}}Ax+{\lambda }^{\text{T}}Bu+{\lambda }_{n+1}-\frac{1}{2}{x}^{\text{T}}Qx-\frac{1}{2}{u}^{\text{T}}Ru$ (10)

$\frac{\partial \stackrel{^}{\lambda }}{\partial t}=-{\nabla }_{\stackrel{^}{x}}H$

which implies

$\frac{\partial \lambda }{\partial t}=-{\nabla }_{x}H$

Since Q and R are symmetric,

${\nabla }_{x}\left({x}^{\text{T}}Qx\right)=2Qx$ ${\nabla }_{u}\left({u}^{\text{T}}Ru\right)=2Ru$

therefore

$\frac{\partial \lambda }{\partial t}=-{A}^{\text{T}}\left(t\right)\lambda \left(t\right)+Q\left(t\right)x\left( t \right)$

Moreover, since there is no constraint on u,

${\nabla }_{u}H=0$ (11)

so from (10)

${B}^{\text{T}}\lambda -Ru=0$ (12)

therefore

$u={R}^{-1}\left(t\right){B}^{\text{T}}\left(t\right)\lambda \left(t\right)$ (13)

The goal is to express $\lambda$ in terms of $x\left(t\right)$. Replacing $u\left(t\right)$ in the system of Equation (8), we obtain

$\frac{\text{d}x}{\text{d}t}=A\left(t\right)x\left(t\right)+B\left(t\right){R}^{-1}\left(t\right){B}^{\text{T}}\left(t\right)\lambda \left(t\right)$ (14)

therefore, we get the following system with 2n variables

$\left\{\begin{array}{l}\frac{\text{d}x}{\text{d}t}=A\left(t\right)x\left(t\right)+B\left(t\right){R}^{-1}\left(t\right){B}^{\text{T}}\left(t\right)\lambda \left(t\right)\\ \frac{\text{d}\lambda }{\text{d}t}=Q\left(t\right)x\left(t\right)-{A}^{\text{T}}\left(t\right)\lambda \left(t\right)\end{array}$ (15)

that has a unique solution $\left(x\left(t\right),\lambda \left(t\right)\right)$ given an initial condition.

Using the matrix representation,

$\frac{\text{d}}{\text{d}t}\left(\begin{array}{c}x\left(t\right)\\ \lambda \left(t\right)\end{array}\right)=H\left(t\right)\left(\begin{array}{c}x\left(t\right)\\ \lambda \left(t\right)\end{array}\right)$ (16)

where

$H\left(t\right)=\left(\begin{array}{cc}A\left(t\right)& B\left(t\right){R}^{-1}{B}^{\text{T}}\left(t\right)\\ Q\left(t\right)& -{A}^{\text{T}}\left(t\right)\end{array}\right)$ (17)

therefore

$\left(\begin{array}{c}x\left(t\right)\\ \lambda \left(t\right)\end{array}\right)=M\left(t,{t}_{0}\right)\left(\begin{array}{c}x\left({t}_{0}\right)\\ \lambda \left({t}_{0}\right)\end{array}\right)$

where $M\left(t,{t}_{0}\right)$ is the state-transition matrix of system (16); in particular, when the matrices $H\left(\tau \right)$ commute with one another (for instance when H is constant),

$M\left(t,{t}_{0}\right)={e}^{{\int }_{{t}_{0}}^{t}H\left(\tau \right)\text{d}\tau }$

In particular

$\left(\begin{array}{c}x\left({t}_{f}\right)\\ \lambda \left({t}_{f}\right)\end{array}\right)=M\left({t}_{f},t\right)\left(\begin{array}{c}x\left(t\right)\\ \lambda \left(t\right)\end{array}\right)$

for all $t\in \left[{t}_{0},{t}_{f}\right]$ .

Dividing $M\left({t}_{f},t\right)$ into blocks of $n×n$ matrices

$\left(\begin{array}{c}x\left({t}_{f}\right)\\ \lambda \left({t}_{f}\right)\end{array}\right)=\left(\begin{array}{cc}{M}_{11}\left({t}_{f},t\right)& {M}_{12}\left({t}_{f},t\right)\\ {M}_{21}\left({t}_{f},t\right)& {M}_{22}\left({t}_{f},t\right)\end{array}\right)\left(\begin{array}{c}x\left(t\right)\\ \lambda \left(t\right)\end{array}\right)$ (18)

where ${M}_{ij}$ are $n×n$ matrices $1\le i,j\le 2$ .

Therefore

$x\left({t}_{f}\right)={M}_{11}\left({t}_{f},t\right)x\left(t\right)+{M}_{12}\left({t}_{f},t\right)\lambda \left( t \right)$

$\lambda \left({t}_{f}\right)=0={M}_{21}\left({t}_{f},t\right)x\left(t\right)+{M}_{22}\left({t}_{f},t\right)\lambda \left(t\right).$

Since $\lambda \left(t\right)$ is unique then ${M}_{22}\left({t}_{f},t\right)$ must be invertible, therefore

$\lambda \left(t\right)=-{M}_{22}^{-1}\left({t}_{f},t\right){M}_{21}\left({t}_{f},t\right)x\left( t \right)$

Let $P\left(t\right)={M}_{22}^{-1}\left({t}_{f},t\right){M}_{21}\left({t}_{f},t\right)$ .

So

$\lambda \left(t\right)=-P\left(t\right)x\left(t\right)$ (19)

From (13), $u\left(t\right)=-{R}^{-1}\left(t\right){B}^{\text{T}}\left(t\right)P\left(t\right)x\left(t\right)$ .

Finding P(t)

Taking the derivative of both sides of Equation (19), and using (14) and (19), we obtain

$\frac{\text{d}\lambda \left(t\right)}{\text{d}t}=-\frac{\text{d}P\left(t\right)}{\text{d}t}x\left(t\right)-P\left(t\right)A\left(t\right)x\left(t\right)+P\left(t\right)B\left(t\right){R}^{-1}\left(t\right){B}^{\text{T}}\left(t\right)P\left(t\right)x\left(t\right).$ (20)

Using (15) and (20), we get the following equation:

$\left(\frac{\text{d}P\left(t\right)}{\text{d}t}+P\left(t\right)A\left(t\right)+{A}^{\text{T}}\left(t\right)P\left(t\right)-P\left(t\right)B\left(t\right){R}^{-1}\left(t\right){B}^{\text{T}}\left(t\right)P\left(t\right)+Q\left(t\right)\right)x\left(t\right)=0.$ (21)

This equation holds for all ${t}_{0}\le t\le {t}_{f}$ and for every state $x\left(t\right)$.

So $P\left(t\right)$ is a solution of the matrix Riccati equation:

$\frac{\text{d}P\left(t\right)}{\text{d}t}=P\left(t\right)B\left(t\right){R}^{-1}\left(t\right){B}^{\text{T}}\left(t\right)P\left(t\right)-{A}^{\text{T}}\left(t\right)P\left(t\right)-P\left(t\right)A\left(t\right)-Q\left(t\right)$ (22)

satisfying the final condition $P\left({t}_{f}\right)=0$.
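Equation (22) is integrated backward from the final condition at ${t}_{f}$. The sketch below does this for the scalar case $n=m=1$ with $A=0$ and $B=R=Q=1$ (an illustrative choice, not from the text), where the equation reduces to $\frac{\text{d}P}{\text{d}t}={P}^{2}-1$ with closed-form solution $P\left(t\right)=\mathrm{tanh}\left({t}_{f}-t\right)$.

```python
import math

# Scalar instance of the Riccati equation (22): with A = 0, B = R = Q = 1
# (an illustrative choice), dP/dt = P^2 - 1 with final condition P(tf) = 0.
# The closed form is P(t) = tanh(tf - t); we recover it by integrating backward.

def riccati_rhs(t, P):
    return P * P - 1.0

def integrate_backward(f, tf, Pf, t0, n=2000):
    """RK4 from tf down to t0 (negative step)."""
    h = (t0 - tf) / n
    t, P = tf, Pf
    for _ in range(n):
        k1 = f(t, P)
        k2 = f(t + h / 2, P + h * k1 / 2)
        k3 = f(t + h / 2, P + h * k2 / 2)
        k4 = f(t + h, P + h * k3)
        P += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return P

tf, t0 = 1.0, 0.0
P0 = integrate_backward(riccati_rhs, tf, 0.0, t0)
print(abs(P0 - math.tanh(tf - t0)) < 1e-9)
# The optimal feedback at time t is then u*(t) = -P(t) x*(t).
```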

Theorem 5. The function ${u}^{*}\left(t\right)=-{R}^{-1}\left(t\right){B}^{\text{T}}\left(t\right)P\left(t\right){x}^{*}\left(t\right)$ is the optimal control at ${x}_{0}$ and the minimum value of J is given by:

${J}_{min}=\frac{1}{2}{\left({x}^{*}\right)}^{\text{T}}\left({t}_{0}\right)P\left({t}_{0}\right){x}^{*}\left({t}_{0}\right)$

where ${x}^{*}$ is the corresponding optimal solution of (7).

Proof. First, we show that $P\left(t\right)$ is symmetric, that is, ${P}^{\text{T}}\left(t\right)=P\left(t\right)$. Taking the transpose of both sides of Equation (22), we obtain:

$\frac{\text{d}{P}^{\text{T}}\left(t\right)}{\text{d}t}={P}^{\text{T}}\left(t\right)B\left(t\right){\left({R}^{-1}\right)}^{\text{T}}\left(t\right){B}^{\text{T}}\left(t\right){P}^{\text{T}}-{P}^{\text{T}}\left(t\right)A\left(t\right)-{A}^{\text{T}}\left(t\right)P\left(t\right)-{Q}^{\text{T}}\left(t\right).$

Since R and Q are symmetric then

$\frac{\text{d}{P}^{\text{T}}\left(t\right)}{\text{d}t}={P}^{\text{T}}\left(t\right)B\left(t\right){R}^{-1}\left(t\right){B}^{\text{T}}\left(t\right){P}^{\text{T}}-{P}^{\text{T}}\left(t\right)A\left(t\right)-{A}^{\text{T}}\left(t\right)P\left(t\right)-Q\left( t \right)$

which shows that ${P}^{\text{T}}\left(t\right)$ is also a solution of (22) satisfying ${P}^{\text{T}}\left({t}_{f}\right)=0$ .

Since the solution of (22) along with the final condition ${P}^{\text{T}}\left({t}_{f}\right)=0$ is unique, therefore

${P}^{\text{T}}\left(t\right)=P\left(t\right)$ .

Now to show that ${J}_{min}=\frac{1}{2}{\left({x}^{*}\right)}^{\text{T}}\left({t}_{0}\right)P\left({t}_{0}\right){x}^{*}\left({t}_{0}\right)$, we first show that

$\frac{\text{d}}{\text{d}t}\left({x}^{\text{T}}Px\right)=-\left({x}^{\text{T}}Q\left(t\right)x+{u}^{\text{T}}R\left(t\right)u\right)$

$\frac{\text{d}}{\text{d}t}\left({x}^{\text{T}}Px\right)=\frac{\text{d}{x}^{\text{T}}}{\text{d}t}Px+{x}^{\text{T}}\frac{\text{d}P}{\text{d}t}x+{x}^{\text{T}}P\frac{\text{d}x}{\text{d}t}.$

Since P is symmetric then we can easily verify that ${\left(\frac{\text{d}x}{\text{d}t}\right)}^{\text{T}}Px={x}^{\text{T}}P\frac{\text{d}x}{\text{d}t}$ .

Therefore

$\frac{\text{d}}{\text{d}t}\left({x}^{\text{T}}Px\right)=2{x}^{\text{T}}P\frac{\text{d}x}{\text{d}t}+{x}^{\text{T}}\frac{\text{d}P}{\text{d}t}x$

Using (15), (19) and (22),

$\frac{\text{d}}{\text{d}t}\left({x}^{\text{T}}Px\right)=2{x}^{\text{T}}P\left(Ax-B{R}^{-1}{B}^{\text{T}}Px\right)+{x}^{\text{T}}\left(PB{R}^{-1}{B}^{\text{T}}P-{A}^{\text{T}}P-PA-Q\right)x$

Since ${A}^{\text{T}}P={\left(PA\right)}^{\text{T}}$ then ${x}^{\text{T}}\left({A}^{\text{T}}P+PA\right)x=2{x}^{\text{T}}PAx$ .

After cancellation,

$\frac{\text{d}}{\text{d}t}\left({x}^{\text{T}}Px\right)=-{x}^{\text{T}}PB{R}^{-1}{B}^{\text{T}}Px-{x}^{\text{T}}Qx$

Since $u=-{R}^{-1}\left(t\right)B{\left(t\right)}^{\text{T}}P\left(t\right)x$ then

$\frac{\text{d}}{\text{d}t}\left({x}^{\text{T}}Px\right)=-\left({u}^{\text{T}}Ru+{x}^{\text{T}}Qx\right).$

Integrating both sides from ${t}_{0}$ to ${t}_{f}$ and multiplying by $\frac{1}{2}$, we obtain

$\frac{1}{2}{x}^{\text{T}}\left({t}_{0}\right)P\left({t}_{0}\right)x\left({t}_{0}\right)=\frac{1}{2}{\int }_{{t}_{0}}^{{t}_{f}}\left({u}^{\text{T}}R\left(t\right)u+{x}^{\text{T}}Q\left(t\right)x\right)\text{d}t$ (23)

which shows that ${J}_{min}=\frac{1}{2}{x}^{*}{}^{\text{T}}\left({t}_{0}\right)P\left({t}_{0}\right){x}^{*}\left({t}_{0}\right)$ .

To show that ${u}^{*}=-{R}^{-1}\left(t\right)B{\left(t\right)}^{\text{T}}P\left(t\right){x}^{*}$ is the optimal control, we verify the three conditions of Theorem 4.

From (23), we can choose a test function defined in ${ℝ}^{n+1}$ as

${J}_{1}\left(x,t\right)=\frac{1}{2}{x}^{\text{T}}P\left(t\right)x$ where $\left(x,t\right)\in \left\{\left(x,t\right)|t<{t}_{f}\right\}$.

We can show that the test function satisfies the three conditions in theorem 4.

Since $P\left({t}_{f}\right)=0$ then ${\mathrm{lim}}_{t\to {t}_{f}}{J}_{1}\left(x,t\right)={\mathrm{lim}}_{t\to {t}_{f}}{J}_{1}\left({x}^{*},t\right)=0$ then condition (i) is satisfied.

For (ii), let

$g\left(u\right)=\stackrel{^}{L}\left(x,u\right)+gra{d}^{\text{T}}{J}_{1}\left(x,t\right)\stackrel{^}{f}\left(x,u\right)$

so

$\begin{array}{c}g\left(u\right)=\frac{1}{2}\left({x}^{\text{T}}Q\left(t\right)x+{u}^{\text{T}}Ru\right)+\frac{1}{2}gra{d}^{\text{T}}\left({x}^{\text{T}}Px\right)\left(\begin{array}{c}A\left(t\right)x+B\left(t\right)u\\ 1\end{array}\right)\\ =\frac{1}{2}\left({x}^{\text{T}}Q\left(t\right)x+{u}^{\text{T}}Ru\right)+\frac{1}{2}\left(2{\left(P\left(t\right)x\right)}^{\text{T}}\text{ }\text{ }\text{ }\text{ }\text{ }{x}^{\text{T}}\frac{\text{d}P\left(t\right)}{\text{d}t}x\right)\left(\begin{array}{c}A\left(t\right)x+B\left(t\right)u\\ 1\end{array}\right)\\ =\frac{1}{2}{x}^{\text{T}}Q\left(t\right)x+\frac{1}{2}{u}^{\text{T}}Ru+{x}^{\text{T}}P\left(t\right)A\left(t\right)x+{x}^{\text{T}}P\left(t\right)B\left(t\right)u+\frac{1}{2}{x}^{\text{T}}\frac{\text{d}P\left(t\right)}{\text{d}t}x\end{array}$

Using (22)

$\begin{array}{c}g\left(u\right)=\frac{1}{2}{x}^{\text{T}}Q\left(t\right)x+\frac{1}{2}{u}^{\text{T}}Ru+{x}^{\text{T}}P\left(t\right)A\left(t\right)x+{x}^{\text{T}}P\left(t\right)B\left(t\right)u\\ \text{\hspace{0.17em}}+\frac{1}{2}\left({x}^{\text{T}}P\left(t\right)B\left(t\right){R}^{-1}\left(t\right){B}^{\text{T}}\left(t\right)P\left(t\right)x-{x}^{\text{T}}{A}^{\text{T}}\left(t\right)Px\\ \text{\hspace{0.17em}}-{x}^{\text{T}}P\left(t\right)A\left(t\right)x-{x}^{\text{T}}Q\left(t\right)x\right)\end{array}$

After simplification

$g\left(u\right)=\frac{1}{2}{u}^{\text{T}}Ru+{x}^{\text{T}}P\left(t\right)B\left(t\right)u+\frac{1}{2}{x}^{\text{T}}P\left(t\right)B\left(t\right){R}^{-1}\left(t\right){B}^{\text{T}}\left(t\right)P\left(t\right)x$

The gradient of $g\left(u\right)$ is ${\nabla }_{u}g\left(u\right)=Ru+{B}^{\text{T}}\left(t\right)P\left(t\right)x$, so ${\nabla }_{u}g\left(u\right)=0$ gives $u=-{R}^{-1}{B}^{\text{T}}\left(t\right)P\left(t\right)x$; therefore $u=-{R}^{-1}{B}^{\text{T}}\left(t\right)P\left(t\right)x$ is a critical point of $g\left(u\right)$.

The Hessian is ${\nabla }_{u}^{2}g=R$, which is positive definite; therefore $g\left(u\right)$ has a global minimum at ${u}^{*}=-{R}^{-1}{B}^{\text{T}}\left(t\right)P\left(t\right){x}^{*}$. It can easily be shown that $g\left({u}^{*}\right)=0$. This shows (ii).

Since $g\left(u\right)$ has a global minimum at ${u}^{*}$, then $g\left(u\right)\ge g\left({u}^{*}\right)=0$ for all u and x; this shows (iii).

□

5. Example

Consider the optimal control problem:

$\frac{\text{d}x}{\text{d}t}=Ax+Bu$ $t\in \left[0,1\right]$

$J\left(x,u\right)=\frac{1}{2}{\int }_{0}^{1}\left({x}^{\text{T}}Qx+{u}^{\text{T}}Ru\right)\text{d}t$

where

$A\left(t\right)=0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}B\left(t\right)=\left(\begin{array}{cc}{e}^{t/2}& -{e}^{t/2}\\ {e}^{t/2}& {e}^{t/2}\end{array}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}R={I}_{2×2},\text{\hspace{0.17em}}\text{\hspace{0.17em}}Q=\left(\begin{array}{cc}\frac{1}{2}{e}^{-t}& 0\\ 0& \frac{1}{2}{e}^{-t}\end{array}\right)$

Find u that minimizes $J\left(x,u\right)$ .

$P\left(t\right)$ is a solution of the Riccati equation:

$\frac{\text{d}P\left(t\right)}{\text{d}t}=PB{B}^{\text{T}}P-Q$

Let us make the change of variable $P=-{\left(B{B}^{\text{T}}\right)}^{-1}{W}^{\prime }{W}^{-1}$. Since $B{B}^{\text{T}}=2{e}^{t}I$, this gives

$P\left(t\right)=-\frac{1}{2}{e}^{-t}{W}^{\prime }{W}^{-1}$ (24)

then

${W}^{″}-{W}^{\prime }-W=0$

The solution of the equation is given by:

$W=\left(\begin{array}{cc}{k}_{1}{e}^{{r}_{1}t}+{k}_{2}{e}^{{r}_{2}t}& {k}_{3}{e}^{{r}_{1}t}+{k}_{4}{e}^{{r}_{2}t}\\ {k}_{5}{e}^{{r}_{1}t}+{k}_{6}{e}^{{r}_{2}t}& {k}_{7}{e}^{{r}_{1}t}+{k}_{8}{e}^{{r}_{2}t}\end{array}\right)$ (25)

where ${r}_{1}=\frac{1+\sqrt{5}}{2}$ and ${r}_{2}=\frac{1-\sqrt{5}}{2}$
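As a quick numerical sanity check of this reduction (an illustrative sketch, not part of the derivation): any solution of ${W}^{″}-{W}^{\prime }-W=0$, for example the scalar $w\left(t\right)={e}^{{r}_{1}t}+{e}^{{r}_{2}t}$, should produce $P\left(t\right)=-\frac{1}{2}{e}^{-t}\frac{{w}^{\prime }\left(t\right)}{w\left(t\right)}I$ satisfying $\frac{\text{d}P}{\text{d}t}=PB{B}^{\text{T}}P-Q$, which for $P=pI$ reads ${p}^{\prime }=2{e}^{t}{p}^{2}-\frac{1}{2}{e}^{-t}$:

```python
import math

r1 = (1 + math.sqrt(5)) / 2
r2 = (1 - math.sqrt(5)) / 2

# The characteristic roots of W'' - W' - W = 0 satisfy r^2 - r - 1 = 0.
assert abs(r1 ** 2 - r1 - 1) < 1e-12 and abs(r2 ** 2 - r2 - 1) < 1e-12

def w(t):  return math.exp(r1 * t) + math.exp(r2 * t)            # a solution of w'' - w' - w = 0
def wp(t): return r1 * math.exp(r1 * t) + r2 * math.exp(r2 * t)  # its derivative
def p(t):  return -0.5 * math.exp(-t) * wp(t) / w(t)             # P(t) = p(t) I

# Check the scalar Riccati equation p' = 2 e^t p^2 - 1/2 e^{-t} by central differences.
h = 1e-6
for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    lhs = (p(t + h) - p(t - h)) / (2 * h)
    rhs = 2 * math.exp(t) * p(t) ** 2 - 0.5 * math.exp(-t)
    assert abs(lhs - rhs) < 1e-6
```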

${W}^{-1}=\frac{1}{\Delta }\left(\begin{array}{cc}{k}_{7}{e}^{{r}_{1}t}+{k}_{8}{e}^{{r}_{2}t}& -{k}_{3}{e}^{{r}_{1}t}-{k}_{4}{e}^{{r}_{2}t}\\ -{k}_{5}{e}^{{r}_{1}t}-{k}_{6}{e}^{{r}_{2}t}& {k}_{1}{e}^{{r}_{1}t}+{k}_{2}{e}^{{r}_{2}t}\end{array}\right)$

where, using ${r}_{1}+{r}_{2}=1$,

$\Delta =\mathrm{det}W=\left({k}_{1}{k}_{7}-{k}_{3}{k}_{5}\right){e}^{2{r}_{1}t}+\left({k}_{2}{k}_{8}-{k}_{4}{k}_{6}\right){e}^{2{r}_{2}t}+\left({k}_{1}{k}_{8}-{k}_{3}{k}_{6}+{k}_{2}{k}_{7}-{k}_{4}{k}_{5}\right){e}^{t}$

A direct computation gives

${W}^{\prime }{W}^{-1}=\frac{1}{\Delta }\left(\begin{array}{cc}\left({k}_{1}{k}_{7}-{k}_{3}{k}_{5}\right){r}_{1}{e}^{2{r}_{1}t}+\left({k}_{2}{k}_{8}-{k}_{4}{k}_{6}\right){r}_{2}{e}^{2{r}_{2}t}+\left[\left({k}_{1}{k}_{8}-{k}_{3}{k}_{6}\right){r}_{1}+\left({k}_{2}{k}_{7}-{k}_{4}{k}_{5}\right){r}_{2}\right]{e}^{t}& \left({k}_{2}{k}_{3}-{k}_{1}{k}_{4}\right)\sqrt{5}{e}^{t}\\ \left({k}_{5}{k}_{8}-{k}_{6}{k}_{7}\right)\sqrt{5}{e}^{t}& \left({k}_{1}{k}_{7}-{k}_{3}{k}_{5}\right){r}_{1}{e}^{2{r}_{1}t}+\left({k}_{2}{k}_{8}-{k}_{4}{k}_{6}\right){r}_{2}{e}^{2{r}_{2}t}+\left[\left({k}_{2}{k}_{7}-{k}_{4}{k}_{5}\right){r}_{1}+\left({k}_{1}{k}_{8}-{k}_{3}{k}_{6}\right){r}_{2}\right]{e}^{t}\end{array}\right)$

Multiplying by $-\frac{1}{2}{e}^{-t}$ as in (24), and using $2{r}_{1}-1=\sqrt{5}$ and $2{r}_{2}-1=-\sqrt{5}$,

$P\left(t\right)=-\frac{1}{2\Delta }\left(\begin{array}{cc}\left({k}_{1}{k}_{7}-{k}_{3}{k}_{5}\right){r}_{1}{e}^{\sqrt{5}t}+\left({k}_{2}{k}_{8}-{k}_{4}{k}_{6}\right){r}_{2}{e}^{-\sqrt{5}t}+\left({k}_{1}{k}_{8}-{k}_{3}{k}_{6}\right){r}_{1}+\left({k}_{2}{k}_{7}-{k}_{4}{k}_{5}\right){r}_{2}& \left({k}_{2}{k}_{3}-{k}_{1}{k}_{4}\right)\sqrt{5}\\ \left({k}_{5}{k}_{8}-{k}_{6}{k}_{7}\right)\sqrt{5}& \left({k}_{1}{k}_{7}-{k}_{3}{k}_{5}\right){r}_{1}{e}^{\sqrt{5}t}+\left({k}_{2}{k}_{8}-{k}_{4}{k}_{6}\right){r}_{2}{e}^{-\sqrt{5}t}+\left({k}_{2}{k}_{7}-{k}_{4}{k}_{5}\right){r}_{1}+\left({k}_{1}{k}_{8}-{k}_{3}{k}_{6}\right){r}_{2}\end{array}\right)$

Since $P\left({t}_{f}\right)=0$, that is $P\left(1\right)=0$, the off-diagonal entries give

${k}_{1}{k}_{4}-{k}_{2}{k}_{3}=0$ (26)

${k}_{6}{k}_{7}-{k}_{5}{k}_{8}=0$ (27)

Subtracting the two diagonal entries of $P\left(1\right)$ (both must vanish), we obtain

${k}_{2}{k}_{7}-{k}_{4}{k}_{5}-{k}_{1}{k}_{8}+{k}_{3}{k}_{6}=0$ (28)

and, using (28) together with ${r}_{1}+{r}_{2}=1$, either diagonal entry of $P\left(1\right)$ gives

$\left({k}_{1}{k}_{7}-{k}_{3}{k}_{5}\right){r}_{1}{e}^{\sqrt{5}}+\left({k}_{2}{k}_{8}-{k}_{4}{k}_{6}\right){r}_{2}{e}^{-\sqrt{5}}+{k}_{1}{k}_{8}-{k}_{3}{k}_{6}=0$ (29)

Since we have four equations in eight unknowns, we can choose ${k}_{1},{k}_{3},{k}_{5},{k}_{7}$ as the free variables and solve for ${k}_{2},{k}_{4},{k}_{6},{k}_{8}$.

From (26) and (27),

${k}_{4}=\frac{{k}_{2}{k}_{3}}{{k}_{1}}$ ${k}_{6}=\frac{{k}_{5}{k}_{8}}{{k}_{7}}$

Substituting into (28) and using ${k}_{1}{k}_{7}-{k}_{3}{k}_{5}\ne 0$ (so that $\Delta \ne 0$),

${k}_{2}=\frac{{k}_{1}{k}_{8}}{{k}_{7}}$

Hence ${k}_{2}{k}_{8}-{k}_{4}{k}_{6}=\frac{{k}_{8}^{2}}{{k}_{7}^{2}}\left({k}_{1}{k}_{7}-{k}_{3}{k}_{5}\right)$ and ${k}_{1}{k}_{8}-{k}_{3}{k}_{6}=\frac{{k}_{8}}{{k}_{7}}\left({k}_{1}{k}_{7}-{k}_{3}{k}_{5}\right)$, so dividing (29) by ${k}_{1}{k}_{7}-{k}_{3}{k}_{5}$ and multiplying by ${k}_{7}^{2}$,

${k}_{8}^{2}{r}_{2}{e}^{-\sqrt{5}}+{k}_{7}{k}_{8}+{k}_{7}^{2}{r}_{1}{e}^{\sqrt{5}}=0$

Therefore ${k}_{8}=m{k}_{7}$ where $m=-{e}^{\sqrt{5}}$ or $m=\frac{3+\sqrt{5}}{2}{e}^{\sqrt{5}}$. The root $m=-{e}^{\sqrt{5}}$ is spurious: it makes ${e}^{{r}_{1}t}+m{e}^{{r}_{2}t}$, and hence $\mathrm{det}W$, vanish at $t=1$, so the corresponding $P\left(t\right)$ is unbounded there instead of satisfying $P\left(1\right)=0$. We therefore take $m=\frac{3+\sqrt{5}}{2}{e}^{\sqrt{5}}$.

With ${k}_{2}=m{k}_{1}$, ${k}_{4}=m{k}_{3}$, ${k}_{6}=m{k}_{5}$ and ${k}_{8}=m{k}_{7}$, we get $\Delta =\left({k}_{1}{k}_{7}-{k}_{3}{k}_{5}\right){\left({e}^{{r}_{1}t}+m{e}^{{r}_{2}t}\right)}^{2}$ and

$P\left(t\right)=-\frac{{k}_{1}{k}_{7}-{k}_{3}{k}_{5}}{2\Delta }\left(\begin{array}{cc}{r}_{1}{e}^{\sqrt{5}t}+{m}^{2}{r}_{2}{e}^{-\sqrt{5}t}+m& 0\\ 0& {r}_{1}{e}^{\sqrt{5}t}+{m}^{2}{r}_{2}{e}^{-\sqrt{5}t}+m\end{array}\right)$

The final expression is given by:

$P\left(t\right)=-\frac{{r}_{1}{e}^{\sqrt{5}t}+{m}^{2}{r}_{2}{e}^{-\sqrt{5}t}+m}{2{\left({e}^{{r}_{1}t}+m{e}^{{r}_{2}t}\right)}^{2}}I$

where I is the 2 × 2 identity.

This leads to the control that solves the problem (P)

$u\left(t\right)=\frac{{r}_{1}{e}^{\sqrt{5}t}+{m}^{2}{r}_{2}{e}^{-\sqrt{5}t}+m}{2{\left({e}^{{r}_{1}t}+m{e}^{{r}_{2}t}\right)}^{2}}{e}^{\frac{t}{2}}\left(\begin{array}{cc}1& 1\\ -1& 1\end{array}\right)x\left(t\right)$

where $m=\frac{3+\sqrt{5}}{2}{e}^{\sqrt{5}}$.
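The terminal condition can also be checked numerically (an illustrative sketch). In the substitution $P=-\frac{1}{2}{e}^{-t}{W}^{\prime }{W}^{-1}$, the condition $P\left(1\right)=0$ amounts to ${W}^{\prime }\left(1\right)=0$; taking $W=\left({e}^{{r}_{1}t}+m{e}^{{r}_{2}t}\right)I$ and solving ${r}_{1}{e}^{{r}_{1}}+m{r}_{2}{e}^{{r}_{2}}=0$ recovers $m$ and lets us verify the Riccati equation on $\left[0,1\right]$:

```python
import math

s5 = math.sqrt(5)
r1, r2 = (1 + s5) / 2, (1 - s5) / 2

# P(1) = 0 forces w'(1) = 0; solving r1 e^{r1} + m r2 e^{r2} = 0 for m:
m = -(r1 / r2) * math.exp(r1 - r2)
assert abs(m - (3 + s5) / 2 * math.exp(s5)) < 1e-9   # m = (3 + sqrt 5)/2 e^{sqrt 5}

def w(t):  return math.exp(r1 * t) + m * math.exp(r2 * t)
def wp(t): return r1 * math.exp(r1 * t) + m * r2 * math.exp(r2 * t)
def p(t):  return -0.5 * math.exp(-t) * wp(t) / w(t)   # P(t) = p(t) I

assert abs(p(1.0)) < 1e-10          # terminal condition P(1) = 0

# p solves p' = 2 e^t p^2 - 1/2 e^{-t}, the scalar form of dP/dt = P B B^T P - Q.
h = 1e-6
for t in [0.0, 0.3, 0.6, 0.9]:
    lhs = (p(t + h) - p(t - h)) / (2 * h)
    rhs = 2 * math.exp(t) * p(t) ** 2 - 0.5 * math.exp(-t)
    assert abs(lhs - rhs) < 1e-6
```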

6. Conclusions

In this paper, we applied the results obtained for the matrix Riccati equation to optimal control. We provided an explicit control for a specific optimal control problem, the linear-quadratic regulator. More complicated examples could be treated in the same way.

A natural extension of the results in this paper would be to apply them in other branches of applied mathematics, for instance non-uniform transmission lines, stochastic control and mathematical finance.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

[1] https://mathshistory.st-andrews.ac.uk/Biographies/Riccati

[2] Reid, W.T. (1980) Riccati Differential Equations. Academic Press, New York.

[3] Leitmann, G. (1981) The Calculus of Variations and Optimal Control. Plenum Press, New York & London. https://doi.org/10.1007/978-1-4899-0333-4

[4] Fraga, S., García de la Vega, J.M. and Fraga, E.S. (1999) The Schrödinger and Riccati Equations. Lecture Notes in Chemistry, Vol. 70, Springer-Verlag, Berlin.

[5] Schwabl, F. (1992) Quantum Mechanics. Springer, Berlin.

[6] Abou-Kandil, H., Freiling, G., Ionescu, V. and Jank, G. (2003) Matrix Riccati Equations in Control and Systems Theory. Systems & Control: Foundations & Applications, Birkhäuser, Basel, 299-410. https://doi.org/10.1007/978-3-0348-8081-7

[7] Kalman, R.E. and Bucy, R.S. (1961) New Results in Linear Filtering and Prediction Theory. Journal of Basic Engineering, 83, 95-108. https://doi.org/10.1115/1.3658902

[8] Boyle, P.P., Tian, W. and Guan, F. (2002) The Riccati Equation in Mathematical Finance. Journal of Symbolic Computation, 33, 343-355. https://doi.org/10.1006/jsco.2001.0508

[9] Ndiaye, M. (2022) The Riccati Equation, Differential Transform, Rational Solutions and Applications. Applied Mathematics, 13, 774-792. https://doi.org/10.4236/am.2022.139049

[10] Ince, E.L. (1956) Ordinary Differential Equations. Dover Publications, New York.

[11] Macki, J. and Strauss, A. (1981) Introduction to Optimal Control Theory. Springer-Verlag, New York.

[12] Kirk, D.E. (2004) Optimal Control Theory: An Introduction. Dover Publications, New York.

[13] Kučera, V. (1973) A Review of the Matrix Riccati Equation. Kybernetika, 9, 42-61.