
**Global Optimization for Solving Linear Non-Quadratic Optimal Control Problems**



*Journal of Applied Mathematics and Physics*, **4**, 1859-1869. doi: 10.4236/jamp.2016.410188.

1. Primal Problem

In this paper, the notation ‖·‖ denotes a norm on the space concerned. The primary goal of this paper is to present a solution to the following optimal control problem (the primal problem, for short).

min J(u) = ∫₀ᵀ [f(x(t)) + g(u(t))] dt (1.1)

subject to x′(t) = A x(t) + B u(t), x(0) = α, t ∈ [0, T] (1.2)

where the state cost f : ℝⁿ → ℝ is twice continuously differentiable and the control cost g : ℝᵐ → ℝ is twice continuously differentiable. In the control system, A and B are given matrices in ℝⁿˣⁿ and ℝⁿˣᵐ respectively, and α stands for a given vector in ℝⁿ. We assume that

(1.3)

If g is a positive definite quadratic form in u and f is a positive semi-definite quadratic form in x, then the problem is a classical linear-quadratic optimal control problem [1].
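For intuition about that classical special case: with scalar costs, say f(x) = ½Qx² and g(u) = ½Ru² (the names Q, R and all parameter values below are illustrative, not taken from this paper), the optimal control is the linear feedback u = −R⁻¹BP(t)x, where P solves the finite-horizon Riccati equation −P′ = 2AP − B²P²/R + Q with P(T) = 0. A minimal sketch, integrating the Riccati ODE backward by Euler:

```python
# Scalar finite-horizon LQ sketch: minimize (1/2) ∫ (Q x^2 + R u^2) dt
# subject to x' = A x + B u.  All parameter values are illustrative.
A, B, Q, R, T, N = 1.0, 1.0, 1.0, 1.0, 1.0, 1000
dt = T / N

P = 0.0                          # terminal condition P(T) = 0
for _ in range(N):               # integrate the Riccati ODE backward in time
    P += dt * (2 * A * P - (B * P) ** 2 / R + Q)

K0 = B * P / R                   # optimal feedback gain at t = 0: u(0) = -K0 * x(0)
print(P, K0)
```

As the horizon T grows, P(0) approaches the positive root of 2AP − B²P²/R + Q = 0 (here 1 + √2), recovering the familiar infinite-horizon gain.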

The rest of the paper is organized as follows. In Section 2, we use the Pontryagin principle to obtain a family of global optimization problems in the adjoint variable. In Section 3, we deal with the global optimization for the Hamiltonian function. In Section 4, we show that there exists an optimal control to the primal problem and present a mathematical programming formulation. In Sections 5 and 6, we discuss how to compute the global minimizer by a differential flow and present an algorithm for numerical practice, together with the description of an example.

2. Pontryagin Principle

Associated with the optimal control problem, let us introduce the Hamiltonian function

H(x, u, λ) = f(x) + g(u) + λᵀ(A x + B u) (2.1)

with the state and adjoint systems

x′(t) = A x(t) + B u(t), x(0) = α (2.2)

λ′(t) = −Aᵀ λ(t) − ∇f(x(t)), λ(T) = 0 (2.3)

We know from the Pontryagin principle [2] that if u* is an optimal control to the problem, then it is an extremal control. Associated with the state variable x* and the adjoint variable λ*, we have

x*′(t) = A x*(t) + B u*(t), x*(0) = α (2.4)

λ*′(t) = −Aᵀ λ*(t) − ∇f(x*(t)), λ*(T) = 0 (2.5)

and

H(x*(t), u*(t), λ*(t)) = min_{u ∈ ℝᵐ} H(x*(t), u, λ*(t)) (2.6)

Since in (2.6) the global optimization is carried out in the variable u over ℝᵐ for a given t, it is equivalent to deal with the following optimization (for obtaining a global minimizer):

min_{u ∈ ℝᵐ} [g(u) + λ*(t)ᵀ B u] (2.7)

Therefore we turn to consider the following optimization with respect to a given parameter vector λ ∈ ℝⁿ:

min_{u ∈ ℝᵐ} [g(u) + λᵀ B u] (2.8)

In this paper, for a given adjoint variable λ, we solve the optimization (2.8) to create a function u = h(λ). Then, in the Hamiltonian boundary value problem (2.2), (2.3), we replace the variable u with the function h(λ) and solve the following equations

x′(t) = A x(t) + B h(λ(t)), x(0) = α (2.9)

λ′(t) = −Aᵀ λ(t) − ∇f(x(t)), λ(T) = 0 (2.10)
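To make the minimization (2.8) concrete: with the purely illustrative choice g(u) = Σᵢ uᵢ⁴/4 (coercive and convex but not quadratic), the stationarity condition ∇g(u) + Bᵀλ = 0 reads uᵢ³ = −(Bᵀλ)ᵢ, so h(λ) is available in closed form, and a quick sampling check confirms it minimizes g(u) + λᵀBu:

```python
import numpy as np

def h(lmbda, B):
    """Global minimizer of g(u) + lambda^T B u for the illustrative
    g(u) = sum(u_i^4)/4.  Stationarity: u_i^3 = -(B^T lambda)_i."""
    v = B.T @ lmbda
    return -np.sign(v) * np.abs(v) ** (1.0 / 3.0)

def cost(u, lmbda, B):
    return 0.25 * np.sum(u ** 4) + lmbda @ (B @ u)

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 2))
lam = rng.standard_normal(3)
u_star = h(lam, B)

# the closed-form point should beat any randomly sampled candidate
best_random = min(cost(rng.standard_normal(2), lam, B) for _ in range(1000))
print(cost(u_star, lam, B) <= best_random)
```

Since this g is convex, the stationary point is automatically the global minimizer; for a nonconvex g the global step is exactly what Sections 3 and 5 address.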

3. Global Optimization

In this section, for a given parameter vector λ ∈ ℝⁿ, we deal with the following global optimization problem [3] [4]:

min_{u ∈ ℝᵐ} [g(u) + λᵀ B u] (3.1)

to create a function u = h(λ).

It follows from (1.3) that there exist positive numbers and r such that

(3.2)

It follows from (1.3) that there exist positive numbers and r such that, when ‖u‖ ≥ r,

(3.3)

Without loss of generality, we assume that

Lemma 3.1. For given λ, let, then the global problem is equivalent to the following global problem:

(3.4)

Proof: Let be given. Since, it is clear that

and. Then, when, we have

On the other hand, for,

But, since, when we have

Since we have shown above that, for all,

, noting that, we have

The lemma has been proved.

Consequently, by Lemma 3.1 we conclude the following lemma.

Lemma 3.2. Let be a minimizer of. Then is a minimizer of over. Moreover, and.

Remark 3.1. Since, , is the unique minimizer of over. Then it follows by Lemma 3.2 that is also the unique minimizer of over. Therefore, is uniquely determined by the equation

By elementary calculus [5], the above equation defines an implicit function of the variable, denoted by h(λ), which depends continuously on the parameter λ.
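Remark 3.1 characterizes h(λ) as the unique solution of a stationarity equation; for a smooth, strictly convex g it can be computed by Newton's method. A sketch assuming, for illustration only, g(u) = Σᵢ(uᵢ⁴/4 + uᵢ²/2), whose Hessian diag(3uᵢ² + 1) is positive definite everywhere:

```python
import numpy as np

def grad_g(u):            # gradient of the illustrative g(u) = sum(u^4/4 + u^2/2)
    return u ** 3 + u

def hess_g(u):            # Hessian of g, positive definite for all u
    return np.diag(3.0 * u ** 2 + 1.0)

def h(lmbda, B, tol=1e-12, max_iter=50):
    """Newton iteration for the stationarity equation grad_g(u) + B^T lambda = 0."""
    v = B.T @ lmbda
    u = np.zeros_like(v)
    for _ in range(max_iter):
        F = grad_g(u) + v
        if np.linalg.norm(F) < tol:
            break
        u -= np.linalg.solve(hess_g(u), F)
    return u

B = np.array([[1.0, 0.0], [0.5, 2.0]])
lam = np.array([0.3, -1.2])
u = h(lam, B)
print(np.linalg.norm(grad_g(u) + B.T @ lam))   # residual of the equation
```

The continuity of h(λ) in λ noted in the remark is what makes such an iteration a safe inner solver inside the time-stepping schemes of Sections 4 and 5.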

4. Hamiltonian Boundary Value Problem

In this section we solve the following Hamiltonian boundary value problem:

x′(t) = A x(t) + B h(λ(t)), x(0) = α (4.1)

λ′(t) = −Aᵀ λ(t) − ∇f(x(t)), λ(T) = 0 (4.2)

Equation (4.2) can be rewritten in the integral form

λ(t) = ∫ₜᵀ [Aᵀ λ(s) + ∇f(x(s))] ds (4.3)

Substituting it into Equation (4.1), we have

(4.4)

In the following we show that Equation (4.4) has a solution; together with (4.3), this yields a solution to the Hamiltonian boundary value problem (4.1), (4.2).

Since f is twice continuously differentiable on ℝⁿ, we may define

and

Let

Consider the ball centered at a (regarding a as the function constantly equal to the vector a):

For a real number such that, define an operator G, which acts on each element to produce an image satisfying (noting that the integral in (4.4) needs the information of λ on the whole interval), for,

(4.5)

while for,

By an elementary estimation we have, for,

(4.6)

while for,

which implies that. It is also clear that G is a continuous and compact mapping. Then, by the Schauder fixed-point theorem, there is an element such that. It follows that is a solution to (4.4) for. For, let

(4.7)

By a traditional approach from the classical theory of ordinary differential equations, we see that the solution can be extended to. Then, by (4.4) and (4.3), we see that is a solution to the Hamiltonian boundary value problem (4.1), (4.2). We conclude the following result.

Theorem 4.1. There exists a solution pair to Hamiltonian boundary value problem (4.1), (4.2).

Let and be a solution of the Hamiltonian boundary value problem (4.1), (4.2). Then, by the definition of the Pontryagin extremal control, we conclude that is an extremal control to the primal problem.

Remark 4.1. Moreover, noting that and, by (2.1) we see that the Hamiltonian function is convex in the state and control variables, respectively. Meanwhile, noting that does not depend on the state variable, by traditional optimal control theory we know that the extremal control is also an optimal control to the optimal control problem.

In other words, to solve the primal problem in practice, we only need to compute a solution of the following differential boundary value problem:

(4.8)

(4.9)

(4.10)

We present a numerical method for the differential boundary value Equations (4.9) and (4.10) as follows. Define a mesh by dividing the time interval evenly:

Consider solving for, with the intended approximation of. To meet the requirement on the adjoint variable (due to the boundary condition in the differential boundary value Equations (4.9) and (4.10)), we consider the following difference equation:

Solving the difference equation above, we can get the value. According to classical numerical analysis, the solution of the above difference equation converges to the solution of the differential boundary value problem (4.8)-(4.10). Apparently, we also need to compute h(λ) numerically; this is given in the next section.

5. Computing h(λ) by a Differential Flow

In this section we study how to compute h(λ). For a given parameter vector λ ∈ ℝⁿ, we solve the following global optimization problem

min_{u ∈ ℝᵐ} [g(u) + λᵀ B u] (5.1)

to create the function u = h(λ). In the following we determine the value of h(λ) by a differential flow.

Since the Hessian matrix function of is positive definite, by the classical theory of ordinary differential equations, for given, the following Cauchy initial value problem [3] [6] creates a unique flow:

(5.2)

such that

(5.3)

noting that is the minimizer of over (Lemma 3.2). To explain the uniqueness of, we refer to the fact that

for. Thus, combining (5.3), is the unique solution of the equation.
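The specific flow (5.2) comes from the canonical duality construction of [3] and is not reproduced here; as a simple stand-in with the same effect, one can follow the plain gradient flow u′ = −∇[g(u) + λᵀBu] down to the minimizer, discretized by explicit Euler (g below is the illustrative strictly convex choice g(u) = Σ(uᵢ⁴/4 + uᵢ²/2)):

```python
import numpy as np

def grad_P(u, v):
    # gradient of P(u) = sum(u^4/4 + u^2/2) + v·u, where v = B^T lambda
    return u ** 3 + u + v

B = np.array([[2.0, 0.0], [0.0, 0.5]])
lam = np.array([1.0, -3.0])
v = B.T @ lam

u = np.zeros(2)
eta = 0.05                         # Euler step for the flow u' = -grad_P(u)
for _ in range(2000):
    u -= eta * grad_P(u, v)

print(np.linalg.norm(grad_P(u, v)))   # residual of the stationarity equation
```

Because this P is strictly convex, the flow converges to the unique minimizer from any starting point, mirroring the uniqueness discussion around (5.3).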

In what follows we choose a real number such that, on,

By the Brouwer fixed-point theorem [7], there is a point such that

(5.4)

Moreover, we have

(5.5)

where the positive constant C depends only on the parameters. In what follows the symbol C will appear several times and may denote different positive constants, each depending only on the parameters.

It follows from (5.4) that

(5.6)

By (5.3) and the uniqueness of the flow, we see that

(5.7)

and that the flow can also be obtained from the following Cauchy initial value problem

(5.8)

Certainly,. Although it is hard to get and exactly, we can numerically compute another vector instead, by the following result.

Theorem 5.1. Let the flow be obtained from the following backward differential equation:

(5.9)

(5.10)

Then

(5.11)

(5.12)

where the positive constant C depends only on the parameters, and is selected to be sufficiently large to satisfy (5.4), (5.5).

Proof: When is sufficiently large, is near the origin. In a neighborhood of the origin, by (5.4), we have

and

Noting, consequently, we have

Let

we have

Thus, by (5.5),

(5.13)

where the positive constant C depends only on the parameters. Along the way, we also deduce that, noting that.

In the following we need to keep in mind that

(5.14)

By (5.13), (5.14), for sufficiently large, we have

(5.15)

noting that in this chain of inequalities the value of the constant C changes several times but depends only on given information like.

Since is constant along the flow, noting (5.9), (5.10), we have

Consequently for we have

Thus, by (5.15),

(5.16)

Further, noting that, we have

noting that, and also noting that in this chain of inequalities the constant C takes different positive values which depend only on given information like. The theorem has been proved.

Remark 5.1. Compared with, in computational practice we can solve the Cauchy initial value problem (5.9), (5.10), instead of (5.8), to get as an approximation of.

In what follows, we give an algorithm for numerically computing a discrete solution to the Hamiltonian boundary value problem (4.1), (4.2).

Algorithm 5.1.

1) Given;

2) Given;

3) Get the flow by solving the following Cauchy initial problem

4) If, stop; otherwise, go to 5);

5), go to 3).

Remark 5.2. For step 3) of the above algorithm, we present a numerical method for the Cauchy initial value problem as follows. Define a mesh by dividing the time interval evenly:

Consider solving for, with the intended approximation of. We deal with the following difference equation.
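Such a mesh-and-difference scheme for step 3)'s Cauchy problem amounts to an explicit Euler march; a generic sketch (the right-hand side F below is an illustrative stand-in, not the paper's flow equation) together with a first-order accuracy check:

```python
import numpy as np

def euler(F, u0, s0, s1, N):
    """Explicit Euler on the even mesh s0 = s_0 < s_1 < ... < s_N = s1."""
    s, u, ds = s0, np.asarray(u0, dtype=float), (s1 - s0) / N
    for _ in range(N):
        u = u + ds * F(s, u)
        s += ds
    return u

# sanity check on u' = -u, u(0) = 1: the exact value at s = 1 is e^{-1}
approx = euler(lambda s, u: -u, [1.0], 0.0, 1.0, 4000)
err = abs(approx[0] - np.exp(-1.0))
print(err)
```

The observed error is O(1/N), consistent with the convergence claim made for the difference scheme in Section 4.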

6. A Description of an Example

Let us consider solving the following optimal control problem numerically:

where the state and control variables all take values in. Let. We have, , We have the following Hamiltonian boundary value problem and a global optimization problem:

We need to solve the following differential boundary value equation:

which yields the corresponding difference equation:

Noting that and, by Algorithm 5.1 and Remark 5.2, given positive integers (properly large) and positive real numbers (properly small), , we consider solving the following difference equation:

If, then the discrete solution of an optimal control is obtained. Otherwise, let and, and solve the above difference equation again.
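The example's concrete data did not survive into this text; as an illustrative stand-in with the same structure (scalar state, A = B = 1, x(0) = 1 on [0, 1], f(x) = x²/2, g(u) = u⁴/4, hence h(λ) = −λ^{1/3}), the whole pipeline of Sections 2-5 condenses to a damped forward-backward sweep:

```python
import numpy as np

# Stand-in scalar example: minimize ∫_0^1 (x^2/2 + u^4/4) dt subject to
# x' = x + u, x(0) = 1.  Here h(lam) = -cbrt(lam) minimizes u^4/4 + lam*u.
T, N = 1.0, 2000
dt = T / N
h = lambda lam: -np.sign(lam) * np.abs(lam) ** (1.0 / 3.0)

lam = np.zeros(N + 1)
for sweep in range(300):
    x = np.empty(N + 1); x[0] = 1.0
    for k in range(N):                        # forward: x' = x + h(lam)
        x[k + 1] = x[k] + dt * (x[k] + h(lam[k]))
    new_lam = np.zeros(N + 1)                 # terminal condition lam(T) = 0
    for k in range(N, 0, -1):                 # backward: lam' = -lam - x
        new_lam[k - 1] = new_lam[k] + dt * (new_lam[k] + x[k])
    if np.max(np.abs(new_lam - lam)) < 1e-9:
        lam = new_lam
        break
    lam = 0.5 * (lam + new_lam)               # damping stabilizes the sweep

u = h(lam)
x = np.empty(N + 1); x[0] = 1.0               # re-run the state with the final u
for k in range(N):
    x[k + 1] = x[k] + dt * (x[k] + u[k])

cost = dt * np.sum(x[:-1] ** 2 / 2 + u[:-1] ** 4 / 4)
uncontrolled = (np.e ** 2 - 1) / 4            # cost of u = 0, where x(t) = e^t
print(cost, uncontrolled)
```

The computed control drives the cost well below the uncontrolled value (e² − 1)/4, and the stationarity condition u³ + λ = 0 holds by construction of h.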

Conflicts of Interest

The authors declare no conflicts of interest.

[1] Sontag, E.D. (1998) Mathematical Control Theory: Deterministic Finite Dimensional Systems. 2nd Edition, Springer, New York. http://dx.doi.org/10.1007/978-1-4612-0577-7

[2] Pontryagin, L.S. (1964) The Mathematical Theory of Optimal Processes. Pergamon Press, Oxford, UK.

[3] Zhu, J.H., Wu, D. and Gao, D. (2012) Applying the Canonical Dual Theory in Optimal Control Problems. Journal of Global Optimization, 54, 221-233.

[4] Zhu, J.H. and Zhang, X. (2008) On Global Optimizations with Polynomials. Optimization Letters, 2, 239-249. http://dx.doi.org/10.1007/s11590-007-0054-5

[5] Boothby, W.M. (2007) An Introduction to Differential Manifolds and Riemannian Geometry. Elsevier Pte Ltd., Singapore.

[6] Zhu, J.H. and Liu, G.H. (2014) Solution to Optimal Control by Canonical Differential Equation. International Journal of Control, 87, 21-24. http://dx.doi.org/10.1080/00207179.2013.819589

[7] Brown, R.F. (1988) Fixed Point Theory and Its Applications. American Mathematical Society, New York. http://dx.doi.org/10.1090/conm/072

Copyright © 2020 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.