Load Disturbance Conditions for Current Error Feedback and Past Error Feedforward State-Feedback Iterative Learning Control

Abstract

Iterative learning control (ILC) is a control technique developed to overcome periodic disturbances acting on repetitive systems. A state-feedback ILC controller has previously been designed based on the small gain theorem, and stability conditions in terms of singular values have been reported for both the past error feedforward and the current error feedback schemes. Conditions for disturbances acting on the system load, however, have been reported only for the past error feedforward case, leaving the current error feedback case as an open question. This paper derives the load disturbance condition for the current error feedback scheme and compares it, in terms of singular values, with the past error feedforward condition. The conditions found strongly support the use of past error feedforward over current error feedback.


1. Introduction

Robot manipulators can be deployed to undertake a pick-and-place operation with repeated executions of a finite-duration task. In a gantry robot application, an object is collected from a fixed location, transferred over a predefined period, and placed at another specified location. Once the job is complete, the robot returns to the starting location to repeat the same task. Thus, the objective is to follow a prescribed reference as closely as possible for as many repetitions as possible before resetting is needed. Other applications with the same mode of operation include chemical batch processes, petrochemical processes, and microelectronics manufacturing [1].

In this paper, a trial is the term used for each completed job, and the trial length defines the finite duration. The reference trajectory with finite time duration is denoted by $r(t)$ over the period $0 \le t \le T < \infty$, where $T$ denotes the trial length. After each completed trial, the information generated is available to compute the control input to be applied on the next trial.

Let the integer $k \ge 0$ denote the trial number and $y_k(t)$ the output on trial $k$, where, in this paper, attention is restricted to single-input single-output systems with an immediate generalization to multi-input multi-output (MIMO) systems. Moreover, the error on trial $k$ is $e_k(t) = r(t) - y_k(t)$.

Iterative learning control (ILC) is a control technique that has been developed specifically for systems operating in the mode described above [2]. The basic principle is to construct a sequence of control inputs $\{u_k\}$ such that the error sequence $\{e_k\}$ converges to zero trial after trial. In industrial applications, it is acceptable to design the controller such that the error converges to within a specified tolerance.

At any time instant on the current trial, information from any instant of the completed previous trial can be used, assuming all previous trial data are available; this is noncausal temporal information.

Several works have reported the development of ILC control designs [3] [4]. They trace the development of the ILC principle from the early work in [2], where a D-type ILC law of the form $u_{k+1} = u_k + \Upsilon \dot{e}_k$ was introduced, with $\Upsilon$ being the learning gain to be designed, through to more complicated designs. The papers [3] [4] review ILC designs and form a good starting point for scholars.

There are two tracks for developing ILC laws. In the first, the design does not depend on a model of the dynamics to be controlled, as in phase-lead ILC, whose control law is $u_{k+1}(p) = u_k(p) + \Upsilon e_k(p + \lambda)$, where $\lambda > 0$ is the phase-lead term and $p$ denotes a sampling instant along a trial of the sampled system. This track naturally faces cases where such control laws are insufficient to achieve the required control performance (or are incapable of controlling the dynamics), which has led to the development of model-based designs; [4] is a good starting point for the literature.
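To make the trial-to-trial mechanics of such a model-free law concrete, the following minimal Python sketch applies one phase-lead update to a stored input and error signal; the gain and lead values, and the signals themselves, are illustrative assumptions rather than values taken from any of the cited designs.

```python
import numpy as np

def phase_lead_ilc_update(u_k, e_k, gain=0.5, lead=2):
    """One trial-to-trial update of the model-free phase-lead ILC law
    u_{k+1}(p) = u_k(p) + gain * e_k(p + lead).

    u_k, e_k : length-N arrays holding the input and error of completed trial k.
    Error samples beyond the end of the trial are taken as zero.
    """
    N = len(u_k)
    e_shifted = np.zeros(N)
    e_shifted[:N - lead] = e_k[lead:]   # e_k(p + lead), zero-padded at the trial end
    return u_k + gain * e_shifted

# Example: start from a zero input and a recorded trial error (assumed data).
N = 100
u0 = np.zeros(N)
e0 = np.sin(2 * np.pi * np.arange(N) / N)   # stand-in for a measured error on trial 0
u1 = phase_lead_ilc_update(u0, e0)          # input to apply on the next trial
```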

In this paper, we produce the missing disturbance condition for current error feedback state-feedback ILC based on the framework reported in [5] and [6]. That framework assumes periodic disturbances acting on the system input, and the controller it designs explicitly incorporates current error feedback, whereas the work of [7] presented a modified framework that incorporated both past and current error feedback in designing the ILC controller. The idea behind the two designs rests on the internal model principle [8]: the delay model is isolated and a controller is introduced that stabilizes the overall system around the delay operator. In neither design, the past nor the current error case, are the disturbances acting on the system input considered.

Several studies have reported on uncertainty and disturbance issues, such as [9] [10] [11], where a robust adaptive control design was investigated for a class of nontriangular nonlinear systems with unmodeled dynamics and stochastic disturbances, and an adaptive fuzzy tracking control design problem was addressed for uncertain nonstrict-feedback nonlinear SISO systems.

[11] investigated load disturbances for state-feedback control in the duality framework for past error feedforward, whereas [7] did not consider such a problem within its framework, which might affect the quality of the response obtained. This paper extends the disturbance conditions in ILC design to the current error feedback case, alongside the previously reported past error feedforward case. A comparison is then made between the new result and the earlier one, something left for the reader to work out in [11].

The next section gives a brief background on the work of [7]. Then, the new results for past error feedforward and current error feedback in the presence of load disturbances are given. Finally, the overall conclusion and possible future investigations are stated.

2. Background

Here we revisit the ILC design introduced in [7], starting by considering a linear MIMO system $S$ with $m$ outputs, $p$ inputs, and $n$ states. The overall transfer-function matrix of the discrete linear time-invariant state-space description is $S(z) = C(zI_n - F)^{-1}\Xi + D$, where the matrices $F$, $\Xi$, $C$, and $D$ have dimensions that keep this equation valid.

Let $y(z)$ denote the system output of size m × o and $u(z)$ the system input of size p × o; then the output can be represented as $y(z) = S(z)u(z)$. It is known that, in ILC, the system operates over a single trial of fixed length and then resets to its original position to wait for the start of the next trial. Thus, a single trial of finite duration can be considered, and the system dynamics over one trial are expressed as

\[
\begin{aligned}
x_k(i+1) &= F x_k(i) + \Xi u_k(i), \qquad x_k(0) = x_0,\\
y_k(i) &= C x_k(i) + D u_k(i),
\end{aligned}
\tag{1}
\]

where $0 \le i \le N-1$ with $N$ defined as the number of samples per trial. With no loss of generality, it is acceptable to take the initial value $x_0 = 0$ because of the resetting condition. The dynamics in Equation (1) propagate in two dimensions, along the sample index within a trial and along the trial index. This is reflected in the origins of the ILC field in both the continuous-time and discrete-time settings, where the latter forms the natural basis for ILC because of its data-storing nature. Numerous recent ILC designs rely on lifting the discrete expression into a single notational form, the trial-index notation, see [4]. The lifted expression starts by introducing the input and output supervectors $u_k$ and $y_k$ in the iteration index

\[
u_k = \left[ u_k(0), u_k(1), \ldots, u_k(N-1) \right]^T
\]

\[
y_k = \left[ y_k(0), y_k(1), \ldots, y_k(N-1) \right]^T
\]

A feedback connection is used to stabilize the system along the trial, since system stability is always a requirement in ILC designs. The overall system dynamics can now be written as

\[
y_k = S u_k
\tag{2}
\]

with the lower-triangular Toeplitz process matrix $S$, whose nonzero block entries are the Markov parameters:

\[
S = \begin{bmatrix}
C\Xi & 0 & 0 & \cdots & 0\\
CF\Xi & C\Xi & 0 & \cdots & 0\\
CF^2\Xi & CF\Xi & C\Xi & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
CF^{N-1}\Xi & CF^{N-2}\Xi & CF^{N-3}\Xi & \cdots & C\Xi
\end{bmatrix}
\]
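To make the lifted representation concrete, the following Python sketch assembles the Toeplitz process matrix displayed above from the Markov parameters and cross-checks it against a per-sample simulation of (1); the numerical values of $F$, $\Xi$, and $C$ are assumptions for illustration, and, following the displayed matrix with $C\Xi$ on the diagonal, the direct-feedthrough term $D$ is omitted.

```python
import numpy as np

def lifted_process_matrix(F, Xi, C, N):
    """Lower-triangular Toeplitz matrix of Equation (2) whose block entries are
    the Markov parameters C F^j Xi, following the display above (D omitted)."""
    m, p = (C @ Xi).shape
    markov = []
    Fj_Xi = Xi.copy()
    for j in range(N):
        markov.append(C @ Fj_Xi)        # C F^j Xi
        Fj_Xi = F @ Fj_Xi
    S = np.zeros((N * m, N * p))
    for row in range(N):
        for col in range(row + 1):
            S[row * m:(row + 1) * m, col * p:(col + 1) * p] = markov[row - col]
    return S

# Illustrative second-order single-input single-output data (assumed values).
F = np.array([[0.9, 0.1], [0.0, 0.8]])
Xi = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
N = 50
S = lifted_process_matrix(F, Xi, C, N)

# Cross-check: the lifted response S u_k matches a per-sample simulation of (1).
u = np.random.randn(N, 1)
x = np.zeros((2, 1))
y = []
for i in range(N):
    x = F @ x + Xi @ u[i].reshape(1, 1)   # x_k(i+1)
    y.append((C @ x).item())              # sample matching the C Xi diagonal convention
assert np.allclose(S @ u, np.array(y).reshape(-1, 1))
```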

The reference $r(t)$ is also defined in vector form in discrete time as

\[
r = \left[ r(0), r(1), \ldots, r(N-1) \right]^T
\]

Through measuring the error, the ILC objective is to use the measured error as a forcing function added to the previous trial input to produce the next trial input, such that the system output follows the reference trajectory accurately as the trial index tends to infinity.

[12] defined a periodic signal of length $N$ in discrete-time form as

\[
\begin{aligned}
x_w(t_{k+1}) &= F_w x_w(t_k), \qquad x_w(t_0) = x_{w0},\\
w(t_k) &= C_w x_w(t_k),
\end{aligned}
\tag{3}
\]

where the $N \times N$ matrix $F_w$ is given by

\[
F_w = \begin{bmatrix}
0 & 1 & 0 & \cdots & 0\\
0 & 0 & 1 & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & 1\\
1 & 0 & 0 & \cdots & 0
\end{bmatrix}
\]

and the $1 \times N$ row vector $C_w$ as

\[
C_w = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \end{bmatrix}
\]
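A small numerical illustration of the generator (3) may help: $F_w$ is the cyclic shift matrix displayed above and $C_w$ reads out the first state, so iterating the state update replays the stored $N$ samples periodically. The pattern used below is an arbitrary assumption.

```python
import numpy as np

def periodic_generator(N):
    """State matrix F_w (N x N cyclic shift) and output row C_w (1 x N) of (3)."""
    Fw = np.roll(np.eye(N), 1, axis=1)  # ones on the superdiagonal, a single 1 bottom-left
    Cw = np.zeros((1, N))
    Cw[0, 0] = 1.0
    return Fw, Cw

# An arbitrary length-4 pattern repeats every N = 4 steps.
N = 4
Fw, Cw = periodic_generator(N)
x = np.array([3.0, 1.0, 4.0, 1.0])      # x_w(t_0): the stored period
samples = []
for _ in range(2 * N):
    samples.append((Cw @ x).item())     # w(t_k) = C_w x_w(t_k)
    x = Fw @ x                          # x_w(t_{k+1}) = F_w x_w(t_k)
print(samples)                          # [3.0, 1.0, 4.0, 1.0, 3.0, 1.0, 4.0, 1.0]
```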

The ILC problem in the state-feedback structure can be stated as follows. It is required to find a robust controller $K(z)$ (with $z$ denoting the discrete-time delay operator) for the robust periodic control problem, given an $m \times p$ transfer-function matrix $S(z)$ whose input vector consists of the plant input and a disturbance input, $u = u_S + u_w$, the output signal defined in (2), and a periodic reference signal $r(t_k) = r(t_k + N)$, $t_k = 0, \Delta T, 2\Delta T, \ldots$, with $N$ samples per period.

The objective is to design the controller $K(z)$ such that the overall closed-loop system is asymptotically stable, the tracking error $e_k = r - y_k$ tends to zero along the trial domain, and the two previous conditions hold robustly.

[7] extended the work presented in [5] [6] to design the ILC controller in several schemes. The first was with state feedback

\[
\tilde{u}(i) = K_l \begin{bmatrix} x_{l,k}(i) \\ x_k(i) \end{bmatrix}
\]

and the second was through output injection. Each case has two different stability conditions, depending on whether current error feedback or past error feedforward is used, with the following stability condition always required:

\[
\| H(z) \|_{\infty} < 1
\tag{4}
\]

In this paper we consider the state-feedback case only. Here $H(z)$ is the overall transfer function around the delay model, $G(z)$ is the overall transfer function of the system, and $S(z)$ is the plant model [5]. For the past error feedforward case, the stability condition is (4) with

\[
H(z) = \left( G(z) + S(z) \right) G(z)^{-1}
\tag{5}
\]

For the current error feedback case, the stability condition is (4) with

\[
H(z) = G(z) \left( G(z) + S(z) \right)^{-1}
\tag{6}
\]

where $G(z)$ in both cases is given by

\[
G(z) = \begin{bmatrix} D C_l & C \end{bmatrix}
\left( z I_{Np+n} - \begin{bmatrix} F_l & 0 \\ \Xi C_l & F \end{bmatrix} + \begin{bmatrix} \Xi_l \\ D \Xi_l \end{bmatrix} K_l \right)^{-1}
\begin{bmatrix} \Xi_l \\ D \Xi_l \end{bmatrix} + D D_l .
\]
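In practice, conditions (4)-(6) might be checked numerically by gridding the frequency response of $H(z)$ and inspecting its largest singular value. The sketch below does this for a hypothetical first-order realization standing in for $H(z)$; the matrices are illustrative assumptions, not the realization defined by (5), (6), or the expression for $G(z)$ above.

```python
import numpy as np

def hinf_norm_estimate(A, B, C, D, n_freq=400):
    """Estimate the H-infinity norm of a discrete-time system (A, B, C, D) by
    sampling the largest singular value of H(e^{jw}) = C (e^{jw} I - A)^{-1} B + D
    over a frequency grid."""
    n = A.shape[0]
    peak = 0.0
    for w in np.linspace(0.0, np.pi, n_freq):
        H = C @ np.linalg.solve(np.exp(1j * w) * np.eye(n) - A, B) + D
        peak = max(peak, np.linalg.svd(H, compute_uv=False)[0])
    return peak

# Hypothetical first-order state-space data standing in for H(z) of (5) or (6).
A = np.array([[0.6]])
B = np.array([[0.3]])
C = np.array([[1.0]])
D = np.array([[0.1]])
print("stability condition (4) satisfied:", hinf_norm_estimate(A, B, C, D) < 1.0)
```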

The design presented in [7] did not consider the case where a disturbance might act on the system load, and [11] considered only the case with past error feedforward. Here, we investigate the case of current error feedback and compare the condition found with that found in [11].

3. Load Disturbance Limitation in State-Feedback ILC

As a start, [11] described the system in (1), for the single-input single-output case, in terms of the load and measurement disturbances $d_k(t)$ and $n_k(t)$, respectively, as

\[
\begin{aligned}
\Psi_k(t+\delta) &= S(q) u_k(t) + d_k(t)\\
y_k(t) &= \Psi_k(t) + n_k(t), \qquad t = 0, 1, \ldots, N-1
\end{aligned}
\tag{7}
\]

where $k$ represents the iteration index and $q$ is the forward shift operator. The time delay, $\delta$, is inserted in the output equation. Without loss of generality, it is assumed that the process $S(q)$ has no delay. At the beginning of each trial, the state of the system is assumed to start from a fixed position. The number of samples in one iteration is $N + \delta$.

Now, if a control action takes place at time $t = 0$, the system responds at $t = \delta$. Thus, it is natural to control the output $\Psi_k(t)$ at times $\delta \le t \le \delta + N - 1$ using the input $u_k(t)$ at times $0 \le t \le N - 1$. The reference signal $r(t)$ is then defined over the period $\delta \le t \le \delta + N - 1$, and the control problem is to make $\Psi_k(t)$ follow $r(t)$ as closely as possible, where $r(t)$ is the same for all trials.

Accordingly, the system in (7) can be described with the control input signal $u_k(t)$ defined earlier, and the output of trial $k$ can be collected in the supervector

\[
\Psi_k = \left[ \Psi_k(\delta), \Psi_k(\delta+1), \ldots, \Psi_k(\delta+N-1) \right]^T
\tag{8}
\]

The load disturbance vector $d_k$ is defined analogously to $u_k$; the measurement disturbance vector $n_k$, the measured output vector $y_k$, and the reference vector $r$ are defined analogously to (8). The required assumptions on $d_k$ and $n_k$ are that 1) they are zero-mean, weakly stationary random variables with bounded variance; 2) they are uncorrelated with each other; and 3) they are uncorrelated between iterations. To examine the load disturbance limitation that assures system performance, we start from the stability condition (5) for the state-feedback design with past error feedforward, together with the output described in (7), to form the following chain using singular values as a more restrictive region:

\[
\left[ \sigma(G + S) < \sigma(G) \right] \times \sigma(u_k)
\]

which can be developed further as follows:

\[
\begin{aligned}
\sigma(G u_k + S u_k) &< \sigma(G u_k)\\
\sigma(G u_k + G u_{k-1} - G u_{k-1} + S u_k) &< \sigma(G u_k + G u_{k-1} - G u_{k-1})\\
\sigma(G \tilde{u}_k + G u_{k-1} + S u_k) &< \sigma(G \tilde{u}_k + G u_{k-1})\\
\sigma(\Psi_k - d_k + G u_{k-1} + \Psi_k - d_k) &< \sigma(\Psi_k - d_k + G u_{k-1})
\end{aligned}
\]

After some manipulation, the load disturbance can be isolated on one side and its effect maximized, giving the following condition:

\[
\bar{\sigma}(d_k) < \bar{\sigma}\!\left( \sum_{i=0}^{k} \Psi_i - \sum_{j=0}^{k-1} d_j - G u_0 \right) - \underline{\sigma}(G u_{k-1})
\tag{9}
\]

This condition says that the maximum singular value of the load disturbance acting on the present trial has to be less than the maximum singular value of the sum of the outputs of all trials up to and including the present one, minus the sum of the previous trial load disturbances and the initial input response, reduced by the minimum singular value of the previous trial control action. This makes the range in which a load disturbance can act on any trial $k$ very restrictive, with only a small range of variation in its maximum singular value. The second term on the right-hand side of (9) can also be rewritten as $\underline{\sigma}\!\left( \sum_{h=0}^{k-1} \Psi_h - \sum_{v=0}^{k-1} d_v - G u_0 \right)$, which turns the condition given in (9) into

\[
\bar{\sigma}(d_k) < \bar{\sigma}\!\left( \sum_{i=0}^{k} \Psi_i - \sum_{j=0}^{k-1} d_j - G u_0 \right) - \underline{\sigma}\!\left( \sum_{h=0}^{k-1} \Psi_h - \sum_{v=0}^{k-1} d_v - G u_0 \right)
\tag{10}
\]

In the case of current error feedback, the load disturbance limitation condition is derived in the same way, the starting point again being the stability condition given in (6). The load disturbance can occur at any time instant in trial $k$ and is not periodic. Thus, it must be expressed in a form that captures its directional weight so that its effect can be analysed and suppressed. This is again done through singular value analysis, where the maximum singular value representing the disturbance must be contained within the stability region. The analysis starts by considering the singular values of the following:

\[
\sigma(G) < \sigma(G + S)
\]

\[
\left[ \sigma(G) < \sigma(G + S) \right] \times \sigma(u_k)
\]

\[
\begin{aligned}
\sigma(G u_k) &< \sigma(G u_k + S u_k)\\
\sigma(G u_k + G u_{k-1} - G u_{k-1}) &< \sigma(G u_k + G u_{k-1} - G u_{k-1} + S u_k)\\
\sigma(G \tilde{u}_k + G u_{k-1}) &< \sigma(G \tilde{u}_k + G u_{k-1} + S u_k)\\
\sigma(\Psi_k - d_k + G u_{k-1}) &< \sigma(\Psi_k - d_k + G u_{k-1} + \Psi_k - d_k)
\end{aligned}
\]

\[
\bar{\sigma}\!\left( \sum_{i=0}^{k} \Psi_i - \sum_{j=0}^{k-1} d_j - G u_0 \right) - \underline{\sigma}(G u_{k-1}) < \bar{\sigma}(d_k)
\tag{11}
\]

This can be rewritten as

\[
\bar{\sigma}\!\left( \sum_{i=0}^{k} \Psi_i - \sum_{j=0}^{k-1} d_j - G u_0 \right) - \underline{\sigma}\!\left( \sum_{h=0}^{k-1} \Psi_h - \sum_{v=0}^{k-1} d_v - G u_0 \right) < \bar{\sigma}(d_k)
\tag{12}
\]

As can be seen, the resulting condition (12) states that the maximum singular value of the disturbance has to be greater than the maximum singular value of the sum of all trial outputs minus the past disturbances and the initial input response, reduced by the minimum singular value of the sum of the past outputs, past disturbances, and initial input response, while the overall quantity must still respect the stability bound of 1. This is hard to achieve when an easier alternative such as past error feedforward is available, with a better feasible region of disturbance suppression.

Overall, the result found for current error feedback, (12), does not support the use of current error feedback when an easier case is available. It also clearly points to the advantage of the past error feedforward controller, owing to its simple structure and the ease of applying the load disturbance limitation condition in comparison with the current error feedback case in state-feedback ILC design. A rough numerical illustration of the comparison is sketched below.
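The following sketch gives that rough illustration: it evaluates the common threshold that appears on the right-hand side of (10) and on the left-hand side of (12) from logged trial data. The data are randomly generated assumptions, and the singular value of a signal vector is read here as its Euclidean norm purely for illustration.

```python
import numpy as np

def disturbance_threshold(Psi, d, Gu0):
    """Common quantity on the right of (10) and the left of (12):
    sigma_bar( sum_{i=0..k} Psi_i - sum_{j=0..k-1} d_j - G u_0 )
      - sigma_underbar( sum_{h=0..k-1} Psi_h - sum_{v=0..k-1} d_v - G u_0 ),
    with singular values of signal vectors taken as Euclidean norms."""
    sum_Psi_k = np.sum(Psi, axis=0)          # trials i = 0 .. k
    sum_Psi_km1 = np.sum(Psi[:-1], axis=0)   # trials h = 0 .. k-1
    sum_d = np.sum(d, axis=0)                # trials j (= v) = 0 .. k-1
    return (np.linalg.norm(sum_Psi_k - sum_d - Gu0)
            - np.linalg.norm(sum_Psi_km1 - sum_d - Gu0))

# Randomly generated stand-in data for k = 3 completed trials of length N = 6.
rng = np.random.default_rng(0)
N, k = 6, 3
Psi = [rng.normal(size=N) for _ in range(k + 1)]    # Psi_0 .. Psi_k
d = [0.01 * rng.normal(size=N) for _ in range(k)]   # d_0 .. d_{k-1}
Gu0 = rng.normal(size=N)                            # initial input response G u_0
threshold = disturbance_threshold(Psi, d, Gu0)
print("past error feedforward (10) tolerates  ||d_k|| <", threshold)
print("current error feedback  (12) requires  ||d_k|| >", threshold)
```

With the same threshold appearing in both conditions, the past error feedforward scheme admits any disturbance below it, whereas the current error feedback scheme only tolerates disturbances above it, which is the comparison drawn above.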

4. Conclusion

Two ILC state-feedback schemes, past error feedforward and current error feedback, have been investigated with respect to load disturbance conditions. The results obtained in (10) and (12) show the advantage of using past error feedforward over current error feedback in terms of the region of disturbance suppression: the past error feedforward scheme has a wider region admitted by its disturbance limitation condition, whereas the current error feedback case has a smaller one. In the future, experimental verification would be valuable to support these results, and investigation of the output injection scheme remains an open area in which to establish stability conditions in the presence of load disturbances.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Rogers, E., Galkowski, K. and Owens, D. (2007) Control System Theory and Applications for Linear Repetitive Processes. Springer-Verlag, Berlin.
[2] Arimoto, S., Kawamura, S. and Miyazaki, F. (1984) Bettering Operation of Robots by Learning. Journal of Robotic Systems, 1, 123-140.
https://doi.org/10.1002/rob.4620010203
[3] Moore, K.L., Dahleh, M. and Bhattacharyya, S.P. (1992) Iterative Learning Control: A Survey and New Results. Journal of Robotic Systems, 9, 563-594.
https://doi.org/10.1002/rob.4620090502
[4] Bristow, D.A., Tharayil, M. and Alleyne, A.G. (2006) A Survey of Iterative Learning Control. IEEE Control Systems Magazine, 26, 96-114.
https://doi.org/10.1109/MCS.2006.1636313
[5] de Roover, D. and Bosgra, O.H. (1997) Dualization of the Internal Model Principle in Compensator and Observer Theory with Application to Repetitive and Learning Control. Proceedings of the 1997 American Control Conference, Albuquerque, 6 June 1997.
https://doi.org/10.1109/ACC.1997.609617
[6] de Roover, D., Bosgra, O.H. and Steinbuch, M. (2000) Internal Model-Based Design of Repetitive and Iterative Learning Controllers for Linear Multivariable Systems. International Journal of Control, 73, 914-929.
https://doi.org/10.1080/002071700405897
[7] Freeman, C.T., Alsubaie, M.A., Cai, Z., Rogers, E. and Lewin, P.L. (2013) A Common Setting for the Design of Iterative Learning and Repetitive Controllers with Experimental Verification. International Journal of Adaptive Control and Signal Processing, 27, 230-249.
https://doi.org/10.1002/acs.2299
[8] Francis, B.A. and Wonham, W.M. (1976) The Internal Model Principle of Control Theory. Automatica, 12, 457-465.
https://doi.org/10.1016/0005-1098(76)90006-6
[9] Li, Y., Liu, L. and Feng, G. (2018) Robust Adaptive Output Feedback Control to a Class of Non-Triangular Stochastic Nonlinear Systems. Automatica, 89, 325-332.
https://doi.org/10.1016/j.automatica.2017.12.020
[10] Tong, S., Li, Y. and Sui, S. (2016) Adaptive Fuzzy Tracking Control Design for SISO Uncertain Non-Strict Feedback Nonlinear Systems. IEEE Transactions on Fuzzy Systems, 24, 1441-1454.
https://doi.org/10.1109/TFUZZ.2016.2540058
[11] Alsubaie, M. and Rogers, E. (2018) Robustness and Load Disturbance Conditions for State Based Iterative Learning Control. Optimal Control Applications and Methods, 39, 1965-1975.
https://doi.org/10.1002/oca.2460
[12] Inoue, T., Nakano, M., Kubo, T., Matsumoto, S. and Baba, H. (1981) High Accuracy Control of a Proton Synchrotron Magnet Power Supply. IFAC Proceedings, 14, 3137-3142.
https://doi.org/10.1016/S1474-6670(17)63938-7
