Load Disturbance Conditions for Current Error Feedback and Past Error Feedforward State-Feedback Iterative Learning Control

Athari Alotaibi, Asmaa Alkandri, Muhammad Alsubaie

Electronics Engineering Technology Department, College of Technological Studies, PAAET, Educational Shewaikh, Kuwait.

**DOI:** 10.4236/ica.2021.122004

Iterative learning control (ILC) is a control technique developed to overcome periodic disturbances acting on repetitive systems. A state-feedback ILC controller was previously designed using the small gain theorem, and stability conditions were reported, in terms of singular values, for both the past error feedforward and current error feedback schemes. Disturbances acting on the system load, however, were treated only for the past error feedforward case, leaving the current error feedback case as an open question. This paper develops a comparison, in terms of singular values, between the load disturbance conditions of the two schemes. The conditions found strongly support the use of past error feedforward over current error feedback.


Alotaibi, A. , Alkandri, A. and Alsubaie, M. (2021) Load Disturbance Conditions for Current Error Feedback and Past Error Feedforward State-Feedback Iterative Learning Control. *Intelligent Control and Automation*, **12**, 65-72. doi: 10.4236/ica.2021.122004.

1. Introduction

Robot manipulators can be set up to undertake a pick-and-place operation with repeated executions of a finite-duration task. In a gantry robot application, an object is collected from a fixed location, transferred within a predefined period, and placed at another specified location. Once the job is complete, the robot returns to the starting location to repeat the same task. Thus, the objective is to follow a prescribed reference as closely as possible, for as many repetitions as possible, before resetting is needed. Other applications with the same mode of operation include chemical batch processes, petrochemical processes, and microelectronics manufacturing [1].

In this paper, each completed job is called a trial, and the trial length defines its finite duration. The reference trajectory with finite time duration is denoted by
$r\left(t\right)$ over the period
$0\le t\le T<\infty $, where *T* denotes the trial length. After each completed trial, information generated is available to compute the control input to be applied to the next trial.

Let the integer
$k\ge 0$ denote the trial number and
${y}_{k}\left(t\right)$ the output on trial *k*, where, in this paper, attention is restricted to single-input–single-output systems with an immediate generalization to multi-input multi-output systems (MIMO). Moreover, the error on trial *k* is
${e}_{k}\left(t\right)=r\left(t\right)-{y}_{k}\left(t\right)$.

Iterative learning control (ILC) is a control technique devoted especially to systems operating in the mode described earlier [2]. The basic principle is to construct a sequence of control inputs
$\left\{{u}_{k}\right\}$ such that the error sequence
$\left\{{e}_{k}\right\}$ converges to zero trial after trial. In industrial applications, it is acceptable to design the controller such that the error converges to within a specified tolerance.

At any instant on the current trial, information from any instant of the complete previous trial can be used, since all previous-trial data are assumed to be available; this amounts to access to noncausal temporal information.

Several works report the development of ILC designs [3] [4]. These trace the ILC principle from the early work in [2], where a D-type ILC law of the form ${u}_{k+1}={u}_{k}+\Upsilon {\stackrel{\dot{}}{e}}_{k}$ was introduced, with $\Upsilon $ the learning gain to be designed, through to more sophisticated designs. The surveys [3] [4] form a good starting point for scholars.
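To make the trial-to-trial mechanism concrete, the following minimal sketch applies a discrete version of the D-type update to a hypothetical first-order plant. The plant parameters, the gain `gamma` standing in for $\Upsilon $, and the reference are illustrative assumptions, and the error derivative is approximated by a forward difference along the trial.

```python
import numpy as np

# Hypothetical first-order plant x(i+1) = a*x(i) + b*u(i), y(i) = x(i),
# used only to illustrate the D-type update u_{k+1} = u_k + gamma * de_k;
# the error derivative is replaced by a forward difference along the trial.
a, b, gamma, N = 0.9, 0.5, 0.8, 50
r = np.sin(np.linspace(0.0, np.pi, N))            # illustrative reference

def run_trial(u):
    """Simulate one trial from the reset state x_k(0) = 0."""
    x, y = 0.0, np.zeros(N)
    for i in range(N):
        y[i] = x
        x = a * x + b * u[i]
    return y

u = np.zeros(N)
first_error = np.max(np.abs(r - run_trial(u)))
for k in range(100):                              # trial-to-trial learning loop
    e = r - run_trial(u)                          # e_k = r - y_k
    u[:-1] += gamma * np.diff(e)                  # u_{k+1}(i) = u_k(i) + gamma*(e_k(i+1) - e_k(i))

final_error = np.max(np.abs(r - run_trial(u)))    # reduced relative to first_error
```

Note that the update uses only previous-trial error data, so it can be computed offline between trials; convergence speed depends on the product of `gamma` and the plant's first Markov parameter.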

There are two tracks along which ILC laws are developed. In the first, the design does not depend on a model of the dynamics to be controlled; an example is the phase-lead ILC law
${u}_{k+1}\left(p\right)={u}_{k}\left(p\right)+\Upsilon {e}_{k}\left(p+\lambda \right)$, where
$\lambda >0$ is the phase-lead term and the system is sampled, with *p* denoting a sampling instant along the trial. Such laws can be insufficient to achieve the required control performance (or incapable of controlling the dynamics), which has led to the second track, model-based designs; [4] is a good starting point for the literature.
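The phase-lead update can be sketched as follows; the first-order plant, gain, and lead $\lambda =1$ are illustrative placeholders chosen only to show how the previous-trial error is shifted forward before being added to the input.

```python
import numpy as np

# Phase-lead update u_{k+1}(p) = u_k(p) + gamma * e_k(p + lam): the previous
# trial's error is shifted forward by lam samples before being added to the
# input. Plant, gain, and lead below are illustrative placeholders.
a, b, gamma, lam, N = 0.2, 1.0, 0.5, 1, 50
r = np.linspace(0.0, 1.0, N)                      # example reference, r(0) = 0

def run_trial(u):
    """Simulate one trial of a first-order plant from the reset state."""
    x, y = 0.0, np.zeros(N)
    for i in range(N):
        y[i] = x
        x = a * x + b * u[i]
    return y

u = np.zeros(N)
for k in range(100):
    e = r - run_trial(u)
    u[:N - lam] += gamma * e[lam:]                # e_k(p + lam) aligned with u_k(p)

final_error = np.max(np.abs(r - run_trial(u)))    # tends to zero trial after trial
```

The shift by `lam` compensates for the plant's input-to-output delay, which is why the law can work without an explicit plant model when the delay is known.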

In this paper, we derive the missing disturbance condition for current error feedback state-feedback ILC, based on the framework reported in [5] and [6]. That framework assumes periodic disturbances acting on the system input and explicitly incorporates current error feedback, whereas the work of [7] presented a modified framework incorporating both past and current error feedback in the ILC controller design. The idea behind the two designs rests on the internal model principle [8]: the delay model is isolated and a controller is introduced that stabilizes the overall system around the delay operator. In both designs, the past and current error cases, disturbances acting on the system load are not considered.

Several studies have addressed uncertainty and disturbance issues, such as those in [9] [10] [11], where a robust adaptive control design was investigated for a class of nontriangular nonlinear systems with unmodeled dynamics and stochastic disturbances, and an adaptive fuzzy tracking control design problem was studied for uncertain nonstrict-feedback nonlinear SISO systems.

[11] investigated load disturbances for state-feedback control in the duality framework for past error feedforward, whereas [7] did not consider this problem within the framework, which might affect the quality of the response obtained. This paper extends the disturbance conditions in ILC design to the current error feedback case, alongside the previously reported past error feedforward case. A comparison is then made between the new result and the one left implicit in [11].

The next section gives a brief background on the work of [7]. Then the new results for past error feedforward and current error feedback in the presence of load disturbance are given. Finally, the overall conclusion and possible future investigations are outlined.

2. Background

Here we review the ILC design introduced in [7], starting with a linear MIMO system
$S$ of *m* outputs, *p* inputs, and *n* states. The transfer function of the discrete linear time-invariant system in state-space form is
$S\left(z\right)=C{\left(z{I}_{n}-F\right)}^{-1}\Xi +D$, where the matrices *F*, Ξ, *C*, and *D* have dimensions that keep the equation valid.

The system output *y*(*z*) has size *m* × *o* and the system input *u*(*z*) has size *p* × *o*, so the output can be written as *y*(*z*) = *S*(*z*)*u*(*z*). In ILC, the system performs a single trial of fixed duration and then resets to its original position to await the next trial. Thus, a single trial of finite duration can be considered, and the system dynamics over one trial are

$\begin{array}{l}{x}_{k}\left(i+1\right)=F{x}_{k}\left(i\right)+\Xi {u}_{k}\left(i\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}{x}_{k}(0)={x}_{0}\\ {y}_{k}\left(i\right)=C{x}_{k}\left(i\right)+D{u}_{k}\left(i\right),\end{array}$ (1)
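A minimal sketch of one trial of Equation (1) with the resetting condition is given below; the matrices *F*, Ξ, *C*, and *D* are illustrative placeholders (two states, single input and output), not a system from the paper.

```python
import numpy as np

# One trial of Equation (1): x_k(i+1) = F x_k(i) + Xi u_k(i),
# y_k(i) = C x_k(i) + D u_k(i), with resetting condition x_k(0) = x0.
# The matrices below are illustrative placeholders.
F  = np.array([[0.8, 0.1],
               [0.0, 0.9]])
Xi = np.array([0.0, 1.0])
C  = np.array([1.0, 0.0])
D  = 0.0

def simulate_trial(u, x0=None):
    N = len(u)
    x = np.zeros(2) if x0 is None else x0.copy()
    y = np.zeros(N)
    for i in range(N):
        y[i] = C @ x + D * u[i]       # y_k(i) = C x_k(i) + D u_k(i)
        x = F @ x + Xi * u[i]         # x_k(i+1) = F x_k(i) + Xi u_k(i)
    return y

y = simulate_trial(np.ones(5))
```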

where
$0\le i\le N-1$ with *N* the number of trial samples. Without loss of generality, the initial value *x*_{0} = 0 can be taken, due to the resetting condition. The form in Equation (1) propagates in two dimensions: along the trial and from trial to trial. This structure underlies the ILC field in both the continuous and discrete time domains, the latter forming the natural setting for ILC owing to its data-storage nature. Numerous recent ILC designs rely on lifting the discrete expression into a single notational form indexed by trial, see [4]. The lifted expression starts by introducing the input and output supervectors, *u* and *y*, in the iteration index

${u}_{k}={\left[{u}_{k}\left(0\right),{u}_{k}\left(1\right),\cdots ,{u}_{k}\left(N-1\right)\right]}^{\text{T}}$

${y}_{k}={\left[{y}_{k}\left(0\right),{y}_{k}\left(1\right),\cdots ,{y}_{k}\left(N-1\right)\right]}^{\text{T}}$

A feedback connection is used to stabilize the system along the trial, since system stability is always a requirement in ILC designs. The overall system dynamics can now be written as

${y}_{k}=S{u}_{k}$ (2)

with the lower triangular Toeplitz process matrix $S$, whose lower-triangular elements are the Markov parameters:

$S=\left[\begin{array}{ccccc}C\Xi & 0& 0& \cdots & 0\\ CF\Xi & C\Xi & 0& \cdots & 0\\ C{F}^{2}\Xi & CF\Xi & C\Xi & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ C{F}^{N-1}\Xi & C{F}^{N-2}\Xi & C{F}^{N-3}\Xi & \cdots & C\Xi \end{array}\right].$
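A sketch of how the lifted matrix $S$ can be assembled from the Markov parameters, following the Toeplitz structure shown above; the state-space matrices are illustrative placeholders.

```python
import numpy as np

# Assemble the lifted process matrix S of Equation (2) from the Markov
# parameters, following the Toeplitz structure shown in the text:
# C Xi on the diagonal and C F^(i-j) Xi below it. Matrices are illustrative.
F  = np.array([[0.8, 0.1],
               [0.0, 0.9]])
Xi = np.array([[0.0],
               [1.0]])
C  = np.array([[1.0, 0.0]])
N  = 4

def lifted_matrix(F, Xi, C, N):
    S = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            S[i, j] = (C @ np.linalg.matrix_power(F, i - j) @ Xi)[0, 0]
    return S

S = lifted_matrix(F, Xi, C, N)
```

Because each diagonal of $S$ holds a single Markov parameter, only *N* matrix products are really needed; the double loop above favors clarity over efficiency.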

The reference $r\left(t\right)$ is likewise defined in vector form in discrete time as

$r={\left[r\left(0\right),r\left(1\right),\cdots ,r\left(N-1\right)\right]}^{\text{T}}$

By measuring the error, the ILC objective is to use it as a forcing function added to the previous trial input to produce the next trial input, such that the system output follows the reference trajectory accurately as the trial index tends to infinity.

[12] defined a periodic signal of length *N* in discrete time as

$\begin{array}{l}{x}_{w}\left({t}_{k+1}\right)={F}_{w}{x}_{w}\left({t}_{k}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}{x}_{w}\left({t}_{0}\right)={x}_{w0}\\ w\left({t}_{k}\right)={C}_{w}{x}_{w}\left({t}_{k}\right),\end{array}$ (3)

where the *N* × *N* matrix *F _{w}* is given by

${F}_{w}=\left[\begin{array}{ccccc}0& 1& 0& \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0& 0& 0& \cdots & 1\\ 1& 0& 0& \cdots & 0\end{array}\right]$

and the 1 × *N* row vector *C _{w}* as

${C}_{w}=\left[\begin{array}{c}\begin{array}{ccccc}1& 0& 0& \cdots & 0\end{array}\end{array}\right]$
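The cyclic shift structure of $F_w$ can be checked numerically: the state recirculates every *N* steps, so the output $w$ is *N*-periodic. The initial state below, holding one period of the signal, is an illustrative example.

```python
import numpy as np

# The cyclic shift matrix F_w of Equation (3) recirculates its state, so the
# output w(t_k) = C_w x_w(t_k) repeats with period N. The initial state
# x_w0, holding one period of the signal, is an illustrative example.
N = 4
F_w = np.roll(np.eye(N), 1, axis=1)   # ones above the diagonal, 1 in the corner
C_w = np.zeros(N)
C_w[0] = 1.0

x = np.array([1.0, 2.0, 3.0, 4.0])    # x_w0
w = []
for t in range(2 * N):
    w.append(C_w @ x)                 # w(t_k) = C_w x_w(t_k)
    x = F_w @ x                       # x_w(t_{k+1}) = F_w x_w(t_k)
```

This is the internal model of an *N*-periodic disturbance: embedding it in the loop is what allows the controller to cancel the disturbance asymptotically.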

The state-feedback ILC problem can be stated as follows. Find a robust controller
$K\left(z\right)$ (with *z* denoting the discrete-time delay operator) for the robust periodic control problem, given an *m* × *p* transfer-function matrix
$S\left(z\right)$ whose input vector consists of the plant input and a disturbance input,
$u={u}_{S}+{u}_{w}$, the output signal defined in (2), and a reference signal
$r\left({t}_{k}\right)=r\left({t}_{k+N}\right),{t}_{k}=0,\Delta T,2\Delta T,\cdots $ with a period of *N* samples.

The controller $K\left(z\right)$ must be designed such that the overall closed-loop system is asymptotically stable, the tracking error ${e}_{k}=r-{y}_{k}$ tends to zero in the trial domain, and both of these conditions hold robustly.

[7] extended the work presented in [5] [6] to design the ILC controller in several schemes. The first uses state feedback

$\stackrel{\u02dc}{u}\left(i\right)=-{K}_{l}\left[\begin{array}{c}{x}_{l,k}\left(i\right)\\ {x}_{k}\left(i\right)\end{array}\right]$

and the second uses output injection. Each case has two different stability conditions, depending on whether current error feedback or past error feedforward is used, with the following stability condition required in all cases:

$\Vert H\left(z\right)\Vert <1$ (4)

In this paper we consider the state-feedback case only. Here $H\left(z\right)$ denotes the overall transfer function around the delay model, $G\left(z\right)$ the overall transfer function of the system, and $S\left(z\right)$ the plant model [5]. For the past error feedforward case, $H\left(z\right)$ in (4) is given by

$H\left(z\right)=\left(G\left(z\right)+S\left(z\right)\right)G{\left(z\right)}^{-1}$ (5)

For the current error feedback case, $H\left(z\right)$ in (4) is given by

$H\left(z\right)=G\left(z\right){\left(G\left(z\right)+S\left(z\right)\right)}^{-1}$ (6)

where *G*(*z*) in both cases is governed by the following

$G\left(z\right)=\left[\begin{array}{c}\begin{array}{cc}D{C}_{l}& C\end{array}\end{array}\right]{\left(z{I}_{{N}_{p+n}}-\left[\begin{array}{cc}{F}_{l}& 0\\ \Xi {C}_{l}& F\end{array}\right]+\left[\begin{array}{c}{\Xi}_{l}\\ D{\Xi}_{l}\end{array}\right]{K}_{l}\right)}^{-1}\left[\begin{array}{c}{\Xi}_{l}\\ D{\Xi}_{l}\end{array}\right]+D{D}_{l}.$
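Condition (4) can be checked numerically by sampling the unit circle. In the sketch below, `G` and `S` are simple illustrative scalar transfer functions (not the closed-loop dynamics derived above); in the scalar case, $|H|$ is the singular value.

```python
import numpy as np

# Numerically check the stability condition ||H(z)|| < 1 of Equation (4) by
# sampling the unit circle. G and S are illustrative scalar transfer
# functions, not the closed-loop dynamics of the paper.
def G(z):
    return 1.0 / (z - 0.5)            # assumed stabilized-loop dynamics

def S(z):
    return 0.1 / (z - 0.2)            # assumed plant model

theta = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
z = np.exp(1j * theta)

H_past    = (G(z) + S(z)) / G(z)      # H(z) of Equation (5)
H_current = G(z) / (G(z) + S(z))      # H(z) of Equation (6)

peak_past    = np.max(np.abs(H_past))
peak_current = np.max(np.abs(H_current))
```

Note that the two candidate $H\left(z\right)$ in (5) and (6) are inverses of each other, so in this scalar sketch their magnitudes are pointwise reciprocals: at most one of the two schemes can satisfy (4) over the whole unit circle, and possibly neither.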

The design presented in [7] did not consider the case where a disturbance might act on the system load, and [11] considered only the past error feedforward case. Here we investigate the current error feedback case and compare the resulting condition with that of [11].

3. Load Disturbance Limitation in State-Feedback ILC

As a starting point, [11] described the system in (1), in the single-input single-output case, in terms of the load and measurement disturbances ${d}_{k}\left(t\right)$ and ${n}_{k}\left(t\right)$, respectively, as

$\begin{array}{l}{\Psi}_{k}\left(t+\delta \right)=S\left(q\right){u}_{k}\left(t\right)+{d}_{k}\left(t\right)\\ {y}_{k}\left(t\right)={\Psi}_{k}\left(t\right)+{n}_{k}\left(t\right),\text{\hspace{0.17em}}t=0,1,\cdots ,N-1\end{array}$ (7)

where *k* is the iteration index and *q* the forward shift operator. The time delay operator,
$\delta $, is inserted in the output equation. Without loss of generality, it is assumed that the process matrix
$S\left(q\right)$ has no delay. At the beginning of each trial, the system state is assumed to start from a fixed position. The number of samples in one iteration is
$N+\delta $.

Now, if a control action is applied at time $t=0$, the system responds at $t=\delta $. Thus it is natural to control the output ${\Psi}_{k}\left(t\right)$ at times $\delta \le t\le \delta +N-1$ using the input ${u}_{k}\left(t\right)$ at times $0\le t\le N-1$. The reference signal $r\left(t\right)$ is then defined over the period $\delta \le t\le \delta +N-1$, and the control problem is to make ${\Psi}_{k}\left(t\right)$ follow $r\left(t\right)$ as closely as possible, where $r\left(t\right)$ is the same for all trials.

Accordingly, the system in (7) can be described with the control input
${u}_{k}\left(t\right)$ defined earlier, and the output
${\Psi}_{k}\left(t\right)$ for trial *k* is defined as

${\Psi}_{k}={\left[{\Psi}_{k}\left(\delta \right),{\Psi}_{k}\left(\delta +1\right),\cdots ,{\Psi}_{k}\left(\delta +N-1\right)\right]}^{\text{T}}$ (8)

The load disturbance vector
${d}_{k}$ is defined analogously to
${u}_{k}$; the measurement disturbance
${n}_{k}$, the measured output vector
${y}_{k}$, and the reference vector *r* are defined analogously to (8). The required assumptions on
${d}_{k}$ and
${n}_{k}$ are that 1) they are zero-mean, weakly stationary random variables with bounded variance; 2) they are uncorrelated with each other; and 3) they are uncorrelated between iterations. To examine the load disturbance limitation that assures system performance, start from the stability condition described in (5) for the state-feedback design with past error feedforward, together with the output described in (7), and multiply the singular-value inequality through by the input, using the singular values as a more restrictive region:

$\left[\sigma \left(G+S\right)<\sigma \left(G\right)\right]\times {u}_{k}$

which can be further reformed to the following:

$\begin{array}{l}\sigma \left(G{u}_{k}+S{u}_{k}\right)<\sigma \left(G{u}_{k}\right)\\ \sigma \left(G{u}_{k}+G{u}_{k-1}-G{u}_{k-1}+S{u}_{k}\right)<\sigma \left(G{u}_{k}+G{u}_{k-1}-G{u}_{k-1}\right)\\ \sigma \left(G{\tilde{u}}_{k}+G{u}_{k-1}+S{u}_{k}\right)<\sigma \left(G{\tilde{u}}_{k}+G{u}_{k-1}\right)\\ \sigma \left({\Psi}_{k}-{d}_{k}+G{u}_{k-1}+{\Psi}_{k}-{d}_{k}\right)<\sigma \left({\Psi}_{k}-{d}_{k}+G{u}_{k-1}\right)\end{array}$

After some manipulation, this can be rearranged to isolate the load disturbance on one side and maximize its effect, giving the following condition:

$\bar{\sigma}\left({d}_{k}\right)<\bar{\sigma}\left({\sum}_{i=0}^{k}{\Psi}_{i}-{\sum}_{j=0}^{k-1}{d}_{j}-G{u}_{0}\right)-\underline{\sigma}\left(G{u}_{k-1}\right)$ (9)

This condition states that the maximum singular value of the load disturbance acting on the present trial must be less than the maximum singular value of the sum of all previous trial outputs, minus the sum of previous trial load disturbances and the initial input response, reduced further by the minimum singular value of the last trial control action. This makes the range in which a load disturbance can act on any trial *k* very restrictive, with a small range of variation in its maximum singular value. The second part of the right-hand side of (9) can also be rewritten as
$\underline{\sigma}\left({\sum}_{h=0}^{k-1}{\Psi}_{h}-{\sum}_{v=0}^{k-1}{d}_{v}-G{u}_{0}\right)$, which turns the condition in (9) into

$\bar{\sigma}\left({d}_{k}\right)<\bar{\sigma}\left({\sum}_{i=0}^{k}{\Psi}_{i}-{\sum}_{j=0}^{k-1}{d}_{j}-G{u}_{0}\right)-\underline{\sigma}\left({\sum}_{h=0}^{k-1}{\Psi}_{h}-{\sum}_{v=0}^{k-1}{d}_{v}-G{u}_{0}\right)$ (10)
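The right-hand side of (10) can be evaluated directly from trial data. In the sketch below, the per-trial vectors are illustrative; for a vector, the only singular value is its 2-norm, so the maximum and minimum singular values coincide.

```python
import numpy as np

# Evaluate the right-hand side of condition (10) from recorded trial data.
# For a vector, the only singular value is its 2-norm. Data are illustrative.
def bound_10(Psi, d, Gu0):
    """Psi: outputs of trials 0..k; d: load disturbances of trials 0..k-1."""
    k = len(Psi) - 1
    first  = sum(Psi) - sum(d) - Gu0          # sums over i = 0..k and j = 0..k-1
    second = sum(Psi[:k]) - sum(d) - Gu0      # sums over h, v = 0..k-1
    return np.linalg.norm(first) - np.linalg.norm(second)

Psi = [np.array([1.0, 0.0]), np.array([1.0, 0.0])]   # example trial outputs
d   = [np.array([0.0, 0.0])]                         # example past disturbance
Gu0 = np.zeros(2)                                    # initial input response

b = bound_10(Psi, d, Gu0)     # condition (10) admits any d_k with norm below b
```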

In the case of current error feedback, the load disturbance limitation condition is derived in the same way, the starting point again being the stability condition given in (6). The load disturbance can occur at any time instant in trial *k* and is not periodic; thus it must be expressed in a form that carries its directional weight, so that its effect can be analysed and suppressed. This is again done through singular value analysis, where the maximum singular value representing the disturbance must be contained in the stability region. The analysis starts by considering the singular values of the following:

$\sigma \left(G\right)<\sigma \left(G+S\right)$

$\left[\sigma \left(G\right)<\sigma \left(G+S\right)\right]\times {u}_{k}$

$\sigma \left(G{u}_{k}\right)<\sigma \left(G{u}_{k}+S{u}_{k}\right)$

$\sigma \left(G{u}_{k}+G{u}_{k-1}-G{u}_{k-1}\right)<\sigma \left(G{u}_{k}+G{u}_{k-1}-G{u}_{k-1}+S{u}_{k}\right)$

$\sigma \left(G{\tilde{u}}_{k}+G{u}_{k-1}\right)<\sigma \left(G{\tilde{u}}_{k}+G{u}_{k-1}+S{u}_{k}\right)$

$\sigma \left({\Psi}_{k}-{d}_{k}+G{u}_{k-1}\right)<\sigma \left({\Psi}_{k}-{d}_{k}+G{u}_{k-1}+{\Psi}_{k}-{d}_{k}\right)$

$\bar{\sigma}\left({\sum}_{i=0}^{k}{\Psi}_{i}-{\sum}_{j=0}^{k-1}{d}_{j}-G{u}_{0}\right)-\underline{\sigma}\left(G{u}_{k-1}\right)<\bar{\sigma}\left({d}_{k}\right)$ (11)

And this can be rewritten as

$\bar{\sigma}\left({\sum}_{i=0}^{k}{\Psi}_{i}-{\sum}_{j=0}^{k-1}{d}_{j}-G{u}_{0}\right)-\underline{\sigma}\left({\sum}_{h=0}^{k-1}{\Psi}_{h}-{\sum}_{v=0}^{k-1}{d}_{v}-G{u}_{0}\right)<\bar{\sigma}\left({d}_{k}\right)$ (12)
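The contrast between (10) and (12) can be illustrated on the same synthetic trial data: the past error feedforward condition bounds the admissible disturbance size from above, while the current error feedback condition bounds it from below, so a small disturbance that satisfies (10) violates (12). All data below are illustrative placeholders.

```python
import numpy as np

# Contrast conditions (10) and (12) on the same synthetic trial data (k = 1):
# (10) bounds the disturbance norm from ABOVE, (12) from BELOW.
Psi = [np.array([1.0, 0.5]), np.array([0.9, 0.6])]   # outputs of trials 0, 1
d   = [np.array([0.05, 0.0])]                        # load disturbance, trial 0
Gu0 = np.array([0.1, 0.1])                           # initial input response

rhs = (np.linalg.norm(sum(Psi) - sum(d) - Gu0)       # shared right-hand side
       - np.linalg.norm(Psi[0] - d[0] - Gu0))

d_k = np.array([0.02, 0.01])                         # candidate disturbance
past_ok    = np.linalg.norm(d_k) < rhs               # condition (10)
current_ok = np.linalg.norm(d_k) > rhs               # condition (12)
```

In this sketch the feedforward scheme admits the whole interval of disturbance norms below `rhs`, whereas the feedback scheme only admits disturbances larger than that threshold, mirroring the comparison drawn in the text.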

As can be seen, the resulting condition (12) requires the maximum singular value of the disturbance to be greater than the difference between the maximum singular value of the sum of all trial outputs (less past disturbances and the initial input response) and the minimum singular value of the corresponding sums over past trials. In other words, (12) bounds the admissible disturbance from below rather than from above. This is hard to work with when an easier alternative is available, namely the past error feedforward scheme, with its more feasible region of disturbance suppression.

Overall, the result found for current error feedback, (12), does not support its use when the simpler alternative is present. It clearly points to the advantage of the past error feedforward controller, owing to its simple structure and the ease of applying the load disturbance limitation conditions, in comparison with the current error feedback case in state-feedback ILC design.

4. Conclusion

Two state-feedback ILC schemes, past error feedforward and current error feedback, have been investigated with respect to load disturbance conditions. The results obtained in (10) and (12) show the advantage of past error feedforward over current error feedback in terms of the region of disturbance suppression: the past error feedforward scheme has a wider admissible region under the load disturbance limitation condition, whereas the current error feedback scheme has a narrower one. In the future, experimental verification would be valuable to support these results, and investigation of the output injection scheme remains an open area for establishing its stability conditions in the presence of load disturbance.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

[1] Rogers, E., Galkowski, K. and Owens, D. (2007) Control System Theory and Applications for Linear Repetitive Processes. Springer-Verlag, Berlin.

[2] Arimoto, S., Kawamura, S. and Miyazaki, F. (1984) Bettering Operation of Robots by Learning. Journal of Robotic Systems, 1, 123-140. https://doi.org/10.1002/rob.4620010203

[3] Moore, K.L., Daleh, M. and Battacharrya, S.P. (1992) Iterative Learning Control: A Survey and New Results. Journal of Robotic Systems, 9, 563-594. https://doi.org/10.1002/rob.4620090502

[4] Bristow, D.A., Tharayil, M. and Alleyne, A.G. (2006) A Survey of Iterative Learning Control. IEEE Control Systems Magazine, 26, 96-114. https://doi.org/10.1109/MCS.2006.1636313

[5] DeRover, D. and Bosgra, O.H. (1997) Dualization of the Internal Model Principle in Compensator and Observer Theory with Application to Repetitive and Learning Control. Proceedings of the 1997 American Control Conference, Albuquerque, 6 June 1997. https://doi.org/10.1109/ACC.1997.609617

[6] DeRover, D., Bosgra, O.H. and Steinbuch, M. (2000) Internal Model-Based Design of Repetitive and Iterative Learning Controllers for Linear Multivariable Systems. International Journal of Control, 73, 914-929. https://doi.org/10.1080/002071700405897

[7] Freeman, C.T., Alsubaie, M.A., Cai, Z., Rogers, E. and Lewin, P.L. (2013) A Common Setting for the Design of Iterative Learning and Repetitive Controllers with Experimental Verification. International Journal of Adaptive Control and Signal Processing, 27, 230-249. https://doi.org/10.1002/acs.2299

[8] Francis, B.A. and Wonham, W.M. (1976) The Internal Model Principle of Control Theory. Automatica, 12, 457-465. https://doi.org/10.1016/0005-1098(76)90006-6

[9] Li, Y., Liu, L. and Feng, G. (2018) Robust Adaptive Output Feedback Control to a Class of Non-Triangular Stochastic Nonlinear Systems. Automatica, 89, 325-332. https://doi.org/10.1016/j.automatica.2017.12.020

[10] Tong, S., Li, Y. and Sui, S. (2016) Adaptive Fuzzy Tracking Control Design for SISO Uncertain Non-Strict Feedback Nonlinear Systems. IEEE Transactions on Fuzzy Systems, 24, 1441-1454. https://doi.org/10.1109/TFUZZ.2016.2540058

[11] Alsubaie, M. and Rogers, E. (2018) Robustness and Load Disturbance Conditions for State Based Iterative Learning Control. Optimal Control Applications and Methods, 39, 1965-1975. https://doi.org/10.1002/oca.2460

[12] Inoue, T., Nakano, M., Kubo, T., Matsumoto, S. and Baba, H. (1981) High Accuracy Control of a Proton Synchrotron Magnet Power Supply. IFAC Proceedings Volumes, 14, 3137-3142. https://doi.org/10.1016/S1474-6670(17)63938-7


Copyright © 2023 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.