Asymptotic Evaluations of the Stability Index for a Markov Control Process with the Expected Total Discounted Reward Criterion

Jaime Eduardo Martínez-Sánchez

Departamento de Ciencias del Tecnológico de Monterrey, Campus Monterrey (ITESM-CM), Ciudad de Monterrey, N.L., México.

**DOI: **10.4236/ajor.2021.111004


In this work, for a control consumption-investment process with the discounted reward optimization criterion, a numerical estimate of the stability index is made. Using explicit formulas for the optimal stationary policies and for the value functions, the stability index is explicitly calculated, and its asymptotic behavior as the discount coefficient approaches 1 is investigated through numerical experiments and statistical techniques. The results obtained define the conditions under which an approximate optimal stationary policy can be used to control the original process.

Keywords

Control Consumption-Investment Process, Discrete-Time Markov Control Process, Expected Total Discounted Reward, Probabilistic Metrics, Stability Index Estimation

Share and Cite:

Martínez-Sánchez, J. (2021) Asymptotic Evaluations of the Stability Index for a Markov Control Process with the Expected Total Discounted Reward Criterion. *American Journal of Operations Research*, **11**, 62-85. doi: 10.4236/ajor.2021.111004.

1. Introduction

In a standard way (see [1] [2] for definitions), let M be a discrete-time Markov control process with infinite horizon (also called a Markov decision process) and let $\stackrel{\u02dc}{\text{M}}$ be its approximation. We will use the performance criterion (objective function) called the expected total discounted reward. Suppose that the optimal control problem for $\stackrel{\u02dc}{\text{M}}$ has a solution, that is, we can find an optimal policy ( ${\stackrel{\u02dc}{f}}_{\ast}$ ) for the approximate process $\stackrel{\u02dc}{\text{M}}$. Now, if for some reason (some of these causes are discussed later) it is not possible to find an optimal policy for the original process M, we could use the policy ( ${\stackrel{\u02dc}{f}}_{\ast}$ ) to control the original process M. The use of such an approximation causes a reduction in the total discounted reward; this reduction is measured by the stability index (Δ), see [3] [4] [5] for its definition. The importance of this stability index is that it allows us to calibrate the use of ( ${\stackrel{\u02dc}{f}}_{\ast}$ ) to control the original process M.

Clearly, if this stability index is very high ( $\text{\Delta}\to \infty $ ), it is not optimal to use the optimal policy ( ${\stackrel{\u02dc}{f}}_{\ast}$ ) to control the process M; on the other hand, if this stability index is low ( $\text{\Delta}\to 0$ ), then the use of this approximation is valid.

In the available literature, both the study and the calculation of the stability index have been carried out from a theoretical approach in different ways: with the application of contractive operators, see for example [6] [7] [8]; with the use of certain ergodicity conditions, see [9] [10] [11]; and with the application of different probabilistic metrics (see [12] for definitions of the different kinds of probabilistic metrics); for example, in [9] the total variation metric is used, in [6] the Kantorovich metric, and in [7] and [8] the Prokhorov metric.

The results obtained in all the papers mentioned above are upper bounds for the stability index, expressed as functions of certain parameters and some probabilistic metric, that is

$\text{\Delta}\le \mathcal{K}\mu \left(\cdot ,\cdot \right)$, (1)

where
$\mathcal{K}$ is an explicit constant and *μ* is a certain probabilistic metric.

Clearly, the discount factor (*α*) involved in the optimization criterion also appears in the explicit constant
$\mathcal{K}$ of inequality (1). Our goal is to determine the behavior of the stability index as a function of (
$1-\alpha $ ) when the discount factor tends to 1 (
$\alpha \uparrow 1$ ).

Unlike the theoretical study of the stability index as presented in inequality (1), in this work, the stability index will be studied with a more applied perspective.

In this work, a Markov control process about consumption-investment is presented (with expected total discounted reward), for which the stability index is explicitly obtained and later we study its asymptotic behavior when the discount factor tends to 1. These asymptotic evaluations for the stability index will be carried out using techniques statistics; as mentioned above, our goal is to measure the sensitivity of the stability index as a function of ( $1-\alpha $ ) when ( $\alpha \uparrow 1$ ).

To achieve the above, instead of using inequality (1), we will use statistical techniques to estimate the following model:

$\text{\Delta}\left(1-\alpha \right)=\frac{\mathcal{W}}{{\left(1-\alpha \right)}^{\kappa}}$, $\mathcal{W}\in \mathbb{R}$, $\kappa \ge 1$, (2)

where
$\mathcal{W}$ and *κ* are the (unknown) parameters of the model, estimable from simulated data of the discount factor *α* using the simple linear regression technique. From Equation (2), we will say that the stability index is of order
$-\kappa $ with respect to (
$1-\alpha $ ) and we will express this as
$\text{\Delta}\left(1-\alpha \right)~\mathcal{M}{\left(1-\alpha \right)}^{-\kappa}$.
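The estimation of model (2) reduces to simple linear regression on logarithms, since (for $\mathcal{W}>0$) it gives $\mathrm{ln}\text{\Delta}=\mathrm{ln}\mathcal{W}-\kappa \mathrm{ln}\left(1-\alpha \right)$. The following sketch illustrates the procedure on synthetic data; the "true" values of $\mathcal{W}$ and *κ* and the noise level are illustrative assumptions, not results of this paper:

```python
import numpy as np

# Estimate W and kappa in  Delta(1-alpha) = W / (1-alpha)^kappa  by ordinary
# least squares on the log-linear form ln Delta = ln W - kappa * ln(1-alpha).
rng = np.random.default_rng(0)

true_W, true_kappa = 0.5, 1.75           # assumed "true" parameters
alpha = np.linspace(0.90, 0.999, 50)     # discount factors approaching 1
delta = true_W / (1 - alpha) ** true_kappa
delta *= np.exp(rng.normal(0.0, 0.05, alpha.size))  # multiplicative noise

# Simple linear regression of ln(delta) on ln(1 - alpha)
slope, intercept = np.polyfit(np.log(1 - alpha), np.log(delta), 1)
kappa_hat, W_hat = -slope, np.exp(intercept)
print(kappa_hat, W_hat)   # recovers the assumed parameters up to noise
```

The same fit applied to the stability-index values computed in Section 3 yields the order estimates reported later.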

Now, if
$\alpha \uparrow 1$, then for high values of *κ* the stability index given in Equation (2) tends to increase rapidly, which indicates that it is not optimal to use the policy
${\stackrel{\u02dc}{f}}_{\ast}$ to control the original process M.

The numerical experiments carried out in this work have the goal of estimating the sensitivity *κ* of the stability index given in Equation (2) when
$\alpha \uparrow 1$. These asymptotic evaluations will give us information to answer the question posed above. In the rest of this document, we will refer to this sensitivity (*κ*) as the order of Δ, indistinctly.

As far as our literature review goes, no numerical or simulation studies were found that use statistical techniques to evaluate the order of the stability index with respect to the discount factor.

The results obtained in this work using the simple linear regression technique depend on the value of a parameter involved in the discounted reward function used; nevertheless, the results clearly show that when
$\alpha \uparrow 1$, the stability index as a function of (
$1-\alpha $ ) tends to increase rapidly, so it is not recommended to use an approximate optimal policy (
${\stackrel{\u02dc}{f}}_{\ast}$ ) to control the original model M. The results also suggest that the choice of the value of the parameter used in the reward function, as well as the value of the discount factor, is very important to validate the use of the optimal policy
${\stackrel{\u02dc}{f}}_{\ast}$ to control M. From the estimates of *κ*, the largest order obtained was −1.75, *i.e.*,
$\text{\Delta}\left(1-\alpha \right)~\mathcal{M}{\left(1-\alpha \right)}^{-1.75}$, although, from Equation (2), it would seem natural that the best possible order should be at most
$\mathcal{M}{\left(1-\alpha \right)}^{-1}$.

Finally, we would like to comment on the reasons why we propose the model given in (2) for the asymptotic study of the stability index.

In [6] [7] and [8], the stability index is studied under the expected total discounted cost criterion, and the results found are stability inequalities such as the one given in (1). Furthermore, the constant
$\mathcal{K}$ involved in inequality (1) is in all cases an explicit function, inversely proportional to the term (
$1-\alpha $ ); for example, in [6] it is found that
$\Delta ~\mathcal{M}{\left(1-\alpha \right)}^{-2}$ using the Kantorovich metric, while in [7] it is obtained that
$\Delta ~\mathcal{M}{\left(1-\alpha \right)}^{-2}$ with the total variation metric, and [8] shows a result in which
$\Delta ~\mathcal{M}{\left(1-\alpha \right)}^{-3}$ using the Prokhorov metric. So, given that in this work a control process is studied with the expected total discounted reward criterion, and based on the aforementioned results, it seems natural to propose the model given in Equation (2) for the study of the asymptotic evaluations of the stability index. In [9] [10] and [11] there are also stability inequalities like the one given in inequality (1), but under the average-cost criterion; however, in these papers the stability index presents an order of
$\mathcal{M}{\left(1-\delta \right)}^{-\gamma}$, where *δ* is the ergodicity parameter and
$\gamma \in \mathbb{R}$.

This work is organized as follows. In Section 2, a brief description of Markov control models (also called Markov decision processes) is presented, as well as some well-known results for the discounted optimal control problem with bounded reward; in Section 2.1, we present the problem of estimating the stability index as well as the assumptions that guarantee the existence of optimal solutions for the original process (M) and the approximate process ( $\stackrel{\u02dc}{\text{M}}$ ), respectively. In Section 3, the control consumption-investment process with which we will work is presented, while in Section 3.1 its stability index is explicitly obtained; in Section 3.2, the results of the asymptotic evaluation of the stability index are presented. Finally, in Section 4, the conclusions of this work are presented, as well as some proposed future research.

2. The Discounted Reward Criterion

For a topological space
$\left(\mathfrak{X},\tau \right)$,
$\mathfrak{B}\left(\mathfrak{X}\right)$ denotes the Borel *σ*-algebra generated by the topology *τ* and measurability will always mean Borel measurability. Moreover,
$M\left(\mathfrak{X}\right)$ is the class of measurable functions on
$\mathfrak{X}$ whereas
${M}_{b}\left(\mathfrak{X}\right)$ is the subspace of bounded measurable functions endowed with the supremum norm given as
${\Vert u\Vert}_{\infty}={\mathrm{sup}}_{x\in \mathfrak{X}}\left|u\left(x\right)\right|$,
$u\in {M}_{b}\left(\mathfrak{X}\right)$. The subspace of bounded continuous functions is denoted by
${\mathcal{C}}_{b}\left(\mathfrak{X}\right)$. For a subset
$\mathcal{B}\subseteq \mathfrak{X}$,
${\mathbb{I}}_{\mathcal{B}}$ stands for the indicator function of
$\mathcal{B}$, *i.e.*,
${\mathbb{I}}_{\mathcal{B}}\left(x\right)=1$ for
$x\in \mathcal{B}$ and
${\mathbb{I}}_{\mathcal{B}}\left(x\right)=0$ for
$x\notin \mathcal{B}$. A Borel space
$\mathcal{Y}$ is a measurable subset of a complete separable metric space endowed with the inherited metric.

Let

$M=\left(\mathcal{X},\mathcal{A},\left\{\mathcal{A}\left(x\right):x\in \mathcal{X}\right\},r,\mathcal{Q}\right)$ (3)

be the standard Markov control model (see [1] [13] for definitions). It is thought of as a model of a controlled stochastic process $\left\{\left({x}_{n},{a}_{n}\right)\right\}$, where the state process $\left\{{x}_{n}\right\}$ takes values in the Borel space $\mathcal{X}$ and the control process $\left\{{a}_{n}\right\}$ takes values in the Borel space $\mathcal{A}$. The controlled process evolves as follows: at each time $n\in {\mathbb{N}}_{0}=\left\{0,1,2,\cdots \right\}$, the controller observes the system in some state ${x}_{n}=x$ and chooses a control ${a}_{n}=a$ from the admissible control subset $\mathcal{A}\left(x\right)$, which is assumed to be a Borel subset of $\mathcal{A}$. It is also assumed that the set of admissible pairs $\mathbb{K}:=\left\{\left(x,a\right):x\in \mathcal{X},a\in \mathcal{A}\left(x\right)\right\}$ belongs to $\mathfrak{B}\left(\mathcal{X}\times \mathcal{A}\right)$. Then, the controller receives a reward $r\left(x,a\right)$, where $r$ is a real-valued Borel measurable function defined on $\mathbb{K}$. Moreover, the controlled system moves to a new state ${x}_{n+1}={x}^{\prime}$ according to the distribution $\mathcal{Q}\left(\cdot |x,a\right)$, where $\mathcal{Q}$ is a stochastic kernel on $\mathcal{X}$ given $\mathbb{K}$; that is, $\mathcal{Q}\left(\cdot |x,a\right)$ is a probability measure on $\mathcal{X}$ for each pair $\left(x,a\right)\in \mathbb{K}$, and $\mathcal{Q}\left(\mathcal{B}|\cdot ,\cdot \right)$ is a Borel measurable function on $\mathbb{K}$ for each Borel subset $\mathcal{B}$ of $\mathcal{X}$. Then, the controller chooses a new control ${a}_{n+1}={a}^{\prime}\in \mathcal{A}\left({x}^{\prime}\right)$, receives a reward $r\left({x}^{\prime},{a}^{\prime}\right)$, and so on.

Let
${\mathbb{H}}_{n}:={\mathbb{K}}^{n}\times \mathcal{X}$ for
$n\in {\mathbb{N}}_{0}$ and
${\mathbb{H}}_{0}=\mathcal{X}$. Observe that a generic element of
${\mathbb{H}}_{n}$ has the form
${h}_{n}=\left({x}_{0},{a}_{0},{x}_{1},{a}_{1},\cdots ,{x}_{n-1},{a}_{n-1},{x}_{n}\right)$ where
$\left({x}_{k},{a}_{k}\right)\in \mathbb{K}$ for
$k=0,1,\cdots ,n-1$ and
${x}_{n}\in \mathcal{X}$. A control policy is a sequence
$\pi =\left\{{\pi}_{n}\right\}$ where
${\pi}_{n}\left(\cdot ,\cdot \right)$ is a stochastic kernel on
$\mathcal{A}$ given
${\mathbb{H}}_{n}$ satisfying the constraint
${\pi}_{n}\left(\mathcal{A}\left({x}_{n}\right)|{h}_{n}\right)=1$ for all
${h}_{n}\in {\mathbb{H}}_{n}$,
$n\in {\mathbb{N}}_{0}$. Now, let
$\mathbb{F}$ be the class of all measurable functions
$f:\mathcal{X}\to \mathcal{A}$ such that
$f\left(x\right)\in \mathcal{A}\left(x\right)$ for each
$x\in \mathcal{X}$. A control policy
$\pi =\left\{{\pi}_{n}\right\}$ is said to be (deterministic) stationary if there exists
$f\in \mathbb{F}$ such that the measure
${\pi}_{n}\left(\cdot |{h}_{n}\right)$ is concentrated at
$f\left({x}_{n}\right)$ for each
${h}_{n}\in {\mathbb{H}}_{n}$ and
$n\in {\mathbb{N}}_{0}$. Following a standard convention, the stationary policy π is identified with the selector *f*. The class of all policies is denoted by Π and the class of all stationary policies is identified with the class
$\mathbb{F}$.

Let
$\Omega :={\left(\mathcal{X}\times \mathcal{A}\right)}^{\infty}$ be the canonical sample space and
$\mathfrak{F}$ the product *σ*-algebra. For each policy
$\pi =\left\{{\pi}_{n}\right\}\in \Pi $ and “initial” state
${x}_{0}\in \mathcal{X}$ there exists a probability measure
${\mathcal{P}}_{x}^{\pi}$ on the measurable space
$\left(\Omega ,\mathfrak{F}\right)$ that governs the evolution of the controlled process
$\left\{\left({x}_{n},{a}_{n}\right)\right\}$.

The expected total discounted reward criterion is given as

${\mathcal{R}}_{\alpha}\left(x,\pi \right):={\mathbb{E}}_{x}^{\pi}{\displaystyle {\sum}_{t=0}^{\infty}{\alpha}^{t}r\left({x}_{t},{a}_{t}\right)}$, (4)

where the discount factor $\alpha \in \left(0,1\right)$ is fixed and ${\mathbb{E}}_{x}^{\pi}$ denotes the expectation operator with respect to the probability measure ${\mathcal{P}}_{x}^{\pi}$.

The optimal control problem is to find a control policy ${\pi}^{*}\in \Pi $ (if exists) such that

${\mathcal{R}}_{\alpha}^{*}\left(x\right):={\mathcal{R}}_{\alpha}\left(x,{\pi}^{*}\right):={\mathrm{sup}}_{\pi \in \Pi}{\mathcal{R}}_{\alpha}\left(x,\pi \right)$, (5)

for all $x\in \mathcal{X}$.

The policy ${\pi}^{*}$ is called discounted optimal policy, while ${\mathcal{R}}_{\alpha}^{*}$ is called the discounted optimal value function. Later we will impose conditions that guarantee the finiteness of the value function ${\mathcal{R}}_{\alpha}^{*}$ and the existence of an optimal policy ${\pi}^{*}$.

2.1. The Stability Index and the Problem of Its Estimation

The problem of (quantitative) stability estimation ("continuity" or "robustness") arises when there is uncertainty about the stochastic kernel $\mathcal{Q}\left(\cdot |x,a\right)$ defined in the standard Markov control model M (see model (3)). The "original" task of the controller consists in the search for the optimal policy ${\pi}^{*}$ (see Equation (5)). In many applications this task cannot be fulfilled directly due to one of the following causes:

1) Frequently $\mathcal{Q}\left(\cdot |x,a\right)$ or some of its parameters are unknown to the controller, and this stochastic kernel is estimated using some statistical procedures. With the results of these estimates, another stochastic kernel $\stackrel{\u02dc}{\mathcal{Q}}\left(\cdot |x,a\right)$ is generated that is interpreted as an accessible approximation to the unknown $\mathcal{Q}\left(\cdot |x,a\right)$.

2) There are situations where $\mathcal{Q}\left(\cdot |x,a\right)$ is known but too complicated to have any hope of solving the control policy optimization problem. In such cases, $\mathcal{Q}\left(\cdot |x,a\right)$ is sometimes replaced by a “theoretical approximation” $\stackrel{\u02dc}{\mathcal{Q}}\left(\cdot |x,a\right)$, which results in a controllable process with a simpler structure.

We assume that
$\mathcal{Q}\left(\cdot |x,a\right)$ is not available to the controller and it is substituted by a given approximating stochastic kernel
$\stackrel{\u02dc}{\mathcal{Q}}\left(\mathcal{B}|x,a\right)$,
$x\in \mathcal{X}$,
$a\in \mathcal{A}\left(x\right)$,
$\mathcal{B}\in \mathfrak{B}\left(\mathcal{X}\right)$. The "approximating" Markov process governed by
$\stackrel{\u02dc}{\mathcal{Q}}$ will be denoted by
$\left\{{\stackrel{\u02dc}{x}}_{t}\right\}\equiv \left\{{\stackrel{\u02dc}{x}}_{t},t=0,1,\cdots \right\}$, *i.e.*, let

$\stackrel{\u02dc}{\text{M}}=\left(\mathcal{X},\mathcal{A},\left\{\mathcal{A}\left(\stackrel{\u02dc}{x}\right):\stackrel{\u02dc}{x}\in \mathcal{X}\right\},\stackrel{\u02dc}{r},\stackrel{\u02dc}{\mathcal{Q}}\right)$, (6)

be the "approximate" version of the Markov control model given in model (3).

Changing
${x}_{t}$ for
${\stackrel{\u02dc}{x}}_{t}$ in Equation (4), we get
${\stackrel{\u02dc}{\mathcal{R}}}_{\alpha}\left(x,\pi \right)$, the discounted reward criterion for the approximate process
$\stackrel{\u02dc}{\text{M}}$. Now, suppose that it is possible (at least theoretically) to find an optimal policy
${\stackrel{\u02dc}{\pi}}^{*}$ for the process
$\stackrel{\u02dc}{\text{M}}$, *i.e.,*

${\stackrel{\u02dc}{\mathcal{R}}}_{\alpha}^{*}\left(\stackrel{\u02dc}{x}\right):={\stackrel{\u02dc}{\mathcal{R}}}_{\alpha}\left(\stackrel{\u02dc}{x},{\stackrel{\u02dc}{\pi}}^{*}\right):={\mathrm{sup}}_{\pi \in \Pi}{\stackrel{\u02dc}{\mathcal{R}}}_{\alpha}\left(\stackrel{\u02dc}{x},\pi \right)$. (7)

The control policy ${\stackrel{\u02dc}{\pi}}^{*}$ defined in Equation (7) is used as the approximation to the optimal non-accessible policy ${\pi}^{*}$ (assuming it exists). In other words, policy ${\stackrel{\u02dc}{\pi}}^{*}$ is used to control the original process M instead of policy ${\pi}^{*}$.

The reduction in reward caused by such an approximation is estimated by the following stability index (see [3] [4] [5] ):

${\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\left(x\right):={\mathcal{R}}_{\alpha}\left(x,{\pi}^{*}\right)-{\mathcal{R}}_{\alpha}\left(x,{\stackrel{\u02dc}{\pi}}^{*}\right)\ge 0$, $x\in \mathcal{X}$. (8)
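When both policies can be simulated, the stability index of Equation (8) can be approximated by Monte Carlo: simulate the original process under each policy and subtract the estimated discounted rewards. The dynamics, reward, and the two stationary "fraction" policies below are toy assumptions for illustration only, and the resulting estimate of Δ carries sampling error:

```python
import numpy as np

# Toy Monte Carlo approximation of the stability index of Equation (8).
# Assumed ingredients: dynamics x' = a * xi with xi ~ exp(1), reward
# r(x, a) = (x - a)^(1/2), and stationary policies a = c * x.
rng = np.random.default_rng(3)
alpha = 0.9

def discounted_reward(c, x0=1.0, n_paths=4000, horizon=100):
    """Estimate R_alpha(x0, policy) for the policy a = c * x by simulation."""
    x = np.full(n_paths, x0)
    total = np.zeros(n_paths)
    for t in range(horizon):
        a = c * x
        total += alpha ** t * np.sqrt(x - a)    # reward (x - a)^p with p = 1/2
        x = a * rng.exponential(1.0, n_paths)   # next state x' = a * xi
    return total.mean()

r_1 = discounted_reward(0.64)   # candidate policy pi (assumed)
r_2 = discounted_reward(0.45)   # approximating policy pi-tilde (assumed)
print(r_1 - r_2)                # Monte Carlo estimate of the index Delta
```

In Section 3 the same index is obtained in closed form, so no simulation is needed there.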

The stability estimation problem consists of searching for inequalities of the following type:

${\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\left(x\right)\le \mathcal{K}\left(x\right)\psi \left[\mu \left(\mathcal{Q},\stackrel{\u02dc}{\mathcal{Q}}\right)\right]$, $x\in \mathcal{X}$. (9)

where
$\mathcal{K}\left(x\right)$ is a function with explicitly calculated values;
$\psi :{\mathbb{R}}^{+}\to {\mathbb{R}}^{+}$ is a real continuous function such that
$\psi \left(s\right)\to 0$ as
$s\to 0$, and *μ* is a probabilistic metric on the space of probability measures.

The results obtained in [6] - [11] provide inequalities as given in inequality (9).

In this paper, we consider a particular example of a Markov control process for which optimal stationary policies can be explicitly calculated. The explicit form of these stationary policies
${\pi}^{*}$ (for the “original” process M) and
${\stackrel{\u02dc}{\pi}}^{*}$ (for the "approximate" process
$\stackrel{\u02dc}{\text{M}}$ ) makes it possible to explicitly calculate the stability index
${\Delta}_{{\mathcal{R}}_{\alpha}}$. The goal of this work is to study the asymptotic behavior of
${\Delta}_{{\mathcal{R}}_{\alpha}}$ when
$\alpha \uparrow 1$. Using direct calculations and numerical approximations, we will show that the stability index (see Equation (8)) can be expressed as a function that depends on (
$1-\alpha $ ) and has an order of *κ*, *i.e.*,

${\Delta}_{{\mathcal{R}}_{\alpha}}\equiv {\Delta}_{{\mathcal{R}}_{\alpha}}\left(1-\alpha \right)=\frac{\mathcal{W}}{{\left(1-\alpha \right)}^{\kappa}}$, $\mathcal{W}\in \mathbb{R}$, $\kappa \ge 1$, (10)

where the (unknown) parameters
$\mathcal{W}$ and *κ* will be estimated using statistical techniques, see the analogy with Equation (2).

To finish this section, the assumptions that guarantee the existence of the stationary optimal control policy ( ${\pi}^{*}$ and ${\stackrel{\u02dc}{\pi}}^{*}$ ) for the optimal control problems given in equations (5) and (7) respectively, are shown below:

Assumption 2.1. (Existence)

1) The function
$r\left(\cdot ,\cdot \right)$ is bounded by a constant *b* > 0;

2) $\mathcal{A}\left(x\right)$ is a non-empty compact subset of $\mathcal{A}$ for each $x\in \mathcal{X}$ and the mapping $x\to \mathcal{A}\left(x\right)$ is continuous;

3) $r\left(\cdot ,\cdot \right)$ is a continuous function on $\mathbb{K}$ ;

4) $\mathcal{Q}\left(\cdot |\cdot ,\cdot \right)$ is weakly continuous on $\mathbb{K}$, that is, the mapping

$\left(x,a\right)\to {\displaystyle {\int}_{\mathcal{X}}u\left(y\right)\mathcal{Q}\left(\text{d}y|x,a\right)}$, (11)

is continuous for each function $u\in {\mathcal{C}}_{b}\left(\mathcal{X}\right)$.

The second set of assumptions guarantees that the discounted reward criterion is both well defined and finite.

Assumption 2.2. (Finiteness)

The following holds for each $x\in \mathcal{X}$ :

1) The function
$r\left(\cdot ,\cdot \right)$ is bounded by a constant *b* > 0;

2) $\mathcal{A}\left(x\right)$ is a non-empty compact subset of $\mathcal{A}$ ;

3) $r\left(x,\cdot \right)$ is a continuous function on $\mathcal{A}\left(x\right)$ ;

4) $\mathcal{Q}\left(\cdot |x,\cdot \right)$ is strongly continuous on $\mathcal{A}\left(x\right)$, that is, the mapping

$a\to {\displaystyle {\int}_{\mathcal{X}}u\left(y\right)\mathcal{Q}\left(\text{d}y|x,a\right)}$, (12)

is continuous for each function $u\in {M}_{b}\left(\mathcal{X}\right)$.

For more information see [2] [14] [15]. Now, let $\mathcal{C}\left(\mathcal{X}\right)$ denote either ${\mathcal{C}}_{b}\left(\mathcal{X}\right)$ or ${M}_{b}\left(\mathcal{X}\right)$, depending on whether Assumption 2.1 or 2.2 is being used, respectively; then, under either one of these assumptions, the dynamic programming operator:

$Tu\left(x\right):={\mathrm{sup}}_{a\in \mathcal{A}\left(x\right)}\left[r\left(x,a\right)+\alpha {\displaystyle {\int}_{\mathcal{X}}u\left(y\right)\mathcal{Q}\left(\text{d}y|x,a\right)}\right]$, (13)

$x\in \mathcal{X}$, is a contraction operator from the Banach space
$\left(\mathcal{C}\left(\mathcal{X}\right),{\Vert \cdot \Vert}_{\infty}\right)$ into itself with contraction factor *α* (see [2] ).
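Since *T* is a contraction with factor *α*, successive approximations $u_{n+1}=Tu_{n}$ converge geometrically to the optimal value function. A minimal sketch on a hypothetical two-state, two-action model (the rewards and the transition kernel below are invented for illustration):

```python
import numpy as np

# Value iteration with the dynamic programming operator T of Equation (13)
# on an assumed finite model: r[x, a] is the reward, Q[x, a, y] the kernel.
alpha = 0.9
r = np.array([[1.0, 0.0],
              [0.0, 2.0]])
Q = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.1, 0.9]]])

def T(u):
    # T u(x) = max_a [ r(x, a) + alpha * sum_y u(y) Q(y | x, a) ]
    return np.max(r + alpha * Q @ u, axis=1)

u = np.zeros(2)
for _ in range(500):
    u_next = T(u)
    if np.max(np.abs(u_next - u)) < 1e-10:
        break
    u = u_next
print(u)   # approximate fixed point of T, i.e. the optimal value function
```

The sup-norm error contracts by the factor *α* at each step, which is exactly the Banach fixed-point argument invoked in the remark below.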

Remark 2.3. Under Assumption 2.1 or 2.2, there is a solution to the optimal control problem given in Equation (5), which is unique, and the value function does not depend on the initial state of the process. For a proof, see [2] or [13].

3. A Markov Control Consumption-Investment Process and Its Approximation

This example is presented in [1]. Consider the following Markov control process:

Let $\mathcal{X}=\left[0,\infty \right)$ ; $\mathcal{A}=\left[0,\infty \right)$ ; $\mathcal{A}\left(x\right)=\left[0,x\right]$, $x\in \mathcal{X}$. The dynamics of the "original" process (M) are given by:

${x}_{t}={a}_{t}{\xi}_{t}$, for $t=1,\cdots $ ; (14)

and for the "approximate" process ( $\stackrel{\u02dc}{\text{M}}$ )

${\stackrel{\u02dc}{x}}_{t}={\stackrel{\u02dc}{a}}_{t}{\stackrel{\u02dc}{\xi}}_{t}$, for $t=1,\cdots $ ; (15)

where
$\left\{{\xi}_{t},t\ge 1\right\}$ and
$\left\{{\stackrel{\u02dc}{\xi}}_{t},t\ge 1\right\}$ are two sequences of independent and identically distributed non-negative random variables (*i.i.d*), which have distributions
${\mathcal{F}}_{\xi}$ and
${\mathcal{F}}_{\stackrel{\u02dc}{\xi}}$ respectively. Clearly,
${\mathcal{F}}_{\xi}$ and
${\mathcal{F}}_{\stackrel{\u02dc}{\xi}}$ are in the space of all distributions in
$\left(\mathcal{X},\mathfrak{B}\left(\mathcal{X}\right)\right)$.

In this model, ${x}_{t-1}$ is interpreted as the current capital. The amount ${a}_{t}\in \left[0,{x}_{t-1}\right]$ represents what is invested in assets (such as stocks, bonds, etc.), which generates a profit/loss given by ${a}_{t}{\xi}_{t}$. The rest of the capital, ${x}_{t-1}-{a}_{t}$, is dedicated to consumption, and the satisfaction (or benefit) of this consumption is measured by the utility function ${\left({x}_{t-1}-{a}_{t}\right)}^{p}$, where $0<p<1$ is a given parameter.

The reward function per unit of time is given by

$r\left({x}_{t-1},{a}_{t}\right)={\left({x}_{t-1}-{a}_{t}\right)}^{p}$ for $t=1,\cdots $ ; $0<p<1$. (16)
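The dynamics (14) together with the reward (16) are straightforward to simulate. The sketch below generates one trajectory under an arbitrary (not necessarily optimal) stationary policy that invests a fixed fraction *c* of the current capital; all parameter values are illustrative assumptions:

```python
import numpy as np

# One realization of the consumption-investment process of Equation (14)
# under the assumed policy a_t = c * x_{t-1}, accumulating the discounted
# utility of consumption (x_{t-1} - a_t)^p from Equation (16).
rng = np.random.default_rng(1)
alpha, p, c, theta = 0.9, 0.5, 0.4, 1.0
x, total = 1.0, 0.0                        # initial capital x_0 = 1

for t in range(1, 500):
    a = c * x                              # amount invested in assets
    total += alpha ** (t - 1) * (x - a) ** p   # utility of consumption
    x = a * rng.exponential(theta)         # capital at the next stage
print(total)                               # discounted reward along this path
```

Averaging such trajectories estimates the expected total discounted reward of the chosen fraction policy.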

Assumption 3.1. (Only for this example)

The *i.i.d* random variables
${\left\{{\xi}_{t}\right\}}_{t\ge 1}$,
${\left\{{\stackrel{\u02dc}{\xi}}_{t}\right\}}_{t\ge 1}$ given in Equations (14) and (15) respectively, satisfy the following (for details, see [1] ):

$\lambda :=\mathbb{E}{\xi}^{p}<\frac{1}{\alpha}$ ; $\stackrel{\u02dc}{\lambda}:=\mathbb{E}{\stackrel{\u02dc}{\xi}}^{p}<\frac{1}{\alpha}$. (17)

Now, for an “initial” state $x\in \left[0,\infty \right)$ the optimal control problem (see Equation (5)) for this Markov control consumption-investment process is

${\mathcal{R}}_{\alpha}\left(x,{\pi}^{*}\right):={\mathrm{sup}}_{\pi \in \Pi}{\mathbb{E}}_{x}^{\pi}{\displaystyle {\sum}_{t=1}^{\infty}{\alpha}^{t-1}{\left({x}_{t-1}-{a}_{t}\right)}^{p}}$, (18)

analogously for the “approximate” process, we have

${\stackrel{\u02dc}{\mathcal{R}}}_{\alpha}\left(\stackrel{\u02dc}{x},{\stackrel{\u02dc}{\pi}}^{*}\right):={\mathrm{sup}}_{\pi \in \Pi}{\mathbb{E}}_{\stackrel{\u02dc}{x}}^{\pi}{\displaystyle {\sum}_{t=1}^{\infty}{\alpha}^{t-1}{\left({\stackrel{\u02dc}{x}}_{t-1}-{\stackrel{\u02dc}{a}}_{t}\right)}^{p}}$, (19)

where $\stackrel{\u02dc}{x}\in \left[0,\infty \right)$ is an “initial” state for the “approximate” process.

Under these conditions, it is shown in [1] that the processes given in Equations (14) and (15) satisfy both Assumptions 2.1 and 2.2, and that the following holds:

1) The optimal stationary policy for Equation (18) is the following selector

${f}_{\ast}\left(x\right)={\left(\alpha \lambda \right)}^{\frac{1}{1-p}}x$, $x\in \left[0,\infty \right)$. (20)

2) The value function given in Equation (18) is

${\mathcal{R}}_{\alpha}\left(x,{f}_{\ast}\right)=\frac{1}{{\left[1-{\left(\alpha \lambda \right)}^{\frac{1}{1-p}}\right]}^{1-p}}{x}^{p}$, $x\in \left[0,\infty \right)$. (21)

3) The optimal stationary policy for Equation (19) is the following selector

${\stackrel{\u02dc}{f}}_{\ast}\left(\stackrel{\u02dc}{x}\right)={\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{1}{1-p}}\stackrel{\u02dc}{x}$, $\stackrel{\u02dc}{x}\in \left[0,\infty \right)$. (22)
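For concreteness, the closed-form objects (20)-(22) can be evaluated numerically. The values of *α*, *p* and of the moments λ, λ̃ below are illustrative assumptions chosen to satisfy Assumption 3.1:

```python
# Numerical evaluation of the explicit formulas (20)-(22) for assumed
# parameter values; lam and lam_tilde stand for lambda = E xi^p and its
# perturbed counterpart lambda-tilde.
alpha, p = 0.9, 0.5
lam, lam_tilde = 0.8, 0.78
assert lam < 1 / alpha and lam_tilde < 1 / alpha     # Assumption 3.1

x = 1.0                                              # initial capital
f_opt = (alpha * lam) ** (1 / (1 - p)) * x           # optimal policy (20)
f_tilde = (alpha * lam_tilde) ** (1 / (1 - p)) * x   # approximate policy (22)
value = x ** p / (1 - (alpha * lam) ** (1 / (1 - p))) ** (1 - p)  # value (21)
print(f_opt, f_tilde, value)
```

Both policies invest a fixed proportion of the current capital; the proportion shrinks as λ̃ moves away from λ.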

Next, we explicitly calculate the stability index for this control process, which we will then use to perform the asymptotic evaluations. The following section shows the development of this calculation.

3.1. Explicit Calculation of the Stability Index for the Markov Control Consumption-Investment Process

In this section, the stability index (
${\Delta}_{{\mathcal{R}}_{\alpha}}$ ) is explicitly calculated for the control consumption-investment process which was presented in the previous section. As was mentioned in the introduction section, the expression that we find for the stability index is a function of the parameters *p* and
$\u03f5$, where
$\u03f5$ is the measure of the approximation between the probability distributions
${\mathcal{F}}_{\xi}$ and
${\mathcal{F}}_{\stackrel{\u02dc}{\xi}}$ (see Equations (14) and (15)), while *p* is the parameter involved in the reward function (see Equation (16)).

In economics, this parameter *p* is associated with elasticity, that is, elasticity measures the percentage change in the consumer’s utility in response to percentage changes in the consumer’s money supply (for more details, see [16] or [17] ). For this reason, it is important to measure its effect on the asymptotic behavior of the stability index.

From Equation (16), the possible values for the parameter *p* lie in interval
$0<p<1$.

Our goal is to calculate asymptotic evaluations of the stability index when $\u03f5\to 0$ (which would imply that ${\mathcal{F}}_{\stackrel{\u02dc}{\xi}}$ is closer to ${\mathcal{F}}_{\xi}$ ) and for representative values across the range of *p*, that is, we are interested in values of
$p\approx 0$,
$p\approx \frac{1}{2}$ and
$p\approx 1$.

Now, we will proceed to calculate the stability index and for this, we will take an “initial” state $x=\stackrel{\u02dc}{x}=1$ as well as the following distribution functions to measure the effect of the shock on the processes:

Assumption 3.2. (Only for this example)

We assume that the random variables given in processes (14) and (15) have exponential distributions with parameters
$\theta $ and
$\stackrel{\u02dc}{\theta}$ respectively, *i.e.*,
$\xi ~{\mathcal{F}}_{\xi}\equiv \mathrm{exp}\left(\theta \right)$ and
$\stackrel{\u02dc}{\xi}~{\mathcal{F}}_{\stackrel{\u02dc}{\xi}}\equiv \mathrm{exp}\left(\stackrel{\u02dc}{\theta}\right)$ with
$\stackrel{\u02dc}{\theta}=\theta \left(1-\u03f5\right)$, where the value of
$\u03f5$ measures the approximation between the two distributions,
$0<\u03f5<1$.

Under Assumptions 3.1 and 3.2, we have

$\lambda :=\mathbb{E}{\xi}_{1}^{p}={\displaystyle {\int}_{0}^{\infty}{\xi}^{p}\left[\frac{{\text{e}}^{-\frac{\xi}{\theta}}}{\theta}\right]\text{d}\xi}$,

and after some direct calculations,

$\lambda ={\theta}^{p}\text{\Gamma}\left(p+1\right)$. (23)

Similarly, for the perturbed random variable, we have

$\stackrel{\u02dc}{\lambda}={\stackrel{\u02dc}{\theta}}^{p}\text{\Gamma}\left(p+1\right)$,

and since $\stackrel{\u02dc}{\theta}=\theta \left(1-\u03f5\right)$, then from the above equality it follows that

$\stackrel{\u02dc}{\lambda}=\lambda {\left(1-\u03f5\right)}^{p}$. (24)
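Equation (24) can be checked by simulation, sampling from the two exponential distributions and comparing the empirical *p*-th moments; the parameter values below are illustrative assumptions:

```python
import numpy as np

# Monte Carlo check of Equation (24): with xi ~ exp(theta) and
# xi_tilde ~ exp(theta * (1 - eps)), the p-th moments satisfy
# lambda_tilde = lambda * (1 - eps)^p.
rng = np.random.default_rng(2)
theta, eps, p, n = 2.0, 0.1, 0.5, 1_000_000

lam = np.mean(rng.exponential(theta, n) ** p)          # estimates E xi^p
lam_tilde = np.mean(rng.exponential(theta * (1 - eps), n) ** p)
print(lam_tilde, lam * (1 - eps) ** p)   # agree up to sampling error
```

The relation holds for any scale perturbation of the exponential law, since the *p*-th moment scales as the *p*-th power of the parameter.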

Next, the stability index is calculated.

From Equation (8) we have

${\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\left(1\right):={\mathcal{R}}_{\alpha}\left(1,{f}_{\ast}\right)-{\mathcal{R}}_{\alpha}\left(1,{\stackrel{\u02dc}{f}}_{\ast}\right)\ge 0$. (25)

The first term on the right side of Equation (25) is given in Equation (21) with $x=1$. Next, we calculate the second term on the right side of Equation (25): to do this, we substitute the approximate optimal control policy with $\stackrel{\u02dc}{x}=1$, given in Equation (22), into the reward function of the "original" model given in Equation (18), obtaining

${\mathcal{R}}_{\alpha}\left(1,{\stackrel{\u02dc}{f}}_{\ast}\right)={\mathbb{E}}_{1}^{{\stackrel{\u02dc}{f}}_{\ast}}{\displaystyle {\sum}_{t=1}^{\infty}{\alpha}^{t-1}{\left({\stackrel{\u02dc}{x}}_{t-1}-{\stackrel{\u02dc}{a}}_{t}\right)}^{p}}$.

The above equation represents the discounted reward obtained when the trajectory of the “original” process given in Equation (14) is controlled by the optimal policy obtained from the “approximate” process given in Equation (15) and the “initial” state is $x=1$.

Now, since ${\stackrel{\u02dc}{a}}_{t}={\stackrel{\u02dc}{f}}_{\ast}\left({\stackrel{\u02dc}{x}}_{t-1}\right)={\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{1}{1-p}}{\stackrel{\u02dc}{x}}_{t-1}$ (see [1] for details), we have

${\mathcal{R}}_{\alpha}\left(1,{\stackrel{\u02dc}{f}}_{\ast}\right)={\displaystyle {\sum}_{t=1}^{\infty}{\alpha}^{t-1}{\mathbb{E}}_{1}^{{\stackrel{\u02dc}{f}}_{\ast}}{\left({\stackrel{\u02dc}{x}}_{t-1}-{\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{1}{1-p}}{\stackrel{\u02dc}{x}}_{t-1}\right)}^{p}}$,

finally, we have

${\mathcal{R}}_{\alpha}\left(1,{\stackrel{\u02dc}{f}}_{\ast}\right)={\left[1-{\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{1}{1-p}}\right]}^{p}{\displaystyle {\sum}_{t=0}^{\infty}{\alpha}^{t}{\mathbb{E}}_{1}^{{\stackrel{\u02dc}{f}}_{\ast}}{\stackrel{\u02dc}{x}}_{t}^{p}}$. (26)

Now, the evolution of the approximate process (see Equation (15)) is represented as follows

${\stackrel{\u02dc}{x}}_{t}={\stackrel{\u02dc}{a}}_{t}{\stackrel{\u02dc}{\xi}}_{t}$,

so

${\stackrel{\u02dc}{x}}_{1}={\stackrel{\u02dc}{a}}_{1}{\stackrel{\u02dc}{\xi}}_{1}={\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{1}{1-p}}\stackrel{\u02dc}{x}{\stackrel{\u02dc}{\xi}}_{1}={\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{1}{1-p}}{\stackrel{\u02dc}{\xi}}_{1}$.

${\stackrel{\u02dc}{x}}_{2}={\stackrel{\u02dc}{a}}_{2}{\stackrel{\u02dc}{\xi}}_{2}={\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{1}{1-p}}{\stackrel{\u02dc}{x}}_{1}{\stackrel{\u02dc}{\xi}}_{2}={\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{2}{1-p}}{\stackrel{\u02dc}{\xi}}_{1}{\stackrel{\u02dc}{\xi}}_{2}$.

$\vdots $

${\stackrel{\u02dc}{x}}_{t}={\stackrel{\u02dc}{a}}_{t}{\stackrel{\u02dc}{\xi}}_{t}={\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{1}{1-p}}{\stackrel{\u02dc}{x}}_{t-1}{\stackrel{\u02dc}{\xi}}_{t}={\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{t}{1-p}}{\stackrel{\u02dc}{\xi}}_{1}{\stackrel{\u02dc}{\xi}}_{2}\cdots {\stackrel{\u02dc}{\xi}}_{t}$.

If we raise the last equality to the power *p*, we have

${\stackrel{\u02dc}{x}}_{t}^{p}={\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{pt}{1-p}}{\stackrel{\u02dc}{\xi}}_{1}^{p}{\stackrel{\u02dc}{\xi}}_{2}^{p}\cdots {\stackrel{\u02dc}{\xi}}_{t}^{p}$.

Now, taking the expected value on both sides of the above equality and using that the random variables ${\stackrel{\u02dc}{\xi}}_{t}$ are *i.i.d.*, we obtain

${\mathbb{E}}_{1}^{{\stackrel{\u02dc}{f}}_{\ast}}{\stackrel{\u02dc}{x}}_{t}^{p}={\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{pt}{1-p}}{\mathbb{E}}_{1}^{{\stackrel{\u02dc}{f}}_{\ast}}\left[{\stackrel{\u02dc}{\xi}}_{1}^{p}{\stackrel{\u02dc}{\xi}}_{2}^{p}\cdots {\stackrel{\u02dc}{\xi}}_{t}^{p}\right]$,

${\mathbb{E}}_{1}^{{\stackrel{\u02dc}{f}}_{\ast}}{\stackrel{\u02dc}{x}}_{t}^{p}={\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{pt}{1-p}}{\mathbb{E}}_{1}^{{\stackrel{\u02dc}{f}}_{\ast}}\left({\stackrel{\u02dc}{\xi}}_{1}^{p}\right){\mathbb{E}}_{1}^{{\stackrel{\u02dc}{f}}_{\ast}}\left({\stackrel{\u02dc}{\xi}}_{2}^{p}\right)\cdots {\mathbb{E}}_{1}^{{\stackrel{\u02dc}{f}}_{\ast}}\left({\stackrel{\u02dc}{\xi}}_{t}^{p}\right)$.

Now, by inequality (17),

${\mathbb{E}}_{1}^{{\stackrel{\u02dc}{f}}_{\ast}}{\stackrel{\u02dc}{x}}_{t}^{p}={\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{pt}{1-p}}{\left(\stackrel{\u02dc}{\lambda}\right)}^{t}$,

${\mathbb{E}}_{1}^{{\stackrel{\u02dc}{f}}_{\ast}}{\stackrel{\u02dc}{x}}_{t}^{p}={\left(\alpha \right)}^{\frac{p}{1-p}t}{\left(\stackrel{\u02dc}{\lambda}\right)}^{\frac{1}{1-p}t}$. (27)

Substituting Equation (27) in Equation (26) and after performing some direct calculations, we have

${\mathcal{R}}_{\alpha}\left(1,{\stackrel{\u02dc}{f}}_{\ast}\right)={\left[1-{\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{1}{1-p}}\right]}^{p}{\displaystyle {\sum}_{t=0}^{\infty}{\alpha}^{t}{\left(\alpha \right)}^{\frac{p}{1-p}t}{\left(\stackrel{\u02dc}{\lambda}\right)}^{\frac{1}{1-p}t}}$,

${\mathcal{R}}_{\alpha}\left(1,{\stackrel{\u02dc}{f}}_{\ast}\right)={\left[1-{\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{1}{1-p}}\right]}^{p}{\displaystyle {\sum}_{t=0}^{\infty}{\left[{\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{1}{1-p}}\right]}^{t}}$. (28)

The inequalities in (17) guarantee that $\alpha \stackrel{\u02dc}{\lambda}<1$ ; furthermore, since $0<p<1$, we have $\frac{1}{1-p}>1$. Together, these two facts guarantee that ${\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{1}{1-p}}<1$.

Therefore, summing the geometric series involved in Equation (28), it can be expressed as

${\mathcal{R}}_{\alpha}\left(1,{\stackrel{\u02dc}{f}}_{\ast}\right)={\left[1-{\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{1}{1-p}}\right]}^{p}\frac{1}{1-{\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{1}{1-p}}}$,

${\mathcal{R}}_{\alpha}\left(1,{\stackrel{\u02dc}{f}}_{\ast}\right)=\frac{1}{{\left[1-{\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{1}{1-p}}\right]}^{1-p}}$. (29)
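The step from Equation (28) to Equation (29) can be checked numerically; the following sketch (with illustrative values of $\alpha$, $\stackrel{\u02dc}{\lambda}$ and *p* chosen by us, not taken from the model) compares a truncated geometric series against the closed form:

```python
# Sanity check of the step from Equation (28) to Equation (29):
# with r = (alpha * lam_tilde)^(1/(1-p)) < 1, the identity
# (1 - r)^p * sum_t r^t = (1 - r)^(p-1) = 1 / (1 - r)^(1-p) must hold.
# The values of alpha, lam_tilde and p below are illustrative only.
alpha, lam_tilde, p = 0.9, 0.95, 0.5

r = (alpha * lam_tilde) ** (1.0 / (1.0 - p))      # common ratio of the series
partial_sum = sum(r ** t for t in range(10_000))  # truncated geometric series
lhs = (1.0 - r) ** p * partial_sum                # right side of Equation (28)
rhs = 1.0 / (1.0 - r) ** (1.0 - p)                # right side of Equation (29)

assert abs(lhs - rhs) < 1e-9
```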

Then, to obtain the stability index, Equation (21) with $x=1$ and Equation (29) are substituted in Equation (25) and we obtain

${\Delta}_{{\mathcal{R}}_{\alpha}}\left(1\right)=\frac{1}{{\left[1-{\left(\alpha \lambda \right)}^{\frac{1}{1-p}}\right]}^{1-p}}-\frac{1}{{\left[1-{\left(\alpha \stackrel{\u02dc}{\lambda}\right)}^{\frac{1}{1-p}}\right]}^{1-p}}$. (30)

Now, substituting Equation (24) in Equation (30), we have

${\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\left(1\right)=\frac{1}{{\left[1-{\left(\alpha \lambda \right)}^{\frac{1}{1-p}}\right]}^{1-p}}-\frac{1}{{\left[1-{\left(\alpha \lambda {\left(1-\u03f5\right)}^{p}\right)}^{\frac{1}{1-p}}\right]}^{1-p}}$. (31)

For each fixed *p*, a *θ* value in Equation (23) can be selected such that
$\lambda =1$, so Equation (31) can be written as

${\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\left(1\right)=\frac{1}{{\left[1-{\alpha}^{\frac{1}{1-p}}\right]}^{1-p}}-\frac{1}{{\left[1-{\alpha}^{\frac{1}{1-p}}{\left(1-\u03f5\right)}^{\frac{p}{1-p}}\right]}^{1-p}}$. (32)

The stability index given in Equation (32) is thus a function of the discount factor (*α*), the parameter *p* of the reward function (see Equation (16)), and the level of approximation $\u03f5$ between the distributions ${\mathcal{F}}_{\xi}$ and ${\mathcal{F}}_{\stackrel{\u02dc}{\xi}}$ (see Assumption 3.2).
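The closed-form index in Equation (32) is straightforward to implement. The following Python sketch (the function name and argument layout are ours) evaluates it for given *α*, *p* and $\u03f5$, assuming $\lambda =1$ as in the text:

```python
def stability_index(alpha: float, p: float, eps: float) -> float:
    """Stability index of Equation (32), with lambda = 1 and initial state x = 1."""
    q = 1.0 / (1.0 - p)                                   # exponent 1/(1-p)
    term1 = 1.0 / (1.0 - alpha ** q) ** (1.0 - p)         # exact-model reward
    term2 = 1.0 / (1.0 - alpha ** q * (1.0 - eps) ** (p * q)) ** (1.0 - p)
    return term1 - term2                                  # nonnegative by (25)
```

For instance, `stability_index(0.9, 0.01, 0.2)` evaluates the index of Equation (34) at $\alpha =0.9$; the value grows as *α* approaches 1 and shrinks as $\u03f5$ approaches 0.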

3.2. Study of the Asymptotic Evaluations of the Stability Index

The goal of this work is to perform asymptotic numerical estimations of the stability index as a function of ( $1-\alpha $ ), that is, to find its order (*κ*) when $\alpha \uparrow 1$, see Equation (10). For this, we use the explicit expression for the stability index obtained in the previous section, see Equation (32).

Equation (32) shows that the stability index is a function of *p*; as mentioned in the previous section, this parameter of the utility function is important in economics since it is related to elasticity. So, to estimate the effect that this parameter has on the stability index, we select arbitrary values of the parameter such that: 1) values close to zero (implying consumers insensitive to monetary change); 2) values close to $\frac{1}{2}$ (average consumers); and 3) values close to 1 (sensitive consumers). For our goal, these values of *p* give information about the conditions under which the approximate policy ${\stackrel{\u02dc}{f}}_{\ast}$ can be used to control the original process M; that is, we want to study whether values of *p* close to zero (to $\frac{1}{2}$, and to 1) in the reward function allow us to use this approximation.

Methodology and results obtained. For a fixed value of *p* in Equation (32) and a given value of $\u03f5$, we generate 100 values of *α*, starting at $\alpha =0.5$ with increments of 0.005. Then, for each of the 100 generated values $\alpha =0.5,0.505,\cdots ,0.995$, the value of ( $1-\alpha $ ) is substituted in Equation (32), yielding 100 values of the stability index (as a function of ( $1-\alpha $ )). With these 100 pairs of ( $1-\alpha $ ) and ${\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\left(1-\alpha \right)$, a simple linear regression model is fitted to estimate the *κ* parameter involved in Equation (10); this value is the estimate of the order of the stability index with respect to ( $1-\alpha $ ), *i.e.*, ${\text{\Delta}}_{{\mathcal{R}}_{\alpha}}~{\left(1-\alpha \right)}^{-\stackrel{^}{\kappa}}$. We are interested in the behavior of the *κ* estimate when $\alpha \uparrow 1$ and $\u03f5\to 0$.
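The methodology above can be sketched in a few lines of Python. The helper below (names are ours; the ordinary least-squares fit is written out with the standard library) generates the 100 values of *α*, evaluates Equation (32), and fits the log-log regression of Equation (35):

```python
import math

def estimate_kappa(p: float, eps: float, n: int = 100):
    """OLS fit of ln(Delta) on ln(1 - alpha) over alpha = 0.5, 0.505, ..., 0.995,
    as in Equation (35); returns the fitted slope and the factor W."""
    q = 1.0 / (1.0 - p)
    xs, ys = [], []
    for i in range(n):
        a = 0.5 + 0.005 * i
        # Stability index of Equation (32) at this alpha.
        delta = (1.0 / (1.0 - a ** q) ** (1.0 - p)
                 - 1.0 / (1.0 - a ** q * (1.0 - eps) ** (p * q)) ** (1.0 - p))
        xs.append(math.log(1.0 - a))
        ys.append(math.log(delta))
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, math.exp(intercept)   # slope estimates kappa; exp(intercept) estimates W
```

For $p=\frac{1}{100}$ and $\u03f5=0.2$ this fit produces a slope close to $-2.1$, consistent in order of magnitude with the estimates reported in this section.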

For example, if $p=\frac{1}{100}$, then from Equation (32) the stability index is expressed as

${\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\left(1\right)=\frac{1}{{\left[1-{\alpha}^{\frac{100}{99}}\right]}^{\frac{99}{100}}}-\frac{1}{{\left[1-{\alpha}^{\frac{100}{99}}{\left(1-\u03f5\right)}^{\frac{1}{99}}\right]}^{\frac{99}{100}}}$. (33)

Now, recalling that the $\u03f5$ values represent the measure of the approximation between the distributions ${\mathcal{F}}_{\xi}$ and ${\mathcal{F}}_{\stackrel{\u02dc}{\xi}}$ (see Assumption 3.2), let us assume $\u03f5=0.2$; substituting it in Equation (33), we have

${\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\left(1\right)=\frac{1}{{\left[1-{\alpha}^{\frac{100}{99}}\right]}^{\frac{99}{100}}}-\frac{1}{{\left[1-{\alpha}^{\frac{100}{99}}{\left(0.8\right)}^{\frac{1}{99}}\right]}^{\frac{99}{100}}}$. (34)

Now, we generate 100 values of $\alpha =0.5,0.505,\cdots ,0.995$; substituting ( $1-\alpha $ ) in Equation (34) generates 100 values of the stability index, as shown in Figure 1.

Remark 3.3. In Figure 1, the stability index ${\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\left(1\right)$ given in Equation (34) is denoted delta, that is, ${\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\left(1\right)\equiv \text{delta}$, and the measure $\u03f5$ is denoted epsilon.

From Figure 1, we can see that when $\alpha \uparrow 1$, $\text{delta}\equiv {\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\left(1\right)\to \infty $ ; that is, it becomes very costly to use the optimal policy of the approximate process given in Equation (22) to control the original process given in Equation (14).

Figure 1. Scatterplot generated by 100 data points of stability index $\left({\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\right)$ obtained from Equation (34).

On the other hand, to obtain the asymptotic evaluations of the stability index when
$\alpha \uparrow 1$, that is, the estimation of the *κ* parameter that appears in Equation (10):

${\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\equiv {\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\left(1-\alpha \right)=\frac{\mathcal{W}}{{\left(1-\alpha \right)}^{\kappa}}$, $\mathcal{W}\in \mathbb{R}$, $\kappa \ge 1$,

we will proceed to estimate the following simple linear regression model:

$\mathrm{ln}{\left[{\Delta}_{{\mathcal{R}}_{\alpha}}\left(1-\alpha \right)\right]}_{i}=\mathrm{ln}\left(\mathcal{W}\right)+\kappa \mathrm{ln}{\left(1-\alpha \right)}_{i}+{\nu}_{i}$, $i=1,\cdots ,100$. (35)

where ${\nu}_{i}$ is white noise (see [18] for a definition), and $\mathcal{W}$ and *κ* are the parameters to be estimated from the 100 data points generated and represented in Figure 1. The results of the regression estimation in Equation (35) are shown below:

Regression Analysis: ln(delta) versus ln(1-alpha)

Therefore, from the above results we have $\stackrel{^}{\kappa}=-2.1369$ ; from Equation (10), it can be concluded that the asymptotic estimate of the stability index when $\alpha \uparrow 1$, that is, its sensitivity with respect to ( $1-\alpha $ ), is

${\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\left(1-\alpha \right)\text{~}\mathcal{M}{\left(1-\alpha \right)}^{-2.1369}$. (36)

On the other hand, the estimate of this asymptotic evaluation of *κ* improves as the distribution ${\mathcal{F}}_{\stackrel{\u02dc}{\xi}}$ gets closer to the distribution ${\mathcal{F}}_{\xi}$ (see Assumption 3.2), that is,

if $\u03f5\to 0$, then ${\mathcal{F}}_{\stackrel{\u02dc}{\xi}}\to {\mathcal{F}}_{\xi}$ (and so $\stackrel{^}{\kappa}\to \kappa $ ). (37)

To see the above, given the fixed value of $p=\frac{1}{100}$, we replicated the estimates of *κ* given in Equation (35) for $\u03f5=0.1,0.05,0.01,0.001$.

For $p=\frac{1}{100}$ and $\u03f5=0.1$, from Equation (33) we have the following stability index

${\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\left(1\right)=\frac{1}{{\left[1-{\alpha}^{\frac{100}{99}}\right]}^{\frac{99}{100}}}-\frac{1}{{\left[1-{\alpha}^{\frac{100}{99}}{\left(0.9\right)}^{\frac{1}{99}}\right]}^{\frac{99}{100}}}$. (38)

For the same 100 values of ( $1-\alpha $ ) and using the above equation, another 100 values of the stability index were generated, which are presented in Figure 2.

From Figure 2, we observe that for $p=\frac{1}{100}$ and $\u03f5=0.1$ (when $\alpha \uparrow 1$ ), it remains very costly to use the optimal policy of the approximate process given in Equation (22) to control the original process given in Equation (14); however, the stability index ${\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\left(1\right)\equiv \text{delta}$ is reduced, due to the greater precision ( $\u03f5=0.1$ ) in the approximation of the distribution ${\mathcal{F}}_{\stackrel{\u02dc}{\xi}}$ to the distribution ${\mathcal{F}}_{\xi}$.

Figure 2. Scatterplot generated by 100 data points of stability index $\left({\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\right)$ obtained from Equation (38).

Now, with these new 100 data points from Figure 2, the *κ* parameter is re-estimated in the simple linear regression model given in Equation (35). The results obtained are the following:

Regression Analysis: ln(delta) versus ln(1-alpha)

The results show that $\stackrel{^}{\kappa}=-2.1562$ ; that is, the stability index has order −2.1562 with respect to ( $1-\alpha $ ): ${\Delta}_{{\mathcal{R}}_{\alpha}}\left(1-\alpha \right)~\mathcal{M}{\left(1-\alpha \right)}^{-2.1562}$.

Now, to investigate the asymptotic behavior of this sensitivity *κ*, we will make the approximation between the probability functions better and better, *i.e.*,
$\u03f5\to 0$.

So, analogously to what has already been explained, the results for $p=\frac{1}{100}$ and $\u03f5=0.05,0.01,0.001$ are presented in Figures 3-5.

The five figures above show that for fixed $p=\frac{1}{100}$, as $\u03f5$ tends to zero (which implies that ${\mathcal{F}}_{\stackrel{\u02dc}{\xi}}$ approaches ${\mathcal{F}}_{\xi}$ ), the stability index tends to zero (see the y-axis labels).

The previous interpretation is clearer if we look at Figure 6 and Figure 7, in which the five previous figures are combined into contour lines of the stability index.

In the last two figures, observe the y-axis labels: it is clear that when epsilon tends to zero, the stability index also tends to zero. This implies that the better the approximation between the distribution functions, the more safely the approximate optimal policy can be used to control the original process.

Now, for each group of 100 data points generated in each of the five graphs, the *κ* parameter involved in the simple linear regression model given in Equation (35) was estimated. The results obtained from these estimates are presented in Table 1.

Figure 3. Scatterplot generated by 100 data points of stability index $\left({\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\right)$ obtained from Equation (33) with $\u03f5=0.05$.

Figure 4. Scatterplot generated by 100 data points of stability index $\left({\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\right)$ obtained from Equation (33) with $\u03f5=0.01$.

Figure 5. Scatterplot generated by 100 data points of stability index $\left({\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\right)$ obtained from Equation (33) with $\u03f5=0.001$.

Figure 6. Results of the association of the stability index and the approximation measure in the probability distributions (epsilon).

Figure 7. Magnification of Figure 6, when alpha approaches to 1.

Table 1. Asymptotic evaluation of the stability index ( $\stackrel{^}{\kappa}$ ).

Note that the first two cases $\left(\u03f5=0.2;\u03f5=0.1\right)$ correspond to the results explained in the previous pages.

In Table 1, the green cell shows the best approximation used between the distribution functions (see Assumption 3.2), with which the numerical estimate for the asymptotic evaluation of the stability index was obtained; this estimate is shown in the skyblue cell.

Based on the results of Table 1, we can conclude that for $p=\frac{1}{100}$, when $\u03f5\to 0$ and $\alpha \uparrow 1$, the asymptotic evaluation of the stability index is $\stackrel{^}{\kappa}\approx -2.175$, *i.e.*, the stability index has order

${\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\left(1-\alpha \right)~\mathcal{M}{\left(1-\alpha \right)}^{-2.175}$.

To study the sensitivity of the stability index ( $\stackrel{^}{\kappa}$ ), numerical experiments were carried out for other values of *p*. Each of these *p* values was substituted in Equation (32), and the stability indices ( ${\text{\Delta}}_{{\mathcal{R}}_{\alpha}}$ ) were obtained as functions of *α* and $\u03f5$, as shown in Table 2.

Then, for each fixed value of *p* given in Table 2, we used $\u03f5=0.2,0.1,0.05,0.01,0.001$. Subsequently, for each fixed pair of *p* and $\u03f5$, 100 values of ( $1-\alpha $ ) were generated and substituted in the formulas of Table 2, yielding 100 values of the stability index as a function of ( $1-\alpha $ ). Finally, these 100 pairs of ( $1-\alpha $ ) and ${\text{\Delta}}_{{\mathcal{R}}_{\alpha}}\left(1-\alpha \right)$ were used for the asymptotic evaluation of the stability index through the estimation of the *κ* parameter in the simple linear regression model given in Equation (35). The results of these numerical estimates are presented in Table 3.
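A minimal sketch of this grid of experiments is given below (function names are ours). Points where the index underflows to zero in double precision are discarded, which mirrors the reduced sample sizes discussed for *p* close to 1:

```python
import math

def kappa_hat(p: float, eps: float):
    """OLS slope of ln(Delta) on ln(1 - alpha), Equation (35), using only the
    numerically usable (Delta > 0) points; returns (slope, points used)."""
    q = 1.0 / (1.0 - p)
    pts = []
    for i in range(100):
        a = 0.5 + 0.005 * i
        d = (1.0 / (1.0 - a ** q) ** (1.0 - p)
             - 1.0 / (1.0 - a ** q * (1.0 - eps) ** (p * q)) ** (1.0 - p))
        if d > 0.0:                      # skip underflow for p close to 1
            pts.append((math.log(1.0 - a), math.log(d)))
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    return slope, n

# Grid over p and epsilon, mirroring the structure of the tabulated results.
for p in (0.01, 0.4, 0.5, 0.7, 0.95):
    row = [f"{kappa_hat(p, eps)[0]:7.3f}"
           for eps in (0.2, 0.1, 0.05, 0.01, 0.001)]
    print(f"p = {p:4.2f}: ", "  ".join(row))
```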

Table 2. Explicit expression of the stability index for different values of the reward function parameter (*p*).

Table 3. Asymptotic evaluations of the stability index ( $\stackrel{^}{\kappa}$ ) for different values of the reward function parameter (*p*).

Remark 3.4. In Table 3, for values of $p\ge \frac{95}{100}$ the stability index tends to infinity faster, so it is not possible to obtain $n=100$ values of $\alpha =0.5,0.505,\cdots ,0.995$. Thus, for example, the results shown in Table 3 for $p=\frac{95}{100}$ were obtained with $n=89$ data points, while for $p=\frac{99}{100}$ they were obtained with $n=50$ data points. The results for the remaining *p* values in Table 3 were obtained with 100 data points.

Clearly, the results presented for each *p* value in Table 3 must be interpreted in the same way as the results in Table 1 explained previously.

Discussion of results. The motivation for studying discounted reward (cost) problems is primarily economic. Capital accumulation processes of an economy, inventory problems, inventory management, and portfolio management are applications of this type of optimization criterion. The reward function used in this work (see Equation (16)) is widely used in economics: it belongs to the family of consumer utility functions, specifically the so-called Cobb-Douglas utility function (see [19] for definitions), so the selection of the parameter *p* in Equation (16) must be made very carefully. The results obtained in this work on the asymptotic evaluations of the stability index (presented in Table 3) are interpreted as follows:

1) If $p\to 1:{\Delta}_{{\mathcal{R}}_{\alpha}}~\mathcal{M}{\left(1-\alpha \right)}^{-\infty}$. That is, if the parameter *p* of the reward function approaches 1, the sensitivity of the stability index grows without bound. Therefore, for values of $p\approx 1$ the use of an approximate policy to control the original process is not recommended; the results show (see Table 3) that for $p\ge \frac{8}{9}$ we have $\stackrel{^}{\kappa}\le -2.284$, so the stability index can be as large as $\mathcal{M}{\left(1-\alpha \right)}^{-8.05}$.

2) If $p\to \frac{1}{2}:{\Delta}_{{\mathcal{R}}_{\alpha}}~\mathcal{M}{\left(1-\alpha \right)}^{-1.75}$. In this case, the results obtained (see Table 3) suggest that if values of $\frac{2}{5}\le p\le \frac{7}{10}$ are selected in the reward function given in Equation (16), then it seems reasonable to use the approximate policy ${\stackrel{\u02dc}{f}}_{\ast}$ to control the original process M.

3) If $p\to 0:{\Delta}_{{\mathcal{R}}_{\alpha}}~\mathcal{M}{\left(1-\alpha \right)}^{-2}$. In this case, the results obtained in this work using statistical techniques are the same as those found in articles [6] and [7], which were obtained using upper bounds such as the one given in Equation (1).

Remember that by definition $0<p<1$ (see Equation (16)). From the three previous points, the results show that for extreme values of *p* (close to zero or one) it is not recommended to use an approximate policy to control the original process; they suggest selecting a *p* value close to the average ( $p\approx \frac{1}{2}$ ) in the reward function when using such an approximation.

4. Conclusion

Despite the extensive literature on Markov control processes, few works have been developed on estimating the stability index. The study of stability for control processes represents a challenge, both from a theoretical and an applied point of view. This application work intends to contribute to the study of stability using statistical techniques instead of probabilistic metrics. The limitations of this work are the use of a simple Markov control process as well as the use of an exponential distribution function to model the shock effect of the process. However, the numerical estimates found are consistent and show the sensitivity of the stability index to changes in both the discount factor and the parameter of the reward function; the results obtained respond favorably to the original question posed in the introduction, so we can conclude that the objective of this work was achieved. Finally, it is recommended to strengthen the results found in this work through some of the following future investigations: 1) using more complex Markov control processes; 2) validating the robustness of the results using another type of distribution function to model the shock effect of the process; 3) using another type of reward function; and 4) using other statistical techniques for the asymptotic estimation of the stability index.

Acknowledgements

The author wishes to thank referees for valuable suggestions on improvement of the previous version of the paper.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

[1] | Dynkin, E.B. and Yushkevich, A.A. (1979) Controlled Markov Processes. Springer-Verlag, New York. |

[2] |
Hernandez-Lerma, O. (1989) Adaptive Markov Control Processes. Vol. 79, Springer-Verlag, New York. https://doi.org/10.1007/978-1-4419-8714-3 |

[3] |
Gordienko, E.I. (1992) An Estimate of the Stability of Optimal Control of Certain Stochastic and Deterministic Systems. Journal of Soviet Mathematics, 59, 891-899. https://doi.org/10.1007/BF01099115 |

[4] |
Gordienko, E.I. and Salem, F.S. (1998) Robustness Inequalities for Markov Control Processes with Unbounded Cost. Systems & Control Letters, 33, 125-130. https://doi.org/10.1016/S0167-6911(97)00077-7 |

[5] |
Gordienko, E.I. and Yushkevich, A.A. (2003) Stability Estimates in the Problem of Average Optimal Switching of a Markov Chain. Mathematical Methods of Operations Research, 57, 345-365. https://doi.org/10.1007/s001860200258 |

[6] |
Gordienko, E.I., Lemus-Rodriguez, E. and Montes-de-Oca, R. (2008) Discounted Cost Optimality Problem: Stability with Respect to Weak Metrics. Mathematical Methods of Operations Research, 68, 77-96. https://doi.org/10.1007/s00186-007-0171-z |

[7] |
Gordienko, E., Martínez, J. and Ruiz de Chávez, J. (2015) Stability Estimation of Transient Markov Decision Processes. In: Mena, R.H., Pardo, J.C., Rivero, V. and Bravo, G.U., Eds., XI Symposium on Probability and Stochastic Processes, Mexico, 18-22 November 2013, 157-176. https://doi.org/10.1007/978-3-319-13984-5_8 |

[8] |
Martínez-Sánchez, J.E. (2020) Stability Estimation for Markov Control Processes with Discounted Cost. Applied Mathematics, 11, 491-509. https://doi.org/10.4236/am.2020.116036 |

[9] | Gordienko, E.I. and Salem-Silva, F. (2000). Estimates of Stability of Markov Control Processes with Unbounded Costs. Kybernetika, 36, 195-210. |

[10] | Montes-de-Oca, R. and Salem-Silva, F. (2005) Estimates for Perturbations of Average Markov Decision Process with a Minimal State and Upper Bounded by Stochastically Ordered Markov Chains. Kybernetika, 41, 757-772. |

[11] |
Martinez, J. and Zaitzeva, E. (2015) Note on Stability Estimation in Average Markov Control Processes. Kybernetika, 51, 629-638. http://doi.org/10.14736/kyb-2015-4-0629 |

[12] | Rachev, S.T. (1991) Probability Metrics and the Stability of Stochastic Models. Wiley, Chichester. |

[13] |
Hernandez-Lerma, O. and Lasserre, J. (1996) Discrete-Time Markov Control Processes: Basic Optimality Criteria. Springer, New York. https://doi.org/10.1007/978-1-4612-0729-0 |

[14] |
Hernandez-Lerma, O. and Lasserre, J.B. (1999) Further Topics on Discrete-Time Markov Control Processes. Springer, New York. https://doi.org/10.1007/978-1-4612-0561-6 |

[15] |
Van Nunen, J.A. and Wessels, J. (1978) Note—A Note on Dynamic Programming with Unbounded Rewards. Management Science, 24, 576-580. https://doi.org/10.1287/mnsc.24.5.576 |

[16] | Carlton, D. and Perloff, J. (2005) Modern Industrial Organization. Pearson, Addison Wesley, Boston. |

[17] | Viscusi, W., Harrington, J. and Vernon, J. (2005) Economics of Regulation and Antitrust. The MIT Press, Cambridge. |

[18] | Kmenta, J. (1971) Elements of Econometrics. 2nd Edition. Macmillan Publishing Company, New York. |

[19] | Cobb, C.W. and Douglas, P.H. (1928) A Theory of Production. American Economic Review, 18, 139-165. |

