Modeling the Dynamics of the Random Demand Inventory Management System

Abstract

At any given time, a product stock manager is expected to monitor his or her holdings in general and the condition of the stock in particular: the level, or available quantity, of each product or item. On the basis of observations of the movements of previous periods, the manager may decide whether or not to order a certain quantity of products. This paper discusses the applicability of discrete-time Markov chains to decision making for the management of a stock of COTRA-Honey products. A Markov chain model based on the transition matrix and equilibrium probabilities was developed to help managers predict the likely state of the stock and anticipate procurement decisions in the short, medium or long term. The objective of any manager is to ensure efficient management by limiting overstocking, minimising the risk of stock-outs as much as possible and maximising profits. The resulting Markov chain model allows the manager to predict whether or not to order for the period following the current one and, if so, how much.


1. Introduction

Decision making plays a fundamental role at the individual, organizational and societal levels, and it is critical at the governmental level. After considering all the circumstances, the decision maker (investor) must go through a mental process before choosing among several alternatives [1]. The decision made today has an impact, positive or negative, on the decision maker's future career and business. The most important question the decision maker faces is how to allocate funds optimally at each decision epoch, particularly over time in an uncertain market environment [1].

On a global scale, the increase in flows and in the number of goods and services exchanged has made it necessary to create suitable tools for the optimal management of the entire supply chain. However, persistent uncertainty about customers' real needs and company managers' limited knowledge of suppliers' actual capacities call for supply chain planning tools that reduce the degree of uncertainty surrounding the obstacles most often encountered, notably in transporting and delivering goods within the agreed time limits. In many supply chains, mismatches between supply and demand are mitigated by the use of inventory [2]. Inventories can be kept at different levels of the supply chain, including raw materials, components, semi-finished products and/or finished items. Successful inventory management must balance the benefits of inventory (i.e., reduction in lost sales) against the associated cost (typically the cost of holding inventory). One way to reduce this cost is to pool the demands of several items onto the same (flexible) item: provided that the demands are not perfectly positively correlated, pooling several demands on the same item reduces the required amount of safety stock and therefore the cost of holding inventory. This phenomenon is referred to as “risk pooling” or “statistical economies of scale” [3]. However, pooling tends to come at a price. This “cost of flexibility” can take the form of an increase in product cost (when the flexible item is inherently more expensive to manufacture or purchase) and/or an additional adjustment cost (when the item needs additional processing or transportation to make it “ready for use” when demand arises).

Strategic planning related to inventory or product stock management can be modeled and studied using formal models such as Petri nets and Markov chains. Most processes that evolve in time in a probabilistic way can be modelled naturally with Markov chains. The latter have the particular property that the probabilities governing how the process will evolve in the future depend only on the current state of the process, and are therefore independent of past events [4]. In stochastic analysis, a Markov chain specifies a system of transitions of an entity from one state to another. Viewing the transitions as a random process, Markov dependence theory emphasizes the “memoryless property”: the future state (next step or position) of the process depends strictly on its current state and not on the sequence of states observed in the past [1]. Markov chains make it possible, among other things, to model, study, analyze, design and simulate various stochastic processes in varied contexts. In probability theory, a stochastic (or random) process is a set of random variables; it is often used to represent the evolution of a random value, or a system, over time. It is therefore the probabilistic counterpart of a deterministic process (or system). A stochastic process is a process whose behavior is nondeterministic; it can be considered a sequence of random variables, and any system or process that can be analyzed using probability theory is stochastic [5]. This paper aims to demonstrate the application and applicability of discrete-time Markov chains in improving inventory management processes and procedures, as well as the related decision making, in a local and/or regional production and/or marketing company.

2. Tools and Methods

2.1. Tools

The objective of this study is to apply discrete-time Markov chains to build a prediction model of the evolution of a company's stock. In this article, we apply the model to the COTRA-Honey cooperative as a case study in order to illustrate the proposed solution. The processes related to the ordering of products can be treated as discrete-time random events (processes, phenomena), which can be modelled and studied using Markov chains. A Markov process can be represented by a directed graph that visualizes its evolution. The nodes of the graph are the possible states of the Markov chain; an arrow going from state i to state j indicates that there is a strictly positive probability that the next state of the chain is j when it is currently in i. The weight placed on the arrow from state i to state j can also be recorded in a transition matrix. A Markov process can thus be seen as a weighted directed graph $X = (E, [V])$, where $E$ is the set of states and $[V] \subseteq E \times E$ is the set of weighted arrows.

Figure 1 depicts a three-state Markov process as an example of a Markov chain.

$X = (E, [V])$, with $E = \{E_1, E_2, E_3\}$ and

$$[V] = \begin{pmatrix} 0 & \lambda_{12} & \lambda_{13} \\ 0 & 0 & \lambda_{23} \\ \lambda_{31} & 0 & 0 \end{pmatrix},$$

where rows and columns are indexed by $E_1, E_2, E_3$.
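As a quick illustration, such a weighted graph can be stored directly as a matrix. The numeric weights below are assumed placeholders (the paper only gives the symbolic values $\lambda_{12}$, $\lambda_{13}$, $\lambda_{23}$, $\lambda_{31}$); a minimal Python sketch:

```python
import numpy as np

# Assumed placeholder weights: the paper only gives the symbolic values
# lambda_12, lambda_13, lambda_23, lambda_31 for the three-state example.
V = np.array([
    [0.0, 0.4, 0.6],   # E1 -> E2 (lambda_12), E1 -> E3 (lambda_13)
    [0.0, 0.0, 1.0],   # E2 -> E3 (lambda_23)
    [1.0, 0.0, 0.0],   # E3 -> E1 (lambda_31)
])

# For a discrete-time Markov chain every row must sum to 1.
assert np.allclose(V.sum(axis=1), 1.0)
```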

2.2. Methods

2.2.1. Optimization of a Stock Procurement Process

Supply management within a company is a central component of its business. When it is efficient, it limits overstocking and promotes the profitability of the company as a whole. The challenge for every procurement manager is to optimize inventory management by avoiding stock-outs and overstocks. Out-of-stocks actually represent a loss of revenue, while overstocks result in additional costs.

Figure 1. Markov chain [6] .

To carry out this work, we consider companies offering a particular item for sale, with orders to the supplier placed at fixed intervals, for instance monthly. The customer demands in each period are random, independent and identically distributed, so we can compute the probability that any given number of items is requested. Let $p(i)$ denote the probability that exactly $i$ items are requested during a period; the probability $r(i)$ that at least $i$ items are requested is then given by:

$$r(i) = 1 - p(0) - p(1) - \cdots - p(i-1) \quad (1)$$

The cost of an order is composed of a fixed cost $C$ and a unit cost $c$; the cost of ordering $x$ units is therefore $C + cx$. Let $v$ and $k$ denote the unit selling price and the unit storage cost respectively, with $v > c + k$, and let $D$ be a random variable with distribution $p = (p(i))_i$ representing the customer demand during a given period. If at a given time there are $x_0$ units in stock, the decision-maker may decide not to order anything. In this case, the storage cost will be $k x_0$, the income will be $v \min\{D, x_0\}$, and the expected profit is given by:

$$B_{x_0}(0) = v\,x_0\,r(x_0) + v \sum_{i=0}^{x_0 - 1} i\,p(i) - k\,x_0 \quad (2)$$

He may also decide to order a quantity $x$ of additional items to replenish his stock, in which case the expected profit is given by:

$$B_{x_0}(x) = v\,(x_0 + x)\,r(x_0 + x) + v \sum_{i=0}^{x_0 + x - 1} i\,p(i) - k\,(x_0 + x) - C - c\,x \quad (3)$$

For x > 0 , we have:

$$B_{x_0}(x+1) - B_{x_0}(x) = v\,r(x_0 + x + 1) - (k + c) \quad (4)$$

Denoting by $S$ the theoretical optimal stock, defined by:

$$S = \max\left\{ i : r(i) > \frac{k + c}{v} \right\} \quad (5)$$

The manager's decision is then to bring his stock up to $S$ by ordering the quantity $\hat{x} = S - x_0$ whenever $x_0 \le S$. However, if the quantity $x_0$ is sufficiently high (while remaining lower than $S$), it may be that $B_{x_0}(0)$ is higher than $B_{x_0}(\hat{x})$, in which case there is no interest in ordering.

Indeed, the map

$$\{0, \dots, S\} \ni i \longmapsto B_i(S - i) - B_i(0) \quad (6)$$

is decreasing, which allows us to set

$$s = \min \{ 0 \le i \le S : B_i(0) > B_i(S - i) \} \quad (7)$$

where s is the floor stock, at or above which there is no point in ordering.
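To make Equations (2)-(7) concrete, the following Python sketch computes $S$ and $s$ numerically. All parameter values (Poisson(5) demand, $C = 10$, $c = 4$, $v = 10$, $k = 1$) are illustrative assumptions, not figures from the case study.

```python
import math

# Illustrative, assumed parameters (not from the paper); v > c + k must hold.
C, c, v, k = 10.0, 4.0, 10.0, 1.0   # fixed order cost, unit cost, unit price, unit storage cost
MU, NMAX = 5.0, 80                   # assumed Poisson demand mean; truncation point

p = [math.exp(-MU) * MU**i / math.factorial(i) for i in range(NMAX)]  # p(i) = P(D = i)
r = [1.0 - sum(p[:i]) for i in range(NMAX)]                           # r(i) = P(D >= i), Eq. (1)

def B(x0, x):
    """Expected profit with x0 units in stock and an order of x units, Eqs. (2)-(3)."""
    y = x0 + x
    profit = v * y * r[y] + v * sum(i * p[i] for i in range(y)) - k * y
    return profit - (C + c * x) if x > 0 else profit

S = max(i for i in range(NMAX) if r[i] > (k + c) / v)       # Eq. (5): optimal stock level
s = min(i for i in range(S + 1) if B(i, 0) > B(i, S - i))   # Eq. (7): floor stock
print(f"S = {S}, s = {s}")   # with these assumed parameters: S = 5, s = 2
```

With these assumed parameters the policy is of (s, S) type with s = 2 and S = 5: order up to 5 units whenever fewer than 2 remain, which is exactly the shape of policy used in the case study below.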

2.2.2. Supply Management System Modeling

The demands are seen as a sequence $(D_t)_{t \ge 1}$ of independent and identically distributed random variables. Let $X(t)$ be the quantity of items available in stock at a given time, after the sales of the period and before the possible order of the next period. The management policy is the same as in the previous subsection (Subsection 2.2.1). The sequence of random variables $(X(t))_t$ is a Markov chain with values in the finite set $\{0, \dots, S\}$.

If we take s = a and S = b , with 0 < a < b , we have a transition matrix of the following form:

$$P = \begin{pmatrix}
P(D_{t+1} \ge b) & P(D_{t+1} = b-1) & \cdots & P(D_{t+1} = b-a) & P(D_{t+1} = b-a-1) & \cdots & P(D_{t+1} = 1) & P(D_{t+1} = 0) \\
\vdots & \vdots & & \vdots & \vdots & & \vdots & \vdots \\
P(D_{t+1} \ge b) & P(D_{t+1} = b-1) & \cdots & P(D_{t+1} = b-a) & P(D_{t+1} = b-a-1) & \cdots & P(D_{t+1} = 1) & P(D_{t+1} = 0) \\
P(D_{t+1} \ge a) & P(D_{t+1} = a-1) & \cdots & P(D_{t+1} = 0) & 0 & \cdots & 0 & 0 \\
P(D_{t+1} \ge a+1) & P(D_{t+1} = a) & \cdots & P(D_{t+1} = 1) & P(D_{t+1} = 0) & \cdots & 0 & 0 \\
\vdots & \vdots & & \vdots & \vdots & & \vdots & \vdots \\
P(D_{t+1} \ge b-1) & P(D_{t+1} = b-2) & \cdots & P(D_{t+1} = b-a-1) & P(D_{t+1} = b-a-2) & \cdots & P(D_{t+1} = 0) & 0 \\
P(D_{t+1} \ge b) & P(D_{t+1} = b-1) & \cdots & P(D_{t+1} = b-a) & P(D_{t+1} = b-a-1) & \cdots & P(D_{t+1} = 1) & P(D_{t+1} = 0)
\end{pmatrix}$$

Rows and columns are indexed by the states $0, 1, \dots, a, a+1, \dots, b-1, b$. The first $a$ rows (states $0, \dots, a-1$) correspond to ordering up to $b$; the remaining rows (states $a, \dots, b$) correspond to selling from the current stock without ordering.

For all $i \in \{0, \dots, S\}$, $p(i)$ is strictly positive. The Markov chain $(X(t))$ is therefore irreducible and aperiodic on the finite set $\{0, \dots, S\}$, so it admits a unique stationary distribution, which we denote $\pi = (\pi(i))$, $i = 0, \dots, S$.
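A sketch of how this matrix can be generated for arbitrary $0 < a < b$; the Poisson demand used in the example is an assumption matching the case study of Section 3, but the construction works for any demand distribution:

```python
import numpy as np
from scipy.stats import poisson

def transition_matrix(a, b, pmf, tail):
    """One-step matrix of the policy with floor a and order-up-to level b.

    States are the stock levels 0..b; pmf(n) = P(D = n), tail(n) = P(D >= n).
    """
    P = np.zeros((b + 1, b + 1))
    for i in range(b + 1):
        y = b if i < a else i       # order up to b whenever the stock is below a
        P[i, 0] = tail(y)           # a demand of y or more empties the stock
        for j in range(1, y + 1):
            P[i, j] = pmf(y - j)    # a demand of exactly y - j leaves j units
    return P

# Example: a = 2, b = 5 with Poisson(1) demand, as in Section 3.
pmf = lambda n: poisson.pmf(n, 1.0)
tail = lambda n: poisson.sf(n - 1, 1.0)   # P(D >= n)
P = transition_matrix(2, 5, pmf, tail)
assert np.allclose(P.sum(axis=1), 1.0)    # every row is a probability distribution
```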

2.2.3. Equilibrium Probabilities of the Markov Chain

When there is a value $m$ large enough that the rows of the matrix $P^{(m)}$ are identical, the probability that the system is in state $j$ no longer depends on the initial state of the system (at $t = 0$). Equilibrium probabilities are long-term properties of Markov chains.

For any ergodic and irreducible Markov chain (i.e., with only one class), $\lim_{m \to +\infty} p_{ij}^{(m)}$ exists and does not depend on the state $i$.

Moreover,

$$\lim_{m \to +\infty} p_{ij}^{(m)} = \pi_j > 0,$$

where π j satisfies the following stationary state equations:

$$\pi_j = \sum_{i} \pi_i \, p_{ij} \quad (8)$$

and

$$\sum_{j} \pi_j = 1 \quad (9)$$

The $\pi_j$ are called the stationary state probabilities, or equilibrium probabilities, of the Markov chain. The term stationary state probability means that the probability of finding the process in a certain state, say $j$, is independent of the probability distribution of the initial state.

It is important to note that stationarity does not imply that the process settles into a single state. Rather, the process continues to make transitions from state to state, and at any step $m$ the probability of a transition from state $i$ to state $j$ is still $p_{ij}$.

In the setting of the previous subsection, determining the steady-state probabilities requires solving a system of $b + 2$ equations with $b + 1$ unknowns (one of the balance equations being redundant), of the following form:

$$\begin{cases}
\pi_0 = \pi_0 P(D_{t+1} \ge b) + \cdots + \pi_a P(D_{t+1} \ge a) + \cdots + \pi_{b-1} P(D_{t+1} \ge b-1) + \pi_b P(D_{t+1} \ge b) \\
\pi_1 = \pi_0 P(D_{t+1} = b-1) + \cdots + \pi_a P(D_{t+1} = a-1) + \cdots + \pi_{b-1} P(D_{t+1} = b-2) + \pi_b P(D_{t+1} = b-1) \\
\quad \vdots \\
\pi_a = \pi_0 P(D_{t+1} = b-a) + \cdots + \pi_a P(D_{t+1} = 0) + \cdots + \pi_{b-1} P(D_{t+1} = b-a-1) + \pi_b P(D_{t+1} = b-a) \\
\pi_{a+1} = \pi_0 P(D_{t+1} = b-a-1) + \cdots + 0 + \cdots + \pi_{b-1} P(D_{t+1} = b-a-2) + \pi_b P(D_{t+1} = b-a-1) \\
\quad \vdots \\
\pi_{b-1} = \pi_0 P(D_{t+1} = 1) + \cdots + 0 + \cdots + \pi_{b-1} P(D_{t+1} = 0) + \pi_b P(D_{t+1} = 1) \\
\pi_b = \pi_0 P(D_{t+1} = 0) + \cdots + 0 + \cdots + 0 + \pi_b P(D_{t+1} = 0) \\
1 = \pi_0 + \pi_1 + \cdots + \pi_a + \cdots + \pi_{b-1} + \pi_b
\end{cases}$$
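In practice this system is solved numerically: since one balance equation is redundant, the balance equations can be stacked with the normalization constraint and the resulting system solved by least squares. A minimal sketch:

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi = pi P together with sum(pi) = 1 for an irreducible chain."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])   # balance equations + normalization
    rhs = np.zeros(n + 1)
    rhs[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # least squares absorbs the redundancy
    return pi
```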

3. Results

3.1. Transition Probability Matrices of COTRA-Honey

The system to be modeled is the management system of the COTRA-Honey cooperative at the level of product stocks, which is the most important activity of this cooperative. COTRA-Honey receives honey from different suppliers. After processing, some of the honey is put into packages of various sizes for sale, and the rest is kept in cans containing many litres. After a certain period of time, if the quantity in stock is less than a fixed value, an order is prepared and sent to the suppliers.

This system behaves like a Markov chain. We start from the fact that orders are placed at the end of each month. Let $D_1, D_2, D_3, \dots$ be the customer demands in the first month, the second month, the third month, and so on. The $D_t$ are assumed to be independent and identically distributed random variables following the Poisson distribution with mean 1.

Let $X_0$ be the starting quantity, $X_1$ the quantity available at the end of the first month, $X_2$ the quantity available at the end of the second month, and so on, with $X_0 = 5$ cans. The cooperative uses the following ordering policy: if there are fewer than 2 cans in stock, the cooperative orders the quantity necessary to bring the stock back up to the maximum level of 5 cans; if there are at least two cans in stock, no order is placed. Thus $\{X_t\}$, $t = 0, 1, \dots$, is a stochastic process whose possible states are the integers $0, 1, 2, 3, 4, 5$, representing the number of cans available at the end of the month. The random variables $X_t$ satisfy the following recursion:

$$X_{t+1} = \begin{cases} \max\{5 - D_{t+1},\, 0\} & \text{if } X_t < 2 \\ \max\{X_t - D_{t+1},\, 0\} & \text{if } X_t \ge 2 \end{cases} \quad (10)$$

for t = 0 , 1 , 2 ,

Recall that $X_t$ represents the quantity in stock at the end of month $t$, that is, the state of the system at time $t$. Given the current state $X_t = i$, recursion (10) shows that $X_{t+1}$ depends only on $D_{t+1}$ and $X_t$. Since $X_{t+1}$ is independent of the past given the present, the stochastic process $\{X_t\}$, $t = 0, 1, \dots$, has the Markov property and is therefore a Markov chain.
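Recursion (10) also lends itself to direct simulation. The sketch below (the horizon of one million periods is an arbitrary choice) produces empirical state frequencies that should approach the stationary probabilities computed later in this section:

```python
import numpy as np

# Monte Carlo simulation of recursion (10); the horizon T is an arbitrary choice.
rng = np.random.default_rng(seed=0)
T = 1_000_000
x = 5                                # X_0 = 5 cans
counts = np.zeros(6)

for _ in range(T):
    d = rng.poisson(1.0)             # D_{t+1} ~ Poisson(1)
    x = max(5 - d, 0) if x < 2 else max(x - d, 0)   # recursion (10)
    counts[x] += 1

print(counts / T)   # empirical frequencies of states 0..5
```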

Since $D_{t+1}$ has the Poisson distribution with mean 1, we have:

$$P\{D_{t+1} = n\} = \frac{1^n e^{-1}}{n!}, \quad n = 0, 1, \dots$$

Applying this formula, we find:

$$\begin{aligned}
P(D_{t+1} = 0) &= \tfrac{1^0 e^{-1}}{0!} = e^{-1} = 0.3678 \\
P(D_{t+1} = 1) &= \tfrac{1^1 e^{-1}}{1!} = e^{-1} = 0.3678 \\
P(D_{t+1} = 2) &= \tfrac{1^2 e^{-1}}{2!} = \tfrac{e^{-1}}{2} = 0.1839 \\
P(D_{t+1} = 3) &= \tfrac{1^3 e^{-1}}{3!} = \tfrac{e^{-1}}{6} = 0.0613 \\
P(D_{t+1} = 4) &= \tfrac{1^4 e^{-1}}{4!} = \tfrac{e^{-1}}{24} = 0.0153 \\
P(D_{t+1} \ge 2) &= 1 - P(D_{t+1} = 0) - P(D_{t+1} = 1) = 0.2644 \\
P(D_{t+1} \ge 3) &= 1 - P(D_{t+1} = 0) - \cdots - P(D_{t+1} = 2) = 0.0805 \\
P(D_{t+1} \ge 4) &= 1 - P(D_{t+1} = 0) - \cdots - P(D_{t+1} = 3) = 0.0192 \\
P(D_{t+1} \ge 5) &= 1 - P(D_{t+1} = 0) - \cdots - P(D_{t+1} = 4) = 0.0039
\end{aligned}$$
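These values can be checked with a few lines of Python; note that the figures above truncate $e^{-1} = 0.36788\ldots$ to 0.3678, so the last digits may differ slightly from the exact values:

```python
import math

pmf = lambda n: math.exp(-1.0) / math.factorial(n)   # P(D = n) for Poisson(1): 1**n e^{-1} / n!
for n in range(5):
    print(f"P(D = {n}) = {pmf(n):.4f}")
for n in range(2, 6):
    print(f"P(D >= {n}) = {1.0 - sum(pmf(i) for i in range(n)):.4f}")
```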

The one-step transition matrix is then written as follows:

$$P = \begin{pmatrix}
0.0039 & 0.0153 & 0.0613 & 0.1839 & 0.3678 & 0.3678 \\
0.0039 & 0.0153 & 0.0613 & 0.1839 & 0.3678 & 0.3678 \\
0.2644 & 0.3678 & 0.3678 & 0 & 0 & 0 \\
0.0805 & 0.1839 & 0.3678 & 0.3678 & 0 & 0 \\
0.0192 & 0.0613 & 0.1839 & 0.3678 & 0.3678 & 0 \\
0.0039 & 0.0153 & 0.0613 & 0.1839 & 0.3678 & 0.3678
\end{pmatrix}$$

with rows and columns indexed by the states $0, 1, \dots, 5$ (the same indexing applies to all powers of $P$ below).

The two-step transition matrix is given by:

$$P^2 = \begin{pmatrix}
0.0396 & 0.0848 & 0.1815 & 0.2741 & 0.2776 & 0.1423 \\
0.0396 & 0.0848 & 0.1815 & 0.2741 & 0.2776 & 0.1423 \\
0.0997 & 0.1449 & 0.1740 & 0.1163 & 0.2325 & 0.2325 \\
0.1279 & 0.2070 & 0.2868 & 0.1839 & 0.0972 & 0.0972 \\
0.0856 & 0.1591 & 0.2755 & 0.2854 & 0.1649 & 0.0296 \\
0.0396 & 0.0848 & 0.1815 & 0.2741 & 0.2776 & 0.1423
\end{pmatrix}$$

The three-step transition matrix is given by:

$$P^3 = \begin{pmatrix}
0.0764 & 0.1383 & 0.2350 & 0.2520 & 0.2002 & 0.0981 \\
0.0764 & 0.1383 & 0.2350 & 0.2520 & 0.2002 & 0.0981 \\
0.0617 & 0.1069 & 0.1788 & 0.2160 & 0.2610 & 0.1755 \\
0.0942 & 0.1519 & 0.2175 & 0.1829 & 0.1947 & 0.1589 \\
0.1000 & 0.1681 & 0.2534 & 0.2160 & 0.1615 & 0.1009 \\
0.0764 & 0.1383 & 0.2350 & 0.2520 & 0.2002 & 0.0981
\end{pmatrix}$$

After four steps, we have:

$$P^4 = \begin{pmatrix}
0.0875 & 0.1498 & 0.2351 & 0.2238 & 0.1887 & 0.1151 \\
0.0875 & 0.1498 & 0.2351 & 0.2238 & 0.1887 & 0.1151 \\
0.0710 & 0.1268 & 0.2143 & 0.2288 & 0.2226 & 0.1266 \\
0.0775 & 0.1317 & 0.2079 & 0.2133 & 0.2206 & 0.1489 \\
0.0889 & 0.1485 & 0.2250 & 0.2067 & 0.1951 & 0.1357 \\
0.0875 & 0.1498 & 0.2351 & 0.2238 & 0.1887 & 0.1151
\end{pmatrix}$$

After eight steps, we have:

$$P^8 = \begin{pmatrix}
0.0817 & 0.1401 & 0.2222 & 0.2218 & 0.2050 & 0.1293 \\
0.0817 & 0.1401 & 0.2222 & 0.2218 & 0.2050 & 0.1293 \\
0.0819 & 0.1403 & 0.2219 & 0.2207 & 0.2050 & 0.1302 \\
0.0823 & 0.1409 & 0.2227 & 0.2209 & 0.2040 & 0.1292 \\
0.0820 & 0.1406 & 0.2228 & 0.2217 & 0.2042 & 0.1287 \\
0.0817 & 0.1401 & 0.2222 & 0.2218 & 0.2050 & 0.1293
\end{pmatrix}$$

After sixteen steps, we have the following transition matrix:

$$P^{16} = \begin{pmatrix}
0.0819 & 0.1404 & 0.2224 & 0.2213 & 0.2046 & 0.1293 \\
0.0819 & 0.1404 & 0.2224 & 0.2213 & 0.2046 & 0.1293 \\
0.0819 & 0.1404 & 0.2224 & 0.2213 & 0.2046 & 0.1293 \\
0.0819 & 0.1404 & 0.2224 & 0.2213 & 0.2046 & 0.1293 \\
0.0819 & 0.1404 & 0.2224 & 0.2213 & 0.2046 & 0.1293 \\
0.0819 & 0.1404 & 0.2224 & 0.2213 & 0.2046 & 0.1293
\end{pmatrix}$$
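The multi-step matrices above can be reproduced by repeated matrix multiplication. The sketch below re-enters the one-step matrix with its rounded entries and prints the first row of each power; by $m = 16$ all rows agree to four decimal places:

```python
import numpy as np

# One-step matrix of Section 3.1, re-entered with its rounded values.
row_order = [0.0039, 0.0153, 0.0613, 0.1839, 0.3678, 0.3678]   # rows that end the month from level 5
P = np.array([
    row_order,                                      # state 0: order up to 5
    row_order,                                      # state 1: order up to 5
    [0.2644, 0.3678, 0.3678, 0.0,    0.0,    0.0],  # state 2: no order
    [0.0805, 0.1839, 0.3678, 0.3678, 0.0,    0.0],  # state 3: no order
    [0.0192, 0.0613, 0.1839, 0.3678, 0.3678, 0.0],  # state 4: no order
    row_order,                                      # state 5: no order, sell from 5
])

for m in (2, 3, 4, 8, 16):
    Pm = np.linalg.matrix_power(P, m)
    print(f"P^{m:>2}, first row:", np.round(Pm[0], 4))
```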

The stationary state equations can be expressed as follows:

$$\begin{cases}
\pi_0 = \pi_0 p_{00} + \pi_1 p_{10} + \pi_2 p_{20} + \pi_3 p_{30} + \pi_4 p_{40} + \pi_5 p_{50} \\
\pi_1 = \pi_0 p_{01} + \pi_1 p_{11} + \pi_2 p_{21} + \pi_3 p_{31} + \pi_4 p_{41} + \pi_5 p_{51} \\
\pi_2 = \pi_0 p_{02} + \pi_1 p_{12} + \pi_2 p_{22} + \pi_3 p_{32} + \pi_4 p_{42} + \pi_5 p_{52} \\
\pi_3 = \pi_0 p_{03} + \pi_1 p_{13} + \pi_2 p_{23} + \pi_3 p_{33} + \pi_4 p_{43} + \pi_5 p_{53} \\
\pi_4 = \pi_0 p_{04} + \pi_1 p_{14} + \pi_2 p_{24} + \pi_3 p_{34} + \pi_4 p_{44} + \pi_5 p_{54} \\
\pi_5 = \pi_0 p_{05} + \pi_1 p_{15} + \pi_2 p_{25} + \pi_3 p_{35} + \pi_4 p_{45} + \pi_5 p_{55} \\
1 = \pi_0 + \pi_1 + \pi_2 + \pi_3 + \pi_4 + \pi_5
\end{cases}$$

By replacing the transition probabilities with their values in this system we have:

$$\begin{cases}
\pi_0 = 0.0039\pi_0 + 0.0039\pi_1 + 0.2644\pi_2 + 0.0805\pi_3 + 0.0192\pi_4 + 0.0039\pi_5 \\
\pi_1 = 0.0153\pi_0 + 0.0153\pi_1 + 0.3678\pi_2 + 0.1839\pi_3 + 0.0613\pi_4 + 0.0153\pi_5 \\
\pi_2 = 0.0613\pi_0 + 0.0613\pi_1 + 0.3678\pi_2 + 0.3678\pi_3 + 0.1839\pi_4 + 0.0613\pi_5 \\
\pi_3 = 0.1839\pi_0 + 0.1839\pi_1 + 0.3678\pi_3 + 0.3678\pi_4 + 0.1839\pi_5 \\
\pi_4 = 0.3678\pi_0 + 0.3678\pi_1 + 0.3678\pi_4 + 0.3678\pi_5 \\
\pi_5 = 0.3678\pi_0 + 0.3678\pi_1 + 0.3678\pi_5 \\
1 = \pi_0 + \pi_1 + \pi_2 + \pi_3 + \pi_4 + \pi_5
\end{cases}$$

Solving this system gives us the following solutions:

$\pi_0 = 0.0819$; $\pi_1 = 0.1404$; $\pi_2 = 0.2224$; $\pi_3 = 0.2213$; $\pi_4 = 0.2046$; $\pi_5 = 0.1293$
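These values can be cross-checked by rebuilding the one-step matrix from the exact Poisson probabilities and extracting the left eigenvector of $P$ associated with eigenvalue 1; a sketch:

```python
import numpy as np
from scipy.stats import poisson

# Rebuild the one-step matrix from exact Poisson(1) probabilities (a = 2, b = 5).
P = np.zeros((6, 6))
for i in range(6):
    y = 5 if i < 2 else i                     # order up to 5 when stock is below 2
    P[i, 0] = poisson.sf(y - 1, 1.0)          # P(D >= y)
    for j in range(1, y + 1):
        P[i, j] = poisson.pmf(y - j, 1.0)

# pi is the left eigenvector of P for eigenvalue 1, normalized to sum to 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print(np.round(pi, 4))   # matches the values above up to rounding
```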

By analyzing the sixteen-step transition matrix, we notice that its six rows are identical and coincide with these stationary probabilities.

3.2. One-Step State Transition Diagram

The information given by the transition matrix can also be represented graphically by the state transition diagram, as shown in Figure 2, which corresponds to the one-step transition matrix.

The arrows of the state transition diagram indicate the possible transitions from one state to another, or sometimes from a state to itself, as the stock moves from the end of one month to the end of the next. The number next to each arrow indicates the probability of that particular transition.

4. Conclusion

In conclusion, sound stock management is critical for the success of a production and/or marketing company because, when done correctly, it limits product overstocking as well as stock-outs. To control the stock in all its aspects, the manager requires highly efficient decision-making tools. The goal of this study was to find the best Markov model for predicting the states of the COTRA-Honey stock.

Figure 2. State transition diagram.

Analyzing the transition matrices for the different numbers of steps: given, for example, that there are three cans of honey in stock at the end of the month, the probability that there will be no cans of honey in stock two months later is $p_{30}^{(2)} = 0.1279$, and the probability that there will be three cans of honey in stock two months later is $p_{33}^{(2)} = 0.1839$. The probability that there will be one can of honey in stock three months later is $p_{31}^{(3)} = 0.1519$, and the probability that there will be five cans three months later is $p_{35}^{(3)} = 0.1589$. The probability of having two cans of honey four months later is $p_{32}^{(4)} = 0.2079$, the probability of having two cans in stock eight months later is $p_{32}^{(8)} = 0.2227$, and the probability of having four cans sixteen months later is $p_{34}^{(16)} = 0.2046$.

By analyzing the sixteen-step transition matrix, we notice that its six rows are identical. This means that the probability of being in state $j$ after sixteen months is essentially independent of the initial state of the system. The equilibrium probabilities $\pi_j$, also called stationary probabilities, found by solving the system above are the same as the rows of the sixteen-step transition matrix. In other words, the probabilities that there will be zero, one, two, three, four or five honey cans in stock after several months ($\ge 16$) tend to 0.0819, 0.1404, 0.2224, 0.2213, 0.2046 and 0.1293 respectively. The main interest of the developed management model is to predict the probable states of the stock, i.e., its behaviour over time, in order to know in advance when to place orders with suppliers and to avoid the risks caused by haphazard stock management.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Azubike, S. and Ephraim, O. (2020) Application of Markov Chain (MC) Model to the Stochastic Forecasting of Stocks Prices in Nigeria: The Case of Dangote Cement. International Journal of Applied Science and Mathematical Theory, 6, 20.
[2] Deflem, Y. and Van Nieuwenhuyse, I. (2011) A Discrete-Time Markov Chain Model for a Periodic Inventory System with One-Way Substitution. Katholieke Universiteit Leuven, Leuven, 20 June 2011.
[3] Van Mieghem, J.A. (2008) Operations Strategy: Principles and Practice. Dynamic Ideas, Belmont.
[4] Hillier, F.S. and Lieberman, G.J. (2000) Introduction to Operations Research. McGraw-Hill, New York.
[5] Santhi, P.K. (2019) Markov Decision Process in Supply Chain Management. Madurai Kamaraj University, Madurai, Tamil Nadu.
[6] Haoues, M. (2006) L'utilisation conjointe des réseaux de Petri stochastiques et des processus de Markov pour la modélisation, l'analyse et l'évaluation des performances d'un système de production: ligne d'emboutissage de l'entreprise B.A.G Batna. Université El-Hadj Lakhdar, Batna, Algeria.
