Evaluation of the Reliability of a System: Approach by Monte Carlo Simulation and Application
1. Introduction
A production system is defined as the set of resources (people, machines, methods and processes) whose synergy is organized to transform raw materials (or components) in order to create a product or a service [1] [2] [3] .
Current production systems suffer continual irregularities in production due to breakdowns that occur during manufacturing. The concern of any company is to ensure continuous operation with better quality, minimum cost and maximum safety [4] . To this end, companies have a maintenance department whose role is to choose an appropriate maintenance policy, taking into consideration the technical, economic and financial aspects of the different methods, in order to optimize the operational safety of the production systems and support decision-making.
This would mean making industrial systems or processes more reliable, and in so doing, reducing the costs of system failure, thereby boosting production and the manufacturer’s gross margin. What’s more, reliability enhances site safety and reduces the severity of environmental threats.
In this paper, we use Monte Carlo simulation to evaluate the performance of a production system and to find solutions to its reliability problems.
Monte Carlo simulation is a very interesting method because it gives access to many parameters inaccessible by other methods and leads to extremely detailed analyses of the systems studied. With the Monte Carlo simulation, the analyst clearly sees the combinations of input values associated with the outcomes and thus has information that is extremely useful for further analysis of the system.
Monte Carlo simulation remains the most reliable tool for determining the probability of failure. However, it remains very costly, especially for complex systems with large finite element models and many uncertain design parameters.
In the rest of this paper, we will describe the method used in the reliability assessment.
2. Markov Chains and Basic Concepts of Monte Carlo Simulation
Markov Chain Monte Carlo methods make it possible to greatly broaden the range of distributions that can be simulated numerically. They are relatively simple to implement and often only require knowledge of the target density function up to a constant, which makes them interesting in many situations.
However, a naive implementation can lead to very long computation times, since the convergence of these methods is relatively slow when they are not well calibrated to a given situation.
To build such an algorithm, it is therefore necessary to determine an appropriate set of transition probabilities P, that is to say irreducible, ergodic and having the right stationary distribution [5] .
2.1. Markov Chains
A sequence of random variables $\left\{{X}_{n}\right\}$ , $n\ge 0$ , with values in a countable space E is called a (discrete-time) stochastic process (with values in E). The set E is the state space, whose elements will be denoted i, j and k. When ${X}_{n}=i$ , the process is said to be in, or visiting, the state i at time n.
Markov Chains are stochastic processes whose evolution is governed by a recurrence equation of the type ${X}_{n+1}=f\left({X}_{n},{Z}_{n+1}\right)$ , where $\left\{{Z}_{n}\right\}$ , $n\ge 1$ , is an i.i.d. sequence independent of the initial value ${X}_{0}$ . This extremely simple structure is sufficient to generate a wide variety of behaviors.
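The recurrence above translates directly into code; the following Python sketch simulates such a chain (the two-state flip rule and the 0.3 transition probability are illustrative assumptions, not taken from the text):

```python
import random

def simulate_chain(f, x0, n_steps, seed=0):
    """Simulate X_{n+1} = f(X_n, Z_{n+1}), where the innovations
    Z_n are i.i.d. Uniform[0, 1) and independent of X_0."""
    rng = random.Random(seed)
    path = [x0]
    x = x0
    for _ in range(n_steps):
        z = rng.random()      # Z_{n+1}
        x = f(x, z)
        path.append(x)
    return path

# Illustrative example: a two-state chain on {0, 1} that
# flips state with probability 0.3 at each step.
flip = lambda x, z: 1 - x if z < 0.3 else x
path = simulate_chain(flip, x0=0, n_steps=10)
```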
2.1.1. Definitions and Property
1) Definition 1: (Markov Chain)
Consider $\left({X}_{n};n\ge 0\right)$ a sequence of random variables with values in the set of states E, assumed to be equal to $\mathbb{N}$ . We say that this sequence is a Markov Chain if, for every $n\ge 0$ and for every sequence $\left({i}_{0},{i}_{1},\cdots ,{i}_{n-1},i,j\right)$ , we have the relation 1.
$P\left({X}_{n+1}=j\mid {X}_{0}={i}_{0},\cdots ,{X}_{n}=i\right)=P\left({X}_{n+1}=j\mid {X}_{n}=i\right)$ (1)
Touche [6] makes the following observation:
The state of the process at time $\left(n+1\right)$ depends only on its state at the previous time n, not on its earlier states; we will say that such a process is memoryless.
2) Definition 2: (Homogeneous Markov Chain)
A Markov Chain is said to be homogeneous (in time), if the preceding probability does not depend on n. We have the relation 2.
${p}_{ij}\left(n\right)=P\left({X}_{n+1}=j\mid {X}_{n}=i\right)={p}_{ij}=P\left({X}_{1}=j\mid {X}_{0}=i\right),\text{}n\ge 0$ (2)
3) Definition 3: (Transition Probability)
We define the probability of transition from state i to state j between times n and
$n+1$ by the quantity defined by relation 3.
${p}_{ij}=P\left({X}_{n+1}=j\mid {X}_{n}=i\right),\text{}\forall i,j\in E$ (3)
where ${p}_{ij}$ is the probability that the system is in state j at time $n+1$ , given that at time n it was in state i.
4) Definition 4: (Transition Matrix)
The transition matrix is the matrix P whose general term
$p\left(i,j\right)$ is the probability of transition from state i to state j [6] . It is a matrix which has the characteristics below and is defined by relation 4 [7] .
• It is square;
• It is independent of time.
$P=\left[\begin{array}{ccc}p\left(1,1\right)& \cdots & p\left(1,j\right)\\ \vdots & \ddots & \vdots \\ p\left(i,1\right)& \cdots & p\left(i,j\right)\end{array}\right]$ (4)
This matrix is stochastic because each row vector i contains the probabilities of all possible transitions starting from the state i, and their sum is equal to one [8] .
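The stochasticity condition (non-negative entries, each row summing to one) is easy to check in code; a minimal sketch, with an illustrative 2 × 2 matrix that is not taken from the paper:

```python
def is_stochastic(P, tol=1e-12):
    """Return True if P is a row-stochastic matrix: all entries
    non-negative and every row summing to one (within tol)."""
    return all(
        all(p >= 0.0 for p in row) and abs(sum(row) - 1.0) < tol
        for row in P
    )

# Illustrative transition matrix of a two-state chain
P = [[0.9, 0.1],
     [0.4, 0.6]]
```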
2.1.2. Property of the Matrix P
P admits 1 as an eigenvalue;
There is a left eigenvector, associated with the eigenvalue 1, which defines a probability distribution.
Notes:
• A homogeneous Markov Chain “jumps” randomly from state to state, and the probability of each jump is given by the transition matrix P;
• The law of ${X}_{0}$ is called the initial law of the Markov Chain and is written by the relation 5 [6] .
${\pi}_{0}=\left(P\left({X}_{0}=1\right),P\left({X}_{0}=2\right),\cdots ,P\left({X}_{0}=N-1\right),P\left({X}_{0}=N\right)\right)$ (5)
2.1.3. Characterizations of a Homogeneous Markov Chain
The sequence $\left\{{X}_{n}\right\}$ , $n\in \mathbb{N}$ , is a homogeneous Markov Chain if and only if there exists a matrix P having the property defined by relation 6.
$\begin{array}{l}P\left({X}_{n}={i}_{n},{X}_{n-1}={i}_{n-1},\cdots ,{X}_{0}={i}_{0}\right)\\ =P\left({X}_{0}={i}_{0}\right)p\left({i}_{0},{i}_{1}\right)\cdots p\left({i}_{n-1},{i}_{n}\right),\text{}\forall n\in \mathbb{N}\text{and}{i}_{0},\cdots ,{i}_{n}\in E\end{array}$ (6)
In this case, P is the homogeneous Markov Chain transition matrix
${\left({X}_{n}\right)}_{\left(n\in N\right)}$ [9] .
2.1.4. State Graphs
To visualize the evolution of a homogeneous Markov Chain, it is often useful to represent the transition matrix P of the Markov Chain by a directed graph. The nodes of the graph are the possible states of the Markov Chain. An arrow from state i to state j indicates that there is a strictly positive probability that the next state of the chain will be the state j if it is currently in the state i. We put the weight
$P\left(i,j\right)$ on the arrow going from state i to state j [10] . Figure 1 gives an illustration of a state graph.
2.1.5. Law of Probability of X_{n}
The analysis of the transient state of a Markov Chain consists in determining the vector ${\pi}^{\left(n\right)}$ of state probabilities, generally denoted ${\pi}_{i}^{\left(n\right)}=P\left({X}_{n}=i\right)$ , the probability that the chain $\left({X}_{n},n\in \mathbb{N}\right)$ is in the state i after n steps.
The distribution of
${X}_{n}$ can be described in the form of the row vector given by relation 7.
${\pi}^{\left(n\right)}=\left({\pi}_{1}^{\left(n\right)},{\pi}_{2}^{\left(n\right)},\cdots \right)$ with
${\pi}_{1}^{\left(n\right)}+{\pi}_{2}^{\left(n\right)}+\cdots =1$ (7)
To calculate the vector
${\pi}^{\left(n\right)}$ , it is necessary to know either the value taken by
${X}_{0}$ , that is to say the initial state of the process, or its initial distribution defined by the relation 8 [11] .
${\pi}^{\left(0\right)}=\left({\pi}_{1}^{\left(0\right)},{\pi}_{2}^{\left(0\right)},\cdots ,{\pi}_{i}^{\left(0\right)},\cdots \right)$ (8)
According to the total probability theorem, we have relations 9 and 10.
$P\left({X}_{n}=i\right)={\displaystyle \underset{j\in E}{\sum}P\left({X}_{0}=j\right)\cdot P\left({X}_{n}=i\mid {X}_{0}=j\right)}$ (9)
${\pi}_{i}\left(n\right)={\displaystyle \underset{j\in E}{\sum}{\pi}_{j}\left(0\right)\cdot {p}_{ji}\left(n\right)}$ (10)
In a similar way, we obtain the relation 11:
${\pi}^{\left(n\right)}={\pi}^{\left(0\right)}{P}^{\left(n\right)}$ (11)
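Relation 11 can be verified numerically: multiplying the initial distribution by the n-th matrix power gives the same result as applying P one step at a time. A sketch with NumPy (the two-state matrix is an illustrative assumption):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])        # illustrative transition matrix
pi0 = np.array([1.0, 0.0])        # initial distribution pi^(0)

n = 5
pi_n = pi0 @ np.linalg.matrix_power(P, n)   # relation 11: pi^(n) = pi^(0) P^n

# Same result by stepping pi <- pi P, n times (relations 9-10)
pi_step = pi0.copy()
for _ in range(n):
    pi_step = pi_step @ P
```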
Property:
If the eigenvalue 1 of the stochastic matrix P of a homogeneous Markov chain is simple and dominant (any other eigenvalue has a modulus strictly less than 1) then the sequence
${\left({P}^{n}\right)}_{\left(n\in \mathbb{N}\right)}$ converges to a strictly positive matrix
${P}^{\infty}$ of the form given by relation 12.
${P}^{\infty}=\left[\begin{array}{ccc}{p}_{1}& \cdots & {p}_{N}\\ \vdots & \ddots & \vdots \\ {p}_{1}& \cdots & {p}_{N}\end{array}\right]$ (12)
The elements of the matrix
${P}^{\infty}$ verify relation 13.
${p}_{1}+{p}_{2}+\cdots +{p}_{N}=1$ (13)
Moreover, any sequence ${\left({\pi}_{n}\right)}_{\left(n\in \mathbb{N}\right)}$ defined in its recurrent form given by equation 14 converges to ${\pi}_{\infty}$ as defined by relation 15, which is the unique probability distribution satisfying relation 16.
$\{\begin{array}{l}{\pi}_{n+1}={\pi}_{n}\times P\\ {\pi}_{0}=\left(P\left({X}_{0}=1\right)\text{}P\left({X}_{0}=2\right)\text{}\cdots \text{}P\left({X}_{0}=N\right)\right)\end{array},\text{}\forall n\in \mathbb{N}$ (14)
${\pi}_{\infty}=\left({p}_{1}\text{}{p}_{2}\text{}\cdots \text{}{p}_{N}\right)$ (15)
$\pi \times P=\pi $ (16)
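The stationary distribution of relation 16 is a left eigenvector of P for the eigenvalue 1, normalized to sum to one; a NumPy sketch (the 2 × 2 matrix is illustrative, not from the paper):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])          # illustrative stochastic matrix

# pi P = pi  <=>  P^T pi^T = pi^T: take the eigenvector of P^T
# associated with the eigenvalue closest to 1, then normalize.
vals, vecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(vals - 1.0))
pi = np.real(vecs[:, k])
pi = pi / pi.sum()
```

For this matrix the exact stationary distribution is (0.8, 0.2), which the eigenvector computation recovers.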
2.1.6. Stationary Distributions and Limits for Homogeneous Markov Chains
It is often found that the distribution
${\pi}^{\left(n\right)}$ converges to a limiting distribution when
$n\to \infty $ . In this case, the latter is said to define the steady state of the Markov Chain.
In practice, it is generally accepted that the steady state of a Markov Chain is reached in a finite number of transitions [6] .
1) Definition 1: (Limit Distribution)
We say that a Markov Chain converges towards $\pi $ or has a limiting distribution $\pi $ if relation 17 holds, independently of the initial distribution ${\pi}^{\left(0\right)}$ .
$\underset{n\to \infty}{\mathrm{lim}}{\pi}^{\left(n\right)}=\pi $ (17)
2) Definition 2: (Stationary Markov Chain)
A Markov Chain is said to be stationary if the distribution
${\pi}^{\left(n\right)}$ is independent of time.
In other words, if the initial distribution
${\pi}^{\left(0\right)}$ is a stationary distribution of the Markov Chain in question.
2.2. Basic Concepts of Monte Carlo Simulation
2.2.1. Monte Carlo Method
The Monte Carlo method is broadly defined as a technique for solving a model using random or pseudo-random numbers [12] [13] . Random numbers are stochastic variables that are uniformly distributed over the interval $\left[0;1\right]$ and are stochastically independent [13] . This means that the variables can take any value between 0 and 1 with the same probability. Independence implies that knowing the random numbers ${r}_{1},{r}_{2},\cdots ,{r}_{i-1}$ gives no information about ${r}_{i}$ .
Pseudo-random numbers are generated by applying deterministic algorithms called random number generators. For practical purposes, the behavior of these numbers is considered strictly random. They are then considered to be uniformly distributed and independent. The most common algorithms for the generation of random numbers are the multiplicative congruential generator and the mixed congruential generator [14] . Uniform random variables can sometimes be used directly in simulations. In other cases, they must be converted into non-uniform distributions before the start of the simulation. The procedures for generating non-uniformly distributed random variables can be categorized into three techniques: the inverse transformation method, the composition method and the accept-reject method. There are also special methods for specific distributions. A more detailed description is given in [12] [14] .
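As an example of the inverse transformation method mentioned above, exponentially distributed times (used later for times between failures) can be generated from uniform random numbers; a sketch in Python, where the failure rate value is illustrative:

```python
import math
import random

def exponential_inverse_transform(lam, rng):
    """Inverse transformation method: if U ~ Uniform[0, 1), then
    T = -ln(1 - U) / lam follows an Exponential(lam) law."""
    u = rng.random()
    return -math.log(1.0 - u) / lam

rng = random.Random(42)
lam = 0.00217                 # illustrative failure rate, per hour
samples = [exponential_inverse_transform(lam, rng) for _ in range(50_000)]
mean_tbf = sum(samples) / len(samples)   # should approach 1 / lam
```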
Among the most widespread applications of the Monte Carlo method, we find simulations [13] .
2.2.2. Monte Carlo Simulation
Monte Carlo simulation is used in dependability analysis (French: Sûreté de Fonctionnement, SdF) when a system proves to be too complex to be treated by other methods, such as Fault Trees (French: ADD), Failure Modes, Effects and Criticality Analysis (FMECA; French: AMDEC) and Petri nets (French: RDP). Its principle consists in simulating the dynamic behavior of the components of a system a large number of times in order to evaluate its operating characteristics, by reconstituting the total state [15] .
1) Definitions
There are several definitions, of which we will cite three.
a) Definition 1
The Monte Carlo simulation method is a numerical technique for solving mathematical problems by simulating random variables. There is no absolute consensus on a precise definition of what a Monte Carlo-like technique is, but the most usual description is that methods of this type are characterized by the use of chance to solve computational problems. They are generally applicable to problems of a numerical type, or to problems of a probabilistic nature themselves [11] .
b) Definition 2
Monte Carlo methods are very often the only approaches usable for the study of highdimensional nonlinear systems for which no analytical approach is applicable. They are used in an industrial context, to characterize the response to a random excitation or to carry out a study of the propagation of uncertainties. They are generally applicable to problems of the numerical type, or to problems of a probabilistic nature themselves [9] .
c) Definition 3
The use of the Monte Carlo simulation method allows us to take into account the diversity of possible situations without resorting to point estimates.
2) Advantages of Monte Carlo simulation
Monte Carlo simulation is a very interesting method because it gives access to many parameters inaccessible by other methods and leads to extremely detailed analyses of the systems studied:
• It is not limited by the number of states of the system studied because, even if there are hundreds of thousands of them, only the preponderant states appear during the simulation;
• It allows any law of probability to be taken into account;
• It allows the association in the same model of deterministic phenomena and random phenomena;
• It can insert and simulate all features and processes of the system that can be recognized;
• It can provide a wide range of output parameters;
• Its computer implementation is easy.
Three conditions are necessary for its use:
• A behavior model of the studied system capable of correctly reproducing its operation and its evolution over time when it is subjected to various hazards (failure, repairs, external events, etc.). We can find at this stage, to properly model the system: the Markov process (which consists in representing the behavior of a system by a set of components that can be in a finite number of operating states) or the Petri nets (where the various states of the modeled system are traversed sequentially) which can constitute interesting supports;
• A description of the data in probabilistic form;
• Monte Carlo simulation software to carry out random draws of the input variables (state of the system), to produce histories of the system from its behavior model and to statistically analyze the output variables [9] .
3) Stages of the Monte Carlo simulation
In general, the Monte Carlo simulation involves the following steps:
Step 1: Writing a parametric model
The aim of this first step is to define an algebraic model (of the form
$y=f\left({x}_{1},{x}_{2},\cdots ,{x}_{n}\right)$ ) which makes it possible to show the relationships between the input parameters of the system
$\left({x}_{1},{x}_{2},\cdots ,{x}_{n}\right)$ and the results obtained
$\left({y}_{1},{y}_{2},\cdots ,{y}_{n}\right)$ through the mathematical function f.
Step 2: Generation of random data
The key to Monte Carlo simulation is that it generates the random data set.
So, it is necessary to associate with each input random numbers according to adequate distributions (Uniform, Normal, etc.). In this case, it is necessary to have a random number generator to carry out this step.
Different modeling techniques are available. They depend on the architecture of the system studied, the undesirable events concerned, the criteria to be evaluated and the assumptions taken into account in the models. Among all these techniques, we mention: analytical equivalents, fault trees, Markov graphs, Petri nets, etc.
At the end of this step (in the context of a production system) we are able to define:
Probability density functions (distribution law, random variables);
A random number generator.
This step also allows us to:
○ Make a list of breakdowns;
○ Define the next event to simulate;
○ Obtain a new state vector of the components, giving the new temporary architecture of the system to be studied.
Step 3: Evaluation of the model at a number of iterations
Here, the model is evaluated for the stochastic data defined in the previous step, in order to calculate the result $\left({y}_{i}\right)$ .
The experiment is then repeated n times, that is to say, the evaluation of the model is repeated (redoing step 2) with new random values of the variables $\left({x}_{i}\right)$ until a threshold defined at the beginning is reached (a number of iterations, a precision, etc.).
Step 4: Calculation of statistical values and graphs
This step involves representing the results obtained, by applying the previous steps, in the form of a histogram (graphical representation) to clearly visualize the results
$\left({y}_{i}\right)$ and calculate, among other things, statistical variables: the mean, the standard deviation, and the coefficient of variation [9] .
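Steps 1 to 4 can be sketched as a single Monte Carlo loop; in this minimal Python illustration, the model y = x1 + x2 with uniform inputs is an arbitrary assumption standing in for the parametric model of step 1:

```python
import random
import statistics

def monte_carlo(model, sample_inputs, n_iter=10_000, seed=0):
    """Steps 1-4: draw random inputs, evaluate y = f(x1, ..., xk)
    for each draw, then summarise the outputs statistically."""
    rng = random.Random(seed)
    ys = [model(*sample_inputs(rng)) for _ in range(n_iter)]
    mean = statistics.fmean(ys)
    std = statistics.stdev(ys)
    # mean, standard deviation, coefficient of variation
    return mean, std, std / mean

# Illustrative parametric model: y = x1 + x2, inputs uniform on [0, 1)
mean, std, cv = monte_carlo(lambda x1, x2: x1 + x2,
                            lambda rng: (rng.random(), rng.random()))
```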
There are generally two types of Monte Carlo simulation: nonsequential Monte Carlo (by system states) [16] [17] [18] and sequential (chronological) Monte Carlo [19] [20] [21] [22] [23] .
The preceding statistical variables are evaluated by the sequential Monte Carlo simulation.
The coefficient of variation makes it possible to impose a maximum number of samples as a criterion for stopping the process of convergence of the Monte Carlo simulation.
The evaluation of the coefficient of variation is done by relation 18 [24] .
$\epsilon =\frac{{\sigma}_{x}}{\sqrt{N}\cdot \stackrel{\xaf}{x}}$ (18)
With:
• N: the number of samples (years);
• $\stackrel{\xaf}{x}$ : the mean of the study sample;
• ${\sigma}_{x}$ : the standard deviation of the random variable x;
• $\epsilon $ : the dispersion coefficient.
Based on the 2005 Canadian Safety Survey, estimates with a coefficient of variation less than 16.6% are considered reliable and can be used. Estimates with a coefficient of variation between 16.6% and 33.3% should be accompanied by a disclaimer warning users of high error rates.
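Relation 18 and the stopping rule it supports can be sketched in a few lines of Python; the sample values below are illustrative, not taken from the paper:

```python
import math
import statistics

def dispersion_coefficient(samples):
    """Relation 18: eps = sigma_x / (sqrt(N) * x_bar)."""
    n = len(samples)
    x_bar = statistics.fmean(samples)
    sigma_x = statistics.stdev(samples)
    return sigma_x / (math.sqrt(n) * x_bar)

# Illustrative yearly samples of a reliability estimate
samples = [0.97, 0.95, 0.96, 0.98, 0.94]
eps = dispersion_coefficient(samples)
reliable = eps < 0.166        # the 16.6% threshold cited above
```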
Step 5: Analysis of the results obtained
The idea of this step is to comment on the results obtained previously.
3. Application of Monte Carlo Simulation by Markov Chains on a Production System for Reliability Modeling
This method consists of representing the operation of a system by a set of components that can be in a finite number of operating and fault states.
Markov Chains are the simplest means to generate random probabilities and to model the states of a production system and possible transitions in Monte Carlo simulations.
Considering a production system, the history of operating hours over a year of operation reveals the time between failures (TBF), given in Table 1.
3.1. Production System Reliability Parameters
The processing of data from the history of operating hours makes it possible to determine the reliability $R\left(n\right)$ , the probability of failure $F\left(n\right)$ , the probability density of failures $f\left(n\right)$ , the mean time between failures (MTBF) and the failure rate $\lambda \left(n\right)$ .
Assuming that the TBFs evolve according to an exponential law, we then have relations 19 to 23 respectively.
$R\left(n\right)={\text{e}}^{-\lambda \left(n\right)\cdot n}$ (19)
$F\left(n\right)=1-{\text{e}}^{-\lambda \left(n\right)\cdot n}$ (20)
$f\left(n\right)=\lambda \left(n\right){\text{e}}^{-\lambda \left(n\right)\cdot n}$ (21)
$\text{MTBF}=\frac{{\displaystyle \sum \text{TBF}}}{N}=461\text{ h}$ (22)
$\lambda \left(n\right)=\lambda =\frac{1}{\text{MTBF}}=0.00217\text{ }{\text{h}}^{-1}$ (23)
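Relations 19 to 23 can be evaluated directly; a short Python sketch using the MTBF of 461 h given in relation 22:

```python
import math

MTBF = 461.0                  # hours, relation 22
lam = 1.0 / MTBF              # relation 23, about 0.00217 per hour

def R(n):
    """Reliability at time n hours, relation 19."""
    return math.exp(-lam * n)

def F(n):
    """Probability of failure by time n, relation 20."""
    return 1.0 - math.exp(-lam * n)

def f(n):
    """Failure probability density at time n, relation 21."""
    return lam * math.exp(-lam * n)
```

At n = MTBF the reliability of an exponential law is exactly e^{-1} ≈ 0.368, a standard sanity check on the implementation.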
3.2. Markov Chain Modeling the Reliability of the Production System
We consider here a production system evolving according to a stochastic process in discrete time and with discrete state space E ( $E=\left\{0,1,2\right\}$ ).
Table 1. Operating hours (h) history.
It is assumed that our production system can be found in three states: the state E_{0} where there are no failures, the state E_{1} where there is a partial failure and the state E_{2} where a total failure of the system is observed. It is assumed that the graph of states and transitions modeling reliability is described by Figure 2.
3.3. Implementation of Monte Carlo Simulation Steps
Step 1: Writing a parametric model
The parametric model consists of a system of equations defined as follows:
$\{\begin{array}{l}P\left({X}_{n+1}=0\mid {X}_{n}=0\right)={p}_{0,0}=1-\lambda \\ P\left({X}_{n+1}=1\mid {X}_{n}=0\right)={p}_{0,1}=\lambda \\ P\left({X}_{n+1}=2\mid {X}_{n}=0\right)={p}_{0,2}=\lambda \\ P\left({X}_{n+1}=0\mid {X}_{n}=1\right)={p}_{1,0}=0\\ P\left({X}_{n+1}=1\mid {X}_{n}=1\right)={p}_{1,1}=0\\ P\left({X}_{n+1}=2\mid {X}_{n}=1\right)={p}_{1,2}=0\\ P\left({X}_{n+1}=0\mid {X}_{n}=2\right)={p}_{2,0}=0\\ P\left({X}_{n+1}=1\mid {X}_{n}=2\right)={p}_{2,1}=0\\ P\left({X}_{n+1}=2\mid {X}_{n}=2\right)={p}_{2,2}=0\end{array}$ (24)
Step 2: Generation of a first random number
According to the writing of the previous parametric model and by applying relation 4, the reliability transition matrix is given by relation 25.
$P=\left[\begin{array}{ccc}1-\lambda & \lambda & \lambda \\ 0& 0& 0\\ 0& 0& 0\end{array}\right]=\left[\begin{array}{ccc}0.99783& 0.00217& 0.00217\\ 0& 0& 0\\ 0& 0& 0\end{array}\right]$ (25)
The initial condition is given by relation 26.
${\left({\pi}_{0},{\pi}_{1},{\pi}_{2}\right)}^{0}=\left(1,0,0\right)$ (26)
The probability of the system in its states (E_{0}, E_{1} and E_{2}) after one year of operation is given by relation 27.
${\pi}_{1}={\pi}_{0}\times P$ (27)
Thus, the generation of a first random number gives the relation 28.
$\begin{array}{c}{\left({\pi}_{0},{\pi}_{1},{\pi}_{2}\right)}^{1}=\left(1,0,0\right)\left[\begin{array}{ccc}0.99783& 0.00217& 0.00217\\ 0& 0& 0\\ 0& 0& 0\end{array}\right]\\ =\left(0.99783,0.00217,0.00217\right)\end{array}$ (28)
Figure 2. System reliability state graph.
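The one-step computation of relation 28 can be checked with a few lines of NumPy, using the matrix of relation 25 rounded to the values printed in the text:

```python
import numpy as np

lam = 0.00217
P = np.array([[1.0 - lam, lam, lam],
              [0.0,       0.0, 0.0],
              [0.0,       0.0, 0.0]])   # transition matrix, relation 25
pi0 = np.array([1.0, 0.0, 0.0])         # initial condition, relation 26

pi1 = pi0 @ P                            # relation 27
```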
Step 3: Evaluation of the model at a number of iterations
We first set the number of iterations arbitrarily to 90.
To determine the probabilistic state of the system in year n, we apply the distribution of ${X}_{n}$ of the Markov Chain. We then denote by ${\pi}_{n}$ the row vector (relation 29).
${\pi}_{n}=\left(P\left({X}_{n}=1\right)\text{}P\left({X}_{n}=2\right)\text{}\cdots \text{}P\left({X}_{n}=N\right)\right)$ (29)
According to relation 14, we have by induction relations 30 and 31.
${\pi}_{n}={\pi}_{0}\times {P}^{n}$ (30)
${\left({\pi}_{0},{\pi}_{1},{\pi}_{2}\right)}^{n}={\left({\pi}_{0},{\pi}_{1},{\pi}_{2}\right)}^{0}{P}^{n}$ (31)
The probability of the system in its states after two years of operation is (relation 32):
$\begin{array}{c}{\left({\pi}_{0},{\pi}_{1},{\pi}_{2}\right)}^{2}=\left(1,0,0\right){\left[\begin{array}{ccc}0.99783& 0.00217& 0.00217\\ 0& 0& 0\\ 0& 0& 0\end{array}\right]}^{2}\\ =\left(0.99566,0.00216,0.00216\right)\end{array}$ (32)
The probability of the system in its states after three years of operation is (relation 33):
$\begin{array}{c}{\left({\pi}_{0},{\pi}_{1},{\pi}_{2}\right)}^{3}=\left(1,0,0\right){\left[\begin{array}{ccc}0.99783& 0.00217& 0.00217\\ 0& 0& 0\\ 0& 0& 0\end{array}\right]}^{3}\\ =\left(0.99349,0.00216,0.00216\right)\end{array}$ (33)
Continuing by induction with the probabilities of the system in its states after n years of operation $\left(n\in \left\{4,5,\cdots ,90\right\}\right)$ , we obtain Table 2.
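The iteration behind Table 2 can be reproduced by repeatedly applying relation 31; a NumPy sketch over the 90 years used in the text:

```python
import numpy as np

lam = 0.00217
P = np.array([[1.0 - lam, lam, lam],
              [0.0,       0.0, 0.0],
              [0.0,       0.0, 0.0]])
pi = np.array([1.0, 0.0, 0.0])

history = []
for n in range(1, 91):          # years 1 to 90
    pi = pi @ P                 # pi_n = pi_{n-1} P, i.e. pi_n = pi_0 P^n
    history.append(pi.copy())
```

After two years the nominal-state probability is 0.99783² ≈ 0.99566, matching relation 32.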
Step 4: Calculation of statistical values through the graphs
Table 3 gives the various random values of system reliability and failure.
Table 4 gives the different random values of the probability density of the system.
According to the series of samples obtained previously, the reliability, failure and density graphs are represented respectively by Figures 3-5, and Tables 5-7 show the values of the reliability, failure and density.
Step 5: Analysis of the results obtained
From the previous results, we see that the reliability of the production system decreases slowly over time and stabilizes from the 87th year of operation, i.e., at a value of 0.82860 (Figure 3). The production system remains dependable during its uptime, with reliability exceeding 80%. However, increased monitoring of the production system remains necessary, in order to increase its reliability.
On the other hand, reduced reliability corresponds to an increase in the probability of failure (Figure 4). As the probability of failure approaches 1, more repairs of the production system are required.
Table 2. Probability of reliability after n years.
Table 3. Reliability and failure values.
Table 4. Probability density value.
Figure 3. System reliability probability graph by years.
Figure 4. System failure graph by years.
Figure 5. Probability density graph of system failures by years.
Table 5. Statistics of R(n) of the system.
Table 6. Statistics of F(n) of the system.
Table 7. Statistics of f(n) of the system.
The curve in Figure 5 represents the instantaneous failure probability density. In this case, the increase in downtime of the production system causes a decrease in reliability and increases the probability of the presence of a defect or failure.
With regard to Tables 5-7, and applying the safety threshold set in Canada in 2005, the estimates of the coefficient of variation are reliable. Indeed, the lower the value of the coefficient of variation, the more accurate the estimation of the reliability functions (R(n), F(n) and f(n)), since a low coefficient of variation reflects a low dispersion around the mean.
4. Proposals for Actions to Improve Reliability
To improve the reliability of the production system, we offer the following recommendations:
○ Daily inspections must be respected in order to detect failures very early to trigger the repair process as soon as possible;
○ When a breakdown occurs, immediately replace the faulty element with one playing an equivalent but more reliable role;
○ Perform preventive maintenance (regular maintenance, monitoring of anomaly rate increases, etc.);
○ Comply with the established maintenance program.
5. Conclusion
This paper aimed to evaluate the reliability of a production system and to propose actions to improve that reliability, by applying Monte Carlo simulation using Markov Chains. The main reliability characteristics were evaluated. It turns out that the production system ensures good functioning throughout its life cycle, with a reliability exceeding 80%. However, this reliability decreases, albeit slowly, over time, and the failure frequency increases accordingly. It is on the basis of this observation that suggestions for improving reliability have been made.
Acknowledgements
The authors would like to thank the anonymous reviewers for their time and effort. Their constructive comments and helpful suggestions helped us to clarify the main paper’s research contributions and improve its quality.