The New Mixed Generalized Erlang Distribution

Abstract

In probability theory, the mixture distribution M has a density function $f_M(t) = \sum_{i \in I} w_i f_{X_i}(t)$ for a collection of random variables $X_i$ weighted by $w_i \ge 0$ with $\sum_{i \in I} w_i = 1$. These mixed distributions are used in various disciplines and aim to enrich the collection of distributions with more parameters. A more general mixture was derived by Kadri and Halat, who proved the existence of such mixtures with $w_i \in \mathbb{R}$ while maintaining $\sum_{i \in I} w_i = 1$, and who provided many examples and applications for such new mixed distributions. In this paper, we introduce a new mixed distribution of the Generalized Erlang distribution, derived from the Hypoexponential distribution. We characterize this new distribution by deriving simple closed-form expressions for the related probability density function, cumulative distribution function, moment generating function, reliability function, hazard function, and moments.

Share and Cite:

Kadri, T. and Ghannam, Y. (2023) The New Mixed Generalized Erlang Distribution. Applied Mathematics, 14, 497-511. doi: 10.4236/am.2023.148031.

1. Introduction

Mixtures of distributions have been widely used for modeling observed situations whose various characteristics as reflected by the data differ from those that would be anticipated under the simple component distribution. More general families of distributions, well known as mixtures, are usually considered alternative models that offer more flexibility. These are superimpositions of simpler component distributions depending on a parameter, itself being a random variable with some distribution. Mixed Poisson distributions, in particular, have been widely used as claim frequency distributions.

In probability theory, the mixture distribution M has a density function

$$f_M(t) = \sum_{i \in I} w_i f_{X_i}(t)$$

for a collection of random variables $X_i$, $i \in I \subseteq \mathbb{N}$, weighted by $w_i \ge 0$ with $\sum_{i \in I} w_i = 1$. These mixed distributions are used in various disciplines and aim to enrich the collection of distributions with more parameters. The basic definitions and properties of continuous random variables, with the characteristics needed to introduce the mixed distribution, are presented in [1] and [2]. A more general mixture was derived by Kadri and Halat [3], who proved the existence of such mixtures with $w_i \in \mathbb{R}$ while maintaining $\sum_{i \in I} w_i = 1$, and provided many examples and applications for such new mixed distributions.

The Hypoexponential distribution with different parameters is one of the distributions shown to be such a new mixed distribution. Recall that in this case the probability density function of $S_n \sim \mathrm{Hypoexp}(\vec{\alpha})$, where $\vec{\alpha} \in (\mathbb{R}^+)^n$, is given by Smaili et al. [4] as:

$$f_{S_n}(t) = \sum_{i=1}^{n} \frac{f_{X_i}(t)}{P_i}, \quad t > 0 \tag{1.1}$$

where $X_i \sim \mathrm{Exp}(\alpha_i)$ and $P_i = \prod_{j=1, j \ne i}^{n} \left(1 - \frac{\alpha_i}{\alpha_j}\right)$, $i = 1, 2, \ldots, n$. They showed that $\sum_{i=1}^{n} \frac{1}{P_i} = 1$, with $\frac{1}{P_i} \in \mathbb{R}$ not necessarily positive. This attaches $\mathrm{Hypoexp}(\vec{\alpha})$ to the family of the newly defined mixed distributions.
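As a quick numerical illustration (our own sketch, not from the paper), the weights $1/P_i$ can be computed exactly for any vector of pairwise distinct parameters; some weights come out negative, yet they always sum to 1:

```python
from fractions import Fraction

def hypoexp_weights(alpha):
    """Mixture weights 1/P_i of Hypoexp(alpha) with pairwise distinct
    parameters, where P_i = prod_{j != i} (1 - alpha_i / alpha_j)."""
    weights = []
    for i, ai in enumerate(alpha):
        p = Fraction(1)
        for j, aj in enumerate(alpha):
            if j != i:
                p *= 1 - Fraction(ai, aj)
        weights.append(1 / p)
    return weights

w = hypoexp_weights([1, 2, 3])  # illustrative parameters, chosen arbitrarily
print(w)                        # weights 3, -3, 1: the middle one is negative
print(sum(w))                   # the weights sum to 1
```

Exact rational arithmetic is used so that the identity $\sum_i 1/P_i = 1$ holds without rounding error.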

Following the work of Kadri and Halat [3], one can construct a "New Mixed T-G Distribution", where T is a new mixed distribution, like the Hypoexponential distribution, and G is a function of the parent random variable appearing in the summands of the PDF of T, say $X_i$. To illustrate, taking T to be the Hypoexponential distribution with PDF given in (1.1), G is any function g of the random variable $X_i \sim \mathrm{Exp}(\alpha_i)$, and the generated distribution has a PDF of the form:

$$f_{T\text{-}G}(t) = \sum_{i=1}^{n} \frac{1}{P_i}\, f_{g(X_i)}(t)$$

Thus many distributions were generated as the Mixed Hypoexponential Weibull distribution, Mixed Hypoexponential Frechet distribution, Mixed Hypoexponential Pareto distribution, Mixed Hypoexponential Power distribution, Mixed Hypoexponential Gumbel distribution, and Mixed Hypoexponential Extreme Value distribution.

We also stress the importance of generating these new distributions: the newly generated distributions preserve the property of being mixed distributions themselves. Owing to this special form of the PDF, we can generalize the CDF, the reliability and hazard rate functions, the MGF, and the moments of each such distribution, as stated in the following theorem given by Kadri and Halat [3].

Theorem 1. Let N follow a new mixed distribution with PDF $f_N(t) = \sum_{i \in I} A_i f_{X_i}(t)$ for some minor distributions $X_i$. Then the CDF, the reliability and hazard rate functions, the MGF, and the moments are, respectively:

$$F_N(t) = \sum_{i \in I} A_i F_{X_i}(t)$$

$$R_N(t) = \sum_{i \in I} A_i R_{X_i}(t)$$

$$h_N(t) = \frac{\sum_{i \in I} A_i f_{X_i}(t)}{\sum_{i \in I} A_i R_{X_i}(t)}$$

$$\Phi_N(t) = \sum_{i \in I} A_i \Phi_{X_i}(t)$$

$$E[N^l] = \sum_{i \in I} A_i E[X_i^l]$$

We add that the coefficients $A_i$ of these sums satisfy $\sum_{i \in I} A_i = 1$. Determining a simple closed-form expression for these coefficients is essential to characterizing the new mixed distribution N.

One important mixed distribution is the Hypoexponential distribution in its general case, when the parameters are not all different. Referring to the work of Smaili et al. [4], if $S_m \sim \mathrm{Hypoexp}(\vec{\alpha}, \vec{k})$, where $\vec{\alpha} \in (\mathbb{R}^+)^n$ and $\vec{k} \in \mathbb{N}^n$ with $m = \sum_{i=1}^{n} k_i$, the probability density function is given by

$$f_{S_m}(t) = \sum_{i=1}^{n} \sum_{j=1}^{k_i} A_{ij} f_{X_{ij}}(t) \tag{1.2}$$

where $X_{ij} \sim \mathrm{Erl}(j, \alpha_i)$ and the $A_{ij}$ are the coefficients of the Hypoexponential distribution, which need not be positive real numbers. Thus we obtain:

$$F_{S_m}(t) = \sum_{i=1}^{n} \sum_{j=1}^{k_i} A_{ij} F_{X_{ij}}(t)$$

$$R_{S_m}(t) = \sum_{i=1}^{n} \sum_{j=1}^{k_i} A_{ij} R_{X_{ij}}(t)$$

$$h_{S_m}(t) = \frac{\sum_{i=1}^{n} \sum_{j=1}^{k_i} A_{ij} f_{X_{ij}}(t)}{\sum_{i=1}^{n} \sum_{j=1}^{k_i} A_{ij} R_{X_{ij}}(t)}$$

$$\Phi_{S_m}(t) = \sum_{i=1}^{n} \sum_{j=1}^{k_i} A_{ij} \Phi_{X_{ij}}(t)$$

$$E[S_m^l] = \sum_{i=1}^{n} \sum_{j=1}^{k_i} A_{ij} E[X_{ij}^l]$$

with $\sum_{i=1}^{n} \sum_{j=1}^{k_i} A_{ij} = 1$.

2. The New Mixed Generalized Erlang Distribution

The probability density function in (1.2) is that of a generalized mixed distribution, which opens a T-G family in which T is the general case of the Hypoexponential distribution and G is a distribution generated from the minor distribution forming the PDF of the Hypoexponential distribution in (1.2), namely the Erlang distribution $X_{ij} \sim \mathrm{Erl}(j, \alpha_i)$. Any function of the random variable $X_{ij}$ may then serve as G. One such choice yields the Generalized Erlang distribution.

Let $X \sim \mathrm{Erl}(k, \alpha)$ be an Erlang random variable. The transformation $G = X^{1/c}$, $c > 1$, follows the Generalized Erlang distribution with parameters k, β, c, written $G \sim \mathrm{GErl}(k, \beta, c)$, where $\beta^c = \alpha$. The Generalized Erlang distribution is an important distribution used in several fields such as queueing theory, teletraffic engineering, and stochastic processes; it has well-known applications in queueing waiting-time contexts in telecommunication traffic engineering and elsewhere, see [5].
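The transformation can be checked by simulation (a sketch with arbitrarily chosen parameters, not taken from the paper; α is treated as the scale of the Erlang so that $\beta^c = \alpha$ matches the PDF below): drawing $X \sim \mathrm{Erl}(k, \alpha)$ as a sum of k exponentials and raising it to the power $1/c$ reproduces the $\mathrm{GErl}(k, \beta, c)$ law:

```python
import math
import random

def upper_gamma_int(k, x):
    """Upper incomplete gamma Γ(k, x) for integer k >= 1 (finite sum)."""
    return math.factorial(k - 1) * math.exp(-x) * sum(x ** i / math.factorial(i) for i in range(k))

def gerl_cdf(t, k, beta, c):
    """CDF of GErl(k, beta, c): 1 - Γ(k, (t/beta)^c) / Γ(k)."""
    return 1.0 - upper_gamma_int(k, (t / beta) ** c) / math.factorial(k - 1)

random.seed(0)
k, c, alpha = 2, 2.0, 3.0          # illustrative values; beta^c = alpha
beta = alpha ** (1.0 / c)
n, t0 = 200_000, 1.5
# X ~ Erl(k, scale alpha) as a sum of k exponentials, then G = X^(1/c)
hits = sum(
    (sum(-alpha * math.log(random.random()) for _ in range(k))) ** (1.0 / c) <= t0
    for _ in range(n)
)
print(abs(hits / n - gerl_cdf(t0, k, beta, c)) < 0.01)  # empirical CDF matches
```

The Monte Carlo error here is of order $1/\sqrt{n} \approx 0.002$, well inside the 0.01 tolerance.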

In the following proposition, we present the characteristics of the Generalized Erlang distribution, see [6].

Proposition 1. Let $G \sim \mathrm{GErl}(k, \beta, c)$, $k \in \mathbb{N}$, $\beta > 0$ and $c > 1$. The PDF, CDF, reliability function, moment of order l, and MGF are given by:

$$f_G(t) = \frac{c\, e^{-(t/\beta)^c}\, t^{ck-1}}{\beta^{ck}\, (k-1)!}$$

$$F_G(t) = 1 - \frac{\Gamma(k, t^c/\beta^c)}{\Gamma(k)}$$

$$R_G(t) = \frac{\Gamma(k, t^c/\beta^c)}{\Gamma(k)}$$

$$E[G^l] = \frac{\beta^l\, \Gamma(k + l/c)}{\Gamma(k)}$$

$$\Phi_G(t) = \sum_{n=0}^{\infty} \frac{(t\beta)^n\, \Gamma(k + n/c)}{n!\, \Gamma(k)}$$

where $\Gamma(a, z)$ is the upper incomplete Gamma function, see [7].
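For integer k, the upper incomplete gamma function reduces to a finite sum, $\Gamma(k, x) = (k-1)!\, e^{-x} \sum_{i=0}^{k-1} x^i/i!$, so the proposition can be sanity-checked numerically (a sketch with parameters of our own choosing): integrating the PDF should reproduce the CDF:

```python
import math

def upper_gamma_int(k, x):
    """Γ(k, x) for integer k >= 1 via the finite-sum identity."""
    return math.factorial(k - 1) * math.exp(-x) * sum(x ** i / math.factorial(i) for i in range(k))

def gerl_pdf(t, k, beta, c):
    """PDF of GErl(k, beta, c) from Proposition 1."""
    return c * math.exp(-(t / beta) ** c) * t ** (c * k - 1) / (beta ** (c * k) * math.factorial(k - 1))

def gerl_cdf(t, k, beta, c):
    """CDF of GErl(k, beta, c) from Proposition 1."""
    return 1.0 - upper_gamma_int(k, (t / beta) ** c) / math.factorial(k - 1)

# trapezoidal integral of the PDF over [0, T] vs. the closed-form CDF
k, beta, c, T, n = 2, 1.5, 2, 2.0, 20_000
h = T / n
area = sum(0.5 * h * (gerl_pdf(i * h, k, beta, c) + gerl_pdf((i + 1) * h, k, beta, c)) for i in range(n))
print(abs(area - gerl_cdf(T, k, beta, c)))  # prints a value near zero
```

The parameter values are illustrative only; the agreement holds for any admissible $(k, \beta, c)$.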

Adopting the general case of the Hypoexponential distribution with PDF

$$f_{S_m}(t) = \sum_{i=1}^{n} \sum_{j=1}^{k_i} A_{ij} f_{X_{ij}}(t)$$

and taking $G_{ij} = X_{ij}^{1/c}$, where $X_{ij} \sim \mathrm{Erl}(j, \alpha_i)$, we have $G_{ij} \sim \mathrm{GErl}(j, \beta_i, c)$ with $\beta_i^c = \alpha_i$, $c > 1$, and we generate a new mixed distribution A, named the New Mixed Generalized Erlang distribution. The PDF of A is:

$$f_A(t) = \sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij} f_{G_{ij}}(t) \tag{2.1}$$

where $\vec{\beta} = (\beta_1, \beta_2, \ldots, \beta_n) \in (\mathbb{R}^+)^n$ and $\vec{k} = (k_1, k_2, \ldots, k_n) \in \mathbb{N}^n$. We emphasize that A is a generalized mixed distribution, so by the discussion of Kadri and Halat [3] it can be easily characterized. Referring to Theorem 1, we conclude for our new distribution $A \sim \mathrm{NMGE}(\vec{k}, \vec{\beta}, c)$ that the CDF, reliability, hazard rate function, MGF, and moments are:

$$F_A(t) = \sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij} F_{G_{ij}}(t)$$

$$R_A(t) = \sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij} R_{G_{ij}}(t)$$

$$h_A(t) = \frac{\sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij} f_{G_{ij}}(t)}{\sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij} R_{G_{ij}}(t)}$$

$$\Phi_A(t) = \sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij} \Phi_{G_{ij}}(t)$$

$$E[A^l] = \sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij} E[G_{ij}^l]$$

and, eventually, $\sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij} = 1$. These expressions become more explicit when we insert the expressions for the Generalized Erlang distribution $G_{ij}$ given in Proposition 1. We thus obtain:

$$f_A(t) = \sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij}\, \frac{c\, e^{-(t/\beta_i)^c}\, t^{cj-1}}{\beta_i^{cj}\, (j-1)!}$$

$$R_A(t) = \sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij}\, \frac{\Gamma(j, t^c/\beta_i^c)}{\Gamma(j)}$$

$$h_A(t) = \frac{\sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij}\, c\, e^{-(t/\beta_i)^c}\, t^{cj-1} \big/ \left(\beta_i^{cj}\, (j-1)!\right)}{\sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij}\, \Gamma(j, t^c/\beta_i^c) \big/ \Gamma(j)}$$

$$\Phi_A(t) = \sum_{i=1}^{n} \sum_{j=1}^{k_i} \sum_{n=0}^{\infty} Y_{ij}\, \frac{(t\beta_i)^n\, \Gamma(j + n/c)}{n!\, \Gamma(j)}$$

$$E[A^l] = \sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij}\, \frac{\beta_i^l\, \Gamma(j + l/c)}{\Gamma(j)}$$

$$F_A(t) = 1 - \sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij}\, \frac{\Gamma(j, t^c/\beta_i^c)}{\Gamma(j)}$$

We note that to characterize our New Mixed Generalized Erlang distribution, we are left with determining the coefficients $Y_{ij}$ appearing in these expressions. We therefore seek a method to compute $Y_{ij}$ in order to obtain the PDF, CDF, MGF, moment of order l, hazard rate function, and reliability function. Different approaches have been proposed to determine such coefficients, such as recursive algorithms and the solution of a linear matrix equation, see [4] [5] [6] [7] [8].

3. The Matrix Form for Finding PDF of New Mixed Generalized Erlang Distribution

The purpose of this section is to give a linear-algebra method for finding the coefficients $Y_{ij}$. The method determines m linear equations for the $Y_{ij}$. Our first equation was given in the previous section: $\sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij} = 1$.

Next, we find the other $m - 1$ linear equations for the $Y_{ij}$. We start with the following lemma.

Lemma 1. Let $m \ge 2$, $c > 1$, $\vec{k} = (k_1, k_2, \ldots, k_n)$, and $\vec{\beta} = (\beta_1, \beta_2, \ldots, \beta_n)$, where $m = \sum_{i=1}^{n} k_i$. Then the uth derivative of the PDF of $A \sim \mathrm{NMGE}(\vec{k}, \vec{\beta}, c)$ at zero satisfies:

$$f_A^{(u)}(0) = \begin{cases} 0 & \text{if } 0 \le u \le cm - 2 \\ \text{nonzero} & \text{if } u = cm - 1 \end{cases}$$

Proof. Start from the definition of the Hypoexponential distribution $S_m \sim \mathrm{Hypoexp}(\vec{\alpha}, \vec{k})$ as the sum of independent Erlang random variables $E_i \sim \mathrm{Erl}(k_i, \alpha_i)$, so that the PDF of $S_m$ is the convolution of the PDFs of the $E_i$:

$$f_{S_m}(t) = f_{E_1} * f_{E_2} * \cdots * f_{E_n}(t)$$

Then the Laplace transform of $f_{S_m}(t)$ is:

$$\mathcal{L}\{f_{S_m}(t)\}(s) = \prod_{i=1}^{n} \mathcal{L}\{f_{E_i}(t)\}(s) = \prod_{i=1}^{n} \left(\frac{1}{1 + s\alpha_i}\right)^{k_i}$$

and as $s \to +\infty$, $\mathcal{L}\{f_{S_m}(t)\}$ is proportional to $\frac{1}{s^{\sum k_i}} = \frac{1}{s^m}$. Now, our new distribution $A \sim \mathrm{NMGE}(\vec{k}, \vec{\beta}, c)$ is obtained by transforming the random variables via $E_i^{1/c}$, and we conclude that the Laplace transform of the PDF of A at infinity is proportional to $\frac{1}{s^{cm}}$; that is, $\mathcal{L}\{f_A(t)\} \sim \frac{1}{s^{cm}}$. The proof proceeds by induction. For $u = 0$, by the Initial Value Theorem we have:

$$f_A(0^+) = \lim_{t \to 0} f_A(t) = \lim_{s \to +\infty} s\, \mathcal{L}\{f_A(t)\} = \lim_{s \to +\infty} s \cdot \frac{1}{s^{cm}} = 0$$

Next, for $u = 1$ we have $\mathcal{L}\{f_A'(t)\} = s\, \mathcal{L}\{f_A(t)\} - f_A(0^+) = s\, \mathcal{L}\{f_A(t)\}$. Again:

$$f_A'(0^+) = \lim_{s \to +\infty} s\, \mathcal{L}\{f_A'(t)\} = \lim_{s \to +\infty} s^2\, \mathcal{L}\{f_A(t)\} = \lim_{s \to +\infty} \frac{1}{s^{cm-2}} = 0$$

For $u = 2$ we have $\mathcal{L}\{f_A''(t)\} = s\, \mathcal{L}\{f_A'(t)\} - f_A'(0^+) = s\, \mathcal{L}\{f_A'(t)\}$, so:

$$f_A''(0^+) = \lim_{s \to +\infty} s\, \mathcal{L}\{f_A''(t)\} = \lim_{s \to +\infty} s^3\, \mathcal{L}\{f_A(t)\} = \lim_{s \to +\infty} \frac{1}{s^{cm-3}} = 0$$

Continuing in the same manner, using $\mathcal{L}\{f_A^{(i)}(t)\} = s\, \mathcal{L}\{f_A^{(i-1)}(t)\} - f_A^{(i-1)}(0^+)$, up to the derivative $u = cm - 2$, we get:

$$f_A^{(cm-2)}(0^+) = \lim_{s \to +\infty} s\, \mathcal{L}\{f_A^{(cm-3)}(t)\} = \lim_{s \to +\infty} s^{cm-1}\, \mathcal{L}\{f_A(t)\} = \lim_{s \to +\infty} \frac{s^{cm-1}}{s^{cm}} = 0$$

Finally, at $u = cm - 1$:

$$f_A^{(cm-1)}(0^+) = \lim_{s \to +\infty} s\, \mathcal{L}\{f_A^{(cm-2)}(t)\} = \lim_{s \to +\infty} s^{cm}\, \mathcal{L}\{f_A(t)\} \ne 0$$

since $\mathcal{L}\{f_A(t)\} \sim 1/s^{cm}$, so the limit is a nonzero constant. ◻

In the next definition, we define the YARGHANN matrix (GH), which is used below to determine the vector $Y = (Y_{ij})_{m \times 1}$. The definition involves $f_{G_{ij}}^{(u)}(0)$, the uth derivative at 0 of the PDF of the Generalized Erlang distribution $G_{ij} \sim \mathrm{GErl}(j, \beta_i, c)$. For simplicity, we write $f_{G_{ij}}^{(u)}(0) = f_{i,j}^{(u)}(0)$.

First, we find the expression of the uth derivative $f_{i,j}^{(u)}(0)$.

Proposition 2. Let $G \sim \mathrm{GErl}(k, \beta, c)$ and $l \in \mathbb{N}^*$. Then:

$$f_G^{(cl-1)}(0) = \begin{cases} \dfrac{(-1)^{l-k}\, c\, (cl-1)!}{(k-1)!\, (l-k)!\, \beta^{cl}} & \text{if } 1 \le k \le l \\[4pt] 0 & \text{if } k > l \end{cases}$$

Proof. The PDF of $G \sim \mathrm{GErl}(k, \beta, c)$ is $f_G(x) = \frac{c\, e^{-(x/\beta)^c}\, x^{ck-1}}{\beta^{ck}(k-1)!}$, and the series of $e^{-(x/\beta)^c}$ is $e^{-(x/\beta)^c} = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!} \left(\frac{x}{\beta}\right)^{cn}$. Then:

$$f_G(x) = \frac{c}{\beta\, (k-1)!} \sum_{n=0}^{\infty} \frac{(-1)^n}{n!} \left(\frac{x}{\beta}\right)^{cn + ck - 1}$$

The uth derivative at $x = 0$ retains only the term whose exponent satisfies $cn + ck - (u+1) = 0$, that is, $c(n+k) = u + 1$; for this to hold with integers we must have $c \mid u+1$, and thus $u + 1 = cl$ where $l = n + k \ge k$. In this case:

$$f_G^{(u)}(0) = \frac{u!\, c\, (-1)^n}{(k-1)!\, \beta^{c(n+k)}\, n!} = \frac{(cl-1)!\, c\, (-1)^{l-k}}{(k-1)!\, \beta^{cl}\, (l-k)!}$$

If instead $k > l$, then $n = l - k < 0$ and no such term exists in the series, so $f_G^{(cl-1)}(0) = 0$. ◻
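The closed form can be cross-checked against the series directly (an illustrative sketch; the rational parameter values are ours): the $(cl-1)$th derivative at 0 is $(cl-1)!$ times the coefficient of $x^{cl-1}$, which comes from the single term $n = l - k$:

```python
from fractions import Fraction
from math import factorial

def closed_form(k, l, c, beta):
    """Proposition 2: f_G^{(c*l - 1)}(0) for G ~ GErl(k, beta, c)."""
    if k > l:
        return Fraction(0)
    return Fraction((-1) ** (l - k) * c * factorial(c * l - 1),
                    factorial(k - 1) * factorial(l - k)) / Fraction(beta) ** (c * l)

def from_series(k, l, c, beta):
    """(c*l - 1)! times the x^(c*l - 1) coefficient of the PDF's Taylor
    series, taken from the n = l - k term of the exponential expansion."""
    n = l - k
    if n < 0:
        return Fraction(0)  # no matching term in the series
    coeff = Fraction(c * (-1) ** n, factorial(n) * factorial(k - 1)) / Fraction(beta) ** (c * (n + k))
    return factorial(c * l - 1) * coeff

for k in range(1, 5):
    for l in range(1, 6):
        assert closed_form(k, l, 3, Fraction(3, 2)) == from_series(k, l, 3, Fraction(3, 2))
print("closed form agrees with the series coefficients")
```

Exact fractions keep the comparison free of floating-point error even for high-order derivatives.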

Definition 1. Let $f_{i,j}^{(u)}(0)$ be the uth derivative at 0 of the PDF of $G_{ij} \sim \mathrm{GErl}(j, \beta_i, c)$, where $1 \le i \le n$ and $1 \le j \le k_i$. For $\vec{k} = (k_1, k_2, \ldots, k_n)$, $\vec{\beta} = (\beta_1, \beta_2, \ldots, \beta_n)$, $u = cl - 1$ for $l = 1, \ldots, m-1$, and $m = \sum_{i=1}^{n} k_i$, we define the GH matrix of size $m \times m$ as:

$$GH = \begin{bmatrix}
1 & \cdots & 1 & \cdots & 1 & \cdots & 1 \\
f_{1,1}^{(c-1)}(0) & \cdots & f_{1,k_1}^{(c-1)}(0) & \cdots & f_{n,1}^{(c-1)}(0) & \cdots & f_{n,k_n}^{(c-1)}(0) \\
f_{1,1}^{(2c-1)}(0) & \cdots & f_{1,k_1}^{(2c-1)}(0) & \cdots & f_{n,1}^{(2c-1)}(0) & \cdots & f_{n,k_n}^{(2c-1)}(0) \\
\vdots & & \vdots & & \vdots & & \vdots \\
f_{1,1}^{(c(m-1)-1)}(0) & \cdots & f_{1,k_1}^{(c(m-1)-1)}(0) & \cdots & f_{n,1}^{(c(m-1)-1)}(0) & \cdots & f_{n,k_n}^{(c(m-1)-1)}(0)
\end{bmatrix} \tag{3.1}$$

Next, the linear matrix equation for the $Y_{ij}$ is formed, with the GH matrix as its coefficient matrix.

Theorem 2. Let $f_A(t) = \sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij} f_{G_{ij}}(t)$, where $G_{ij} \sim \mathrm{GErl}(j, \beta_i, c)$ and $Y = (Y_{ij})_{(i,j) \in I_m}$ with $I_m = \{(i,j) : 1 \le i \le n,\ 1 \le j \le k_i\}$, $m = \sum_{i=1}^{n} k_i$, $\vec{k} = (k_1, \ldots, k_n)$, $\vec{\beta} = (\beta_1, \ldots, \beta_n)$. Then $(GH)\, Y = e_1^T$, where $e_1 = (1\ 0\ \cdots\ 0)$.

Proof. We previously obtained our first equation from the definition of a mixed distribution: $\sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij} = 1$, the first linear equation for the $Y_{ij}$. To find the other equations, take the PDF $f_A(t) = \sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij} f_{G_{ij}}(t)$; its uth derivative is $f_A^{(u)}(t) = \sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij} f_{G_{ij}}^{(u)}(t)$. From Lemma 1 we have $f_A^{(u)}(0) = 0$ whenever $0 \le u \le cm - 2$, so $\sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij} f_{G_{ij}}^{(u)}(0) = 0$ for $0 \le u \le cm - 2$. To avoid trivial equations, Proposition 2 gives $f_G^{(cl-1)}(0) \ne 0$ when $1 \le k_i \le l$, $l \in \mathbb{N}^*$; taking $l = 1, \ldots, m-1$ yields $u = c - 1, 2c - 1, \ldots, c(m-1) - 1 \le cm - 2$. Thus $f_A^{(cl-1)}(0) = \sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij} f_{G_{ij}}^{(cl-1)}(0) = 0$ for $l = 1, 2, \ldots, m-1$, and these are the remaining $m - 1$ equations. We thus obtain m equations equivalent to the matrix equation $(GH)\, Y = e_1^T$, where GH is the $m \times m$ GH matrix defined in Definition 1 and $e_1 = (1\ 0\ \cdots\ 0)$. ◻

Next, we illustrate two special cases of this distribution: $\mathrm{NMGE}(\vec{k}, \vec{\beta}, 2)$ and $\mathrm{NMGE}(\vec{k}, \vec{\beta}, 3)$.

Corollary 1. Let $A \sim \mathrm{NMGE}(\vec{k}, \vec{\beta}, 2)$ and $G_{ij} \sim \mathrm{GErl}(j, \beta_i, 2)$, $1 \le i \le n$, $1 \le j \le k_i$, $m = \sum_{i=1}^{n} k_i$, with $u = 2r - 1$ for $r = 1, \ldots, m-1$, $\vec{\beta} = (\beta_1, \ldots, \beta_n)$, $\vec{k} = (k_1, \ldots, k_n)$. Then the GH matrix of size $m \times m$ is:

$$GH = \begin{bmatrix}
1 & \cdots & 1 & \cdots & 1 & \cdots & 1 \\
f_{1,1}^{(1)}(0) & \cdots & f_{1,k_1}^{(1)}(0) & \cdots & f_{n,1}^{(1)}(0) & \cdots & f_{n,k_n}^{(1)}(0) \\
f_{1,1}^{(3)}(0) & \cdots & f_{1,k_1}^{(3)}(0) & \cdots & f_{n,1}^{(3)}(0) & \cdots & f_{n,k_n}^{(3)}(0) \\
\vdots & & \vdots & & \vdots & & \vdots \\
f_{1,1}^{(2m-3)}(0) & \cdots & f_{1,k_1}^{(2m-3)}(0) & \cdots & f_{n,1}^{(2m-3)}(0) & \cdots & f_{n,k_n}^{(2m-3)}(0)
\end{bmatrix}$$

Moreover, to compute the GH matrix more simply, the uth derivative of the PDF of the Generalized Erlang distribution for $c = 2$ is:

$$f_{i,j}^{(2l-1)}(0) = \begin{cases} \dfrac{2\, (-1)^{l-j}\, (2l-1)!}{(j-1)!\, (l-j)!\, \beta_i^{2l}} & \text{if } j \le l \\[4pt] 0 & \text{if } j > l \end{cases}$$

where $l = 1, 2, \ldots, m-1$.

Corollary 2. Let $G_{ij} \sim \mathrm{GErl}(j, \beta_i, 3)$, $m \ge 2$, $1 \le i \le n$, $1 \le j \le k_i$, $m = \sum_{i=1}^{n} k_i$, with $u = 3r - 1$ for $r = 1, \ldots, m-1$, $\vec{\beta} = (\beta_1, \ldots, \beta_n)$, $\vec{k} = (k_1, \ldots, k_n)$. Then the GH matrix is:

$$GH = \begin{bmatrix}
1 & \cdots & 1 & \cdots & 1 & \cdots & 1 \\
f_{1,1}^{(2)}(0) & \cdots & f_{1,k_1}^{(2)}(0) & \cdots & f_{n,1}^{(2)}(0) & \cdots & f_{n,k_n}^{(2)}(0) \\
f_{1,1}^{(5)}(0) & \cdots & f_{1,k_1}^{(5)}(0) & \cdots & f_{n,1}^{(5)}(0) & \cdots & f_{n,k_n}^{(5)}(0) \\
\vdots & & \vdots & & \vdots & & \vdots \\
f_{1,1}^{(3m-4)}(0) & \cdots & f_{1,k_1}^{(3m-4)}(0) & \cdots & f_{n,1}^{(3m-4)}(0) & \cdots & f_{n,k_n}^{(3m-4)}(0)
\end{bmatrix}$$

Moreover, to compute GH more simply, the uth derivative of the PDF of the Generalized Erlang distribution for $c = 3$ is:

$$f_{i,j}^{(3r-1)}(0) = \begin{cases} \dfrac{3\, (-1)^{r-j}\, (3r-1)!}{(j-1)!\, (r-j)!\, \beta_i^{3r}} & \text{if } j \le r \\[4pt] 0 & \text{if } j > r \end{cases}$$

4. Application

In this section, we illustrate with an example our method for finding the PDF of the New Mixed Generalized Erlang distribution using the GH matrix.

Suppose that $A \sim \mathrm{NMGE}(\vec{k}, \vec{\beta}, c)$ and $G_{ij} \sim \mathrm{GErl}(j, \beta_i, c)$ for $c > 1$, $\vec{\beta} = (\beta_1, \ldots, \beta_n)$, $\vec{k} = (k_1, \ldots, k_n)$, and $m = \sum_{i=1}^{n} k_i$. Then the PDF of A is $f_A(t) = \sum_{i=1}^{n} \sum_{j=1}^{k_i} Y_{ij} f_{G_{ij}}(t)$, where $Y = (GH)^{-1} e_1^T$.

Example 1. Consider the distribution $A \sim \mathrm{NMGE}(\vec{k}, \vec{\beta}, 2)$ with $(\beta_1^2, \beta_2^2, \beta_3^2) = (3, 4, 5)$, i.e. $\vec{\beta} = (\sqrt{3}, 2, \sqrt{5})$, and $\vec{k} = (2, 3, 1)$, so that $m = \sum_{i=1}^{3} k_i = 6$. The PDF of A is $f_A(t) = \sum_{i=1}^{3} \sum_{j=1}^{k_i} Y_{ij} f_{G_{ij}}(t)$, where $Y = (Y_{11}, Y_{12}, Y_{21}, Y_{22}, Y_{23}, Y_{31})$. We have:

$$f_{G_{ij}}(t) = \frac{2\, e^{-(t/\beta_i)^2}\, t^{2j-1}}{\beta_i^{2j}\, (j-1)!}$$

so that

$$f_{G_{11}}(t) = \tfrac{2}{3}\, e^{-t^2/3}\, t, \quad f_{G_{12}}(t) = \tfrac{2}{9}\, e^{-t^2/3}\, t^3, \quad f_{G_{21}}(t) = \tfrac{1}{2}\, e^{-t^2/4}\, t,$$

$$f_{G_{22}}(t) = \tfrac{1}{8}\, e^{-t^2/4}\, t^3, \quad f_{G_{23}}(t) = \tfrac{1}{64}\, e^{-t^2/4}\, t^5, \quad f_{G_{31}}(t) = \tfrac{2}{5}\, e^{-t^2/5}\, t$$

To find the entries of the GH matrix we use Corollary 1, which gives GH as:

$$GH = \begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 1 \\
f_{1,1}^{(1)}(0) & f_{1,2}^{(1)}(0) & f_{2,1}^{(1)}(0) & f_{2,2}^{(1)}(0) & f_{2,3}^{(1)}(0) & f_{3,1}^{(1)}(0) \\
f_{1,1}^{(3)}(0) & f_{1,2}^{(3)}(0) & f_{2,1}^{(3)}(0) & f_{2,2}^{(3)}(0) & f_{2,3}^{(3)}(0) & f_{3,1}^{(3)}(0) \\
f_{1,1}^{(5)}(0) & f_{1,2}^{(5)}(0) & f_{2,1}^{(5)}(0) & f_{2,2}^{(5)}(0) & f_{2,3}^{(5)}(0) & f_{3,1}^{(5)}(0) \\
f_{1,1}^{(7)}(0) & f_{1,2}^{(7)}(0) & f_{2,1}^{(7)}(0) & f_{2,2}^{(7)}(0) & f_{2,3}^{(7)}(0) & f_{3,1}^{(7)}(0) \\
f_{1,1}^{(9)}(0) & f_{1,2}^{(9)}(0) & f_{2,1}^{(9)}(0) & f_{2,2}^{(9)}(0) & f_{2,3}^{(9)}(0) & f_{3,1}^{(9)}(0)
\end{bmatrix}$$

and by Corollary 1,

$$f_{i,j}^{(2r-1)}(0) = \begin{cases} \dfrac{2\, (-1)^{r-j}\, (2r-1)!}{(j-1)!\, (r-j)!\, \beta_i^{2r}} & \text{if } 1 \le j \le r \\[4pt] 0 & \text{if } j > r \end{cases}$$

for $r = 1, 2, 3, 4, 5 = m - 1$.

For $r = 1$ ($u = 1$), the condition $1 \le j \le r$ forces $j = 1$, so:

$$f_{i,j}^{(1)}(0) = \begin{cases} \dfrac{2}{\beta_i^2} & \text{if } j = 1 \\[4pt] 0 & \text{else} \end{cases}$$

For $r = 2$ ($u = 3$), with $j = 1, 2$:

$$f_{i,j}^{(3)}(0) = \begin{cases} -\dfrac{12}{\beta_i^4} & \text{if } j = 1 \\[4pt] \dfrac{12}{\beta_i^4} & \text{if } j = 2 \\[4pt] 0 & \text{else} \end{cases}$$

For $r = 3$ ($u = 5$), with $j = 1, 2, 3$:

$$f_{i,j}^{(5)}(0) = \begin{cases} \dfrac{120}{\beta_i^6} & \text{if } j = 1 \\[4pt] -\dfrac{240}{\beta_i^6} & \text{if } j = 2 \\[4pt] \dfrac{120}{\beta_i^6} & \text{if } j = 3 \\[4pt] 0 & \text{else} \end{cases}$$

For $r = 4$ ($u = 7$), with $j = 1, 2, 3$:

$$f_{i,j}^{(7)}(0) = \begin{cases} -\dfrac{1680}{\beta_i^8} & \text{if } j = 1 \\[4pt] \dfrac{5040}{\beta_i^8} & \text{if } j = 2 \\[4pt] -\dfrac{5040}{\beta_i^8} & \text{if } j = 3 \\[4pt] 0 & \text{else} \end{cases}$$

For $r = 5$ ($u = 9$), with $j = 1, 2, 3$:

$$f_{i,j}^{(9)}(0) = \begin{cases} \dfrac{30240}{\beta_i^{10}} & \text{if } j = 1 \\[4pt] -\dfrac{120960}{\beta_i^{10}} & \text{if } j = 2 \\[4pt] \dfrac{181440}{\beta_i^{10}} & \text{if } j = 3 \\[4pt] 0 & \text{else} \end{cases}$$

We then get:

$$GH = \begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 1 \\
\frac{2}{3} & 0 & \frac{1}{2} & 0 & 0 & \frac{2}{5} \\
-\frac{4}{3} & \frac{4}{3} & -\frac{3}{4} & \frac{3}{4} & 0 & -\frac{12}{25} \\
\frac{40}{9} & -\frac{80}{9} & \frac{15}{8} & -\frac{15}{4} & \frac{15}{8} & \frac{24}{25} \\
-\frac{560}{27} & \frac{560}{9} & -\frac{105}{16} & \frac{315}{16} & -\frac{315}{16} & -\frac{336}{125} \\
\frac{1120}{9} & -\frac{4480}{9} & \frac{945}{32} & -\frac{945}{8} & \frac{2835}{16} & \frac{6048}{625}
\end{bmatrix}$$

By Theorem 2, $Y = (GH)^{-1} e_1^T$, where $e_1 = (1\ 0\ \cdots\ 0)$; solving this linear system (i.e., taking the first column of $(GH)^{-1}$) gives:

$$Y = (GH)^{-1} e_1^T = \begin{bmatrix} \frac{2349}{4} \\[2pt] \frac{81}{2} \\[2pt] -1408 \\[2pt] 64 \\[2pt] -64 \\[2pt] \frac{3125}{4} \end{bmatrix}$$

Then $Y_{11} = \frac{2349}{4}$, $Y_{12} = \frac{81}{2}$, $Y_{21} = -1408$, $Y_{22} = 64$, $Y_{23} = -64$, $Y_{31} = \frac{3125}{4}$.

We verify that the entries of Y sum to one, as required: $Y_{11} + Y_{12} + Y_{21} + Y_{22} + Y_{23} + Y_{31} = \frac{2349}{4} + \frac{81}{2} - 1408 + 64 - 64 + \frac{3125}{4} = 1$.
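The whole computation can be reproduced in exact rational arithmetic (a sketch of our own, using only the standard library): build GH from Corollary 1 and solve $(GH)\,Y = e_1^T$ by Gauss-Jordan elimination:

```python
from fractions import Fraction
from math import factorial

def deriv_at_zero(j, l, beta_sq):
    """f_{i,j}^{(2l-1)}(0) from Corollary 1 (c = 2), with beta_sq = beta_i^2."""
    if j > l:
        return Fraction(0)
    return Fraction((-1) ** (l - j) * 2 * factorial(2 * l - 1),
                    factorial(j - 1) * factorial(l - j)) / Fraction(beta_sq) ** l

def gh_matrix(ks, betas_sq):
    """GH matrix of Definition 1 for c = 2; columns ordered (i, j = 1..k_i)."""
    m = sum(ks)
    cols = [(i, j) for i, k in enumerate(ks) for j in range(1, k + 1)]
    rows = [[Fraction(1)] * m]                       # first row: all ones
    for l in range(1, m):                            # rows u = 2l - 1
        rows.append([deriv_at_zero(j, l, betas_sq[i]) for i, j in cols])
    return rows

def solve(A, b):
    """Gauss-Jordan elimination over exact fractions."""
    n = len(A)
    M = [row[:] + [b[r]] for r, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * p for a, p in zip(M[r], M[col])]
    return [row[n] for row in M]

# Example 1: k = (2, 3, 1), beta_i^2 = (3, 4, 5), m = 6
Y = solve(gh_matrix([2, 3, 1], [3, 4, 5]), [Fraction(1)] + [Fraction(0)] * 5)
print(Y)       # 2349/4, 81/2, -1408, 64, -64, 3125/4
print(sum(Y))  # 1
```

Because everything is a `Fraction`, the constraint $\sum_{i,j} Y_{ij} = 1$ is verified exactly rather than to floating-point tolerance.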

Finally, the PDF of A (Figure 1), which follows the New Mixed Generalized Erlang distribution, is:

$$f_A(t) = \sum_{i=1}^{3} \sum_{j=1}^{k_i} Y_{ij} f_{G_{ij}}(t) = \frac{783}{2}\, e^{-t^2/3}\, t + 9\, e^{-t^2/3}\, t^3 - 704\, e^{-t^2/4}\, t + 8\, e^{-t^2/4}\, t^3 - e^{-t^2/4}\, t^5 + \frac{625}{2}\, e^{-t^2/5}\, t$$

The CDF of A (Figure 2), which follows the New Mixed Generalized Erlang distribution, is:

$$F_A(t) = \sum_{i=1}^{3} \sum_{j=1}^{k_i} Y_{ij} F_{G_{ij}}(t) = 1 - \frac{3125}{4}\, e^{-t^2/5} - \frac{27}{4}\, e^{-t^2/3} \left(93 + 2t^2\right) + 2\, e^{-t^2/4} \left(704 + t^4\right)$$

The reliability function of A (Figure 3), which follows the New Mixed Generalized Erlang distribution, is:

Figure 1. PDF of $A \sim \mathrm{NMGE}(\vec{k}, \vec{\beta}, 2)$.

Figure 2. CDF of $A \sim \mathrm{NMGE}(\vec{k}, \vec{\beta}, 2)$.

Figure 3. Reliability of $A \sim \mathrm{NMGE}(\vec{k}, \vec{\beta}, 2)$.

Figure 4. Hazard of $A \sim \mathrm{NMGE}(\vec{k}, \vec{\beta}, 2)$.

$$R_A(t) = \sum_{i=1}^{3} \sum_{j=1}^{k_i} Y_{ij} R_{G_{ij}}(t) = \frac{3125}{4}\, e^{-t^2/5} + \frac{27}{4}\, e^{-t^2/3} \left(93 + 2t^2\right) - 2\, e^{-t^2/4} \left(704 + t^4\right)$$

The hazard rate function of A (Figure 4) is:

$$h_A(t) = \frac{f_A(t)}{R_A(t)} = \frac{\frac{783}{2}\, e^{-t^2/3}\, t + 9\, e^{-t^2/3}\, t^3 - 704\, e^{-t^2/4}\, t + 8\, e^{-t^2/4}\, t^3 - e^{-t^2/4}\, t^5 + \frac{625}{2}\, e^{-t^2/5}\, t}{\frac{3125}{4}\, e^{-t^2/5} + \frac{27}{4}\, e^{-t^2/3} \left(93 + 2t^2\right) - 2\, e^{-t^2/4} \left(704 + t^4\right)}$$

The moment generating function of A is:

$$\Phi_A(t) = \sum_{i=1}^{3} \sum_{j=1}^{k_i} Y_{ij} \Phi_{G_{ij}}(t) = \sum_{n=0}^{\infty} \frac{t^n}{n!} \left( \frac{2349}{4}\, 3^{n/2}\, \Gamma\!\left(1 + \tfrac{n}{2}\right) + \frac{81}{2}\, 3^{n/2}\, \Gamma\!\left(2 + \tfrac{n}{2}\right) - 1408 \cdot 4^{n/2}\, \Gamma\!\left(1 + \tfrac{n}{2}\right) + 64 \cdot 4^{n/2}\, \Gamma\!\left(2 + \tfrac{n}{2}\right) - 32 \cdot 4^{n/2}\, \Gamma\!\left(3 + \tfrac{n}{2}\right) + \frac{3125}{4}\, 5^{n/2}\, \Gamma\!\left(1 + \tfrac{n}{2}\right) \right)$$

Finally, the moment of order l of A is:

$$E[A^l] = \sum_{i=1}^{3} \sum_{j=1}^{k_i} Y_{ij} E[G_{ij}^l] = \frac{2349}{4}\, 3^{l/2}\, \Gamma\!\left(1 + \tfrac{l}{2}\right) + \frac{81}{2}\, 3^{l/2}\, \Gamma\!\left(2 + \tfrac{l}{2}\right) - 1408 \cdot 4^{l/2}\, \Gamma\!\left(1 + \tfrac{l}{2}\right) + 64 \cdot 4^{l/2}\, \Gamma\!\left(2 + \tfrac{l}{2}\right) - 32 \cdot 4^{l/2}\, \Gamma\!\left(3 + \tfrac{l}{2}\right) + \frac{3125}{4}\, 5^{l/2}\, \Gamma\!\left(1 + \tfrac{l}{2}\right)$$

For instance, for $l = 2$ this gives $E[A^2] = \frac{7047}{4} + 243 - 5632 + 512 - 768 + \frac{15625}{4} = 23$.
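As a final cross-check (a sketch of our own, built only from the closed forms above), the second moment from the moment formula, $E[A^2] = \sum_{i,j} Y_{ij}\, \beta_i^2\, j$ (since $\Gamma(j+1)/\Gamma(j) = j$), can be compared with a direct numerical integration of $t^2 f_A(t)$:

```python
import math

# Example 1 data: (Y_ij, beta_i^2, j) for each component of the mixture
terms = [
    (2349 / 4, 3, 1), (81 / 2, 3, 2),
    (-1408, 4, 1), (64, 4, 2), (-64, 4, 3),
    (3125 / 4, 5, 1),
]

def f_A(t):
    """PDF of Example 1: sum of Y_ij * 2 e^{-t^2/b} t^{2j-1} / (b^j (j-1)!)."""
    return sum(y * 2 * math.exp(-t * t / b) * t ** (2 * j - 1) / (b ** j * math.factorial(j - 1))
               for y, b, j in terms)

def integral(g, T=12.0, n=48_000):
    """Trapezoidal rule on [0, T]; the tail beyond T is negligible here."""
    h = T / n
    return sum(0.5 * h * (g(i * h) + g((i + 1) * h)) for i in range(n))

second_moment = sum(y * b * j for y, b, j in terms)   # E[A^2] = sum Y_ij * beta_i^2 * j
print(second_moment)                                  # 23.0
print(abs(integral(f_A) - 1.0) < 1e-6)                # total probability mass is 1
print(abs(integral(lambda t: t * t * f_A(t)) - second_moment) < 1e-3)
```

Reassuringly, 23 also equals $\sum_i k_i \alpha_i = 2 \cdot 3 + 3 \cdot 4 + 1 \cdot 5$, the mean of the underlying Hypoexponential $S_m$, consistent with A arising from $S_m$ through the square-root transformation, so that $E[A^2] = E[S_m]$.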

5. Conclusion

Modified forms of the PDF, CDF, MGF, reliability function, hazard function, and moments of the New Mixed Generalized Erlang distribution were established. The derivation writes the PDF of the New Mixed Generalized Erlang distribution as a linear combination of PDFs of the Generalized Erlang distribution. Finding a simple closed-form expression for the coefficients $Y_{ij}$ remains an open problem that may be addressed in a coming article.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Shittu, O.I. and Adepoju, K.A. (2013) On the Beta-Nakagami Distribution. Progress in Applied Mathematics, 5, 49-58.
[2] Mansoor, M., Tahir, M.H., Alzaatreh, A., Cordeiro, G.M., Zubair, M. and Ghazali, S.S. (2016) An Extended Frechet Distribution: Properties and Applications. Journal of Data Science, 14, 167-188.
https://doi.org/10.6339/JDS.201601_14(1).0010
[3] Kadri, T. and Halat, A. (2022) The New Mixed Hypoexponential-G Family. arXiv: 2211.06585.
[4] Smaili, K., Kadri, T. and Kadry, S. (2013) Hypoexponential Distribution with Different Parameters. Applied Mathematics, 4, 624-631.
https://doi.org/10.4236/am.2013.44087
[5] Gnedenko, B.V. and Kovalenko, I.N. (1989) Introduction to Queueing Theory. Birkhauser Boston Inc, Boston.
https://doi.org/10.1007/978-1-4615-9826-8
[6] Warsono, W. (2009) Moment Properties of the Generalized Gamma Distribution. In Seminar Nasional Sains, Matematika, Informatika dan Aplikasinya VI UNILA, Fak. Mipa Universitas Lampung, 157-162.
[7] Temme, N.M. (1996) Special Functions: An Introduction to the Classical Functions of Mathematical Physics. John Wiley & Sons, New York.
https://doi.org/10.1002/9781118032572
[8] Smaili, K., Kadri, T. and Kadry, S. (2014) A Modified-Form Expressions for the Hypoexponential Distribution. British Journal of Mathematics & Computer Science, 4, 322-332.
https://doi.org/10.9734/BJMCS/2014/6317

Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.