Exploring a New Lifetime Distribution for Modelling the Waiting Time of Bank Customers

Abstract

The fitting of lifetime distributions to real-life data has been studied in various fields of research. As real-world processes continue to evolve, increasingly complex data will continue to emerge, and many researchers have made commendable efforts to develop new lifetime distributions that can fit such data. In this paper, we utilize the KM-transformation technique to increase the flexibility of the power Lindley distribution, resulting in the Kavya-Manoharan Power Lindley (KMPL) distribution. We study the mathematical treatments of the KMPL distribution in detail and adopt the widely used method of maximum likelihood to estimate its unknown parameters. We carry out a Monte Carlo simulation study to investigate the performance of the Maximum Likelihood Estimates (MLEs) of the parameters of the KMPL distribution. To demonstrate the effectiveness of the KMPL distribution for data fitting, we use a real dataset comprising the waiting times of 100 bank customers and compare the KMPL distribution with other models that are extensions of the power Lindley distribution. Based on several statistical model selection criteria, the summary results of the analysis favor the KMPL distribution. We further investigate the density fit and probability-probability (p-p) plots to validate the superiority of the KMPL distribution over the competing distributions for fitting the waiting time dataset.

Share and Cite:

Ogumeyo, S., Ehiwario, J. and Opone, F. (2024) Exploring a New Lifetime Distribution for Modelling the Waiting Time of Bank Customers. Journal of Applied Mathematics and Physics, 12, 194-209. doi: 10.4236/jamp.2024.121015.

1. Introduction

Lifetime distributions have become increasingly popular in statistical modeling with the advent of real-life data fitting. They have gained wide application in research areas such as survival, reliability, competing risk, flood frequency, and wind speed analyses. Owing to these advancements, different methodologies have been introduced in the literature to expand the applicability of lifetime distributions in modeling real-world scenarios. Some of these methods include the Transformed-Transformer (T-X) generator introduced by [1], the Weibull-generator proposed by [2], the alpha power transformation introduced by [3], the Topp-Leone-generator studied by [4], the alpha power transformed Weibull-generator introduced by [5], and the continuous Bernoulli-generator developed by [6]. This study revisits the Lindley distribution introduced by [7] to provide a background for the research. The Lindley distribution is a one-parameter lifetime distribution that was introduced in the context of Bayesian statistics as a counter-example to fiducial statistics. [8] studied the mathematical properties of the Lindley distribution in detail and applied it to a waiting time dataset, reigniting the interest of researchers. This sparked the introduction of several modifications of the Lindley distribution, including the extended Lindley distribution introduced by [9], the Lindley-Exponential distribution studied by [10], a generalized two-parameter Lindley distribution treated by [11], a three-parameter generalized Lindley distribution proposed by [12], the Marshall-Olkin generalized Lindley distribution studied by [13], and the new alpha power transformed power Lindley distribution developed by [14]. One relevant modification of the Lindley distribution is the power Lindley distribution developed by [15]. Suppose a random variable X follows the Lindley distribution; then the random variable defined by the transformation $T = X^{1/a}$ is said to follow the power Lindley distribution, with cumulative distribution function (cdf) and probability density function (pdf), respectively, obtained as:

$$F_{PL}(t,a,b) = 1 - \left(1 + \frac{bt^{a}}{1+b}\right)e^{-bt^{a}}, \quad t > 0,\; a, b > 0, \qquad (1)$$

and

$$f_{PL}(t,a,b) = \frac{ab^{2}}{1+b}\,(1+t^{a})\,t^{a-1}e^{-bt^{a}}, \quad t > 0,\; a, b > 0. \qquad (2)$$

The authors have demonstrated the relevance of the power Lindley distribution over existing generalized Lindley distributions in many instances. However, several extended versions of the distribution have since been established. These include the Exponentiated Power Lindley (EXPL) distribution introduced by [16], the Extended Power Lindley (EPL) distribution due to [17], the Transmuted Power Lindley (TPL) distribution developed by [18], the Kumaraswamy Power Lindley (KPL) distribution proposed by [19], the Odd Log-Logistic Power Lindley (OLLPL) distribution due to [20], and the Topp-Leone Power Lindley (TLPL) distribution proposed by [21]. In this paper, we use the KM transformation technique developed by [22] to explore a new horizon of the power Lindley distribution. The rest of the paper is organized as follows: Section 2 contains the formulation and the derivation of the mathematical treatments of the proposed Kavya-Manoharan Power Lindley (KMPL) distribution. Section 3 presents the parameter estimation, simulations, and data fitting of the KMPL distribution. Finally, Section 4 concludes the paper.

2. The Kavya-Manoharan Power Lindley (KMPL) Distribution

Recently, [22] developed a transformation technique for generalizing existing lifetime distributions, which they referred to as the KM transformation family of distributions. This new family is defined by the cdf:

$$F_{KM}(t,\omega) = \frac{e}{e-1}\left(1 - e^{-G(t,\omega)}\right), \quad t > 0, \qquad (3)$$

and the associated density function is obtained as:

$$f_{KM}(t,\omega) = \frac{e}{e-1}\,g(t,\omega)\,e^{-G(t,\omega)}, \quad t > 0, \qquad (4)$$

where ω is a vector of parameter(s) from the baseline distribution.
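As a quick illustration of how (3) and (4) act on a baseline model, the following R sketch builds the transformed cdf and pdf from a baseline cdf G and pdf g, using the power Lindley baseline in (1) and (2); the helper names km_cdf, km_pdf, G_pl and g_pl are our own and not from the paper.

```r
# A minimal sketch of the KM transformation in (3) and (4): given a baseline
# cdf G and pdf g, return the transformed cdf and pdf as R functions.
km_cdf <- function(G) function(t, ...) exp(1) / (exp(1) - 1) * (1 - exp(-G(t, ...)))
km_pdf <- function(g, G) function(t, ...) exp(1) / (exp(1) - 1) * g(t, ...) * exp(-G(t, ...))

# Power Lindley baseline, equations (1) and (2)
G_pl <- function(t, a, b) 1 - (1 + b * t^a / (1 + b)) * exp(-b * t^a)
g_pl <- function(t, a, b) a * b^2 / (1 + b) * (1 + t^a) * t^(a - 1) * exp(-b * t^a)

pKMPL <- km_cdf(G_pl)        # KMPL cdf of equation (5)
dKMPL <- km_pdf(g_pl, G_pl)  # KMPL pdf of equation (6)
pKMPL(2, a = 1.5, b = 0.5)
```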

By inserting (1) and (2) into (3) and (4), we build the Kavya-Manoharan Power Lindley (KMPL) distribution with the cdf and pdf, respectively, defined as:

$$F_{KMPL}(t,a,b) = \frac{e}{e-1}\left(1 - \exp\left(\left(1 + \frac{bt^{a}}{1+b}\right)e^{-bt^{a}} - 1\right)\right), \quad t > 0,\; a, b > 0, \qquad (5)$$

and the associated density is obtained as:

$$f_{KMPL}(t,a,b) = \frac{ab^{2}}{(e-1)(1+b)}\,(1+t^{a})\,t^{a-1}\exp\left(-bt^{a} + \left(1 + \frac{bt^{a}}{1+b}\right)e^{-bt^{a}}\right), \quad t > 0,\; a, b > 0. \qquad (6)$$

An interesting feature of the KM transformation is the retention of the parameter(s) of the baseline distribution without adding extra parameter(s).
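For concreteness, here is a direct R implementation of the closed forms (5) and (6); the function names dkmpl and pkmpl are our own (mirroring R's d/p naming convention), and the numerical checks are only sanity tests, not results from the paper.

```r
# Direct implementations of the KMPL pdf (6) and cdf (5).
dkmpl <- function(t, a, b) {
  a * b^2 / ((exp(1) - 1) * (1 + b)) * (1 + t^a) * t^(a - 1) *
    exp(-b * t^a + (1 + b * t^a / (1 + b)) * exp(-b * t^a))
}
pkmpl <- function(t, a, b) {
  exp(1) / (exp(1) - 1) * (1 - exp((1 + b * t^a / (1 + b)) * exp(-b * t^a) - 1))
}

# Sanity checks: the density should integrate to 1, and the cdf should match
# the integrated density.
integrate(dkmpl, 0, Inf, a = 1.5, b = 0.5)$value                      # ~1
pkmpl(3, 1.5, 0.5) - integrate(dkmpl, 0, 3, a = 1.5, b = 0.5)$value   # ~0
```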

The survival and hazard rate functions of the KMPL distribution are, respectively, obtained by manipulating (5) and (6) as follows:

$$S_{KMPL}(t,a,b) = 1 - \frac{e}{e-1}\left(1 - \exp\left(\left(1 + \frac{bt^{a}}{1+b}\right)e^{-bt^{a}} - 1\right)\right), \qquad (7)$$

and

$$h_{KMPL}(t,a,b) = \frac{ab^{2}(1+t^{a})\,t^{a-1}\exp\left(-bt^{a} + \left(1 + \frac{bt^{a}}{1+b}\right)e^{-bt^{a}}\right)}{(1+b)\left[\exp\left(\left(1 + \frac{bt^{a}}{1+b}\right)e^{-bt^{a}}\right) - 1\right]}. \qquad (8)$$
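The survival and hazard rate functions in (7) and (8) can also be coded directly; the sketch below reuses the dkmpl and pkmpl helpers defined above (our own names) and cross-checks the closed form of (8) against the ratio definition h(t) = f(t)/S(t).

```r
# Survival (7) and hazard (8) functions of the KMPL distribution.
skmpl <- function(t, a, b) 1 - pkmpl(t, a, b)
hkmpl <- function(t, a, b) dkmpl(t, a, b) / skmpl(t, a, b)

# Closed form of (8), written out explicitly as a cross-check.
hkmpl_closed <- function(t, a, b) {
  u <- (1 + b * t^a / (1 + b)) * exp(-b * t^a)
  a * b^2 * (1 + t^a) * t^(a - 1) * exp(-b * t^a + u) / ((1 + b) * (exp(u) - 1))
}
hkmpl(2, 1.5, 0.5) - hkmpl_closed(2, 1.5, 0.5)   # ~0
```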

The graphical representations of the density and hazard rate functions of the KMPL distribution are shown in Figure 1.

Figure 1. The density (a) and hazard (b) plots of the KMPL distribution.

The shapes of the density plots in Figure 1 indicate that the KMPL distribution accommodates decreasing (reversed-J), left- and right-skewed unimodal, and symmetric shapes, whereas the hazard plots suggest decreasing, increasing, and inverted bathtub hazard rate properties.

Other mathematical properties of the KMPL distribution are treated in the following subsections.

2.1. Quantile Function

The quantile function of a lifetime distribution is an essential mathematical treatment used in generating random samples from the distribution, especially for simulation purposes. The quantile function $Q_u$ is obtained by solving the non-linear equation $F(Q_u) = u$, that is, $Q_u = F^{-1}(u)$, where $0 < u < 1$. By this approach, we obtain the quantile function of the KMPL distribution as follows:

$$\frac{e}{e-1}\left(1 - \exp\left(\left(1 + \frac{bt^{a}}{1+b}\right)e^{-bt^{a}} - 1\right)\right) = u,$$

$$\left(1 + b + bt^{a}\right)e^{-bt^{a}} = (b+1)\left[\ln\left\{1 - \frac{u(e-1)}{e}\right\} + 1\right],$$

multiplying through by $-e^{-(b+1)}$, we have:

$$-\left(1 + b + bt^{a}\right)e^{-(bt^{a} + b + 1)} = -\frac{b+1}{e^{\,b+1}}\left[\ln\left\{1 - \frac{u(e-1)}{e}\right\} + 1\right].$$

Clearly, the solution is given on the negative branch $W_{-1}(\cdot)$ of the Lambert W function as $-\left(1 + b + bt^{a}\right) = W_{-1}\!\left[-\frac{b+1}{e^{\,b+1}}\left[\ln\left\{1 - \frac{u(e-1)}{e}\right\} + 1\right]\right]$. Further simplification yields

$$Q_u = \left\{-1 - \frac{1}{b} - \frac{1}{b}W_{-1}\!\left[-\frac{b+1}{e^{\,b+1}}\left[\ln\left\{1 - \frac{u(e-1)}{e}\right\} + 1\right]\right]\right\}^{1/a}, \quad 0 < u < 1. \qquad (9)$$

From (9), we can further derive mathematical expressions for the median, lower, and upper quartiles of the KMPL distribution, respectively, as follows:

$$Q_{1/2} = \left\{-1 - \frac{1}{b} - \frac{1}{b}W_{-1}\!\left[-\frac{b+1}{e^{\,b+1}}\left[\ln\left\{1 - \frac{e-1}{2e}\right\} + 1\right]\right]\right\}^{1/a},$$

$$Q_{1/4} = \left\{-1 - \frac{1}{b} - \frac{1}{b}W_{-1}\!\left[-\frac{b+1}{e^{\,b+1}}\left[\ln\left\{1 - \frac{e-1}{4e}\right\} + 1\right]\right]\right\}^{1/a},$$

and

$$Q_{3/4} = \left\{-1 - \frac{1}{b} - \frac{1}{b}W_{-1}\!\left[-\frac{b+1}{e^{\,b+1}}\left[\ln\left\{1 - \frac{3(e-1)}{4e}\right\} + 1\right]\right]\right\}^{1/a}.$$

A quantile-based skewness and kurtosis have been suggested by [23] and [24], respectively. The authors defined the Galton Skewness and Moors' Kurtosis as:

$$S_G = \frac{Q_{6/8} - 2Q_{4/8} + Q_{2/8}}{Q_{6/8} - Q_{2/8}} \quad \text{and} \quad K_M = \frac{Q_{7/8} - Q_{5/8} + Q_{3/8} - Q_{1/8}}{Q_{6/8} - Q_{2/8}}.$$
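A short R sketch of the quantile function in (9) is given below; it assumes the CRAN package lamW for the negative branch of the Lambert W function (lambertWm1), the names qkmpl and rkmpl are our own, and the last lines evaluate the Galton Skewness and Moors' Kurtosis at one arbitrary parameter setting, reusing pkmpl from above.

```r
# Quantile function (9) via the negative branch of the Lambert W function,
# and random sampling by the inverse-transform method.
library(lamW)   # provides lambertWm1()

qkmpl <- function(u, a, b) {
  z <- -(b + 1) * exp(-(b + 1)) * (log(1 - u * (exp(1) - 1) / exp(1)) + 1)
  (-1 - 1 / b - lambertWm1(z) / b)^(1 / a)
}
rkmpl <- function(n, a, b) qkmpl(runif(n), a, b)

# Check against the cdf, then compute the Galton Skewness and Moors' Kurtosis.
pkmpl(qkmpl(0.75, 1.5, 0.5), 1.5, 0.5)   # ~0.75
Q <- function(u) qkmpl(u, a = 1.5, b = 0.5)
S_G <- (Q(6/8) - 2 * Q(4/8) + Q(2/8)) / (Q(6/8) - Q(2/8))
K_M <- (Q(7/8) - Q(5/8) + Q(3/8) - Q(1/8)) / (Q(6/8) - Q(2/8))
c(Galton = S_G, Moors = K_M)
```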

Figure 2 shows the Galton Skewness and Moors’ Kurtosis for the KMPL distribution.

2.2. The rth Ordinary Moments

The rth ordinary moment of a random variable T following the density function specified in (6) is defined as:

$$E\left[T^{r}\right] = \int_{0}^{\infty} t^{r} f_{KMPL}(t,a,b)\,dt = \frac{ab^{2}}{(e-1)(1+b)}\int_{0}^{\infty}(1+t^{a})\,t^{r+a-1}\exp\left(-bt^{a} + \left(1 + \frac{bt^{a}}{1+b}\right)e^{-bt^{a}}\right)dt, \quad r = 1, 2, 3, 4. \qquad (10)$$

Figure 2. The Galton Skewness (left) and Moors' Kurtosis (right) for the KMPL distribution.

Recall the Maclaurin series expansion of the exponential function:

$$e^{-kt} = \sum_{n=0}^{\infty}\frac{(-1)^{n}(kt)^{n}}{n!}. \qquad (11)$$

So that,

$$\exp\left\{\left(1 + \frac{bt^{a}}{1+b}\right)e^{-bt^{a}}\right\} = \sum_{n=0}^{\infty}\frac{1}{n!}\left(1 + \frac{bt^{a}}{1+b}\right)^{n}e^{-nbt^{a}},$$

$$\left(1 + \frac{bt^{a}}{1+b}\right)^{n} = \sum_{p=0}^{n}\binom{n}{p}\left(\frac{bt^{a}}{1+b}\right)^{p},$$

inserting these expressions into (10) yields

$$E\left[T^{r}\right] = \frac{a}{e-1}\sum_{n=0}^{\infty}\sum_{p=0}^{n}\frac{b^{2+p}}{n!\,(b+1)^{p+1}}\binom{n}{p}\left[\int_{0}^{\infty} t^{r+a(p+1)-1}e^{-b(n+1)t^{a}}\,dt + \int_{0}^{\infty} t^{r+a(p+2)-1}e^{-b(n+1)t^{a}}\,dt\right]. \qquad (12)$$

Evaluating the first integral part of (12), we have:

$$\int_{0}^{\infty} t^{r+a(p+1)-1}e^{-b(n+1)t^{a}}\,dt = \frac{\Gamma\left(p + \frac{r}{a} + 1\right)}{a\left[b(n+1)\right]^{p+\frac{r}{a}+1}},$$

similarly, the second integral part of (12) yields:

$$\int_{0}^{\infty} t^{r+a(p+2)-1}e^{-b(n+1)t^{a}}\,dt = \frac{\Gamma\left(p + \frac{r}{a} + 2\right)}{a\left[b(n+1)\right]^{p+\frac{r}{a}+2}}.$$

Finally, by inserting the solution of the integrals into (12), the rth ordinary moment of the KMPL distribution is obtained as:

$$E\left[T^{r}\right] = (e-1)^{-1}\sum_{n=0}^{\infty}\sum_{p=0}^{n}\frac{b^{2+p}}{n!\,(b+1)^{p+1}}\binom{n}{p}\left[\frac{\Gamma\left(p + \frac{r}{a} + 1\right)}{\left[b(n+1)\right]^{p+\frac{r}{a}+1}} + \frac{\Gamma\left(p + \frac{r}{a} + 2\right)}{\left[b(n+1)\right]^{p+\frac{r}{a}+2}}\right]. \qquad (13)$$

The mean of the KMPL distribution is derived from (13) when r = 1, given as:

$$E\left[T\right] = (e-1)^{-1}\sum_{n=0}^{\infty}\sum_{p=0}^{n}\frac{b^{2+p}}{n!\,(b+1)^{p+1}}\binom{n}{p}\left[\frac{\Gamma\left(p + \frac{1}{a} + 1\right)}{\left[b(n+1)\right]^{p+\frac{1}{a}+1}} + \frac{\Gamma\left(p + \frac{1}{a} + 2\right)}{\left[b(n+1)\right]^{p+\frac{1}{a}+2}}\right].$$

Moreover, the variance, skewness, and kurtosis of the KMPL distribution can be generated from (13) as follows:

$$\text{variance} = E\left[T^{2}\right] - \left(E[T]\right)^{2},$$

$$\text{skewness} = \frac{E\left[T^{3}\right] - 3E\left[T^{2}\right]E[T] + 2\left(E[T]\right)^{3}}{\left(E\left[T^{2}\right] - \left(E[T]\right)^{2}\right)^{3/2}},$$

and

$$\text{kurtosis} = \frac{E\left[T^{4}\right] - 4E\left[T^{3}\right]E[T] + 6E\left[T^{2}\right]\left(E[T]\right)^{2} - 3\left(E[T]\right)^{4}}{\left(E\left[T^{2}\right] - \left(E[T]\right)^{2}\right)^{2}}.$$
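These moment expressions can be checked numerically; the R sketch below (helper names are ours) evaluates $E[T^r]$ both by direct numerical integration of (10), reusing dkmpl from above, and by truncating the double series in (13), then forms the mean, variance, skewness, and kurtosis. The truncation level N = 40 and the parameter values are arbitrary choices; the series converges quickly because the expanded exponential argument lies in (0, 1].

```r
# Raw moments by numerical integration of (10) and by truncating the series (13).
m_num <- function(r, a, b)
  integrate(function(t) t^r * dkmpl(t, a, b), 0, Inf)$value

m_series <- function(r, a, b, N = 40) {
  out <- 0
  for (n in 0:N) for (p in 0:n) {
    cnp <- choose(n, p) * b^(2 + p) / (factorial(n) * (b + 1)^(p + 1))
    out <- out + cnp * (gamma(p + r / a + 1) / (b * (n + 1))^(p + r / a + 1) +
                        gamma(p + r / a + 2) / (b * (n + 1))^(p + r / a + 2))
  }
  out / (exp(1) - 1)
}

a <- 1.5; b <- 0.5
c(integral = m_num(1, a, b), series = m_series(1, a, b))   # the two should agree

mu <- m_num(1, a, b); m2 <- m_num(2, a, b); m3 <- m_num(3, a, b); m4 <- m_num(4, a, b)
v <- m2 - mu^2
c(mean     = mu,
  variance = v,
  skewness = (m3 - 3 * m2 * mu + 2 * mu^3) / v^1.5,
  kurtosis = (m4 - 4 * m3 * mu + 6 * m2 * mu^2 - 3 * mu^4) / v^2)
```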

Table 1 shows the numerical evaluation of the mean, variance, skewness, and kurtosis of the KMPL distribution for varying values of the parameters.

From Table 1, we notice that the mean of the KMPL distribution is monotonically decreasing in the parameter b and increasing in the parameter a, whereas the variance and skewness are both decreasing in a and b. The positive (negative) skewness values also indicate that the KMPL distribution is suitable for modeling right (left)-skewed data sets.

Table 1. Numerical evaluation of the moments of the KMPL distribution.

2.3. Probability Weighted Moments

The Probability Weighted Moments (PWMs) of a random variable T with pdf $f(t)$ and cdf $F(t)$ are specified by:

$$E\left[T^{r}F^{s}(t)\right] = \int_{0}^{\infty} t^{r} f(t)F^{s}(t)\,dt. \qquad (14)$$

Utilizing (14), we define the $(r,s)$th PWMs of the KMPL distribution as follows:

$$E\left[T^{r}F^{s}(t)\right] = \int_{0}^{\infty} t^{r} f_{KMPL}(t,a,b)\,F_{KMPL}^{s}(t,a,b)\,dt. \qquad (15)$$

By simplification,

$$f(t)F^{s}(t) = \frac{ab^{2}(1+t^{a})\,t^{a-1}}{(e-1)(1+b)}\exp\left(-bt^{a} + \left(1 + \frac{bt^{a}}{1+b}\right)e^{-bt^{a}}\right) \times \left\{\frac{e}{e-1}\left(1 - \exp\left(\left(1 + \frac{bt^{a}}{1+b}\right)e^{-bt^{a}} - 1\right)\right)\right\}^{s},$$

$$\left(1 - \exp\left(\left(1 + \frac{bt^{a}}{1+b}\right)e^{-bt^{a}} - 1\right)\right)^{s} = \sum_{k=0}^{s}\binom{s}{k}(-1)^{k}e^{\left(\left(1 + \frac{bt^{a}}{1+b}\right)e^{-bt^{a}} - 1\right)k},$$

$$e^{\left(\left(1 + \frac{bt^{a}}{1+b}\right)e^{-bt^{a}}\right)(k+1)} = \sum_{m=0}^{\infty}\frac{(k+1)^{m}}{m!}\left(1 + \frac{bt^{a}}{1+b}\right)^{m}e^{-mbt^{a}},$$

again,

$$\left(1 + \frac{bt^{a}}{1+b}\right)^{m} = \sum_{p=0}^{m}\binom{m}{p}\frac{\left(bt^{a}\right)^{p}}{(1+b)^{p}},$$

substituting these expressions into (15), we have:

$$E\left[T^{r}F^{s}(t)\right] = a\sum_{k,m,p=0}^{\infty}\binom{s}{k}\binom{m}{p}\frac{(-1)^{k}\,b^{2+p}\,(k+1)^{m}\,e^{s-k}}{m!\,(e-1)^{s+1}\,(b+1)^{p+1}}\int_{0}^{\infty} t^{a(p+1)+r-1}(1+t^{a})\,e^{-b(m+1)t^{a}}\,dt, \qquad (16)$$

employing a similar approach to that used in (13), we further simplify (16) as:

$$E\left[T^{r}F^{s}(t)\right] = \sum_{k,m,p=0}^{\infty}\binom{s}{k}\binom{m}{p}\frac{(-1)^{k}\,b^{2+p}\,(k+1)^{m}\,e^{s-k}}{m!\,(e-1)^{s+1}\,(b+1)^{p+1}}\left[\frac{\Gamma\left(p + \frac{r}{a} + 1\right)}{\left[b(m+1)\right]^{p+\frac{r}{a}+1}} + \frac{\Gamma\left(p + \frac{r}{a} + 2\right)}{\left[b(m+1)\right]^{p+\frac{r}{a}+2}}\right]. \qquad (17)$$
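Since the series form in (17) involves nested infinite sums, a direct numerical evaluation of the defining integral (15) is a convenient cross-check; the sketch below (pwm_kmpl is our own name) reuses the dkmpl and pkmpl helpers from Section 2.

```r
# (r, s)th probability weighted moment of the KMPL distribution, computed by
# numerical integration of the definition in (15).
pwm_kmpl <- function(r, s, a, b)
  integrate(function(t) t^r * dkmpl(t, a, b) * pkmpl(t, a, b)^s, 0, Inf)$value

pwm_kmpl(1, 0, 1.5, 0.5)   # s = 0 recovers the ordinary mean E[T]
pwm_kmpl(1, 2, 1.5, 0.5)
```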

2.4. Renyi Entropy

The entropy of a random variable T measures the degree of variability associated with the random variable. For a random variable T having the pdf in (6), the Renyi entropy is specified as follows:

$$\tau_R(\upsilon) = \frac{1}{1-\upsilon}\log\int_{0}^{\infty} f_{KMPL}^{\upsilon}(t,a,b)\,dt = \frac{1}{1-\upsilon}\log\left[\frac{(ab^{2})^{\upsilon}}{(e-1)^{\upsilon}(1+b)^{\upsilon}}\int_{0}^{\infty}(1+t^{a})^{\upsilon}\,t^{\upsilon(a-1)}\,e^{-\upsilon bt^{a} + \upsilon\left(1 + \frac{bt^{a}}{1+b}\right)e^{-bt^{a}}}\,dt\right], \qquad (18)$$

by simplifying the exponential function using Maclaurin series and binomial expansion, we have:

$$\tau_R(\upsilon) = \frac{1}{1-\upsilon}\log\left[\frac{a^{\upsilon}}{e-1}\sum_{n=0}^{\infty}\sum_{p=0}^{n}\binom{n}{p}\frac{\upsilon^{n}\,b^{2\upsilon+p}}{n!\,(b+1)^{p+1}}\left[\int_{0}^{\infty} t^{a(p+\upsilon)-\upsilon}e^{-b(n+\upsilon)t^{a}}\,dt + \int_{0}^{\infty} t^{a(\upsilon+p+1)-\upsilon}e^{-b(n+\upsilon)t^{a}}\,dt\right]\right]. \qquad (19)$$

Evaluating the integrals in (19) yields

$$\tau_R(\upsilon) = \frac{1}{1-\upsilon}\log\left[\frac{a^{\upsilon-1}}{e-1}\sum_{n=0}^{\infty}\sum_{p=0}^{n}\binom{n}{p}\frac{\upsilon^{n}\,b^{2\upsilon+p}}{n!\,(b+1)^{p+1}}\left[\frac{\Gamma\left(\upsilon + p - \frac{\upsilon}{a} + \frac{1}{a}\right)}{\left[b(n+\upsilon)\right]^{\upsilon+p-\frac{\upsilon}{a}+\frac{1}{a}}} + \frac{\Gamma\left(\upsilon + p - \frac{\upsilon}{a} + \frac{1}{a} + 1\right)}{\left[b(n+\upsilon)\right]^{\upsilon+p-\frac{\upsilon}{a}+\frac{1}{a}+1}}\right]\right]. \qquad (20)$$
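The Renyi entropy in (18) can likewise be evaluated by direct numerical integration, which offers a simple check on the series form (20); renyi_kmpl is an illustrative name of ours, it reuses dkmpl from Section 2, and the parameter values are arbitrary.

```r
# Renyi entropy (18) by numerical integration of the v-th power of the density.
renyi_kmpl <- function(v, a, b) {
  stopifnot(v > 0, v != 1)
  log(integrate(function(t) dkmpl(t, a, b)^v, 0, Inf)$value) / (1 - v)
}
renyi_kmpl(0.5, 1.5, 0.5)
renyi_kmpl(2.0, 1.5, 0.5)
```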

2.5. Distribution of Order Statistics

Suppose that $X_{1:n} < X_{2:n} < \cdots < X_{n:n}$ are the order statistics of independent observations with sample size n generated from the KMPL distribution; then the pdf of the rth order statistic, say $T = X_{r:n}$, is given by:

$$f_{r:n}(t,a,b) = \frac{1}{B(r,\,n-r+1)}\sum_{j=0}^{n-r}\binom{n-r}{j}(-1)^{j}\,f(t,a,b)\,F^{r+j-1}(t,a,b), \qquad (21)$$

substituting the cdf and pdf in (5) and (6) into (21) and employing the same approach as in (16), we obtain:

$$f_{r:n}(t,a,b) = \frac{a}{B(r,\,n-r+1)}\sum_{m=0}^{\infty}\sum_{j=0}^{n-r}\sum_{k=0}^{r+j-1}\sum_{p=0}^{m}\binom{n-r}{j}\binom{r+j-1}{k}\binom{m}{p}\frac{(-1)^{j+k}\,b^{2+p}\,(k+1)^{m}\,e^{r+j-k-1}}{m!\,(e-1)^{r+j}\,(b+1)^{p+1}}\,(1+t^{a})\,t^{a(p+1)-1}\,e^{-b(m+1)t^{a}}. \qquad (22)$$

Furthermore, the sth moment of the rth order statistic of the KMPL distribution is derived from (22) as:

$$E\left[T_{r:n}^{s}\right] = \frac{1}{B(r,\,n-r+1)}\sum_{m=0}^{\infty}\sum_{j=0}^{n-r}\sum_{k=0}^{r+j-1}\sum_{p=0}^{m}\binom{n-r}{j}\binom{r+j-1}{k}\binom{m}{p}\frac{(-1)^{j+k}\,b^{2+p}\,(k+1)^{m}\,e^{r+j-k-1}}{m!\,(e-1)^{r+j}\,(b+1)^{p+1}}\left[\frac{\Gamma\left(p + \frac{s}{a} + 1\right)}{\left[b(m+1)\right]^{p+\frac{s}{a}+1}} + \frac{\Gamma\left(p + \frac{s}{a} + 2\right)}{\left[b(m+1)\right]^{p+\frac{s}{a}+2}}\right]. \qquad (23)$$
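For completeness, the density of the rth order statistic can also be coded directly from the standard identity $f_{r:n}(t) = f(t)F^{r-1}(t)\{1-F(t)\}^{n-r}/B(r,\,n-r+1)$, which is what (21) reduces to before the binomial expansion; the sketch below reuses dkmpl and pkmpl and checks that the density integrates to one.

```r
# Density of the rth order statistic from a KMPL sample of size n, built from
# the order-statistic identity underlying (21).
dkmpl_os <- function(t, r, n, a, b)
  dkmpl(t, a, b) * pkmpl(t, a, b)^(r - 1) * (1 - pkmpl(t, a, b))^(n - r) /
    beta(r, n - r + 1)

integrate(dkmpl_os, 0, Inf, r = 3, n = 10, a = 1.5, b = 0.5)$value   # ~1
```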

3. Parameter Estimation, Simulations, and Data Fitting

3.1. Parameter Estimation

The maximum likelihood estimation approach has been widely used for model parameter estimation in the literature. Here, we adopt the same approach in estimating the unknown parameters of the KMPL distribution. Suppose $t_1, t_2, \ldots, t_n$ is an independent sample of size n generated from the KMPL distribution with the pdf in (6); then the likelihood function of the KMPL distribution is expressed as:

$$L(a,b,t_1,\ldots,t_n) = \prod_{i=1}^{n} f_{KMPL}(t_i,a,b) = \prod_{i=1}^{n}\left[\frac{ab^{2}(1+t_i^{a})\,t_i^{a-1}}{(e-1)(1+b)}\exp\left(-bt_i^{a} + \left(1 + \frac{bt_i^{a}}{1+b}\right)e^{-bt_i^{a}}\right)\right]. \qquad (24)$$

The log-likelihood function associated with (24) is obtained by taking its natural logarithm:

$$l(a,b,t_1,\ldots,t_n) = \sum_{i=1}^{n}\ln\left[f_{KMPL}(t_i,a,b)\right] = n\ln a + 2n\ln b - n\ln(e-1) - n\ln(1+b) + (a-1)\sum_{i=1}^{n}\ln(t_i) + \sum_{i=1}^{n}\ln(1+t_i^{a}) - b\sum_{i=1}^{n} t_i^{a} + \sum_{i=1}^{n}\left(1 + \frac{bt_i^{a}}{1+b}\right)e^{-bt_i^{a}}. \qquad (25)$$

Taking the first derivatives of the log-likelihood function in (25) with respect to the parameters and equating them to zero yields the corresponding Maximum Likelihood Estimates (MLEs). These derivatives can be mathematically expressed as:

$$\frac{\partial l(a,b,t_1,\ldots,t_n)}{\partial a} = \frac{n}{a} + \sum_{i=1}^{n}\frac{t_i^{a}\ln(t_i)}{1+t_i^{a}} + \sum_{i=1}^{n}\ln(t_i) - b\sum_{i=1}^{n} t_i^{a}\ln(t_i) + b\sum_{i=1}^{n}\left\{t_i^{a}\,e^{-bt_i^{a}}\ln(t_i)\left[\frac{1}{b+1} - \left(1 + \frac{bt_i^{a}}{1+b}\right)\right]\right\},$$

$$\frac{\partial l(a,b,t_1,\ldots,t_n)}{\partial b} = \frac{2n}{b} - \frac{n}{b+1} - \sum_{i=1}^{n} t_i^{a} + \sum_{i=1}^{n}\left\{t_i^{a}\,e^{-bt_i^{a}}\left[\frac{1}{(b+1)^{2}} - \left(1 + \frac{bt_i^{a}}{1+b}\right)\right]\right\}.$$
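The log-likelihood (25) and the two score equations above translate directly into R; in the sketch below (function names are ours) the analytic scores are checked against a central finite-difference approximation at an arbitrary parameter point, reusing rkmpl from Section 2.1 to generate test data.

```r
# Log-likelihood (25) and analytic score functions of the KMPL distribution.
loglik_kmpl <- function(par, t) {
  a <- par[1]; b <- par[2]
  sum(log(a) + 2 * log(b) - log(exp(1) - 1) - log(1 + b) +
        (a - 1) * log(t) + log(1 + t^a) - b * t^a +
        (1 + b * t^a / (1 + b)) * exp(-b * t^a))
}
score_kmpl <- function(par, t) {
  a <- par[1]; b <- par[2]; n <- length(t)
  da <- n / a + sum(t^a * log(t) / (1 + t^a)) + sum(log(t)) -
    b * sum(t^a * log(t)) +
    b * sum(t^a * exp(-b * t^a) * log(t) * (1 / (b + 1) - (1 + b * t^a / (1 + b))))
  db <- 2 * n / b - n / (b + 1) - sum(t^a) +
    sum(t^a * exp(-b * t^a) * (1 / (b + 1)^2 - (1 + b * t^a / (1 + b))))
  c(da, db)
}

# Finite-difference check of the analytic scores at an arbitrary point.
set.seed(1); x <- rkmpl(200, 1.5, 0.5); par <- c(1.2, 0.8); h <- 1e-6
fd <- c((loglik_kmpl(par + c(h, 0), x) - loglik_kmpl(par - c(h, 0), x)) / (2 * h),
        (loglik_kmpl(par + c(0, h), x) - loglik_kmpl(par - c(0, h), x)) / (2 * h))
round(score_kmpl(par, x) - fd, 4)   # ~ c(0, 0)
```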

Obviously, the MLEs cannot be obtained analytically, hence the need to employ statistical software such as R, Python, or Mathematica for numerical optimization. Here, we adopt the fitdistrplus package in R to obtain the MLEs of the KMPL distribution.

3.2. Monte Carlo Simulation Study

A Monte Carlo simulation study is carried out to investigate the asymptotic behavior of the maximum likelihood estimates of the parameters of the KMPL distribution. To implement this, we utilize the quantile function in (9) to generate random samples at different choices of parameter values, namely (a = 0.8, b = 0.5), (a = 0.8, b = 2), (a = 1.5, b = 0.5), and (a = 1.5, b = 2). For each parameter setting, the simulation is performed 1000 times at sample sizes n = 25, 50, 100, 200, and 500. Standard statistical tools such as the bias, Mean Square Error (MSE), and coverage probability of the $100(1-\alpha)\%$ confidence intervals of the MLEs are computed. The mathematical expressions of these quantities are defined as follows:

1) Bias $= \frac{1}{N}\sum_{i=1}^{N}\left(\hat{\omega}_i - \omega\right)$, where $\omega = (a, b)$;

2) Mean Square Error (MSE) $= \frac{1}{N}\sum_{i=1}^{N}\left(\hat{\omega}_i - \omega\right)^{2}$;

3) Coverage probability of the $100(1-\alpha)\%$ CIs, given by:

$$\frac{1}{N}\sum_{i=1}^{N} I\left(\hat{\omega}_i - Z_{\alpha/2}\,se(\hat{\omega}_i) < \omega < \hat{\omega}_i + Z_{\alpha/2}\,se(\hat{\omega}_i)\right),$$

where $I(\cdot)$ is the indicator function and $se(\hat{\omega}_i)$ is the standard error associated with $\hat{\omega}_i$.
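A compact version of this simulation loop is sketched below for a single parameter setting; the replication number and sample size are kept small purely for speed, the standard errors are taken from the inverse of the numerically estimated Hessian returned by optim (one reasonable choice, not necessarily the paper's), and the function reuses the rkmpl and loglik_kmpl helpers defined earlier.

```r
# Monte Carlo sketch: bias, MSE and coverage of the MLEs for one setting.
sim_kmpl <- function(N = 200, n = 100, a = 0.8, b = 0.5, alpha = 0.05) {
  est <- se <- matrix(NA_real_, N, 2)
  for (i in 1:N) {
    x <- rkmpl(n, a, b)
    negll <- function(p) if (any(p <= 0)) 1e10 else -loglik_kmpl(p, x)
    fit <- optim(c(1, 1), negll, hessian = TRUE)
    est[i, ] <- fit$par
    se[i, ]  <- sqrt(diag(solve(fit$hessian)))
  }
  truth <- matrix(c(a, b), N, 2, byrow = TRUE)
  z <- qnorm(1 - alpha / 2)
  list(bias     = colMeans(est) - c(a, b),
       mse      = colMeans((est - truth)^2),
       coverage = colMeans(est - z * se < truth & truth < est + z * se))
}
sim_kmpl(N = 50, n = 100, a = 0.8, b = 0.5)
```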

Table 2 and Table 3 present the simulation results of these quantities for each parameter estimate.

Table 2. The bias and mean square error of the MLEs.

Table 3. The coverage probability of the $100(1-\alpha)\%$ confidence interval of the MLEs.

Remarks:

1) From the results in Table 2, the bias and mean square error of the MLEs decrease as the sample size n increases. Also, the MLE of b exhibits negative (positive) bias, whereas the MLE of a is strictly positively biased;

2) The coverage probabilities investigated at the 0.90 and 0.95 confidence levels, as displayed in Table 3, approach the nominal 90% and 95% levels, respectively.

3.3. Data Fitting

In this subsection, we illustrate the usefulness of the KMPL distribution in real-life data fitting. The waiting times of 100 bank customers are considered for this purpose. The data set was originally used by [8] to illustrate the superiority of the Lindley distribution over the exponential distribution, and it was also employed by [25] to illustrate the flexibility of the quasi-Lindley distribution. The data are given as follows: 0.8, 0.8, 1.3, 1.5, 1.8, 1.9, 1.9, 2.1, 2.6, 2.7, 2.9, 3.1, 3.2, 3.3, 3.5, 3.6, 4.0, 4.1, 4.2, 4.2, 4.3, 4.3, 4.4, 4.4, 4.6, 4.7, 4.7, 4.8, 4.9, 4.9, 5.0, 5.3, 5.5, 5.7, 5.7, 6.1, 6.2, 6.2, 6.2, 6.3, 6.7, 6.9, 7.1, 7.1, 7.1, 7.1, 7.4, 7.6, 7.7, 8.0, 8.2, 8.6, 8.6, 8.6, 8.8, 8.8, 8.9, 8.9, 9.5, 9.6, 9.7, 9.8, 10.7, 10.9, 11.0, 11.0, 11.1, 11.2, 11.2, 11.5, 11.9, 12.4, 12.5, 12.9, 13.0, 13.1, 13.3, 13.6, 13.7, 13.9, 14.1, 15.4, 15.4, 17.3, 17.3, 18.1, 18.2, 18.4, 18.9, 19.0, 19.9, 20.6, 21.3, 21.4, 21.9, 23.0, 27.0, 31.6, 33.1, 38.5.
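A hedged R sketch of the fitting step described in Section 3.1 follows; it reuses the dkmpl, pkmpl, and qkmpl functions given in Section 2 (fitdistrplus locates them by the d/p/q naming convention), the object names waiting and fit are our own, and the starting values are arbitrary guesses rather than values reported in the paper.

```r
# Maximum likelihood fit of the KMPL distribution to the waiting-time data.
library(fitdistrplus)

waiting <- c(0.8, 0.8, 1.3, 1.5, 1.8, 1.9, 1.9, 2.1, 2.6, 2.7, 2.9, 3.1, 3.2,
             3.3, 3.5, 3.6, 4.0, 4.1, 4.2, 4.2, 4.3, 4.3, 4.4, 4.4, 4.6, 4.7,
             4.7, 4.8, 4.9, 4.9, 5.0, 5.3, 5.5, 5.7, 5.7, 6.1, 6.2, 6.2, 6.2,
             6.3, 6.7, 6.9, 7.1, 7.1, 7.1, 7.1, 7.4, 7.6, 7.7, 8.0, 8.2, 8.6,
             8.6, 8.6, 8.8, 8.8, 8.9, 8.9, 9.5, 9.6, 9.7, 9.8, 10.7, 10.9,
             11.0, 11.0, 11.1, 11.2, 11.2, 11.5, 11.9, 12.4, 12.5, 12.9, 13.0,
             13.1, 13.3, 13.6, 13.7, 13.9, 14.1, 15.4, 15.4, 17.3, 17.3, 18.1,
             18.2, 18.4, 18.9, 19.0, 19.9, 20.6, 21.3, 21.4, 21.9, 23.0, 27.0,
             31.6, 33.1, 38.5)

fit <- fitdist(waiting, distr = "kmpl", method = "mle",
               start = list(a = 1, b = 0.5))
summary(fit)   # MLEs, standard errors, log-likelihood, AIC and BIC
plot(fit)      # density, cdf, q-q and p-p diagnostic panels
```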

Comparable lifetime distributions that are extensions of the power Lindley distribution are considered to fit the data alongside the KMPL distribution. Specifically, the fits of the Exponentiated Power Lindley (EXPL), Extended Power Lindley (EPL), Transmuted Power Lindley (TPL), Odd Log-Logistic Power Lindley (OLLPL), and Topp-Leone Power Lindley (TLPL) distributions are compared with the one attained by the KMPL distribution.

For the purpose of model comparison, the Akaike Information Criterion (AIC), corrected Akaike Information Criterion (AICc), Bayesian Information Criterion (BIC), and the Hannan-Quinn Information Criterion (HQIC) are examined. These information criteria are mathematically specified by:

$$\text{AIC} = -2l^{*} + 2p, \qquad \text{AICc} = \text{AIC} + \frac{2p(p+1)}{n-p-1},$$

$$\text{BIC} = -2l^{*} + p\ln(n), \qquad \text{HQIC} = -2l^{*} + 2p\ln\left[\ln(n)\right],$$

where $l^{*}$ is the maximized log-likelihood, n is the sample size, and p is the number of parameters in the model. The summary results of the analysis are shown in Table 4.
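For reference, the four criteria above can be computed in R from a maximized log-likelihood as sketched below; ic_table is our own helper, and fit and waiting refer to the objects created in the fitting sketch earlier in this subsection.

```r
# Information criteria from a maximised log-likelihood l*, with p parameters
# and sample size n, as defined above.
ic_table <- function(loglik, p, n) {
  aic <- -2 * loglik + 2 * p
  c(AIC  = aic,
    AICc = aic + 2 * p * (p + 1) / (n - p - 1),
    BIC  = -2 * loglik + p * log(n),
    HQIC = -2 * loglik + 2 * p * log(log(n)))
}
ic_table(fit$loglik, p = 2, n = length(waiting))   # criteria for the KMPL fit
```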

Remark:

Table 4. Summary results for the waiting time data set.

Smaller values of the AIC, AICc, BIC, and HQIC indicate a model that is more suitable for the data under study. Table 4 provides a detailed summary of the fits attained by the various distributions for the waiting time data. From this table, we observe that the KMPL distribution has the smallest values of the AIC, AICc, BIC, and HQIC. It is therefore reasonable to conclude that the KMPL distribution outperformed the competing distributions in fitting the data under study. A model's goodness of fit can also be investigated graphically. Here, we examine the density fit and probability-probability (p-p) plots of the distributions for the waiting time data, shown in Figure 3 and Figure 4, respectively.

Figure 3. The density fits of the distributions for the waiting time data.

Figure 4. The probability-probability (p-p) plots of the distributions for the waiting time data.

4. Conclusion

In this paper, we explored a new horizon of the power Lindley distribution using the KM transformation technique developed by [22]. The resulting distribution is referred to as the Kavya-Manoharan Power Lindley (KMPL) distribution. One unique advantage of this transformation technique is its ability to increase the flexibility of the baseline distribution without adding extra parameters, unlike many other generalization methodologies. The mathematical treatments of the KMPL distribution were studied in detail, and the maximum likelihood estimation method was adopted to estimate the unknown parameters of the KMPL distribution. The performance of the resulting Maximum Likelihood Estimates (MLEs) was investigated via a Monte Carlo simulation study. A real data set consisting of the waiting times of 100 bank customers was employed to illustrate the relevance of the KMPL distribution. Some selected lifetime distributions that are generalizations of the power Lindley distribution were considered to fit the data set alongside the KMPL distribution. The summary results of the data fitting revealed that the KMPL distribution performed better than the competing distributions.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Alzaatreh, A., Lee, C. and Famoye, F. (2013) A New Method for Generating Families of Continuous Distributions. Metron, 71, 63-79.
https://doi.org/10.1007/s40300-013-0007-y
[2] Bourguignon, M., Silva, R.B. and Cordeiro, G.M. (2014) The Weibull-G Family of Probability Distributions. Journal of Data Science, 12, 53-68.
https://doi.org/10.6339/JDS.201401_12(1).0004
[3] Mahdavi, A. and Kundu, D. (2017) A New Method for Generating Distributions with an Application to Exponential Distribution. Communications in Statistics—Theory and Methods, 46, 6543-6557.
https://doi.org/10.1080/03610926.2015.1130839
[4] Tahir, M.H., Cordeiro, G.M., Alzaatreh, M.A. and Zubair, M. (2018) A New Generalized Family of Distributions from Bounded Support. Journal of Data Science, 16, 251-276.
https://doi.org/10.6339/JDS.201804_16(2).0003
[5] Elbatal, I., Elgarhy, M. and Kibria, B.M.G. (2021) Alpha Power Transformed Weibull-G Family of Distributions: Theory and Applications. Journal of Statistical Theory and Applications, 20, 340-354.
https://doi.org/10.2991/jsta.d.210222.002
[6] Ubaka, O.N. and Ewere, F. (2023) The Continuous Bernoulli-Generated Family of Distributions: Theory and Applications. Reliability: Theory and Application, 18, 428-441.
[7] Lindley, D.V. (1958) Fiducial Distributions and Bayes Theorem. Journal of the Royal Statistical Society, Series B, 20, 102-107.
https://doi.org/10.1111/j.2517-6161.1958.tb00278.x
[8] Ghitany, M., Atieh, B. and Nadarajah, S. (2008) Lindley Distribution and Its Applications. Mathematics and Computers in Simulation, 78, 493-506.
https://doi.org/10.1016/j.matcom.2007.06.007
[9] Bakouch, H., Al-Zahrani, B., Al-Shomrani, A., Marchi, V. and Louzad, F. (2012) An Extended Lindley Distribution. Journal of the Korean Statistical Society, 41, 75-85.
https://doi.org/10.1016/j.jkss.2011.06.002
[10] Bhati, D., Malik, M.A. and Vaman, H.J. (2015) Lindley-Exponential Distribution: Properties and Applications. Metron, 73, 335-357.
https://doi.org/10.1007/s40300-015-0060-9
[11] Ekhosuehi, N., Opone, F.C. and Odobaire, F. (2018) A New Generalized Two-Parameter Lindley Distribution (NG2PLD). Journal of Data Science, 16, 459-466.
https://doi.org/10.6339/JDS.201807_16(3).0006
[12] Ekhosuehi, N. and Opone, F. (2018) A Three Parameter Generalized Lindley Distribution: Its Properties and Application. Statistica, 78, 233-249.
[13] Algarni, A. (2021) On a New Generalized Lindley Distribution: Properties, Estimation, and Applications. PLOS ONE, 16, e0244328.
https://doi.org/10.1371/journal.pone.0244328
[14] Ahsan-ul-Haq, M., Choudhary, S.M., Al-Marshadi, A.H. and Aslam, M. (2022) A New Generalization of Lindley Distribution for Modeling of Wind Speed Data. Energy Reports, 8, 1-11.
https://doi.org/10.1016/j.egyr.2021.11.246
[15] Ghitany, M., Al-Mutairi, D., Balakrishnan, N. and Al-Enezi, I. (2013) Power Lindley Distribution and Associated Inference. Computational Statistics and Data Analysis, 64, 20-33.
https://doi.org/10.1016/j.csda.2013.02.026
[16] Warahena-Liyanage, G. and Pararai, M. (2014) A Generalized Power Lindley Distribution with Applications. Asian Journal of Mathematics and Applications, 2014, ama0169.
[17] Alkarni, S.H. (2015) Extended Power Lindley Distribution: A New Statistical Model for Non-Monotone Survival Data. European Journal of Statistics and Probability, 3, 19-34.
[18] Mansour, M.M. and Hamed, S.M. (2015) A New Generalization of Power Lindley Distribution with Applications to Lifetime Data. Journal of Statistics: Advances in Theory and Applications, 13, 33-65.
https://doi.org/10.18642/jsata_7100121463
[19] Oluyede, B.O., Yang, T. and Makubate, B. (2016) A New Class of Generalized Power Lindley Distribution with Application to Lifetime Data. Asian Journal of Mathematics and Applications, 2016, ama0279.
[20] Alizadeh, M., MirMostafaee, S.M.T.K. and Ghosh, I. (2017) A New Extension of Power Lindley Distribution for Analyzing Bimodal Data. Chilean Journal of Statistics, 8, 67-86.
[21] Opone, F., Ekhosuehi, N. and Omosigho, S. (2022) Topp-Leone Power Lindley Distribution (TLPLD): Its Properties and Application. Sankhya A, 84, 597-608.
https://doi.org/10.1007/s13171-020-00209-0
[22] Kavya, P. and Manoharan, M. (2021) Some Parsimonious Models for Lifetimes and Applications. Journal of Statistical Computation and Simulation, 91, 3693-3708.
https://doi.org/10.1080/00949655.2021.1946064
[23] Galton, F. (1883) Enquiries into Human Faculty and Its Development. Macmillan and Company, London.
https://doi.org/10.1037/14178-000
[24] Moors, J.J. (1988) A Quantile Alternative for Kurtosis. Statistician, 37, 25-32.
https://doi.org/10.2307/2348376
[25] Opone, F. and Ekhosuehi, N. (2018) Methods of Estimating the Parameters of the Quasi-Lindley Distribution. Statistica, 78, 183-193.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.