Properties, Inference and Applications of Inverse Power Two-Parameter Weighted Lindley Distribution

Abstract

We propose a new three-parameter distribution, called the inverse power two-parameter weighted Lindley (IPWL) distribution, capable of modeling an upside-down bathtub hazard rate function. We study its basic structural properties, including reliability measures, moments, inverse moments and related measures. Simulation studies illustrate the performance and behavior of the maximum likelihood estimates of the IPWL distribution parameters. Finally, we apply goodness-of-fit measures and test statistics to a real data set to demonstrate the performance of the new distribution.


El-Monsef, M. and Al-Kzzaz, H. (2020) Properties, Inference and Applications of Inverse Power Two-Parameter Weighted Lindley Distribution. Open Journal of Statistics, 10, 889-904. doi: 10.4236/ojs.2020.105052.

1. Introduction

The Lindley distribution describes lifetimes in a wide variety of fields, including biology, engineering and medicine. The one-parameter Lindley distribution was proposed by Lindley [1] with probability density function (pdf)

$f\left(y;\theta \right)=\frac{{\theta }^{2}}{1+\theta }\left(1+y\right)\mathrm{exp}\left(-\theta y\right),\text{ }y>0,\theta >0.$ (1)

Ghitany et al. [2] presented a comprehensive treatment of the mathematical properties of the Lindley distribution and showed that it provides a better fit than the exponential distribution for waiting times at a bank for service. Shanker et al. [3] discussed the two-parameter Lindley distribution with probability density function given by

$f\left(y;\beta ,\theta \right)=\frac{{\theta }^{2}}{\beta +\theta }\left(1+\beta y\right)\mathrm{exp}\left(-\theta y\right),y>0,\beta ,\theta >0.$ (2)

Ghitany et al. [4] discussed the two-parameter weighted Lindley distribution and its structural properties, including moments, the hazard rate function, the mean residual life function, estimation of parameters and applications to modelling survival time data. The corresponding probability density function is

$f\left(z;\beta ,\theta \right)=\frac{{\theta }^{\beta +1}}{\left(\beta +\theta \right)\Gamma \left(\beta \right)}{z}^{\beta -1}\left(1+z\right)\mathrm{exp}\left(-\theta z\right),z>0,\beta ,\theta >0.$ (3)

Researchers have shown growing interest in generating inverted distributions via the inverse transformation; examples include the inverted beta by Dubey [5], the inverse Gaussian by Folks and Chhikara [6] and the inverse Weibull by Calabria and Pulcini [7]. A few inverted distributions, such as the inverted Rayleigh (IR), inverted Weibull (IW) and inverted gamma (IG), are available to model such upside-down bathtub data, and they have been used extensively in various real-life applications. Recently, Sharma et al. [8] proposed the inverse Lindley distribution as the distribution of the reciprocal of a Lindley random variable. In other words, if a random variable Y has pdf (1), then the random variable $X={Y}^{-1}$ follows the inverse Lindley distribution with pdf defined by:

$f\left(x;\theta \right)=\frac{{\theta }^{2}}{1+\theta }\left(\frac{1+x}{{x}^{3}}\right)\mathrm{exp}\left(-\frac{\theta }{x}\right),x>0,\theta >0.$ (4)

Alkarni [9] proposed the extended inverse two-parameter Lindley distribution as a statistical inverse model for upside-down bathtub survival data. More specifically, if a random variable Y has pdf (2), then the random variable $X={Y}^{-1/\alpha }$ follows the extended inverse two-parameter Lindley distribution with the following pdf

$f\left(x;\alpha ,\beta ,\theta \right)=\frac{\alpha {\theta }^{2}}{\beta +\theta }\left(\frac{\beta +{x}^{\alpha }}{{x}^{2\alpha +1}}\right)\mathrm{exp}\left(-\frac{\theta }{{x}^{\alpha }}\right),x>0,\alpha ,\beta ,\theta >0.$ (5)

In this paper, we propose a new inverse two-parameter weighted Lindley distribution, named the inverse power two-parameter weighted Lindley (IPWL) distribution, which offers more flexibility through an upside-down bathtub (unimodal) hazard rate. Section 2 introduces the distribution and its special cases. Reliability measures such as the survival function, hazard rate function and reverse hazard function are provided in Section 3. Section 4 presents the statistical properties of the model, such as moments, inverse moments and related measures. Section 5 covers several estimation methods: the method of moments, least squares, maximum likelihood and approximate confidence intervals. Section 6 reports simulation studies of the maximum likelihood estimates. A real-life data set is analyzed in Section 7 to illustrate the flexibility of the proposed model compared with some existing models. Finally, Section 8 concludes the study.

2. The Inverse Power Weighted Lindley Distribution

Let Z be a random variable with pdf (3), then the random variable $X={Z}^{-1/\alpha }$ is said to follow the IPWL distribution with pdf

$f\left(x;\alpha ,\beta ,\theta \right)=\frac{\alpha {\theta }^{\beta +1}}{\left(\beta +\theta \right)\Gamma \left(\beta \right)}\left(\frac{1+{x}^{\alpha }}{{x}^{\alpha \left(\beta +1\right)+1}}\right)\mathrm{exp}\left(-\frac{\theta }{{x}^{\alpha }}\right),x>0,\alpha ,\beta ,\theta >0$ (6)

and the cumulative distribution function is given by

$F\left(x;\alpha ,\beta ,\theta \right)=\frac{\theta \text{ }\Gamma \left(\beta ,\theta \text{ }{x}^{-\alpha }\right)+\Gamma \left(\beta +1,\theta \text{ }{x}^{-\alpha }\right)}{\left(\beta +\theta \right)\Gamma \left(\beta \right)}$ (7)

where $\Gamma \left(a,x\right)={\int }_{x}^{\infty }\text{ }{y}^{a-1}\mathrm{exp}\left(-y\right)\text{d}y$ is the upper incomplete gamma function.
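For concreteness, (6) and (7) can be evaluated numerically. The sketch below is our own illustration, not code from the paper; the quadrature helper assumes $\beta \ge 1$ so the integrand has no singularity at the lower limit.

```python
import math

def upper_inc_gamma(a, x, n=2000, span=50.0):
    """Upper incomplete gamma Γ(a, x) by composite Simpson quadrature on
    [x, x + span]; the tail beyond x + span is negligible for moderate a."""
    h = span / n
    f = lambda y: y ** (a - 1) * math.exp(-y)
    s = f(x) + f(x + span)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(x + i * h)
    return s * h / 3

def ipwl_pdf(x, alpha, beta, theta):
    """IPWL density, Equation (6)."""
    c = alpha * theta ** (beta + 1) / ((beta + theta) * math.gamma(beta))
    return c * (1 + x ** alpha) / x ** (alpha * (beta + 1) + 1) * math.exp(-theta / x ** alpha)

def ipwl_cdf(x, alpha, beta, theta):
    """IPWL distribution function, Equation (7)."""
    z = theta * x ** (-alpha)
    num = theta * upper_inc_gamma(beta, z) + upper_inc_gamma(beta + 1, z)
    return num / ((beta + theta) * math.gamma(beta))
```

A numerical derivative of `ipwl_cdf` recovers `ipwl_pdf`, a quick consistency check of (6) against (7).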

For brevity, we suppress the dependence on the positive parameters $\alpha ,\beta$ and $\theta$ in (6) and (7), writing $f\left(x;\alpha ,\beta ,\theta \right)=f\left(x\right)$ and $F\left(x;\alpha ,\beta ,\theta \right)=F\left(x\right)$. The pdf (6) can be expressed as a mixture of two distributions as follows:

$f\left(x\right)=p{f}_{1}\left(x\right)+\left(1-p\right){f}_{2}\left( x \right)$

where $p=\theta /\left(\theta +\beta \right)$ and

${f}_{i}\left(x\right)=\frac{\alpha {\theta }^{\beta +i-1}}{\Gamma \left(\beta +i-1\right)}\frac{1}{{x}^{\alpha \left(\beta +i-1\right)+1}}\mathrm{exp}\left(-\frac{\theta }{{x}^{\alpha }}\right)$

is the pdf of a generalized inverse gamma distribution with shape parameters $\beta +i-1,\alpha$ and scale parameter $\theta$ where $i=1,2$.
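The mixture representation can be verified numerically. In this sketch (function names are ours), `gig_pdf` is the generalized inverse gamma component and the weighted sum reproduces (6) to machine precision:

```python
import math

def ipwl_pdf(x, a, b, t):
    """IPWL density, Equation (6)."""
    return (a * t ** (b + 1) / ((b + t) * math.gamma(b))
            * (1 + x ** a) / x ** (a * (b + 1) + 1) * math.exp(-t / x ** a))

def gig_pdf(x, a, shape, t):
    """Generalized inverse gamma density with shape parameters (shape, a)
    and scale parameter t."""
    return a * t ** shape / math.gamma(shape) * x ** (-(a * shape + 1)) * math.exp(-t / x ** a)

def mixture_pdf(x, a, b, t):
    """p * f1 + (1 - p) * f2 with p = t / (t + b), components of shape b and b + 1."""
    p = t / (t + b)
    return p * gig_pdf(x, a, b, t) + (1 - p) * gig_pdf(x, a, b + 1, t)
```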

Theorem 1. The probability density function $f\left(x\right)$ of the IPWL distribution is unimodal in x.

Proof. The first derivative of $f\left(x\right)$ is given by

${f}^{\prime }\left(x\right)=-\frac{\psi \left(x\right)}{g\left(x\right)}f\left( x \right)$

where

$\psi \left(x\right)=a{x}^{2\alpha }+b{x}^{\alpha }+c,g\left(x\right)=\left(1+{x}^{\alpha }\right){x}^{\alpha +1}$

with

$a=1+\alpha \beta ,b=1+\alpha \left(1+\beta -\theta \right),c=-\alpha \theta$

Let $D={\left(1+\alpha \left(1+\beta -\theta \right)\right)}^{2}+4\alpha \theta \left(1+\alpha \beta \right)={b}^{2}-4ac$ be the discriminant of $\psi \left(x\right)$ viewed as a quadratic in ${x}^{\alpha }$. The second derivative of $f\left(x\right)$ is given by

${f}^{″}\left(x\right)=-\frac{1}{g\left(x\right)}\left[\left[{g}^{\prime }\left(x\right)+\psi \left(x\right)\right]{f}^{\prime }\left(x\right)+{\psi }^{\prime }\left(x\right)f\left(x\right)\right]$

where ${\psi }^{\prime }\left(x\right)=2a\alpha {x}^{2\alpha -1}+b\alpha {x}^{\alpha -1}$. Since $a>0$ and $c<0$, we have $D>0$ and $\psi \left(x\right)$ has exactly one positive root ${x}_{0}$, with $\psi \left(x\right)<0$ for $x<{x}_{0}$ and $\psi \left(x\right)>0$ for $x>{x}_{0}$; hence ${f}^{\prime }\left(x\right)>0$ for $x<{x}_{0}$ and ${f}^{\prime }\left(x\right)<0$ for $x>{x}_{0}$. Moreover, ${f}^{″}\left({x}_{0}\right)=-\left({\psi }^{\prime }\left({x}_{0}\right)/g\left({x}_{0}\right)\right)f\left({x}_{0}\right)<0$, so $f\left(x\right)$ attains its global maximum at ${x}_{0}$ ; hence, the mode of $f\left(x\right)$ is given by

${x}_{0}={\left(\frac{-b+\sqrt{D}}{2a}\right)}^{\frac{1}{\alpha \text{ }}}$
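The closed-form mode can be checked against the density directly; a small pure-Python sketch (the function names are ours, and `ipwl_pdf` implements (6)):

```python
import math

def ipwl_pdf(x, a, b, t):
    """IPWL density, Equation (6)."""
    return (a * t ** (b + 1) / ((b + t) * math.gamma(b))
            * (1 + x ** a) / x ** (a * (b + 1) + 1) * math.exp(-t / x ** a))

def ipwl_mode(a, b, t):
    """Mode from Theorem 1: positive root of psi, a quadratic in x**a."""
    A = 1 + a * b
    B = 1 + a * (1 + b - t)
    C = -a * t
    D = B * B - 4 * A * C        # equals the discriminant given in the text
    return ((-B + math.sqrt(D)) / (2 * A)) ** (1 / a)
```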

Figure 1 presents the pdf for the IPWL distribution for some values of $\alpha ,\beta$ and $\theta$.

Special Cases of the IPWL Distribution

For particular parameter values, the IPWL distribution contains some well-known distributions as special cases.

• When $\beta =1$, we have the generalized inverse Lindley distribution (GIL) with pdf given by

${f}_{GIL}\left(x\right)=\frac{\alpha {\theta }^{2}}{\left(1+\theta \right)}\left(\frac{1+{x}^{\alpha }}{{x}^{2\alpha +1}}\right)\mathrm{exp}\left(-\frac{\theta }{{x}^{\alpha }}\right),\text{ }x>0,\alpha ,\theta >0,$

Figure 1. Plots of the pdf of the IPWL distribution for different parameter values.

and the cumulative distribution function is given by

${F}_{GIL}\left(x\right)=\left(1+\frac{\theta }{\left(1+\theta \right){x}^{\alpha }}\right)\mathrm{exp}\left(-\frac{\theta }{{x}^{\alpha }}\right)$

• When $\alpha =\beta =1$, we have the inverse Lindley distribution (IL) with pdf given by

${f}_{IL}\left(x\right)=\frac{{\theta }^{2}}{\left(1+\theta \right)}\left(\frac{1+x}{{x}^{3}}\right)\mathrm{exp}\left(-\frac{\theta }{x}\right),x>0,\theta >0,$

and the cumulative distribution function is given by

${F}_{IL}\left(x\right)=\left(1+\frac{\theta }{\left(1+\theta \right)x}\right)\mathrm{exp}\left(-\frac{\theta }{x}\right).$
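The reduction to the inverse Lindley distribution at $\alpha =\beta =1$ is easy to confirm numerically; a minimal sketch (function names are ours):

```python
import math

def ipwl_pdf(x, a, b, t):
    """IPWL density, Equation (6)."""
    return (a * t ** (b + 1) / ((b + t) * math.gamma(b))
            * (1 + x ** a) / x ** (a * (b + 1) + 1) * math.exp(-t / x ** a))

def il_pdf(x, t):
    """Inverse Lindley density, Equation (4)."""
    return t ** 2 / (1 + t) * (1 + x) / x ** 3 * math.exp(-t / x)

# At alpha = beta = 1 the IPWL density collapses to the inverse Lindley density.
check = [(ipwl_pdf(x, 1.0, 1.0, 0.8), il_pdf(x, 0.8)) for x in (0.3, 1.0, 2.5)]
```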

3. Reliability Measures

In this section, we discuss the survival function, the hazard rate function, the reverse hazard rate function and odds function for the IPWL distribution.

3.1. Survival Function

The survival function for the IPWL distribution is defined as $S\left(x\right)=1-F\left(x\right)$. According to the cumulative distribution function presented in (7), we have

$S\left(x\right)=1-\frac{\theta \text{ }\Gamma \left(\beta ,\theta \text{ }{x}^{-\alpha }\right)+\Gamma \left(\beta +1,\theta \text{ }{x}^{-\alpha }\right)}{\left(\beta +\theta \right)\Gamma \left(\beta \right)}$ (8)

3.2. Hazard Rate Function

The hazard rate function is defined as $h\left(x\right)=f\left(x\right)/S\left(x\right)$. Using pdf (6), the hazard rate function for the IPWL distribution is given by

$h\left(x\right)=\frac{\alpha {\theta }^{\beta +1}\left(\frac{1+{x}^{\alpha }}{{x}^{\alpha \left(\beta +1\right)+1}}\right)\mathrm{exp}\left(-\frac{\theta }{{x}^{\alpha }}\right)}{\left(\beta +\theta \right)\Gamma \left(\beta \right)-\theta \text{ }\Gamma \left(\beta ,\theta \text{ }{x}^{-\alpha }\right)-\Gamma \left(\beta +1,\theta \text{ }{x}^{-\alpha }\right)}$ (9)

Figure 2 shows the behavior of the IPWL hazard rate function for different values of its parameters.

3.3. Reverse Hazard Rate Function

The ratio of the probability density function to the distribution function is called the reversed hazard function. This concept is appropriate for analyzing censored data and arises naturally when lifetimes are studied on a reversed time scale. Let X be a random variable that follows the IPWL distribution. The reversed hazard function of X, $r\left(x\right)=f\left(x\right)/F\left(x\right)$, is given by:

$r\left(x\right)=\frac{\alpha {\theta }^{\beta +1}\left(\frac{1+{x}^{\alpha }}{{x}^{\alpha \left(\beta +1\right)+1}}\right)\mathrm{exp}\left(-\frac{\theta }{{x}^{\alpha }}\right)}{\theta \text{ }\Gamma \left(\beta ,\theta \text{ }{x}^{-\alpha }\right)+\Gamma \left(\beta +1,\theta \text{ }{x}^{-\alpha }\right)}$ (10)

Figure 2. Plots of the hazard rate function of the IPWL distribution for different parameter values.

3.4. Odds Function

The odds function, defined as $O\left(x\right)=F\left(x\right)/S\left(x\right)$, can be written as

$O\left(x\right)=\frac{\theta \text{ }\Gamma \left(\beta ,\theta \text{ }{x}^{-\alpha }\right)+\Gamma \left(\beta +1,\theta \text{ }{x}^{-\alpha }\right)}{\left(\beta +\theta \right)\Gamma \left(\beta \right)-\theta \text{ }\Gamma \left(\beta ,\theta \text{ }{x}^{-\alpha }\right)-\Gamma \left(\beta +1,\theta \text{ }{x}^{-\alpha }\right)}$ (11)
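The hazard rate (9) can be evaluated numerically to exhibit the upside-down bathtub shape described above. The sketch below uses our own helper names and a simple Simpson quadrature for the incomplete gamma (assuming $\beta \ge 1$):

```python
import math

def upper_inc_gamma(a, x, n=2000, span=50.0):
    """Upper incomplete gamma Γ(a, x) by composite Simpson on [x, x + span]."""
    h = span / n
    f = lambda y: y ** (a - 1) * math.exp(-y)
    s = f(x) + f(x + span)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(x + i * h)
    return s * h / 3

def ipwl_hazard(x, a, b, t):
    """Hazard rate, Equation (9)."""
    z = t * x ** (-a)
    num = a * t ** (b + 1) * (1 + x ** a) / x ** (a * (b + 1) + 1) * math.exp(-z)
    den = (b + t) * math.gamma(b) - t * upper_inc_gamma(b, z) - upper_inc_gamma(b + 1, z)
    return num / den

# Evaluate on a grid: the hazard rises and then falls (upside-down bathtub).
grid = [0.2 + 0.05 * i for i in range(97)]          # 0.2 .. 5.0
hs = [ipwl_hazard(x, 2.0, 1.5, 1.0) for x in grid]
k = hs.index(max(hs))                               # location of the peak
```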

4. Statistical Properties

This section investigates the statistical properties of the IPWL distribution such as the moments, inverse moments and the coefficients of skewness and kurtosis.

Moments

Let X be a random variable that follows the IPWL distribution with pdf (6), then the rth raw moment (about the origin) ${{\mu }^{\prime }}_{r}=E\left({X}^{r}\right),r=1,2,\cdots$ is given by

${{\mu }^{\prime }}_{r}=\frac{{\theta }^{r/\alpha }\left(\alpha \left(\beta +\theta \right)-r\right)\Gamma \left(\frac{\alpha \beta -r}{\alpha }\right)}{\alpha \left(\beta +\theta \right)\Gamma \left(\beta \right)}$ (12)

It can be noticed that, for the rth raw moment to exist, the constraint $r<\alpha \beta$ must be satisfied. From (12), the mean and the variance of the IPWL distribution are given, respectively, by

$\mu =\frac{{\theta }^{1/\alpha }\left(\alpha \left(\beta +\theta \right)-1\right)\Gamma \left(\frac{\alpha \beta -1}{\alpha }\right)}{\alpha \left(\beta +\theta \right)\Gamma \left(\beta \right)}\text{ }\text{ }\text{ }\text{ }\text{ }$

${\sigma }^{2}=\frac{{\theta }^{2/\alpha }}{\alpha \left(\beta +\theta \right)\Gamma \left(\beta \right)}\left[\left(\alpha \left(\beta +\theta \right)-2\right)\Gamma \left(\frac{\alpha \beta -2}{\alpha }\right)-\frac{{\left\{\left(\alpha \left(\beta +\theta \right)-1\right)\Gamma \left(\frac{\alpha \beta -1}{\alpha }\right)\right\}}^{2}}{\alpha \left(\beta +\theta \right)\Gamma \left(\beta \right)}\right]$

The coefficients of skewness and kurtosis can be obtained from the moment-based relations suggested by Pearson:

${\beta }_{1}=\frac{{\left({{\mu }^{\prime }}_{3}-3{{\mu }^{\prime }}_{2}\mu +2{\mu }^{3}\right)}^{2}}{{\left({{\mu }^{\prime }}_{2}-{\mu }^{2}\right)}^{3}},\alpha \beta >3$

and

${\beta }_{2}=\frac{{{\mu }^{\prime }}_{4}-4{{\mu }^{\prime }}_{3}\mu +6{{\mu }^{\prime }}_{2}{\mu }^{2}-3{\mu }^{4}}{{\left({{\mu }^{\prime }}_{2}-{\mu }^{2}\right)}^{2}},\alpha \beta >4$

upon substituting for the raw moments in (12).

The coefficient of variation (CV) is calculated by

$CV=\frac{\text{ }\text{ }{\left[\alpha \left(\beta +\theta \right)\left[\alpha \left(\beta +\theta \right)-2\right]\Gamma \left(\beta \right)\Gamma \left(\frac{\alpha \beta -2}{\alpha }\right)-{\left\{\left(\alpha \left(\beta +\theta \right)-1\right)\Gamma \left(\frac{\alpha \beta -1}{\alpha }\right)\right\}}^{2}\right]}^{1/2}}{\left(\alpha \left(\beta +\theta \right)-1\right)\Gamma \left(\frac{\alpha \beta -1}{\alpha }\right)\text{ }}×100$

As mentioned above, the rth raw moment of the IPWL distribution exists only when $r<\alpha \beta$. Therefore, the evaluation of inverse moments may be of interest. The rth raw inverse moment (about the origin) is given by

${{\mu }^{\prime }}_{{r}^{-1}}=\frac{{\theta }^{-r/\alpha }\left(\alpha \left(\beta +\theta \right)+r\right)\Gamma \left(\frac{\alpha \beta +r}{\alpha }\right)}{\alpha \left(\beta +\theta \right)\Gamma \left(\beta \right)}$ (13)
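Both (12) and (13) can be checked against direct numerical integration of ${x}^{r}f\left(x\right)$. A sketch (function names are ours; the truncation limits suit moderate parameter values such as $\alpha =\beta =2,\theta =1$):

```python
import math

def ipwl_pdf(x, a, b, t):
    """IPWL density, Equation (6)."""
    return (a * t ** (b + 1) / ((b + t) * math.gamma(b))
            * (1 + x ** a) / x ** (a * (b + 1) + 1) * math.exp(-t / x ** a))

def raw_moment(r, a, b, t):
    """Equation (12); requires r < a*b."""
    return (t ** (r / a) * (a * (b + t) - r) * math.gamma((a * b - r) / a)
            / (a * (b + t) * math.gamma(b)))

def inv_moment(r, a, b, t):
    """Equation (13): r-th raw inverse moment."""
    return (t ** (-r / a) * (a * (b + t) + r) * math.gamma((a * b + r) / a)
            / (a * (b + t) * math.gamma(b)))

def numeric_moment(r, a, b, t, lo=0.01, hi=60.0, n=6000):
    """∫ x^r f(x) dx by composite Simpson, as an independent sanity check."""
    h = (hi - lo) / n
    g = lambda x: x ** r * ipwl_pdf(x, a, b, t)
    s = g(lo) + g(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(lo + i * h)
    return s * h / 3
```

For $\alpha =\beta =2,\theta =1$, for instance, (12) gives a mean of $5\Gamma \left(3/2\right)/6\approx 0.739$, which the quadrature reproduces.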

Table 1 reports values of the mean, variance, coefficient of skewness, coefficient of kurtosis, mode and coefficient of variation. It is observed that the IPWL distribution is right-skewed for all parameter values considered.

5. Estimation and Inference of the Parameters

In this section, we consider four methods of estimation and inference techniques to estimate the parameters of the proposed distribution.

Table 1. Values of some important measures of the proposed distribution at different parameter combinations.

5.1. Method of Moments Estimates

Let ${x}_{1},{x}_{2},\cdots ,{x}_{n}$ be a random sample of size n drawn from the IPWL distribution with pdf (6). In the method of moments, we equate k (the number of parameters) sample moments to the corresponding population moments. Using Equation (12), equating the first three population moments of the IPWL distribution to the corresponding sample moments gives

$\frac{{\theta }^{r/\alpha }\left(\alpha \left(\beta +\theta \right)-r\right)\Gamma \left(\frac{\alpha \beta -r}{\alpha }\right)}{\alpha \left(\beta +\theta \right)\Gamma \left(\beta \right)}={m}_{r}$ (14)

where ${m}_{r}=\left({\sum }_{i=1}^{n}\text{ }\text{ }{x}_{i}^{r}\right)/n,r=1,2,3$ are the first three sample moments.

An exact solution of these equations for the unknown parameters is not possible, so numerical methods in software such as R or Mathematica are required.
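As an illustration of the moment-matching step, the following sketch recovers known parameters from their theoretical moments by a coarse grid search; this is an illustrative stand-in for the numerical solvers mentioned above, and in practice a proper root-finder would be used:

```python
import math

def raw_moment(r, a, b, t):
    """Equation (12): r-th raw moment; requires r < a*b."""
    return (t ** (r / a) * (a * (b + t) - r) * math.gamma((a * b - r) / a)
            / (a * (b + t) * math.gamma(b)))

# "Population" moments at the true parameters stand in for the sample moments m_r.
true = (2.0, 2.5, 1.5)
m = [raw_moment(r, *true) for r in (1, 2, 3)]

# Coarse grid search over (alpha, beta, theta) for the best moment match.
grid = [v / 10 for v in range(10, 35, 5)]           # 1.0, 1.5, 2.0, 2.5, 3.0
best, best_err = None, float("inf")
for a in grid:
    for b in grid:
        for t in grid:
            if a * b <= 3:                          # all three moments must exist
                continue
            err = sum((raw_moment(r, a, b, t) - m[r - 1]) ** 2 for r in (1, 2, 3))
            if err < best_err:
                best, best_err = (a, b, t), err
```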

5.2. Least Square Estimates

Swain et al. [10] proposed the least squares estimator (LSE) to estimate the parameters of beta distributions in Johnson's translation system. Let $F\left({X}_{\left(i\right)}\right)$ be the distribution function of the ordered random sample ${x}_{\left(1\right)}<{x}_{\left(2\right)}<\cdots <{x}_{\left(n\right)}$, where ${x}_{1},{x}_{2},\cdots ,{x}_{n}$ is a random sample of size n from the IPWL distribution. Then, the expectation of the empirical cumulative distribution function is

$E\left[F\left({X}_{\left(i\right)}\right)\right]=\frac{i}{n+1};\text{ }i=1,2,\cdots ,n$

The least square estimates (LSEs) ${\stackrel{^}{\alpha }}_{LS}$, ${\stackrel{^}{\beta }}_{LS}$ and ${\stackrel{^}{\theta }}_{LS}$ of $\alpha ,\beta$ and $\theta$ are obtained by minimizing

$Z\left(\alpha ,\beta ,\theta \right)=\underset{i=1}{\overset{n}{\sum }}{\left(F\left({x}_{\left(i\right)};\alpha ,\beta ,\theta \right)-\frac{i}{n+1}\right)}^{2}$ (15)

with respect to $\alpha ,\beta$ and $\theta$. Therefore, ${\stackrel{^}{\alpha }}_{LS}$, ${\stackrel{^}{\beta }}_{LS}$ and ${\stackrel{^}{\theta }}_{LS}$ can be obtained as the solution of the following system of nonlinear equations:

$\frac{\partial Z\left(\alpha ,\beta ,\theta \right)}{\partial \alpha }=2\underset{i=1}{\overset{n}{\sum }}{\frac{\partial F\left(x;\alpha ,\beta ,\theta \right)}{\partial \alpha }|}_{x={x}_{\left(i\right)}}\left(F\left({x}_{\left(i\right)};\alpha ,\beta ,\theta \right)-\frac{i}{n+1}\right)=0$ (16)

$\frac{\partial Z\left(\alpha ,\beta ,\theta \right)}{\partial \beta }=2\underset{i=1}{\overset{n}{\sum }}{\frac{\partial F\left(x;\alpha ,\beta ,\theta \right)}{\partial \beta }|}_{x={x}_{\left(i\right)}}\left(F\left({x}_{\left(i\right)};\alpha ,\beta ,\theta \right)-\frac{i}{n+1}\right)=0$ (17)

$\frac{\partial Z\left(\alpha ,\beta ,\theta \right)}{\partial \theta }=2\underset{i=1}{\overset{n}{\sum }}{\frac{\partial F\left(x;\alpha ,\beta ,\theta \right)}{\partial \theta }|}_{x={x}_{\left(i\right)}}\left(F\left({x}_{\left(i\right)};\alpha ,\beta ,\theta \right)-\frac{i}{n+1}\right)=0$ (18)
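The plotting position $i/\left(n+1\right)$ is the mean of the ith order statistic of a standard uniform sample, which a quick Monte Carlo sketch (our own, seeded for reproducibility) confirms:

```python
import random

random.seed(1)
n, reps = 5, 20000
sums = [0.0] * n
for _ in range(reps):
    u = sorted(random.random() for _ in range(n))    # uniform order statistics
    for i in range(n):
        sums[i] += u[i]
means = [s / reps for s in sums]
expected = [i / (n + 1) for i in range(1, n + 1)]    # 1/6, 2/6, ..., 5/6
```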

5.3. Maximum Likelihood Estimates

Let ${x}_{1},{x}_{2},\cdots ,{x}_{n}$ be a random sample of size n from the IPWL distribution with pdf (6). The log-likelihood function is given by

$\begin{array}{c}L=n\mathrm{ln}\alpha +n\left(1+\beta \right)\mathrm{ln}\theta -n\mathrm{ln}\left(\beta +\theta \right)-n\mathrm{ln}\Gamma \left(\beta \right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-\left(1+\alpha +\alpha \beta \right)\underset{i=1}{\overset{n}{\sum }}\mathrm{ln}{x}_{i}+\underset{i=1}{\overset{n}{\sum }}\mathrm{ln}\left(1+{x}_{i}^{\alpha }\right)-\theta \underset{i=1}{\overset{n}{\sum }}{x}_{i}^{-\alpha }\end{array}$ (19)

Now, we take the first derivative of the log-likelihood equation with respect to parameters and equate to zero to get the ML estimate of unknown parameters of the IPWL distribution. The score functions are:

$\frac{\partial L}{\partial \alpha }=\frac{n}{\alpha }-\left(1+\beta \right)\underset{i=1}{\overset{n}{\sum }}\mathrm{ln}{x}_{i}+\theta \underset{i=1}{\overset{n}{\sum }}\text{ }\text{ }{x}_{i}^{-\alpha }\mathrm{ln}{x}_{i}+\underset{i=1}{\overset{n}{\sum }}\frac{{x}_{i}^{\alpha }\mathrm{ln}{x}_{i}}{1+{x}_{i}^{\alpha }}$ (20)

$\frac{\partial L}{\partial \beta }=-\frac{n}{\beta +\theta }-n\text{ }\psi \left(\beta \right)+n\mathrm{ln}\theta -\alpha \underset{i=1}{\overset{n}{\sum }}\mathrm{ln}{x}_{i}$ (21)

$\frac{\partial L}{\partial \theta }=\frac{n}{\theta }+\frac{n\beta }{\theta }-\frac{n}{\beta +\theta }-\underset{i=1}{\overset{n}{\sum }}\text{ }\text{ }{x}_{i}^{-\alpha }$ (22)

where $\text{ }\psi \left(\beta \right)=\frac{\text{d}}{\text{d}\beta }\mathrm{ln}\Gamma \left(\beta \right)$ is the digamma function.

The maximum likelihood estimators (MLEs) ${\stackrel{^}{\alpha }}_{MLE}$, ${\stackrel{^}{\beta }}_{MLE}$ and ${\stackrel{^}{\theta }}_{MLE}$ are obtained by solving the above three nonlinear equations; we use numerical nonlinear maximization techniques to obtain the solution.
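The score Equations (20)-(22) can be validated against numerical derivatives of the log-likelihood (19). A pure-Python sketch (names are ours; the finite-difference digamma is a stand-in for a library routine):

```python
import math

def loglik(a, b, t, xs):
    """Log-likelihood, Equation (19)."""
    n = len(xs)
    return (n * math.log(a) + n * (1 + b) * math.log(t)
            - n * math.log(b + t) - n * math.lgamma(b)
            - (1 + a + a * b) * sum(math.log(x) for x in xs)
            + sum(math.log(1 + x ** a) for x in xs)
            - t * sum(x ** (-a) for x in xs))

def digamma(b, h=1e-6):
    """Digamma via central difference of lgamma (numerical stand-in)."""
    return (math.lgamma(b + h) - math.lgamma(b - h)) / (2 * h)

def scores(a, b, t, xs):
    """Score functions, Equations (20)-(22)."""
    n = len(xs)
    da = (n / a - (1 + b) * sum(math.log(x) for x in xs)
          + t * sum(x ** (-a) * math.log(x) for x in xs)
          + sum(x ** a * math.log(x) / (1 + x ** a) for x in xs))
    db = (-n / (b + t) - n * digamma(b) + n * math.log(t)
          - a * sum(math.log(x) for x in xs))
    dt = n / t + n * b / t - n / (b + t) - sum(x ** (-a) for x in xs)
    return da, db, dt

def num_grad(f, args, k, h=1e-6):
    """Central-difference partial derivative in coordinate k."""
    lo, hi = list(args), list(args)
    lo[k] -= h
    hi[k] += h
    return (f(*hi) - f(*lo)) / (2 * h)
```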

5.4. Approximate Confidence Intervals

For interval estimation of the parameter vector $\Theta ={\left(\alpha ,\beta ,\theta \right)}^{\text{T}}$, the elements of the Fisher information matrix $I=\left[{I}_{ij}\right],i,j=1,2,3$, are given by

${I}_{11}=E\left[-\frac{{\partial }^{2}}{\partial {\alpha }^{2}}L\right]=\frac{n}{{\alpha }^{2}}+\theta \underset{i=1}{\overset{n}{\sum }}\text{ }\text{ }{x}_{i}^{-\alpha }{\left(\mathrm{ln}{x}_{i}\right)}^{2}-\underset{i=1}{\overset{n}{\sum }}\left(\frac{{x}_{i}^{\alpha }{\left(\mathrm{ln}{x}_{i}\right)}^{2}}{1+{x}_{i}^{\alpha }}-\frac{{x}_{i}^{2\alpha }{\left(\mathrm{ln}{x}_{i}\right)}^{2}}{{\left(1+{x}_{i}^{\alpha }\right)}^{2}}\right)$

${I}_{22}=E\left[-\frac{{\partial }^{2}}{\partial {\beta }^{2}}L\right]=n\text{ }{\psi }^{\prime }\left(\beta \right)-\frac{n}{{\left(\beta +\theta \right)}^{2}}$

${I}_{33}=E\left[-\frac{{\partial }^{2}}{\partial {\theta }^{2}}L\right]=\frac{n}{{\theta }^{2}}+\frac{n\beta }{{\theta }^{2}}-\frac{n}{{\left(\beta +\theta \right)}^{2}}$

${I}_{12}={I}_{21}=E\left[-\frac{{\partial }^{2}L}{\partial \alpha \partial \beta }\right]=\underset{i=1}{\overset{n}{\sum }}\mathrm{ln}{x}_{i}$

${I}_{13}={I}_{31}=E\left[-\frac{{\partial }^{2}L}{\partial \alpha \partial \theta }\right]=-\underset{i=1}{\overset{n}{\sum }}\text{ }\text{ }{x}_{i}^{-\alpha }\mathrm{ln}{x}_{i}$

${I}_{23}={I}_{32}=E\left[-\frac{{\partial }^{2}L}{\partial \beta \partial \theta }\right]=-\frac{n}{\theta }-\frac{n}{{\left(\beta +\theta \right)}^{2}}$

where ${\psi }^{\prime }\left(\beta \right)=\frac{\text{d}}{\text{d}\beta }\psi \left(\beta \right)$ is the trigamma function.

From the standard large-sample theory of maximum likelihood estimators (Lehmann and Casella [11]), as $n\to \infty$, $\sqrt{n}\left(\stackrel{^}{\Theta }-\Theta \right)$ is asymptotically normal with mean vector zero and covariance matrix ${I}^{-1}$, and $\stackrel{^}{\Theta }$ is asymptotically efficient in the sense that

$\sqrt{n}\left(\stackrel{^}{\Theta }-\Theta \right)\stackrel{d}{\to }{N}_{3}\left(0,{I}^{-1}\right)$

where $\stackrel{d}{\to }$ denotes convergence in distribution and ${I}^{-1}$ is the inverse of the expected Fisher information matrix I. The asymptotic variances and covariance of the MLEs $\stackrel{^}{\Theta }$ are given by:

$V\left(\stackrel{^}{\alpha }\right)=\frac{{I}_{22}{I}_{33}-{I}_{23}^{2}}{n\Delta },V\left(\stackrel{^}{\beta }\right)=\frac{{I}_{11}{I}_{33}-{I}_{13}^{2}}{n\Delta },V\left(\stackrel{^}{\theta }\right)=\frac{{I}_{11}{I}_{22}-{I}_{12}^{2}}{n\Delta },$

$\begin{array}{l}Cov\left(\stackrel{^}{\alpha },\stackrel{^}{\beta }\right)=\frac{{I}_{13}{I}_{23}-{I}_{12}{I}_{33}}{n\Delta },Cov\left(\stackrel{^}{\alpha },\stackrel{^}{\theta }\right)=\frac{{I}_{12}{I}_{23}-{I}_{13}{I}_{22}}{n\Delta },\\ Cov\left(\stackrel{^}{\beta },\stackrel{^}{\theta }\right)=\frac{{I}_{13}{I}_{12}-{I}_{11}{I}_{23}}{n\Delta }\end{array}$

where $\Delta =\mathrm{det}\left(I\right)$ is the determinant of the matrix I. The corresponding asymptotic $100\left(1-\delta \right)\%$ confidence intervals for $\Theta$ are given by

$\stackrel{^}{\Theta }±{z}_{\delta /2}\sqrt{\stackrel{^}{Var\left(\stackrel{^}{\Theta }\right)}}$

where $\stackrel{^}{Var\left(\stackrel{^}{\Theta }\right)}$ is the MLE of $Var\left(\stackrel{^}{\Theta }\right)$ and ${z}_{\delta /2}$ is the upper $\delta /2$ quantile of the standard normal distribution.
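Given an estimate and its asymptotic variance, the Wald interval is immediate; a minimal sketch with hypothetical values (not fitted results from the paper):

```python
from statistics import NormalDist

def wald_ci(est, var, conf=0.95):
    """Asymptotic 100(1 - delta)% interval: est ± z_{delta/2} * sqrt(var)."""
    z = NormalDist().inv_cdf(0.5 + conf / 2.0)   # z_{0.025} ≈ 1.96 at 95%
    half = z * var ** 0.5
    return est - half, est + half

# Hypothetical MLE of alpha with estimated variance 0.04.
lo, hi = wald_ci(1.8, 0.04)
```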

6. Simulation

In this section, we perform simulations for different sample sizes to examine the performance of the maximum likelihood estimates of the IPWL parameters. The simulations proceed as follows:

• Set initial values of $n,\alpha ,\beta$ and $\theta$.

• The data are generated numerically from the equation $F\left(x\right)=u$, where u is uniformly distributed on $\left(0,1\right)$ and $F\left(x\right)$ is the cumulative distribution function of the IPWL distribution.

• Each sample size is replicated 1000 times.
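The inverse-transform step $F\left(x\right)=u$ can be implemented by bisection on the cdf. A sketch (names are ours; the simple quadrature assumes $\beta \ge 1$):

```python
import math
import random

def upper_inc_gamma(a, x, n=1000, span=50.0):
    """Upper incomplete gamma Γ(a, x) by composite Simpson on [x, x + span]."""
    h = span / n
    f = lambda y: y ** (a - 1) * math.exp(-y)
    s = f(x) + f(x + span)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(x + i * h)
    return s * h / 3

def ipwl_cdf(x, a, b, t):
    """IPWL distribution function, Equation (7)."""
    z = t * x ** (-a)
    return ((t * upper_inc_gamma(b, z) + upper_inc_gamma(b + 1, z))
            / ((b + t) * math.gamma(b)))

def ipwl_quantile(u, a, b, t, lo=1e-3, hi=1e3, iters=40):
    """Solve F(x) = u by bisection; F is continuous and strictly increasing."""
    for _ in range(iters):
        mid = math.sqrt(lo * hi)      # bisect in log scale: the bracket is wide
        if ipwl_cdf(mid, a, b, t) < u:
            lo = mid
        else:
            hi = mid
    return mid

def ipwl_sample(size, a, b, t, rng=random.Random(0)):
    """Step 2 of the algorithm above: invert F at uniform draws."""
    return [ipwl_quantile(rng.random(), a, b, t) for _ in range(size)]
```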

Average biases and mean squared errors (MSEs) are reported in Table 2, which shows that the MSEs of the MLEs of the parameters tend to zero as the sample size increases. Consistent with first-order asymptotic theory, the mean estimates of the parameters approach the true parameter values as the sample size n increases.

Table 2. Bias and MSE for the parameter $\alpha ,\beta$ and $\theta$.

7. Application

In this section, we assess the goodness of fit of the IPWL distribution using the maximum likelihood estimates of its parameters, to demonstrate the potential of the new model compared with some other existing lifetime models on a real-life data set.

The real-life data set was discussed previously in [12] [13] [14]. The data consist of 46 observations of active repair times (hours) for an airborne communication transceiver. The observed values are

This data set is used to compare the proposed distribution with five alternative distributions:

• The Rayleigh (R) distribution with the pdf

$f\left(x\right)=\frac{x}{{\lambda }^{2}}\mathrm{exp}\left(\frac{-{x}^{2}}{2{\lambda }^{2}}\right)$

where $x>0,\lambda >0$.

• The inverted Rayleigh (IR) distribution with the pdf

$f\left(x\right)=\frac{2\lambda }{{x}^{3}}\mathrm{exp}\left(\frac{-\lambda }{{x}^{2}}\right)$

where $x>0,\lambda >0$.

• The Gamma (G) distribution with the pdf

$f\left(x\right)=\frac{{\theta }^{\lambda }}{\Gamma \left(\lambda \right)}{x}^{\lambda -1}\mathrm{exp}\left(-\theta x\right)$

where $x>0,\lambda ,\theta >0$.

• The inverted Gamma (IG) distribution with the pdf

$f\left(x\right)=\frac{{\theta }^{\lambda }}{\Gamma \left(\lambda \right)}{x}^{-\left(\lambda +1\right)}\mathrm{exp}\left(-\frac{\theta }{x}\right)$

where $x>0,\lambda ,\theta >0$.

• The Weibull (W) distribution with the pdf

$f\left(x\right)=\lambda \theta {x}^{\theta -1}\mathrm{exp}\left(-\lambda {x}^{\theta }\right)$

where $x>0,\lambda ,\theta >0$.

Goodness-of-fit measures are computed for the real data set using the negative log-likelihood (−L), the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the consistent Akaike information criterion (CAIC) and the sum of squares (SS), defined by:

$\text{AIC}=-2L+2q$

$\text{BIC}=-2L+q\mathrm{ln}\left( n \right)$

$\text{CAIC}=-2L+\frac{2qn}{n-q-1}$

$\text{SS}=\underset{i=1}{\overset{n}{\sum }}{\left(F\left({x}_{i};\stackrel{^}{\Theta }\right)-\frac{i-0.375}{n+0.25}\right)}^{2}$

where q is the number of parameters, n is the sample size and $F\left({x}_{i};\stackrel{^}{\Theta }\right)$ is the estimated cumulative distribution function of the theoretical model. The model with the lowest values of these measures provides the best fit to the data set.
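The information criteria above are straightforward to compute; a sketch with hypothetical inputs (q = 3 parameters and n = 46 as in the application; the log-likelihood value is illustrative only):

```python
import math

def info_criteria(loglik, q, n):
    """AIC, BIC and CAIC from a maximized log-likelihood."""
    aic = -2 * loglik + 2 * q
    bic = -2 * loglik + q * math.log(n)
    caic = -2 * loglik + 2 * q * n / (n - q - 1)
    return aic, bic, caic

# Hypothetical maximized log-likelihood of -100 for a 3-parameter model, n = 46.
aic, bic, caic = info_criteria(-100.0, 3, 46)
```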

Test statistics such as the Cramér-von Mises ${W}_{n}^{2}$, Anderson-Darling ${A}_{n}^{2}$, Watson ${U}_{n}^{2}$, Liao-Shimokawa ${L}_{n}$ and Kolmogorov-Smirnov (K-S) statistics, with their respective p-values, are considered in order to determine which distribution fits the data best. These tests measure the discrepancy between the fitted cumulative distribution function and the empirical cumulative distribution function; a distribution is considered an adequate fit when the p-value exceeds 0.05.

Table 3 indicates that the inverse power two-parameter weighted Lindley distribution provides a better fit to the data set than the other models. The tests in Table 4 show that the Rayleigh and inverted Rayleigh distributions do not fit the data set (p-value < 0.05), while the proposed distribution shows the lowest test statistics with the largest p-values. Thus, the inverse power two-parameter weighted Lindley distribution fits the data set well.

The probability-probability (P-P) plots and cdf plots of the fitted distributions for the real data set are presented in Figure 3 and Figure 4. These plots show that the

Table 3. The goodness of fit measures for the data.

Figure 3. P-P plots of the considered distribution for the real data set.

Table 4. Goodness-of-fit test statistics for the data.

Figure 4. Fitted cdf plots of the considered distribution for the real data set.

inverse power two-parameter weighted Lindley distribution achieves the closest agreement between the empirical and theoretical curves; therefore, the proposed distribution is the one that best fits the real data set.

8. Conclusion

In this paper, we proposed and studied in detail a new three-parameter inverse distribution whose hazard rate has an upside-down bathtub shape. The distribution is flexible for the statistical analysis of positive data, and its density function can be expressed as a two-component mixture of generalized inverse gamma distributions, which yields explicit expressions for the reliability measures, the moments and related quantities. Parameter estimation was addressed by the method of moments, least squares, maximum likelihood and approximate confidence intervals, and the behavior of the maximum likelihood estimates was examined by simulation. A real-life data set demonstrated the enhanced flexibility and better fit of the proposed model compared with some other well-known existing models.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Lindley, D.V. (1958) Fiducial Distributions and Bayes' Theorem. Journal of the Royal Statistical Society, Series B, 20, 102-107. https://doi.org/10.1111/j.2517-6161.1958.tb00278.x
[2] Ghitany, M.E., Atieh, B. and Nadarajah, S. (2008) Lindley Distribution and Its Application. Mathematics and Computers in Simulation, 78, 493-506. https://doi.org/10.1016/j.matcom.2007.06.007
[3] Shanker, R., Sharma, S. and Shanker, R. (2013) A Two-Parameter Lindley Distribution for Modeling Waiting and Survival Times Data. Applied Mathematics, 4, 363-368. https://doi.org/10.4236/am.2013.42056
[4] Ghitany, M.E., Alqallaf, F., Al-Mutairi, D.K. and Husain, H.A. (2011) A Two-Parameter Weighted Lindley Distribution and Its Applications to Survival Data. Mathematics and Computers in Simulation, 81, 1190-1201. https://doi.org/10.1016/j.matcom.2010.11.005
[5] Dubey, S.D. (1970) Compound Gamma, Beta and F Distributions. Metrika, 16, 27-31. https://doi.org/10.1007/BF02613934
[6] Folks, J. and Chhikara, R. (1978) The Inverse Gaussian Distribution and Its Statistical Application: A Review. Journal of the Royal Statistical Society, Series B (Methodological), 40, 263-289. https://doi.org/10.1111/j.2517-6161.1978.tb01039.x
[7] Calabria, R. and Pulcini, G. (1990) On the Maximum Likelihood and Least Squares Estimation in the Inverse Weibull Distribution. Statistica Applicata, 2, 53-66.
[8] Sharma, V.K., Singh, S.K., Singh, U. and Agiwal, V. (2015) The Inverse Lindley Distribution: A Stress-Strength Reliability Model with Application to Head and Neck Cancer Data. Journal of Industrial and Production Engineering, 32, 162-173. https://doi.org/10.1080/21681015.2015.1025901
[9] Alkarni, S.H. (2015) Extended Inverse Lindley Distribution: Properties and Application. SpringerPlus, 4, 1-13. https://doi.org/10.1186/s40064-015-1489-2
[10] Swain, J., Venkatraman, S. and Wilson, J. (1988) Least Squares Estimation of Distribution Function in Johnson's Translation System. Journal of Statistical Computation and Simulation, 29, 271-297. https://doi.org/10.1080/00949658808811068
[11] Lehmann, E.L. and Casella, G. (1998) Theory of Point Estimation. 2nd Edition, Springer, New York.
[12] Alven, W.H. (1964) Reliability Engineering. Prentice-Hall, Englewood Cliffs, NJ.
[13] Chhikara, R.S. and Folks, J.L. (1977) The Inverse Gaussian Distribution as a Lifetime Model. Technometrics, 19, 461-468. https://doi.org/10.1080/00401706.1977.10489586
[14] Almetwaly, E.M. and Almongy, H.M. (2018) Estimation of the Generalized Power Weibull Distribution Parameters Using Progressive Censoring Schemes. International Journal of Probability and Statistics, 7, 51-61.