A New Stochastic Restricted Liu Estimator for the Logistic Regression Model

Weibing Zuo*, Yingli Li

College of Mathematics and Statistics, North China University of Water Resources and Electric Power, Zhengzhou, China.

**DOI: **10.4236/ojs.2018.81003


In order to overcome the well-known multicollinearity problem, we propose a new Stochastic Restricted Liu Estimator for the logistic regression model. In the mean square error matrix sense, the new estimator is compared with the Maximum Likelihood Estimator, the Liu Estimator, the Stochastic Restricted Maximum Likelihood Estimator, and related estimators. Finally, a numerical example and a Monte Carlo simulation are given to illustrate some of the theoretical results.

Keywords

Multicollinearity, Liu Estimator, Stochastic Restricted Liu Estimator, Scalar Mean Squared Error Matrix

Share and Cite:

Zuo, W. and Li, Y. (2018) A New Stochastic Restricted Liu Estimator for the Logistic Regression Model. *Open Journal of Statistics*, **8**, 25-37. doi: 10.4236/ojs.2018.81003.

1. Introduction

Consider the multiple logistic regression model

${y}_{i}={\pi}_{i}+{\epsilon}_{i},i=1,\cdots ,n,$ (1.1)

where ${y}_{i}$ follows a Bernoulli distribution with parameter ${\pi}_{i}$ given by

${\pi}_{i}=\frac{\mathrm{exp}\left({{x}^{\prime}}_{i}\beta \right)}{1+\mathrm{exp}\left({{x}^{\prime}}_{i}\beta \right)},$ (1.2)

where $\beta $ is a $\left(p+1\right)\times 1$ vector of coefficients, ${x}_{i}$ is the i^{th} row of the $n\times \left(p+1\right)$ data matrix X with p explanatory variables, and the ${\epsilon}_{i}$ are independent errors with mean zero and variance ${\pi}_{i}\left(1-{\pi}_{i}\right)$. The maximum likelihood method is the most commonly used method of estimating the parameters, and the Maximum Likelihood Estimator (MLE) is defined as

${\stackrel{^}{\beta}}_{\text{MLE}}={C}^{-1}{X}^{\prime}\stackrel{^}{W}Z,$ (1.3)

where $C={X}^{\prime}\stackrel{^}{W}X$, $\stackrel{^}{W}=\mathrm{diag}\left[{\stackrel{^}{\pi}}_{i}\left(1-{\stackrel{^}{\pi}}_{i}\right)\right]$, and Z is the column vector whose i^{th} element equals $\mathrm{log}\left(\frac{{\stackrel{^}{\pi}}_{i}}{1-{\stackrel{^}{\pi}}_{i}}\right)+\frac{{y}_{i}-{\stackrel{^}{\pi}}_{i}}{{\stackrel{^}{\pi}}_{i}\left(1-{\stackrel{^}{\pi}}_{i}\right)}$. The MLE is an asymptotically unbiased estimator of $\beta $, with covariance matrix

$Cov\left({\stackrel{^}{\beta}}_{MLE}\right)={\left({X}^{\prime}\stackrel{^}{W}X\right)}^{-1}={C}^{-1},$ (1.4)

Multicollinearity inflates the variance of the MLE in the logistic regression model, so under severe multicollinearity the MLE is no longer a reliable estimator of the parameters.
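As a concrete illustration, the MLE in Equation (1.3) is typically computed by iteratively reweighted least squares (IRLS). A minimal sketch in Python/NumPy (the function name and synthetic data are ours, not from the paper):

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """Compute the logistic MLE of Eq. (1.3) by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        pi = 1.0 / (1.0 + np.exp(-eta))          # pi_i = exp(x_i'b) / (1 + exp(x_i'b))
        W = np.diag(pi * (1.0 - pi))             # W-hat = diag[pi_i (1 - pi_i)]
        z = eta + (y - pi) / (pi * (1.0 - pi))   # working response Z
        C = X.T @ W @ X                          # C = X' W X
        beta = np.linalg.solve(C, X.T @ W @ z)   # beta_MLE = C^{-1} X' W Z
    return beta, C
```

At convergence the score vector ${X}^{\prime}\left(y-\pi \right)$ vanishes, which gives a simple numerical check of the fit.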

To overcome the multicollinearity problem in logistic regression, many scholars have conducted research. Schaefer et al. (1984) [1] proposed the Ridge Logistic Regression estimator (RLR). Aguilera et al. (2006) [2] proposed the Principal Component Logistic Estimator (PCLE). Nja et al. (2013) [3] proposed a Modified Logistic Ridge Regression Estimator (MLRE). Inan and Erdogan (2013) [4] proposed a Liu-type estimator (LLE).

Other scholars improve estimation by imposing restrictions, exact or stochastic, on the unknown parameters of the model. When an additional exact linear restriction on the parameter vector is assumed to hold, Duffy and Santner (1989) [5] proposed the Restricted Maximum Likelihood Estimator (RMLE), Siray et al. (2014) [6] proposed the Restricted Liu Estimator (RLE), and Asar et al. (2016) [7] proposed a Restricted Ridge Estimator. When an additional stochastic linear restriction on the parameter vector is assumed to hold, Nagarajah and Wijekoon (2015) [8] proposed the Stochastic Restricted Maximum Likelihood Estimator (SRMLE), Varathan and Wijekoon (2016) [9] proposed the Stochastic Restricted Liu Maximum Likelihood Estimator (SRLMLE), and Varathan and Wijekoon (2016) [10] proposed the Stochastic Restricted Ridge Maximum Likelihood Estimator (SRRMLE).

In this article, we propose a new estimator, called the Stochastic Restricted Liu Estimator (SRLE), for the case where linear stochastic restrictions are available in addition to the logistic regression model. The article is structured as follows. The model and the new estimator are presented in Section 2. Section 3 compares the mean square error matrices (MSEM) of the SRLE, MLE and related estimators. Section 4 presents a numerical example, and a Monte Carlo simulation verifying the theoretical results is given in Section 5.

2. The Proposed Estimators

For the unrestricted model given in Equation (1.1), the LLE, following Liu (1993), Urgan and Tez (2008) and Mansson et al. (2012), is defined as

${\stackrel{^}{\beta}}_{LLE}={Z}_{d}{\stackrel{^}{\beta}}_{MLE},$ (2.1)

where $0<d<1$ is a shrinkage parameter and ${Z}_{d}={\left(C+I\right)}^{-1}\left(C+dI\right)$ . The bias vector and covariance matrix of the LLE are

$Bias\left({\stackrel{^}{\beta}}_{LLE}\right)=\left({Z}_{d}-I\right)\beta ={b}_{1},$ (2.2)

$Cov\left({\stackrel{^}{\beta}}_{LLE}\right)={Z}_{d}{C}^{-1}{Z}_{d},$ (2.3)
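The shrinkage matrix ${Z}_{d}$ is straightforward to form from C. A sketch (helper names are ours, assuming C and the MLE are already available):

```python
import numpy as np

def liu_matrix(C, d):
    """Z_d = (C + I)^{-1} (C + d I), 0 < d < 1 (Eq. 2.1)."""
    p = C.shape[0]
    return np.linalg.solve(C + np.eye(p), C + d * np.eye(p))

def lle(C, beta_mle, d):
    """Liu-type estimator: Z_d applied to the MLE."""
    return liu_matrix(C, d) @ beta_mle
```

For d = 1, ${Z}_{d}=I$ and the LLE reduces to the MLE.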

In addition to the sample model (1.1), suppose we are given prior information about $\beta $ in the form of a set of q independent linear stochastic restrictions:

$h=H\beta +v;\text{\hspace{0.17em}}E\left(v\right)=0,\text{\hspace{0.17em}}Cov\left(v\right)=\Psi ,$ (2.4)

where H is a known $q\times \left(p+1\right)$ matrix of full row rank $q\le p+1$, h is a $q\times 1$ stochastic known vector, and v is a $q\times 1$ random vector of disturbances with mean 0 and known positive definite $q\times q$ dispersion matrix $\Psi $ . Further, v is assumed to be stochastically independent of ${\epsilon}^{*}={\left({\epsilon}_{1},\cdots ,{\epsilon}_{n}\right)}^{\prime}$ , i.e. $E\left({\epsilon}^{*}{v}^{\prime}\right)=0$ .

For the restricted model specified by Equations (1.1) and (2.4), the SRMLE proposed by Nagarajah and Wijekoon (2015) [8] and the SRLMLE proposed by Varathan and Wijekoon (2016) [9] are defined as

${\stackrel{^}{\beta}}_{SRMLE}={\stackrel{^}{\beta}}_{MLE}+{C}^{-1}{H}^{\prime}{\left(\Psi +H{C}^{-1}{H}^{\prime}\right)}^{-1}\left(h-H{\stackrel{^}{\beta}}_{MLE}\right),$ (2.5)

${\stackrel{^}{\beta}}_{SRLMLE}={Z}_{d}{\stackrel{^}{\beta}}_{SRMLE},$ (2.6)

respectively. The bias vectors and covariance matrices of the SRMLE and SRLMLE are

$Bias\left({\stackrel{^}{\beta}}_{SRMLE}\right)=0,$ (2.7)

$Bias\left({\stackrel{^}{\beta}}_{SRLMLE}\right)=\left({Z}_{d}-I\right)\beta ={b}_{1},$ (2.8)

$Cov\left({\stackrel{^}{\beta}}_{SRMLE}\right)={C}^{-1}-{C}^{-1}{H}^{\prime}{\left(\Psi +H{C}^{-1}{H}^{\prime}\right)}^{-1}H{C}^{-1}=A,$ (2.9)

and

$Cov\left({\stackrel{^}{\beta}}_{SRLMLE}\right)={Z}_{d}A{Z}_{d},$ (2.10)

respectively.
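Equations (2.5) and (2.6) translate directly into code. A sketch under the same notation (helper names are ours):

```python
import numpy as np

def srmle(C, beta_mle, H, h, Psi):
    """SRMLE (Eq. 2.5): MLE corrected toward the stochastic restriction h = H b + v."""
    Cinv = np.linalg.inv(C)
    S = Psi + H @ Cinv @ H.T
    return beta_mle + Cinv @ H.T @ np.linalg.solve(S, h - H @ beta_mle)

def srlmle(C, beta_mle, H, h, Psi, d):
    """SRLMLE (Eq. 2.6): Z_d applied to the SRMLE."""
    p = C.shape[0]
    Z_d = np.linalg.solve(C + np.eye(p), C + d * np.eye(p))
    return Z_d @ srmle(C, beta_mle, H, h, Psi)
```

When the MLE already satisfies the restriction exactly ( $h=H{\stackrel{^}{\beta}}_{MLE}$ ), the correction term vanishes and the SRMLE coincides with the MLE.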

By analogy with the ordinary mixed estimator (OME) [12] in the linear model, we proposed the Mixed Maximum Likelihood Estimator (MME) [11] for the logistic regression model, defined as

${\stackrel{^}{\beta}}_{MME}={\left(C+{H}^{\prime}{\Psi}^{-1}H\right)}^{-1}\left({X}^{\prime}\stackrel{^}{W}Z+{H}^{\prime}{\Psi}^{-1}h\right),$ (2.11)

the bias vector and covariance matrix of the MME are $Bias\left({\stackrel{^}{\beta}}_{MME}\right)=0$ and

$Cov\left({\stackrel{^}{\beta}}_{MME}\right)={\left(C+{H}^{\prime}{\Psi}^{-1}H\right)}^{-1}={C}^{-1}-{C}^{-1}{H}^{\prime}{\left({\Psi}^{-1}+H{C}^{-1}{H}^{\prime}\right)}^{-1}H{C}^{-1}=B.$

In this paper, we propose a new estimator, named the Stochastic Restricted Liu Estimator (SRLE), defined as

${\stackrel{^}{\beta}}_{SRLE}={Z}_{d}{\stackrel{^}{\beta}}_{MME},$ (2.12)

the bias vector and covariance matrix of the SRLE are

$Bias\left({\stackrel{^}{\beta}}_{SRLE}\right)=E\left({\stackrel{^}{\beta}}_{SRLE}\right)-\beta =\left({Z}_{d}-I\right)\beta ={b}_{1},$ (2.13)

and

$Cov\left({\stackrel{^}{\beta}}_{SRLE}\right)=D\left({\stackrel{^}{\beta}}_{SRLE}\right)={Z}_{d}B{Z}_{d},$ (2.14)

respectively.
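A sketch of the MME and the proposed SRLE (the first argument after C is the cross-product vector appearing in Eq. (2.11), precomputed from the fit; helper names are ours):

```python
import numpy as np

def mme(C, XWz, H, h, Psi):
    """Mixed maximum likelihood estimator (Eq. 2.11)."""
    Pinv = np.linalg.inv(Psi)
    return np.linalg.solve(C + H.T @ Pinv @ H, XWz + H.T @ Pinv @ h)

def srle(C, XWz, H, h, Psi, d):
    """Proposed SRLE (Eq. 2.12): Z_d applied to the MME."""
    p = C.shape[0]
    Z_d = np.linalg.solve(C + np.eye(p), C + d * np.eye(p))
    return Z_d @ mme(C, XWz, H, h, Psi)
```

For d = 1 the SRLE reduces to the MME.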

We now state a theorem and a lemma that will be used in what follows.

Theorem 2.1. [13] (Rao and Toutenburg, 1995) Let $A$ and $B$ be $n\times n$ matrices with $A>0$ and $B\ge 0$ . Then $A+B>0$ .

Lemma 2.1. [14] (Rao et al., 2008) Let the two $n\times n$ matrices satisfy $M>0$ and $N\ge 0$ . Then $M>N$ if and only if ${\lambda}_{\mathrm{max}}\left(N{M}^{-1}\right)<1$ , where ${\lambda}_{\mathrm{max}}$ denotes the largest eigenvalue.
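Lemma 2.1 gives a purely numerical test for the matrix orderings used below: form $N{M}^{-1}$ and inspect its largest eigenvalue. A sketch:

```python
import numpy as np

def dominates(M, N):
    """True when lambda_max(N M^{-1}) < 1, i.e. M - N is positive definite (Lemma 2.1)."""
    lam_max = np.max(np.real(np.linalg.eigvals(N @ np.linalg.inv(M))))
    return lam_max < 1.0
```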

3. Mean Square Error Matrix (MSEM) Comparisons of the Estimators

In this section, we compare the SRLE with the MLE, LLE, SRMLE and SRLMLE under the MSEM criterion.

First, the MSEM of $\stackrel{^}{\beta}$ which is an estimator of $\beta $ is

$MSEM\left(\stackrel{^}{\beta}\right)=Cov\left(\stackrel{^}{\beta}\right)+\left[Bias\left(\stackrel{^}{\beta}\right)\right]{\left[Bias\left(\stackrel{^}{\beta}\right)\right]}^{\prime},$ (3.1)

where $Bias\left(\stackrel{^}{\beta}\right)$ is the bias vector and $Cov\left(\stackrel{^}{\beta}\right)$ is the dispersion matrix. For two given estimators ${\stackrel{^}{\beta}}_{1}$ and ${\stackrel{^}{\beta}}_{2}$ , the estimator ${\stackrel{^}{\beta}}_{2}$ is considered to be better than ${\stackrel{^}{\beta}}_{1}$ in the MSEM criterion, if and only if

$\Delta \left({\stackrel{^}{\beta}}_{1},{\stackrel{^}{\beta}}_{2}\right)=MSEM\left({\stackrel{^}{\beta}}_{1}\right)-MSEM\left({\stackrel{^}{\beta}}_{2}\right)\ge 0,$ (3.2)

The scalar mean squared error (MSE) is defined as

$MSE\left(\stackrel{^}{\beta}\right)=tr\left(MSEM\left(\stackrel{^}{\beta}\right)\right),$ (3.3)

Since superiority in the MSEM sense implies superiority in the scalar MSE sense, we consider only the MSEM comparisons among the estimators.

3.1. MSEM Comparisons of the MLE and SRLE

In this section, we make the MSEM comparison between the MLE and SRLE.

First, the MSEM of the MLE and the SRLE are

$MSEM\left({\stackrel{^}{\beta}}_{MLE}\right)={C}^{-1},$ (3.4)

and

$MSEM\left({\stackrel{^}{\beta}}_{SRLE}\right)={Z}_{d}B{Z}_{d}+{b}_{1}{{b}^{\prime}}_{1},$ (3.5)

respectively.

We now compare the two estimators under the MSEM criterion:

$\begin{array}{c}{\Delta}_{1}=MSEM\left({\stackrel{^}{\beta}}_{MLE}\right)-MSEM\left({\stackrel{^}{\beta}}_{SRLE}\right)\\ ={C}^{-1}-\left({Z}_{d}B{Z}_{d}+{b}_{1}{{b}^{\prime}}_{1}\right)\\ ={M}_{1}-{N}_{1},\end{array}$ (3.6)

where ${M}_{1}={C}^{-1}$ and ${N}_{1}={Z}_{d}B{Z}_{d}+{b}_{1}{{b}^{\prime}}_{1}$ . Clearly, ${b}_{1}{{b}^{\prime}}_{1}$ is non-negative definite, while ${C}^{-1}$ and ${Z}_{d}B{Z}_{d}$ are positive definite. By Theorem 2.1, ${N}_{1}$ is a positive definite matrix, and by Lemma 2.1, ${M}_{1}-{N}_{1}$ is positive definite if and only if ${\lambda}_{\mathrm{max}}\left({N}_{1}{M}_{1}^{-1}\right)<1$ , where ${\lambda}_{\mathrm{max}}\left({N}_{1}{M}_{1}^{-1}\right)$ is the largest eigenvalue of ${N}_{1}{M}_{1}^{-1}$ . This proves the following theorem.

Theorem 3.1. For the restricted logistic regression model specified by Equations (1.1) and (2.4), the SRLE is superior to the MLE in the MSEM sense if and only if ${\lambda}_{\mathrm{max}}\left({N}_{1}{M}_{1}^{-1}\right)<1$ .
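The condition of Theorem 3.1 can be checked numerically for a fitted model. A sketch under the section's notation (in practice C, H, Ψ, β and d come from the data; the toy values in any example are ours):

```python
import numpy as np

def theorem31_condition(C, H, Psi, beta, d):
    """Return lambda_max(N1 M1^{-1}) with M1 = C^{-1} and N1 = Z_d B Z_d + b1 b1'."""
    p = C.shape[0]
    Z_d = np.linalg.solve(C + np.eye(p), C + d * np.eye(p))
    B = np.linalg.inv(C + H.T @ np.linalg.solve(Psi, H))  # (C + H' Psi^{-1} H)^{-1}
    b1 = (Z_d - np.eye(p)) @ beta                         # bias vector of the SRLE
    N1 = Z_d @ B @ Z_d + np.outer(b1, b1)
    return np.max(np.real(np.linalg.eigvals(N1 @ C)))     # note M1^{-1} = C
```

The SRLE beats the MLE in the MSEM sense exactly when the returned value is below 1.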

3.2. MSEM Comparisons of the LLE and SRLE

First, the MSEM of the LLE is

$MSEM\left({\stackrel{^}{\beta}}_{LLE}\right)={Z}_{d}{C}^{-1}{Z}_{d}+{b}_{1}{{b}^{\prime}}_{1}.$ (3.7)

We now compare the two estimators under the MSEM criterion:

$\begin{array}{c}{\Delta}_{2}=MSEM\left({\stackrel{^}{\beta}}_{LLE}\right)-MSEM\left({\stackrel{^}{\beta}}_{SRLE}\right)\\ ={Z}_{d}{C}^{-1}{Z}_{d}-{Z}_{d}B{Z}_{d}+{b}_{1}{{b}^{\prime}}_{1}-{b}_{1}{{b}^{\prime}}_{1}\\ ={Z}_{d}D{Z}_{d},\end{array}$ (3.8)

where $D={C}^{-1}{H}^{\prime}{\left({\Psi}^{-1}+H{C}^{-1}{H}^{\prime}\right)}^{-1}H{C}^{-1}$ . Clearly, ${Z}_{d}D{Z}_{d}$ is non-negative definite. This proves the following theorem.

Theorem 3.2. For the restricted logistic regression model specified by Equations (1.1) and (2.4), the SRLE is always superior to the LLE in the MSEM sense.

3.3. MSEM Comparisons of the SRMLE and SRLE

First, the MSEM of the SRMLE is

$MSEM\left({\stackrel{^}{\beta}}_{SRMLE}\right)=A.$ (3.9)

We now compare the two estimators under the MSEM criterion:

$\begin{array}{c}{\Delta}_{3}=MSEM\left({\stackrel{^}{\beta}}_{SRMLE}\right)-MSEM\left({\stackrel{^}{\beta}}_{SRLE}\right)\\ ={C}^{-1}-{C}^{-1}{H}^{\prime}{\left(\Psi +H{C}^{-1}{H}^{\prime}\right)}^{-1}H{C}^{-1}-{Z}_{d}B{Z}_{d}-{b}_{1}{{b}^{\prime}}_{1}\\ ={C}^{-1}-\left[F+{Z}_{d}B{Z}_{d}+{b}_{1}{{b}^{\prime}}_{1}\right]\\ ={M}_{1}-{N}_{3},\end{array}$ (3.10)

where $F={C}^{-1}{H}^{\prime}{\left(\Psi +H{C}^{-1}{H}^{\prime}\right)}^{-1}H{C}^{-1}$ and ${N}_{3}=F+{Z}_{d}B{Z}_{d}+{b}_{1}{{b}^{\prime}}_{1}$ . Clearly, ${b}_{1}{{b}^{\prime}}_{1}$ and $F$ are non-negative definite, while ${Z}_{d}B{Z}_{d}$ is positive definite. By Theorem 2.1, ${N}_{3}$ is a positive definite matrix, and by Lemma 2.1, ${M}_{1}-{N}_{3}$ is positive definite if and only if ${\lambda}_{\mathrm{max}}\left({N}_{3}{M}_{1}^{-1}\right)<1$ , where ${\lambda}_{\mathrm{max}}\left({N}_{3}{M}_{1}^{-1}\right)$ is the largest eigenvalue of ${N}_{3}{M}_{1}^{-1}$ . This proves the following theorem.

Theorem 3.3. For the restricted logistic regression model specified by Equations (1.1) and (2.4), the SRLE is superior to the SRMLE in the MSEM sense if and only if ${\lambda}_{\mathrm{max}}\left({N}_{3}{M}_{1}^{-1}\right)<1$ .

3.4. MSEM Comparisons of the SRLMLE and SRLE

First, the MSEM of the SRLMLE is

$MSEM\left({\stackrel{^}{\beta}}_{SRLMLE}\right)={Z}_{d}A{Z}_{d}+{b}_{1}{{b}^{\prime}}_{1}.$ (3.11)

Now, we consider the following difference

$\begin{array}{c}{\Delta}_{4}=MSEM\left({\stackrel{^}{\beta}}_{SRLMLE}\right)-MSEM\left({\stackrel{^}{\beta}}_{SRLE}\right)\\ ={Z}_{d}A{Z}_{d}-{Z}_{d}B{Z}_{d}+{b}_{1}{{b}^{\prime}}_{1}-{b}_{1}{{b}^{\prime}}_{1}\\ ={Z}_{d}D{Z}_{d}-{Z}_{d}F{Z}_{d}\\ ={M}_{4}-{N}_{4},\end{array}$ (3.12)

where ${M}_{4}={Z}_{d}D{Z}_{d}$ and ${N}_{4}={Z}_{d}F{Z}_{d}$ . Clearly, $D$ , ${M}_{4}$ and ${N}_{4}$ are positive definite matrices. By Lemma 2.1, ${M}_{4}-{N}_{4}$ is positive definite if and only if ${\lambda}_{\mathrm{max}}\left({N}_{4}{M}_{4}^{-1}\right)<1$ , where ${\lambda}_{\mathrm{max}}\left({N}_{4}{M}_{4}^{-1}\right)$ is the largest eigenvalue of ${N}_{4}{M}_{4}^{-1}$ . This proves the following theorem.

Theorem 3.4. For the restricted logistic regression model specified by Equations (1.1) and (2.4), the SRLE is superior to the SRLMLE in the MSEM sense if and only if ${\lambda}_{\mathrm{max}}\left({N}_{4}{M}_{4}^{-1}\right)<1$ .

4. Numerical Example

In this section, we use the Iris dataset from the UCI Machine Learning Repository to illustrate the theoretical results.

We fit a binary logistic regression model in which the dependent variable equals 0 if the plant is Iris-setosa and 1 if it is Iris-versicolor. The explanatory variables are ${x}_{1}$ : sepal length; ${x}_{2}$ : petal length; and ${x}_{3}$ : petal width.

The sample consists of the first 80 observations. The correlation matrix is given in Table A1 (Appendix A): the correlations among the regressors are all greater than 0.80, some are close to 0.98, and the condition number is 55.4984, indicating a severe multicollinearity problem in these data.

From Table A2 (Appendix A) we can conclude that:

1) As d increases, the MSE values of the estimators that depend on d (LLE, SRRMLE, SRLMLE, SRLE) decrease. 2) The MSE values of the MLE, SRMLE and MME do not depend on d. 3) The new estimator SRLE is always superior to the other estimators.

5. Monte Carlo Simulation

To illustrate the theoretical results above, a Monte Carlo simulation is conducted. Following McDonald and Galarneau (1975) [15] and Kibria (2003) [16], the explanatory variables are generated from

${x}_{ij}={\left(1-{\rho}^{2}\right)}^{1/2}{z}_{ij}+\rho {z}_{i,p},\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=1,2,\cdots ,n,\text{\hspace{0.17em}}\text{\hspace{0.17em}}j=1,2,\cdots ,p,$ (5.1)

where the ${z}_{ij}$ are pseudo-random numbers from the standard normal distribution and ${\rho}^{2}$ represents the correlation between any two explanatory variables.
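A sketch of this generator (following the McDonald-Galarneau scheme, in which the shared term is an extra standard normal column; variable names are ours):

```python
import numpy as np

def gen_regressors(n, p, rho, seed=0):
    """Generate x_ij = sqrt(1 - rho^2) z_ij + rho z_shared, as in Eq. (5.1)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, p + 1))
    # columns 0..p-1 are independent; column p supplies the common component
    return np.sqrt(1.0 - rho**2) * z[:, :p] + rho * z[:, [p]]
```

By construction each column has unit variance and any two distinct columns have correlation $\rho^2$.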

In this section, we let $\rho $ take the values 0.70, 0.80, 0.99 and n take the values 20, 100, 200, with two and four explanatory variables. The dependent variable ${y}_{i}$ in (1.1) is drawn from the Bernoulli( ${\pi}_{i}$ ) distribution with ${\pi}_{i}=\frac{\mathrm{exp}\left({{x}^{\prime}}_{i}\beta \right)}{1+\mathrm{exp}\left({{x}^{\prime}}_{i}\beta \right)}$ . The parameter values ${\beta}_{1},\cdots ,{\beta}_{p}$ are chosen so that ${\sum}_{j=1}^{p}{\beta}_{j}^{2}=1$ and ${\beta}_{1}=\cdots ={\beta}_{p}$ . For the Liu parameter d, selected values in $0\le d\le 1$ are used. Moreover, for the restriction, we choose

$H=\left(\begin{array}{cccc}1& -1& 0& 0\\ 0& 1& -1& 0\\ 0& 0& 1& -1\end{array}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}h=\left(\begin{array}{c}1\\ -2\\ 1\end{array}\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\Psi =\left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right),$ (5.2)

The simulation is repeated 2000 times, generating new pseudo-random numbers each time, and the simulated MSE values of the estimators are obtained as

$MS\stackrel{^}{E}\left(\stackrel{^}{\beta}\right)=\frac{1}{2000}{\sum}_{r=1}^{2000}{\left({\stackrel{^}{\beta}}_{r}-\beta \right)}^{\prime}\left({\stackrel{^}{\beta}}_{r}-\beta \right),$ (5.3)

where ${\stackrel{^}{\beta}}_{r}$ is the estimate obtained in the r^{th} replication.
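The simulated MSE in Equation (5.3) is just the average squared estimation error over the replications. A sketch (function name is ours):

```python
import numpy as np

def simulated_mse(estimates, beta):
    """Empirical MSE of Eq. (5.3): mean of (b_r - beta)'(b_r - beta) over replications."""
    diffs = np.asarray(estimates) - beta
    return float(np.mean(np.sum(diffs**2, axis=1)))
```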

The results of the simulation are reported in Tables A3-A9 (Appendix A) and also displayed in Figures A1-A3 (Appendix B).

From Tables A3-A9, Figures A1-A3, we can conclude that:

1) The MSE values of all the estimators increase as $\rho $ increases; 2) the MSE values of all the estimators decrease as n increases; 3) the SRLE is always superior to the MLE, LLE, SRMLE and SRLMLE for all d, n and $\rho $ .

6. Concluding Remarks

In this paper, we proposed the Stochastic Restricted Liu Estimator (SRLE) for the logistic regression model when a linear stochastic restriction is available. In the MSEM sense, we obtained necessary and sufficient (or sufficient) conditions for the SRLE to be superior to the MLE, LLE, SRMLE and SRLMLE, and verified its superiority by a Monte Carlo simulation. How to reduce the new estimator's bias without increasing its mean square error is the focus of our next step.

Acknowledgements

This work was supported by the Natural Science Foundation of Henan Province of China (No. 152300410112).

Appendix A

Table A1. The correlation matrix of the dataset.

Table A2. The estimated MSEM values for different d.

Table A3. The estimated MSEM values for different d when $n=20$ and $\rho =0.70$ .

Table A4. The estimated MSEM values for different d when $n=20$ and $\rho =0.80$ .

Table A5. The estimated MSEM values for different d when $n=20$ and $\rho =0.99$ .

Table A6. The estimated MSEM values for different d when $n=100$ and $\rho =0.7$ .

Table A7. The estimated MSEM values for different d when $n=100$ and $\rho =0.8$ .

Table A8. The estimated MSEM values for different d when $n=200$ and $\rho =0.8$ .

Table A9. The estimated MSEM values for different d when $n=200$ and $\rho =0.99$ .

Appendix B

Figure A1. The estimated MSE values for MLE, LLE, SRMLE, SRLMLE and SRLE for $n=20$ .

Figure A2. The estimated MSE values for MLE, LLE, SRMLE, SRLMLE and SRLE for $n=100$ .

Figure A3. The estimated MSE values for MLE, LLE, SRMLE, SRLMLE and SRLE for $n=200$ .

Conflicts of Interest

The authors declare no conflicts of interest.

[1] | Schaefer, R.L., Roi, L.D. and Wolfe, R.A. (1984) A Ridge Logistic Estimator. Communications in Statistics—Theory and Methods, 13, 99-113. |

[2] |
Aguilera, A.M., Escabias, M. and Valderrama, M.J. (2006) Using Principal Components for Estimating Logistic Regression with High-Dimensional Multicollinear Data. Computational Statistics & Data Analysis, 50, 1905-1924.
https://doi.org/10.1016/j.csda.2005.03.011 |

[3] | Ogoke, U.P., Nduka, E.C. and Nja, M.E. (2013) The Logistic Regression Model with a Modified Weight Function in Survival Analysis. Mathematical Theory & Modeling, 3, 12-17. |

[4] |
Inan, D. and Erdogan, B.E. (2013) Liu-Type Logistic Estimator. Communication in Statistics—Simulation and Computation, 42, 1578-1586.
https://doi.org/10.1080/03610918.2012.667480 |

[5] |
Duffy, D.E. and Santner, T.J. (1989) On the Small Sample Prosperities of Norm-Restricted Maximum Likelihood Estimators for Logistic Regression Models. Communications in Statistics—Theory and Methods, 18, 959-980.
https://doi.org/10.1080/03610928908829944 |

[6] |
Siray, G.ü., Toker, S. and Kaciranlar, S. (2014) On the Restricted Liu Estimator in the Logistic Regression Model. Communication in Statistics—Simulation and Computation, 44, 217-232. https://doi.org/10.1080/03610918.2013.771742 |

[7] |
Asar, Y., Arashi, M. and Wu, J. (2017) Restricted Ridge Estimator in the Logistic Regression Model. Communication in Statistics—Simulation and Computation, 46.
https://doi.org/10.1080/03610918.2016.1206932 |

[8] |
Nagarajah, V. and Wijekoon, P. (2015) Stochastic Restricted Maximum Likelihood Estimator in Logistic Regression Model. Open Journal of Statistics, 5, 837-851.
https://doi.org/10.4236/ojs.2015.57082 |

[9] | Varathan, N. and Wijekoon, P. (2016) Logistic Liu Estimator under Stochastic Linear Restrictions. Statistical Papers, 1-18. |

[10] |
Varathan, N. and Wijekoon, P. (2016) Ridge Estimator in Logistic Regression under Stochastic Linear Restrictions. British Journal of Mathematics & Computer Science, 15, 1-14. https://doi.org/10.9734/BJMCS/2016/24585 |

[11] | Zuo, W.-B. and Li, Y.-L. (2017) Mixed Maximum Likelihood Estimator in Logistic Regression Model. Journal of Henan Institute of Education, 26, 1-6. |

[12] |
Theil, H. and Goldberger, A.S. (1961) On Pure and Mixed Estimation in Economics. International Economic Review, 2, 65-77. https://doi.org/10.2307/2525589 |

[13] |
Rao, C.R. and Toutenburg, H. (1995) Linear Models: Least Squares and Alternatives. Second Edition, Springer-Verlag, New York.
https://doi.org/10.1007/978-1-4899-0024-1 |

[14] | Rao, C.R., Toutenburg, H., Shalabh and Heumann, C. (2008) Linear Model and Generalizations. Springer, Berlin. |

[15] |
McDonald, G.C. and Galarneau, D.I. (1975) A Monte Carlo Evaluation of Some Ridge-Type Estimators. Journal of the American Statistical Association, 70, 407-416.
https://doi.org/10.1080/01621459.1975.10479882 |

[16] |
Golam Kibria, B.M. (2003) Performance of Some New Ridge Regression Estimators. Communication in Statistics—Simulation and Computation, 32, 419-435.
https://doi.org/10.1081/SAC-120017499 |

