Bayesian Regularized Quantile Regression Analysis Based on Asymmetric Laplace Distribution

Abstract

In recent years, variable selection based on penalized likelihood methods has attracted considerable attention. Based on a Gibbs sampling algorithm for the asymmetric Laplace distribution, this paper considers quantile regression with Lasso and adaptive Lasso penalties from a Bayesian point of view. Several regularized quantile regression methods are compared systematically, under both the Bayesian and non-Bayesian frameworks, for error terms with different distributions and with heteroscedasticity. Statistical simulations show that Bayesian regularized quantile regression performs best at all quantiles when the error term follows an asymmetric Laplace distribution, and that under this error distribution the Bayesian approach outperforms the non-Bayesian approach in both parameter estimation and prediction. Real data analyses confirm these conclusions.


1. Introduction

Since the pioneering work of Koenker and Bassett in 1978, quantile regression (QR) has been studied in depth and widely applied to describe the detailed relationship between a response variable and its predictors [1]. Compared with traditional mean regression, quantile regression is more robust to outliers. In 1999, Koenker and Machado connected the asymmetric Laplace distribution (ALD) to the QR model and defined a goodness-of-fit criterion for quantile regression, the natural analogue of the R² statistic of least squares regression [2]. In 2001, Yu and Moyeed first proposed a Bayesian quantile regression model in which the error follows an asymmetric Laplace (AL) distribution, and proved that maximizing a likelihood built from independently distributed asymmetric Laplace densities is equivalent to minimizing the check loss function [3]. In 2010, Hewson and Yu proposed a quantile regression model for binary data within the Bayesian framework [4]. In 2011, Reich et al. introduced a Bayesian spatial quantile regression model [5]. In 2013, Sriram et al. showed that the misspecified likelihood in the ALD approach still leads to consistent results [6]. In 2009, Kozumi and Kobayashi built a more efficient Gibbs sampler for fitting the quantile regression model, based on a location-scale mixture representation of the asymmetric Laplace distribution, to draw samples from the posterior distribution [7]. In 2012, Khare and Hobert proved that this sampling algorithm converges at a geometric rate [8]. In 2015, Sriram proposed a correction to the MCMC iterations to construct asymptotically valid intervals [9].

In 2004, Koenker first added the Lasso regularization method to the mixed-effect quantile regression model, where the Lasso penalty shrinks the random effects toward zero [10]. In 2007, Wang et al. considered the least absolute deviation (LAD) estimator with an adaptive Lasso penalty (LAD-Lasso) and proved its oracle property [11]. In 2008, Li and Zhu considered quantile regression with the Lasso penalty and developed its piecewise linear solution path [12]. In 2009, Wu and Liu studied quantile regression with the SCAD and adaptive Lasso penalties [13]. In 2008, Park and Casella studied the Lasso penalty from a Bayesian angle and showed that the resulting hierarchical model can be solved efficiently by a Gibbs sampler, thereby introducing the Bayesian regularization method [14]. In 2010, Li et al. studied regularization methods in quantile regression from a Bayesian perspective, proposed placing a Laplace prior on the parameters, and used a Gibbs sampler to fit Bayesian Lasso quantile regression [15]. In 2012, Alhamzawi et al. proposed Bayesian adaptive Lasso quantile regression, which sets a different penalty parameter for each variable, places an inverse gamma prior on the penalty parameters, and treats the hyperparameters of the inverse gamma prior as unknowns to be estimated along with the other parameters [16]. In 2018, Adlouni et al. presented a regularized quantile regression model with B-splines based on five penalties (Lasso, Ridge, SCAD0, SCAD1 and SCAD2) in the Bayesian framework [17].

Building on this literature, we implement Bayesian quantile regression by expressing the asymmetric Laplace distribution as a scale mixture of a standard normal distribution and a standard exponential distribution, and use a Gibbs sampler to simulate the posterior distribution of the parameters. Regularized quantile regression under the Bayesian framework is then compared with non-Bayesian regularized quantile regression. Finally, the prostate cancer data set is used to illustrate the advantages and disadvantages of the two approaches.

2. Methods

2.1. Quantile Regression

Given data $\{(x_i, y_i), i = 1, \dots, n\}$, with covariate vector $x_i = (x_{i1}, x_{i2}, \dots, x_{ik})'$ and response vector $y = (y_1, \dots, y_n)'$, the $\theta$th quantile regression model for the response $y_i$ given $x_i$ takes the form

$$Q_{y_i}(\theta \mid x_i) = x_i' \beta(\theta), \qquad (1)$$

where $Q_{y_i}(\theta \mid x_i) = F_{y_i}^{-1}(\theta \mid x_i)$ is the inverse conditional cumulative distribution function and $\beta(\theta)$ is the unknown coefficient vector, which depends on the quantile $\theta \in (0, 1)$.

The regression parameter $\beta$ can be estimated by minimizing the following objective function

$$\min_{\beta} \sum_{i=1}^{n} \rho_\theta(y_i - x_i' \beta), \qquad (2)$$

where $\rho_\theta(u) = u(\theta - I(u < 0))$ is the check loss function and $I(\cdot)$ denotes the indicator function.
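As a quick illustration of the check loss, minimizing (2) over a constant model recovers the empirical $\theta$th quantile. The following Python sketch (not from the paper; the data and quantile level are arbitrary) verifies this numerically:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def check_loss(u, theta):
    # rho_theta(u) = u * (theta - I(u < 0))
    return u * (theta - (u < 0))

rng = np.random.default_rng(1)
y = rng.standard_normal(500)
theta = 0.3
# minimize the total check loss over a constant m
fit = minimize_scalar(lambda m: check_loss(y - m, theta).sum())
print(fit.x, np.quantile(y, theta))  # the two values nearly coincide
```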

In 2001, Yu and Moyeed [3] argued that the minimization problem in Equation (2) is equivalent to maximizing the likelihood of the $y_i$ under the assumption that the $y_i$'s are random variables from a skewed Laplace distribution with $\mu = x_i' \beta$ and $\sigma = 1$. The density function of the skewed Laplace distribution is given by

$$f(y \mid \mu, \sigma, \theta) = \frac{\theta(1-\theta)}{\sigma} \exp\left\{ -\frac{\rho_\theta(y - \mu)}{\sigma} \right\}, \qquad (3)$$

where $\sigma$ is the scale parameter, $\mu$ is the location parameter and $\theta$ is the asymmetry parameter.

Then the likelihood function of the sample $y = (y_1, y_2, \dots, y_n)'$ can be expressed as

$$L(y \mid \mu, \sigma, \theta) = \frac{\theta^n (1-\theta)^n}{\sigma^n} \exp\left\{ -\frac{1}{\sigma} \sum_{i=1}^{n} \rho_\theta(y_i - \mu) \right\}. \qquad (4)$$

Tsionas [18] and Kozumi and Kobayashi [7] have shown that the ALD can be viewed as a location-scale mixture of a standard exponential distribution $\exp(1)$ and a standard normal distribution $N(0, 1)$. Let $z$ and $\upsilon$ denote a standard exponential random variable and a standard normal random variable, respectively.

For $\omega = \frac{1 - 2\theta}{\theta(1-\theta)}$ and $\phi^2 = \frac{2}{\theta(1-\theta)}$, if $\mu \sim ALD(0, \sigma, \theta)$, then the random variable $\mu$ can be represented as $\mu = \omega z + \phi \sigma^{-1/2} \sqrt{z}\, \upsilon$. Therefore, the response $y_i$ of the quantile regression model is equivalently

$$y_i = x_i' \beta + \omega z_i + \phi \sigma^{-1/2} \sqrt{z_i}\, \upsilon_i. \qquad (5)$$
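The mixture representation (5) is easy to check by simulation. Below is a small Python sketch (illustrative only, with $\sigma = 1$ so that $z$ is standard exponential) confirming that the mixture has its $\theta$th quantile at zero:

```python
import numpy as np

theta, n = 0.3, 200_000
omega = (1 - 2 * theta) / (theta * (1 - theta))
phi = np.sqrt(2 / (theta * (1 - theta)))
rng = np.random.default_rng(0)
z = rng.exponential(1.0, n)            # z_i ~ exp(1)
v = rng.standard_normal(n)             # v_i ~ N(0, 1)
u = omega * z + phi * np.sqrt(z) * v   # mixture representation of ALD(0, 1, theta)
print(np.quantile(u, theta))           # close to 0, the theta-th quantile
```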

2.2. Bayesian Quantile Regression with Lasso and Adaptive Lasso Penalty

The quantile regression parameter estimation model with the Lasso penalty (Li and Zhu [12]) is

$$\min_{\beta} \sum_{i=1}^{n} \rho_\theta(y_i - x_i' \beta) + \lambda \sum_{j=1}^{k} |\beta_j|. \qquad (6)$$

Li et al. [15] set the prior distribution of $\beta$ to the Laplace prior $p(\beta \mid \sigma, \lambda) = (\sigma\lambda/2)^k \exp\{-\sigma\lambda \sum_{j=1}^{k} |\beta_j|\}$, and also assumed that the error terms follow the asymmetric Laplace distribution (ALD).

Bayesian quantile regression with the adaptive Lasso penalty (BQR-AL) applies different penalty parameters to different regression coefficients. The parameter estimation model of BQR-AL is therefore

$$\min_{\beta} \sum_{i=1}^{n} \rho_\theta(y_i - x_i' \beta) + \sum_{j=1}^{k} \lambda_j |\beta_j|. \qquad (7)$$

Alhamzawi and Yu [16] proposed a different penalization parameter for each regression coefficient, set the prior of the penalty parameters to the inverse gamma distribution, and treated the hyperparameters of the inverse gamma prior as unknowns. The Laplace prior on $\beta_j$ is then given by

$$p(\beta_j \mid \sigma, \lambda_j) = \frac{\sigma^{1/2}}{2\lambda_j} \exp\left\{ -\frac{\sigma^{1/2} |\beta_j|}{\lambda_j} \right\}. \qquad (8)$$

Andrews and Mallows [19] showed that

$$\frac{\xi}{2} \exp\{-\xi |t|\} = \int_0^\infty \frac{1}{\sqrt{2\pi s}} \exp\left\{ -\frac{t^2}{2s} \right\} \frac{\xi^2}{2} \exp\left\{ -\frac{\xi^2 s}{2} \right\} ds, \quad \xi > 0. \qquad (9)$$

Letting $\eta = \sigma^{1/2} / \lambda_j$, (8) can be written as

$$p(\beta_j \mid \sigma, \lambda_j) = \frac{\eta}{2} \exp\{-\eta |\beta_j|\}, \qquad (10)$$

which, by (9), is equivalent to

$$p(\beta_j \mid \sigma, \lambda_j) = \int_0^\infty \frac{1}{\sqrt{2\pi s_j}} \exp\left\{ -\frac{\beta_j^2}{2 s_j} \right\} \frac{\eta^2}{2} \exp\left\{ -\frac{\eta^2 s_j}{2} \right\} ds_j, \qquad (11)$$

so that

$$p(\beta_j \mid \sigma, \lambda_j^2) = \int_0^\infty \frac{1}{\sqrt{2\pi s_j}} \exp\left\{ -\frac{\beta_j^2}{2 s_j} \right\} \frac{\sigma}{2\lambda_j^2} \exp\left\{ -\frac{\sigma s_j}{2\lambda_j^2} \right\} ds_j. \qquad (12)$$
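The Andrews-Mallows identity (9) underlying this mixture representation can be verified numerically; here is a short sketch using scipy's quadrature, with arbitrary illustrative values of $\xi$ and $t$:

```python
import numpy as np
from scipy.integrate import quad

xi, t = 1.3, 0.7
lhs = xi / 2 * np.exp(-xi * abs(t))                 # Laplace density, left side of (9)
integrand = lambda s: (1 / np.sqrt(2 * np.pi * s) * np.exp(-t**2 / (2 * s))
                       * xi**2 / 2 * np.exp(-xi**2 * s / 2))
rhs, _ = quad(integrand, 0, np.inf)                 # normal-exponential mixture
print(lhs, rhs)  # the two sides agree to quadrature accuracy
```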

The prior distribution of $\lambda_j^2$ is set to the inverse gamma prior, so the density function of $\lambda_j^2$ is

$$p(\lambda_j^2 \mid \delta, \gamma) = \frac{\gamma^\delta}{\Gamma(\delta)} (\lambda_j^2)^{-1-\delta} \exp\left\{ -\frac{\gamma}{\lambda_j^2} \right\}, \qquad (13)$$

where $\delta > 0$ and $\gamma > 0$ are two hyperparameters. Yi and Xu [20] pointed out that the values of these two hyperparameters determine the degree of shrinkage in (13), since a small $\gamma$ and a large $\delta$ lead to stronger shrinkage; therefore, to avoid the impact of particular fixed values on the estimation of the regression coefficients, we treat $\delta$ and $\gamma$ as unknown parameters.

In summary, the Bayesian quantile regression hierarchical model with adaptive Lasso penalty is

$$y_i = x_i' \beta + \omega z_i + \phi \sigma^{-1/2} \sqrt{z_i}\, \upsilon_i, \quad p(\beta_0) \propto 1, \quad p(\upsilon_i) = \frac{1}{\sqrt{2\pi}} \exp\left\{ -\frac{\upsilon_i^2}{2} \right\}, \quad p(z_i \mid \sigma) = \sigma \exp\{-\sigma z_i\},$$

$$p(\beta_j, s_j \mid \sigma, \lambda_j^2) = \frac{1}{\sqrt{2\pi s_j}} \exp\left\{ -\frac{\beta_j^2}{2 s_j} \right\} \frac{\sigma}{2\lambda_j^2} \exp\left\{ -\frac{\sigma s_j}{2\lambda_j^2} \right\}, \quad p(\lambda_j^2 \mid \delta, \gamma) = \frac{\gamma^\delta}{\Gamma(\delta)} (\lambda_j^2)^{-1-\delta} \exp\left\{ -\frac{\gamma}{\lambda_j^2} \right\},$$

$$p(\sigma) \propto \sigma^{a-1} \exp\{-b\sigma\}, \quad p(\gamma, \delta) \propto \gamma^{-1}. \qquad (14)$$

2.3. Gibbs Sampling

From the hierarchical model, the joint posterior density function of each parameter is

$$\begin{aligned} p(\beta, z, s, \sigma, \lambda_1, \dots, \lambda_k \mid y, X) &\propto p(y \mid \beta, z, \sigma, X) \prod_{i=1}^{n} p(z_i \mid \sigma) \prod_{j=1}^{k} p(\beta_j, s_j \mid \sigma, \lambda_j^2)\, p(\lambda_j^2 \mid \gamma, \delta)\, p(\sigma)\, p(\gamma, \delta) \\ &\propto \prod_{i=1}^{n} \sigma \left( \frac{\sigma}{\phi^2 z_i} \right)^{1/2} \exp\left\{ -\frac{\sigma (y_i - x_i' \beta - \omega z_i)^2}{2 \phi^2 z_i} - \sigma z_i \right\} \\ &\quad \times \prod_{j=1}^{k} \frac{1}{\sqrt{2\pi s_j}} \exp\left\{ -\frac{\beta_j^2}{2 s_j} \right\} \frac{\sigma}{2\lambda_j^2} \exp\left\{ -\frac{\sigma s_j}{2\lambda_j^2} \right\} \frac{\gamma^\delta}{\Gamma(\delta)} (\lambda_j^2)^{-1-\delta} \exp\left\{ -\frac{\gamma}{\lambda_j^2} \right\} \\ &\quad \times \gamma^{-1} \sigma^{a-1} \exp\{-b\sigma\}, \end{aligned}$$

where

$$y = (y_1, y_2, \dots, y_n)', \quad X = (x_1, x_2, \dots, x_n)', \quad z = (z_1, z_2, \dots, z_n)', \quad s = (s_1, s_2, \dots, s_k)'.$$

The full conditional posterior distribution of each parameter is

$$\beta_0 \mid \cdot \sim N(\bar{\beta}_0, s_{\beta_0}^2), \quad \beta_j \mid \cdot \sim N(\bar{\beta}_j, s_{\beta_j}^2), \quad z_i \mid \cdot \sim \mathrm{GIG}\left( \tfrac{1}{2}, \alpha, l_i \right), \quad s_j \mid \cdot \sim \mathrm{GIG}\left( \tfrac{1}{2}, \frac{\sigma}{\lambda_j^2}, \beta_j^2 \right),$$

$$\sigma \mid \cdot \sim G(a_1, a_2), \quad \lambda_j^2 \mid \cdot \sim \mathrm{IG}\left( 1 + \delta, \frac{\sigma s_j}{2} + \gamma \right), \quad \gamma \mid \cdot \sim G\left( k\delta, \sum_{j=1}^{k} \lambda_j^{-2} \right). \qquad (15)$$

Here

$$\alpha = \frac{\sigma \omega^2}{\phi^2} + 2\sigma, \quad l_i = \frac{\sigma}{\phi^2} (y_i - x_i' \beta)^2, \quad a_1 = \frac{3n}{2} + k + a,$$

$$a_2 = \sum_{i=1}^{n} \left[ \frac{(y_i - x_i' \beta - \omega z_i)^2}{2 \phi^2 z_i} + z_i \right] + \sum_{j=1}^{k} \frac{s_j}{2 \lambda_j^2} + b. \qquad (16)$$

Since the full conditional posterior density of $\delta$,

$$p(\delta \mid \cdot) \propto (\Gamma(\delta))^{-k} \gamma^{k\delta} \prod_{j=1}^{k} \lambda_j^{-2\delta},$$

does not have a closed form but is log-concave, it can be sampled with the adaptive rejection sampling algorithm of Gilks and Wild [21].
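To make the sampler concrete, here is a minimal Python sketch (not the authors' code) of the Gibbs updates in (15)-(16). It assumes the hierarchical model of Section 2.2 with y centered so the intercept can be dropped, and for simplicity it holds $\delta$ fixed instead of updating it by adaptive rejection sampling; scipy's geninvgauss is used for the GIG draws.

```python
import numpy as np
from scipy.stats import geninvgauss

def rgig(p, a, b):
    """Draw from GIG(p, a, b), density prop. to x^(p-1) exp(-(a*x + b/x)/2)."""
    b = max(b, 1e-12)  # guard: scipy's geninvgauss requires b > 0
    return geninvgauss.rvs(p, np.sqrt(a * b), scale=np.sqrt(b / a))

def bqr_al_gibbs(y, X, theta=0.5, n_iter=5000, burn=1000, a0=0.1, b0=0.1, delta=1.0):
    n, k = X.shape
    omega = (1 - 2 * theta) / (theta * (1 - theta))
    phi2 = 2 / (theta * (1 - theta))
    beta, z, s = np.zeros(k), np.ones(n), np.ones(k)
    lam2, sigma, gam = np.ones(k), 1.0, 1.0
    draws = np.empty((n_iter - burn, k))
    for it in range(n_iter):
        # beta_j | . ~ N(mean, var): normal likelihood given z, N(0, s_j) prior
        for j in range(k):
            r = y - X @ beta + X[:, j] * beta[j] - omega * z
            prec = sigma * np.sum(X[:, j] ** 2 / (phi2 * z)) + 1 / s[j]
            mean = sigma * np.sum(X[:, j] * r / (phi2 * z)) / prec
            beta[j] = np.random.normal(mean, 1 / np.sqrt(prec))
        resid = y - X @ beta
        # z_i | . ~ GIG(1/2, alpha, l_i), see (16)
        alpha = sigma * omega ** 2 / phi2 + 2 * sigma
        z = np.array([rgig(0.5, alpha, sigma * ri ** 2 / phi2) for ri in resid])
        # s_j | . ~ GIG(1/2, sigma / lam2_j, beta_j^2)
        s = np.array([rgig(0.5, sigma / lam2[j], beta[j] ** 2) for j in range(k)])
        # sigma | . ~ Gamma(a1, rate a2), see (16)
        a1 = 1.5 * n + k + a0
        a2 = np.sum((resid - omega * z) ** 2 / (2 * phi2 * z) + z) \
             + np.sum(s / (2 * lam2)) + b0
        sigma = np.random.gamma(a1, 1 / a2)
        # lam2_j | . ~ Inv-Gamma(1 + delta, sigma*s_j/2 + gamma)
        lam2 = (sigma * s / 2 + gam) / np.random.gamma(1 + delta, 1.0, size=k)
        # gamma | . ~ Gamma(k*delta, rate sum 1/lam2_j); delta stays fixed here
        gam = np.random.gamma(k * delta, 1 / np.sum(1 / lam2))
        if it >= burn:
            draws[it - burn] = beta
    return draws
```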

3. Simulation Studies

Based on the MCMC algorithm with Gibbs sampling, Bayesian estimation is carried out for the model. The simulation studies compare regularized quantile regression under the Bayesian and non-Bayesian frameworks. The methods include Bayesian quantile regression with the adaptive Lasso penalty (BQR-AL), Bayesian quantile regression with the Lasso penalty (BQR-L), quantile regression with the Lasso penalty (QR-L), quantile regression with the SCAD penalty (QR-SCAD) and unpenalized quantile regression (QR).

3.1. Independent and Identically Distributed Random Errors

Here, we follow the simulation strategy introduced by Li, Xi and Lin [15] in simulation studies 1, 2 and 3, with different parameter values for the error distributions.

We consider a linear model

$$y_i = x_i' \beta + \varepsilon_i, \quad i = 1, \dots, n,$$

where the $\varepsilon_i$'s have the $\theta$th quantile equal to zero.

For i.i.d. random errors, this paper considers the following four simulation settings:

Simulation 1: $\beta = (3, 1.5, 0, 0, 2, 0, 0, 0)'$,

Simulation 2: $\beta = (0.85, 0.85, 0.85, 0.85, 0.85, 0.85, 0.85, 0.85)'$,

Simulation 3: $\beta = (5, 0, 0, 0, 0, 0, 0, 0)'$,

Simulation 4: $\beta = (\underbrace{3, \dots, 3}_{10}, \underbrace{0, \dots, 0}_{10}, \underbrace{3, \dots, 3}_{10})'$.

In the first three simulation studies, the rows of $X$ are generated from a multivariate normal distribution $N(0, \Sigma)$ with $(\Sigma)_{ij} = 0.5^{|i-j|}$. In Simulation 4, we first generate $Z_1$ and $Z_2$ from $N(0, 1)$, then let $x_j = Z_1 + \nu_j$ for $j = 1, \dots, 10$; $x_j \sim N(0, 1)$ for $j = 11, \dots, 20$; and $x_j = Z_2 + \nu_j$ for $j = 21, \dots, 30$, where $\nu_j \sim N(0, 0.01)$ for $j = 1, \dots, 10, 21, \dots, 30$. A sketch of this grouped design is given below.
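The following Python snippet (illustrative; the seed is arbitrary) generates the Simulation 4 design with its two blocks of highly correlated predictors:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
Z1, Z2 = rng.standard_normal(n), rng.standard_normal(n)
X = np.empty((n, 30))
X[:, :10] = Z1[:, None] + rng.normal(0, 0.1, (n, 10))   # x_j = Z1 + nu_j, Var(nu_j) = 0.01
X[:, 10:20] = rng.standard_normal((n, 10))              # independent N(0, 1) block
X[:, 20:] = Z2[:, None] + rng.normal(0, 0.1, (n, 10))   # x_j = Z2 + nu_j
beta = np.r_[np.full(10, 3.0), np.zeros(10), np.full(10, 3.0)]
```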

In each simulation, we consider the following error distributions:

1) A normal distribution $N(\mu, 1)$ with the $\theta$th quantile equal to zero.

2) A Laplace distribution $\mathrm{Laplace}(\mu, 1)$ with the $\theta$th quantile equal to zero.

3) A $t$ distribution with three degrees of freedom, $t(3)$.

4) A $\chi^2$ distribution with three degrees of freedom, $\chi^2(3)$.

5) An asymmetric Laplace distribution $ALD(\mu, 0.5, 1)$ with the $\theta$th quantile equal to zero.

Each simulated sample contains n = 200 observations, and the simulation is repeated 50 times for each error distribution. The evaluation criterion is the median of the mean absolute deviations (MMAD), i.e.

$$\mathrm{MMAD} = \mathrm{median}\left( \frac{1}{200} \sum_{i=1}^{200} | x_i' \hat{\beta} - x_i' \beta_{\mathrm{true}} | \right).$$
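In code, the MMAD reduces to a median over replications of the per-replication mean absolute deviation of the fitted quantiles; a minimal sketch, assuming beta_hats holds one estimate per replication:

```python
import numpy as np

def mmad(X, beta_hats, beta_true):
    # mean absolute deviation of the fitted quantile surface, per replication
    mads = [np.mean(np.abs(X @ b - X @ beta_true)) for b in beta_hats]
    return np.median(mads)
```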

The quantile regression models are estimated separately at the quantiles $\theta = (0.3, 0.5, 0.7)$. The simulation results are shown in Figures 1-4.

Figure 1 shows that under Simulation 1, i.e., sparse coefficients, the MMAD values of BQR-AL and BQR-L are lower than those of the frequentist methods, and the MMAD is smallest when the error term follows the $ALD(\mu, 0.5, 1)$ distribution.

Figure 2 corresponds to the dense coefficients of Simulation 2; the MMAD values of BQR-AL and BQR-L are very small and close to each other.

Figure 3 corresponds to the very sparse coefficients of Simulation 3 and yields the same conclusion as Simulation 1, with the advantage of Bayesian regularized quantile regression even more pronounced.

In Simulation 4, the strong within-group correlation of the predictors makes the design matrix nearly singular, so the QR and QR-SCAD methods cannot be run in this simulation, while the other methods still operate normally. This also demonstrates the advantage of the regularization methods. The results are shown in Figure 4.

Figure 1. The panels represent the MMADs in simulation 1.

Figure 2. The panels represent the MMADs in simulation 2.

Figure 3. The panels represent the MMADs in simulation 3.

Figure 4. The panels represent the MMADs in simulation 4.

Figures 1-4 show that, in terms of MMAD, the BQR-AL and BQR-L methods perform better than the other regularized quantile regression methods. From these simulations, we can draw the following conclusions:

1) In the above simulations, BQR-AL and BQR-L tend to give lower MMADs than the non-Bayesian regularized quantile regression methods for all distributions under consideration, showing that Bayesian regularized quantile regression is more stable and reproducible.

2) When the regression coefficients are sparse or very sparse, the MMAD of BQR-AL is the smallest; when the coefficients are dense, the MMAD of BQR-L is smaller. Overall, the estimation performance of the two methods is similar.

3) The BQR-AL and BQR-L methods achieve good results under all error distributions, showing that Bayesian regularized quantile regression is robust to the error assumption: both methods remain satisfactory even when the error term deviates from the ALD.

4) Whatever the distribution of the original data, when the error distribution is the ALD, the regularized quantile regression methods under the Bayesian framework have high accuracy, especially BQR-AL, whose parameter estimates are closest to the true values.

In addition to the MMADs of each method, we can also examine the parameter estimates. Owing to limited space, we report only the parameter estimates for Simulation 1 with ALD errors.

Table 1. Parameter estimates for Simulation 1 in the i.i.d. case.

As the parameter estimates in Table 1 show, the QR-type methods generally give less biased parameter estimates, but this does not guarantee good quantile prediction, as the MMADs reported above imply.

3.2. Non-i.i.d. Random Errors

When the error terms are non-i.i.d., consider the following heteroscedastic model:

$$y = 2 + x_1 + x_2 + x_3 + (1 + x_3)\varepsilon,$$

where $x_1 \sim N(0, 1)$, $x_3 \sim U(0, 1)$ and $x_2 = x_1 + x_3 + z$ with $z \sim N(0, 1)$ and $\varepsilon \sim N(0, 1)$. The remaining five noise variables $x_4, x_5, x_6, x_7, x_8$ are generated from independent standard normal distributions. The results, based on 50 repetitions each with sample size n = 200, are shown in Table 2.
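A sketch of this heteroscedastic design in Python (seed arbitrary); the $(1 + x_3)$ factor makes the error scale depend on $x_3$, so the true quantile slopes vary with $\theta$:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x1 = rng.standard_normal(n)
x3 = rng.uniform(0, 1, n)
x2 = x1 + x3 + rng.standard_normal(n)
noise = rng.standard_normal((n, 5))    # x4..x8, pure noise variables
eps = rng.standard_normal(n)
y = 2 + x1 + x2 + x3 + (1 + x3) * eps  # heteroscedastic (non-i.i.d.) errors
X = np.column_stack([x1, x2, x3, noise])
```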

Table 2. MMADs of each method in the non-i.i.d. case.

Table 2 shows that the BQR-L method has smaller MMAD values, indicating that regularized quantile regression under the Bayesian framework remains superior to regularized quantile regression under the non-Bayesian framework when the errors are non-i.i.d.

3.3. Prostate Cancer Data Set

This section analyzes the prostate cancer data from the "bayesQR" package [22]. The data set, first described by Stamey et al. [23], contains the medical records of 97 male patients who underwent radical prostatectomy, including the level of prostate-specific antigen y (lpsa) and eight influencing factors: log cancer volume (lcavol), log prostate weight (lweight), age, log of the amount of benign prostatic hyperplasia (lbph), seminal vesicle invasion (svi), log of capsular penetration (lcp), Gleason score (gleason) and percentage of Gleason scores 4 or 5 (pgg45). As in the numerical simulations of the previous section, we again consider $\theta = (0.3, 0.5, 0.7)$.

Table 3 compares three methods: QR, BQR-L and BQR-AL. The QR method uses the "quantreg" package in R with its default rank-inversion method to obtain confidence intervals; 95% intervals are considered along with the parameter estimates. For the Bayesian methods, the MCMC algorithm by default draws 5000 samples from the posterior distribution, and the first 1000 samples are discarded as burn-in. The results are shown in Table 3.

Table 3 shows that the parameter estimates of the BQR-L and BQR-AL methods are very close to those of classical quantile regression, while the credible intervals of BQR-L and BQR-AL are narrower than the confidence intervals of QR, and the BQR-AL intervals are tighter than those of BQR-L. From this point of view, penalized Bayesian quantile regression estimates more precisely than non-Bayesian regularized quantile regression.

These results can also be seen intuitively in a plot. For a more direct view, the estimates of the various methods at $\theta = 0.7$ are plotted (similar results are obtained at the other quantiles); for readability, the estimates of the different methods are horizontally offset in Figure 5.

Table 3. Parameter estimation and 95% interval of prostate cancer data.

Figure 5. Regression estimates under five different methods and 95% credible intervals for BQR-AL and BQR-L.

Figure 5 clearly shows that Bayesian quantile regression provides credible intervals, which non-Bayesian quantile regression cannot always supply, and that the credible intervals of BQR-AL are narrower than those of BQR-L. Penalized Bayesian quantile regression is thus estimated more precisely than non-Bayesian regularized quantile regression, with BQR-AL performing best.

4. Conclusion

Bayesian quantile regression with the adaptive Lasso penalty extends and improves the Lasso method by applying different penalty parameters to different regression coefficients. This method can effectively eliminate the influence of noise variables and obtain more accurate parameter estimates. Using the Gibbs sampling algorithm, this paper systematically compares regularized quantile regression under the non-Bayesian and Bayesian frameworks, and finds that whether the error terms are i.i.d. or heteroscedastic, both BQR-AL and BQR-L have high accuracy and are superior to the non-Bayesian methods. When the errors follow the ALD, the BQR-AL method attains the lowest MMAD at each quantile, and its parameter estimates are generally closest to the true values. The real data set leads to the same conclusions. We can therefore say that Bayesian penalized quantile regression performs well whether the coefficients are sparse or dense, describes the data fully at different quantiles, and should occupy an important position in future high-dimensional data analysis.

Funding

This work was supported by the National Natural Science Foundation of China [grant numbers 61763008, 71762008]; Guangxi Science and Technology Plan Project [grant numbers 2018GXNSFAA294131, 2018GXNSFAA050005, 2016GXNSFAA380194].

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Koenker, R. and Bassett, G. (1978) Regression Quantiles. Econometrica, 46, 33-50.
https://doi.org/10.2307/1913643
[2] Koenker, R. and Machado, J.A.F. (1999) Goodness of Fit and Related Inference Processes for Quantile Regression. Journal of the American Statistical Association, 94, 1296-1310.
https://doi.org/10.1080/01621459.1999.10473882
[3] Yu, K. and Moyeed, R.A. (2001) Bayesian Quantile Regression. Statistics & Probability Letters, 54, 437-447.
https://doi.org/10.1016/S0167-7152(01)00124-9
[4] Hewson, P. and Yu, K. (2010) Quantile Regression for Binary Performance Indicators. Applied Stochastic Models in Business & Industry, 24, 401-418.
https://doi.org/10.1002/asmb.732
[5] Reich, B.J., Fuentes, M. and Dunson, D.B. (2011) Bayesian Spatial Quantile Regression. Journal of the American Statistical Association, 106, 6-20.
https://doi.org/10.1198/jasa.2010.ap09237
[6] Sriram, K., Ramamoorthi, R.V. and Ghosh, P. (2013) Posterior Consistency of Bayesian Quantile Regression Based on the Misspecified Asymmetric Laplace Density. Bayesian Analysis, 8, 479-504.
https://doi.org/10.1214/13-BA817
[7] Kozumi, H. and Kobayashi, G. (2009) Gibbs Sampling Methods for Bayesian Quantile Regression. Journal of Statistical Computation and Simulation, 81, 1565-1578.
https://doi.org/10.1080/00949655.2010.496117
[8] Khare, K. and Hobert, J.P. (2012) Geometric Ergodicity of the Gibbs Sampler for Bayesian Quantile Regression. Journal of Multivariate Analysis, 112, 108-116.
https://doi.org/10.1016/j.jmva.2012.05.004
[9] Sriram, K. (2015) A Sandwich Likelihood Correction for Bayesian Quantile Regression Based on the Misspecified Asymmetric Laplace Density. Statistics & Probability Letters, 107, 18-26.
https://doi.org/10.1016/j.spl.2015.07.035
[10] Koenker, R. (2004) Quantile Regression for Longitudinal Data. Journal of Multivariate Analysis, 91, 74-89.
https://doi.org/10.1016/j.jmva.2004.05.006
[11] Wang, H., Li, G. and Jiang, G. (2007) Robust Regression Shrinkage and Consistent Variable Selection through the LAD-Lasso. Journal of Business & Economic Statistics, 25, 347-355.
https://doi.org/10.1198/073500106000000251
[12] Li, Y. and Zhu, J. (2008) L1-Norm Quantile Regression. Journal of Computational and Graphical Statistics, 17, 163-185.
https://doi.org/10.1198/106186008X289155
[13] Wu, Y. and Liu, Y. (2009) Variable Selection in Quantile Regression. Statistica Sinica, 19, 801-817.
[14] Park, T. and Casella, G. (2008) The Bayesian Lasso. Journal of the American Statistical Association, 103, 681-686.
https://doi.org/10.1198/016214508000000337
[15] Li, Q., Xi, R. and Lin, N. (2010) Bayesian Regularized Quantile Regression. Bayesian Analysis, 5, 533-556.
https://doi.org/10.1214/10-BA521
[16] Alhamzawi, R., Yu, K. and Benoit, D.F. (2012) Bayesian Adaptive Lasso Quantile Regression. Statistical Modelling, 12, 279-297.
https://doi.org/10.1177/1471082X1101200304
[17] El Adlouni, S., Salaou, G. and St-Hilaire, A. (2018) Regularized Bayesian Quantile Regression. Communications in Statistics - Simulation and Computation, 47, 277-293.
https://doi.org/10.1080/03610918.2017.1280830
[18] Tsionas, E. (2003) Bayesian Quantile Inference. Journal of Statistical Computation and Simulation, 73, 659-674.
https://doi.org/10.1080/0094965031000064463
[19] Andrews, D.F. and Mallows, C.L. (1974) Scale Mixtures of Normal Distributions. Journal of the Royal Statistical Society, Series B, 36, 99-102.
https://doi.org/10.1111/j.2517-6161.1974.tb00989.x
[20] Yi, N. and Xu, S. (2008) Bayesian LASSO for Quantitative Trait Loci Mapping. Genetics, 179, 1045-1055.
https://doi.org/10.1534/genetics.107.085589
[21] Gilks, W.R. and Wild, P. (1992) Adaptive Rejection Sampling for Gibbs Sampling. Journal of the Royal Statistical Society, Series C (Applied Statistics), 41, 337-348.
https://doi.org/10.2307/2347565
[22] Benoit, D.F., Al-Hamzawi, R., Yu, K. and Van den Poel, D. (2017) bayesQR: Bayesian Quantile Regression. R Package Version 2.3.
https://cran.r-project.org/package=bayesQR
https://doi.org/10.18637/jss.v076.i07
[23] Stamey, T.A., et al. (1989) Prostate Specific Antigen in the Diagnosis and Treatment of Adenocarcinoma of the Prostate. II. Radical Prostatectomy Treated Patients. The Journal of Urology, 141, 1076-1083.
https://doi.org/10.1016/S0022-5347(17)41175-X
