Penalized Spline Estimation for Nonparametric Multiplicative Regression Models

Abstract

In this paper, we consider the estimation of the unknown link function in the nonparametric multiplicative regression model. Combining the penalized spline technique with the least product relative error criterion, a new estimation method is proposed, an effective model degree of freedom is defined, and the smoothing parameter is then chosen by several information criteria. Simulation studies show that these strategies work well. Some asymptotic properties are established. A real data set is analyzed to illustrate the usefulness of the proposed approach. Finally, some possible extensions are discussed.


1. Introduction

In many applications, such as studies of financial and biomedical data, the response variable is usually positive. To model the relationship between a positive response and a set of explanatory variables, a natural idea is to first apply an appropriate transformation to the response, e.g., the logarithmic transformation, and then employ a common regression model, such as linear regression or quantile regression, on the transformed data. As argued by [1], the least squares and least absolute deviation criteria are both based on absolute errors, which is not desirable in many practical applications where relative errors are more of a concern. In the early literature, many authors contributed fruitfully to this issue; see [2] [3] and [4]. In particular, since the work of [1], more attention has focused on the multiplicative regression (MR) model, and various extensions have been investigated; see, for example, [5] [6] [7] and references therein.

It is worth noting that all the existing studies on MR are in the framework of linear (parametric) or semi-parametric MR models. Once the parametric forms or structures are misspecified, the resulting estimation and inference may be biased and the resulting conclusions unreliable. In contrast, nonparametric modelling is conceptually appealing and more robust. To the best of our knowledge, there are few studies on this problem. To fill this gap, we address it in detail in this paper.

When estimating the nonparametric function g(z) in semi-parametric MR models, such as the partially linear MR model ( [6] [8] [9] ), the single index MR model ( [10] [11] [12] ), the varying coefficient MR model ( [13] ), and others ( [14] ), almost all researchers use the local linear smoothing technique and approximate the function in a neighborhood of z to obtain its estimate, where a good choice of the bandwidth is quite critical and the performance of the resulting estimation and inference is sensitive to its value. Besides, because the value of the function at every observation of z is estimated separately, the optimal bandwidth may differ across observations, and the numerical problem becomes intractable when the sample size is large. As a result, researchers have had to compromise and assume that the bandwidths used for estimating g(z) are all the same.

In the nonparametric regression literature, spline-based methods, such as regression splines, smoothing splines, and penalized splines, are also popular and applied extensively in many fields. Recently, [15] proposed multiplicative additive models based on the least product relative error (LPRE) criterion, where B-spline basis functions are used to estimate the nonparametric functions. Approximating a smooth function by a spline function has many desirable benefits, as presented in [16] and [17]. It is well known that the number and location of internal knots must be handled carefully when using the B-spline approximation: too many knots may cause overfitting, while too few bring underfitting. Although some information criteria can be adopted to select an appropriate number of knots, the overall computational burden is rather heavy. Fortunately, penalized splines (P-splines) avoid this difficulty and have gained remarkable popularity, especially for their computational expediency and easy adaptability to more complex models; see [18]. However, this technique has not been studied in the context of MR, and although this issue is important and meaningful, this is the first work to employ P-spline estimation for MR. All this motivates us to conduct this study in a formal manner.

This paper is organized as follows. In Section 2, we first introduce the nonparametric multiplicative regression model. Combining penalized splines with the least product relative error criterion, a new estimation method is proposed, together with some remarks on the selection of the smoothing parameter and knots and the asymptotic properties of the proposed estimator. Simulation studies are carried out in Section 3 to assess the performance of our method in finite samples. To illustrate its usefulness, we apply the method to a real data set in Section 4. Finally, Section 5 concludes the paper with some discussions.

2. Methodology

In this section, we mainly introduce the model and estimation approach. At the same time, some related issues will be answered in detail.

2.1. Model and Estimation

Consider the following nonparametric multiplicative regression model

$Y = \exp(g(Z))\,\varepsilon$, (1)

where Y is the response variable, Z is a p-vector of covariates, $\varepsilon$ is a positive random error independent of Z, and $g(\cdot)$ is an unknown link function. Without loss of generality and for simplicity, we assume that $p = 1$ in the following, i.e., the covariate Z is univariate; the method discussed later can easily be extended to the general case. Assume that an i.i.d. sample $(Y_i, Z_i)$, $i = 1, \ldots, n$ from model (1) is observed. For model identifiability, it is required that $E(\varepsilon) = E(\varepsilon^{-1})$ in [5]. On the other hand, taking the logarithmic transformation on both sides of Equation (1), it follows that

$\ln(Y) = g(Z) + \ln(\varepsilon) = (g(Z) + c) + (\ln(\varepsilon) - c) \equiv \tilde{g}(Z) + \ln(\tilde{\varepsilon})$, (2)

holds for any real number c, which means that the former requirement is not enough to make the nonparametric component unique. This phenomenon was noted by [8], who imposed a strong condition on the model, namely $E(\ln(\varepsilon)) = 0$. In our opinion, this is not necessary: the unknown constant c can be set to $E(\ln(\varepsilon))$ so that $E(\ln(\tilde{\varepsilon})) = 0$ holds, and the function $g(\cdot)$ is then identifiable. Meanwhile, suppose that one obtains an initial estimator of $g(\cdot)$, denoted by $\tilde{g}_n(\cdot)$. Combining Equation (2), a more efficient estimator of $g(\cdot)$, denoted by $\hat{g}_n(\cdot)$, can be obtained by subtracting an estimate of $E(\ln(\varepsilon))$ from the initial estimator $\tilde{g}_n(\cdot)$, where

$\hat{g}_n(z) = \tilde{g}_n(z) - \bar{c}, \quad \bar{c} = \frac{1}{n}\sum_{i=1}^{n} [\ln(Y_i) - \tilde{g}_n(Z_i)]$. (3)

In this process, it is only required that $E(\ln(\varepsilon))$, the expectation of $\ln(\varepsilon)$, be finite; it is allowed to be nonzero. Therefore, our condition is weaker than that stated in [8].

Following the LPRE technique used in [5], the estimator of $g(\cdot)$ in model (1) can be obtained by minimizing the objective function

$S_n(g) = \frac{1}{n}\sum_{i=1}^{n} \left\{ \left| \frac{Y_i - \exp(g(Z_i))}{Y_i} \right| \times \left| \frac{Y_i - \exp(g(Z_i))}{\exp(g(Z_i))} \right| \right\}$. (4)

When all $Z_i$'s are mutually different, the minimization in Equation (4) can be carried out separately for each $g(Z_i)$. However, the resulting estimator is not continuous, which is unsatisfactory in some applications. To overcome these drawbacks, we approximate $g(\cdot)$ by B-spline functions. More precisely, let $-\infty < a = t_0 < t_1 < \cdots < t_{k_n} < t_{k_n+1} = b < \infty$ be a sequence of knots, where $[a, b]$ denotes the support of Z. As discussed in [19], a set of B-spline basis functions of degree d can be constructed, denoted by $\varphi_j(\cdot)$, $j = 1, \ldots, K_n = k_n + 1 + d$, where $d = 2$ or 3 corresponds to the quadratic or cubic spline. Then $g(z)$ can be approximated by a linear combination of basis functions, i.e., $g(z) \approx \sum_{j=1}^{K_n} \alpha_j \varphi_j(z) \equiv \alpha^\top B(z)$, where $\alpha = (\alpha_1, \ldots, \alpha_{K_n})^\top$ is the spline coefficient vector and $B(z) = (\varphi_1(z), \ldots, \varphi_{K_n}(z))^\top$. Further, model (1) can be rewritten as

$Y = \exp(\alpha^\top B(Z))\,\varepsilon$. (5)

Meanwhile, $\alpha$ can be estimated by $\tilde{\alpha}$, the minimizer of the loss function

$L_n(\alpha) = \frac{1}{n}\sum_{i=1}^{n} \left\{ \left| \frac{Y_i - \exp(\alpha^\top B_i)}{Y_i} \right| \times \left| \frac{Y_i - \exp(\alpha^\top B_i)}{\exp(\alpha^\top B_i)} \right| \right\} = \frac{1}{n}\sum_{i=1}^{n} \left[ Y_i \exp(-\alpha^\top B_i) + Y_i^{-1}\exp(\alpha^\top B_i) - 2 \right]$,

where $B_i = B(Z_i)$. The B-spline estimator of $g(\cdot)$ can then be obtained through Equation (3), with $\tilde{g}_n(z)$ replaced by $\tilde{\alpha}^\top B(z)$.

However, for the reasons discussed in Section 1, we instead estimate the nonparametric function in model (1) by penalized splines, adding a roughness penalty to the above minimization problem. In the literature, there are mainly two definitions of roughness: one is based on the integrated squared q-th derivative of the spline function, the other on the q-th order difference of the spline coefficient vector. The latter is more popular for its simplicity and is adopted in this paper. Precisely, let $\Delta$ denote the backward difference operator, so that $\Delta \alpha_j = \alpha_j - \alpha_{j-1}$ and $\Delta^2 \alpha_j = \alpha_j - 2\alpha_{j-1} + \alpha_{j-2}$. In general, $\Delta^q = \Delta(\Delta^{q-1})$. With these notations, the P-spline estimator of $g(\cdot)$ is defined as $\hat{g}$, the corrected spline function in (3) with $\tilde{g}_n(z)$ replaced by $\hat{\alpha}^\top B(z)$, where $\hat{\alpha}$ is the minimizer of

$Q_n(\alpha) = L_n(\alpha) + \lambda \sum_{j=q+1}^{K_n} |\Delta^q \alpha_j|^\gamma$,

where $\gamma > 0$ and $\lambda \geq 0$ is the smoothing parameter controlling the smoothness of the fitted curve. If $\lambda = 0$, the resulting estimator reduces to the B-spline estimator. In practice, the common choices are $q = 2$ and $\gamma = 2$ for computational convenience, in which case the above minimization problem can be rewritten as

$Q_n(\alpha) = L_n(\alpha) + \lambda \alpha^\top D^\top D \alpha$, (6)

where D is the $(K_n - 2) \times K_n$ matrix representation of the second-order difference operator $\Delta^2$.
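For concreteness, the penalty matrix D is easy to build numerically. The following is a minimal sketch in R (our own construction; the paper does not give code):

```r
# Second-order difference penalty matrix D for q = 2.
Kn <- 23                                  # number of B-spline basis functions
D  <- diff(diag(Kn), differences = 2)     # (Kn - 2) x Kn matrix
# Row j of D is (0, ..., 1, -2, 1, ..., 0), so (D %*% alpha)[j] equals the
# second-order difference alpha_{j+2} - 2*alpha_{j+1} + alpha_j.
```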

2.2. Computation

Note that for a given smoothing parameter and knots, the objective function (6) is differentiable and convex, so minimizing it is not difficult and common algorithms can be applied. In the simulation studies and real data analysis below, we employ the function nmk in the R package dfoptim.
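As an illustration, a minimal self-contained sketch of this fitting step might look as follows; the basis construction via splines::bs, the simulated data, and all object names are our own illustrative choices, not code from the paper:

```r
library(splines)   # bs() for the B-spline design matrix
library(dfoptim)   # nmk(): the derivative-free minimizer mentioned above

# Penalized LPRE objective Q_n(alpha) of Equation (6)
Qn <- function(alpha, y, Bn, D, lambda) {
  eta <- as.vector(Bn %*% alpha)                      # alpha^T B(Z_i)
  mean(y * exp(-eta) + exp(eta) / y - 2) +            # L_n(alpha)
    lambda * sum((D %*% alpha)^2)                     # lambda * alpha^T D^T D alpha
}

set.seed(1)
n  <- 200
z  <- runif(n, 0, 50)
y  <- exp(1 + 2 * sin(pi * z / 30)) * exp(rnorm(n))   # Model A, case (i)
Bn <- bs(z, df = 23, degree = 2, intercept = TRUE)    # quadratic splines, 20 interior knots
D  <- diff(diag(ncol(Bn)), differences = 2)
fit  <- nmk(par = rep(0, ncol(Bn)), fn = Qn, y = y, Bn = Bn, D = D, lambda = 1)
gtil <- as.vector(Bn %*% fit$par)                     # initial estimate at the Z_i
ghat <- gtil - mean(log(y) - gtil)                    # centering correction of Eq. (3)
```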

However, other alternatives are still helpful. As in penalized least squares mean regression and penalized quantile regression (e.g. [20]), an iteratively reweighted least squares (IRLS) algorithm can be developed. Meanwhile, a measure of the effective model degrees of freedom can be introduced based on the trace of the hat matrix, as in [20]. In the literature, the degrees of freedom are usually adopted as a measure of model complexity to facilitate model comparison, something that has not been available for multiplicative regression based on local smoothing techniques. The smoothing parameter can then be selected naturally using existing information criteria or cross-validation.

Once the smoothing parameter and knots are given, $\hat{\alpha}$ can also be seen as a root of the first derivative of the objective function (6), namely,

$-\frac{1}{n}\sum_{i=1}^{n} \left[ Y_i \exp(-\hat{\alpha}^\top B_i) - Y_i^{-1}\exp(\hat{\alpha}^\top B_i) \right] B_i + 2\lambda D^\top D \hat{\alpha} = 0$, (7)

which can be rewritten as

$-\frac{1}{n}\sum_{i=1}^{n} 2(\ln Y_i - \hat{\alpha}^\top B_i) W_i(\hat{\alpha}) B_i + 2\lambda D^\top D \hat{\alpha} = 0$, (8)

where

$W_i(\hat{\alpha}) = \frac{Y_i \exp(-\hat{\alpha}^\top B_i) - Y_i^{-1}\exp(\hat{\alpha}^\top B_i)}{2(\ln Y_i - \hat{\alpha}^\top B_i)}$. (9)

Write $\tilde{Y} = (\ln Y_1, \ldots, \ln Y_n)^\top$, $\mathbf{B} = (B_1, \ldots, B_n)$ and $W(\hat{\alpha}) = \mathrm{diag}(W_1(\hat{\alpha}), \ldots, W_n(\hat{\alpha}))$. The IRLS algorithm is implemented as follows.

Step 0. For a given smoothing parameter $\lambda$, provide an initial value of $\alpha$, denoted $\hat{\alpha}^0$, which can be taken as the unpenalized B-spline estimator or the ordinary least squares estimator based on the log-transformed data. Suppose we have the k-th iterate $\hat{\alpha}^k$.

Step 1. At the (k+1)-th step, update the parameter by

$\hat{\alpha}^{k+1} = \left(\mathbf{B} W(\hat{\alpha}^k) \mathbf{B}^\top + 2\lambda D^\top D\right)^{-1} \mathbf{B} W(\hat{\alpha}^k) \tilde{Y}$.

Repeat Step 1 until convergence. The value at the terminal step is taken as the final estimator of $\alpha$, also denoted by $\hat{\alpha}$. At the same time, as in linear regression, the hat matrix can be defined through

$H_\lambda = \mathbf{B}^\top \left(\mathbf{B} W(\hat{\alpha}) \mathbf{B}^\top + 2\lambda D^\top D\right)^{-1} \mathbf{B} W(\hat{\alpha})$.
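A sketch of this loop in R is given below; it is again our own illustrative code, not the authors' implementation. Note that with $t_i = \ln Y_i - \alpha^\top B_i$, the weight in (9) simplifies to $W_i = \sinh(t_i)/t_i$, which tends to 1 as $t_i \to 0$. The design matrix Bn here is $n \times K_n$, i.e., the transpose of $\mathbf{B}$ above:

```r
# Iteratively reweighted least squares for the penalized LPRE problem.
# Inputs: positive responses y, n x Kn B-spline matrix Bn, penalty matrix D,
# smoothing parameter lambda. Names and tolerances are our own choices.
irls_lpre <- function(y, Bn, D, lambda, tol = 1e-8, maxit = 100) {
  ly    <- log(y)
  alpha <- qr.solve(Bn, ly)                 # Step 0: least squares on log data
  P     <- 2 * lambda * crossprod(D)        # 2 * lambda * D^T D
  for (k in seq_len(maxit)) {
    t_i <- ly - as.vector(Bn %*% alpha)
    w   <- ifelse(abs(t_i) < 1e-8, 1, sinh(t_i) / t_i)   # W_i of Eq. (9)
    a_new <- solve(crossprod(Bn, w * Bn) + P,            # Step 1 update
                   crossprod(Bn, w * ly))
    if (max(abs(a_new - alpha)) < tol) { alpha <- a_new; break }
    alpha <- a_new
  }
  H <- Bn %*% solve(crossprod(Bn, w * Bn) + P, t(w * Bn))  # hat matrix H_lambda
  list(alpha = as.vector(alpha), df = sum(diag(H)))
}
```

The returned df = tr(H_lambda) is the effective model degrees of freedom used in Section 2.3.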

2.3. Choice of Smoothing Parameter and Knots

To implement the estimator, the smoothing parameter $\lambda$ and the number and location of knots have to be determined in advance. Compared with the knots, the smoothing parameter plays a more significant role in penalized spline estimation. As in [20], we take the trace of the hat matrix above as the model degrees of freedom, $df = \mathrm{tr}(H_\lambda)$, where $\mathrm{tr}(A)$ denotes the trace of a square matrix A. In the spline context, several criteria have been proposed to select the smoothing parameter $\lambda$, such as the Bayesian Information Criterion (BIC), the Generalized Approximate Cross-Validation criterion (GACV) and the Generalized Cross-Validation criterion (GCV). In this paper, they are implemented by minimizing the functions:

$\mathrm{BIC}(\lambda) = \ln(L_n(\hat{\alpha})) + \frac{df \ln n}{2n}$,

$\mathrm{GACV}(\lambda) = \frac{L_n(\hat{\alpha})}{1 - df/n}$,

$\mathrm{GCV}(\lambda) = \frac{\sum_{i=1}^{n} W_i(\hat{\alpha}) (\ln(Y_i) - \hat{\alpha}^\top B_i)^2 / n}{(1 - df/n)^2}$,

respectively.

[18] suggested some rules for the number and location of knots. In the same spirit, we follow the strategy in [20]: quadratic B-splines with 20 knots located at equally spaced quantiles are used unless otherwise specified. With the BIC, GACV and GCV criteria above, we choose $\lambda$ over 51 equally spaced grid points on the log scale, $\log_{10}(\lambda) \in [-5, 5]$. All these settings work well in the numerical studies and the real example analysis.
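A sketch of this grid search, reusing irls_lpre() and the y, Bn, D objects from the earlier snippets (the wiring is our own), is:

```r
# Unpenalized LPRE loss L_n(alpha), used by the BIC and GACV criteria.
lpre_loss <- function(alpha, y, Bn) {
  eta <- as.vector(Bn %*% alpha)
  mean(y * exp(-eta) + exp(eta) / y - 2)
}

lambdas <- 10^seq(-5, 5, length.out = 51)        # 51 log-scaled grid points
n   <- length(y)
bic <- sapply(lambdas, function(lam) {
  fit <- irls_lpre(y, Bn, D, lam)
  log(lpre_loss(fit$alpha, y, Bn)) + fit$df * log(n) / (2 * n)   # BIC(lambda)
})
lambda_opt <- lambdas[which.min(bic)]            # selected smoothing parameter
```

GACV and GCV are handled analogously by swapping in the corresponding criterion.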

2.4. Asymptotic Results

For any probability measure P, define the $L_2$-norm $\|f\|_2 = (\int f^2 \, dP)^{1/2}$. Denote $p = d + 1$. To derive the asymptotic results, some regularity conditions are needed, listed as follows.

(A1) The true function in model (1) is $g_0$. Suppose that $g_0 \in \mathcal{G} = \{ g \in C^r[a,b] : \|g^{(j)}\|_\infty \leq M_0, j = 1, \ldots, r, \ |g^{(r)}(z_1) - g^{(r)}(z_2)| \leq M_1 |z_1 - z_2| \}$, where $M_0$ and $M_1$ are positive constants, $\|\cdot\|_\infty$ is the supremum norm, and $q \leq r \leq d + 1$.

(A2) Define $h_i = t_i - t_{i-1}$. Assume that $\max_i |h_{i+1} - h_i| = o(1/k_n)$. Moreover, the ratio of the maximum and minimum knot spacings is uniformly bounded.

(A3) $k_n = o(n)$.

(A4) The covariate Z has a bounded support $[a, b]$ with density $f_Z(\cdot)$, which has a second derivative and is bounded away from zero and infinity.

(A5) $\varepsilon$ is independent of Z, $E(\varepsilon) = E(\varepsilon^{-1})$, and $E(\ln(\varepsilon))$ is finite.

(A6) $E(\varepsilon + \varepsilon^{-1}) < +\infty$.

Conditions (A1)-(A3) are common requirements in penalized spline theory. (A4) is a regularity condition used in the study of MR. (A5) is an identification condition for the LPRE estimation, analogous to the zero-mean condition in classical linear mean regression. (A6) is required for the proof and is also used in [15].

Theorem 2.1 Under conditions (A1)-(A6) above, we have:

1) If $r = p$, $k_n = O(n^{1/(2p+1)})$ and $\lambda = O(n^{-\beta})$ with $\beta > (1 + p - q)/(2p + 1)$, then $\|\hat{g} - g_0\|_2^2 = O_P(n^{-2p/(2p+1)})$.

2) If $r = q$, $\lambda = O(n^{1/(2q+1)})$ and $k_n = O(n^{\beta})$ with $1 > \beta > 1/(2q+1)$, then $\|\hat{g} - g_0\|_2^2 = O_P(n^{-2q/(2q+1)})$.

Proof. Note that

$Q_n(\alpha) = \frac{1}{n}\sum_{i=1}^{n}\left[ Y_i \exp(-\alpha^\top B_i) + Y_i^{-1}\exp(\alpha^\top B_i) - 2 \right] + \lambda \alpha^\top D^\top D \alpha = \frac{1}{n}\sum_{i=1}^{n}\left[ \varepsilon_i \exp\left(R_i + (g^*(Z_i) - \alpha^\top B_i)\right) + \varepsilon_i^{-1}\exp\left(-R_i - (g^*(Z_i) - \alpha^\top B_i)\right) - 2 \right] + \lambda \alpha^\top D^\top D \alpha$,

where $\varepsilon_i = Y_i / \exp(g_0(Z_i))$, $R_i = g_0(Z_i) - g^*(Z_i)$, and $g^*(Z) = \alpha^{*\top} B(Z)$ denotes the best spline approximation. Define $X = n^{-1}\sum_{i=1}^{n} B_i B_i^\top$, $G_\lambda = X + \lambda D^\top D$ and $\theta = G_\lambda^{1/2}(\alpha - \alpha^*)$. Then we have $g^*(Z_i) - \alpha^\top B_i = -B_i^\top(\alpha - \alpha^*) = -B_i^\top G_\lambda^{-1/2}\theta$. Let $a_n$ denote the convergence rate in the theorem. Since $\hat{\theta} = a_n(\hat{\alpha} - \alpha^*) = \arg\min_\theta [Q_n(\alpha^* + a_n^{-1}\theta) - Q_n(\alpha^*)]$, we only need to prove that for every $\nu > 0$ there exists a large constant $\delta > 0$ such that

$P\left( \inf_{\|\theta\|_2 = \delta} Q_n(\alpha^* + a_n^{-1}\theta) \geq Q_n(\alpha^*) \right) \geq 1 - \nu$.

Along the lines of the proof of Theorem 1 in the Appendix of [17], similar techniques can be applied. Finally, by Corollary 1 of [16], the desired results follow. The proof is complete.

3. Numerical Studies

In this section, numerical studies are conducted to evaluate the finite-sample performance of the proposed method under various situations. To fairly compare the unpenalized B-spline estimator with the penalized spline estimators obtained by the BIC, GCV and GACV criteria, and for simplicity, we set the degree of the spline basis to 2 and the number of internal knots to $k_n = 20$, located at equally spaced quantiles, for all methods. All results below are based on 500 replicates, with sample sizes $n = 100, 300, 500$, respectively. All simulations are implemented in the software R.

3.1. Model A and Results

Model A. We generated ( Y , Z ) from the following model

$Y = \exp\left(1 + 2\sin\left(\frac{\pi Z}{30}\right)\right)\varepsilon$, (10)

where $Z \sim \mathrm{Unif}(0, 50)$ is independent of the random error $\varepsilon$. Three error distributions are considered, namely,

Case (i): $\varepsilon = \exp(U)$, $U \sim N(0, 1)$;

Case (ii): $\varepsilon = \exp(U)$, $U \sim \mathrm{Unif}(-2, 2)$;

Case (iii): $\varepsilon \sim \mathrm{Unif}(0.0001, 4.635506)$.

Note that under cases (i)-(ii), $E(\varepsilon - \varepsilon^{-1}) = 0$ and $E(\log(\varepsilon)) = 0$, while under case (iii), $E(\varepsilon - \varepsilon^{-1}) = 0$ but $E(\log(\varepsilon)) = 0.5339771$. We use the averaged Integrated Absolute Bias (IABIAS) and the Mean Integrated Squared Error (MISE), where for the estimator $\hat{g}_j$ ($j = 1, \ldots, 500$) obtained from the j-th sample,

$\mathrm{IABIAS} = \frac{1}{500}\sum_{j=1}^{500}\left[ \frac{1}{n_{grid}}\sum_{k=1}^{n_{grid}} |\hat{g}_j(u_k) - g_0(u_k)| \right]$,

$\mathrm{MISE} = \frac{1}{500}\sum_{j=1}^{500}\left[ \frac{1}{n_{grid}}\sum_{k=1}^{n_{grid}} \left(\hat{g}_j(u_k) - g_0(u_k)\right)^2 \right]$,

evaluated at fixed grid points $\{u_k\}$ equally spaced in [0, 50] with $n_{grid} = 501$. The values below them in the tables are the associated sample standard deviations. To compare the smoothing parameter selection methods at a finer scale, we also report the mean and standard deviation of the effective model degrees of freedom (DF).
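As a sketch, these Monte Carlo summaries can be computed as follows (our own code; ghat_mat is a hypothetical 500 × 501 matrix whose j-th row is $\hat{g}_j$ evaluated at the grid):

```r
g0 <- function(z) 1 + 2 * sin(pi * z / 30)       # true curve of Model A
u  <- seq(0, 50, length.out = 501)               # n_grid = 501 grid points
G0 <- matrix(g0(u), nrow = 500, ncol = 501, byrow = TRUE)
iabias <- mean(rowMeans(abs(ghat_mat - G0)))     # averaged integrated abs. bias
mise   <- mean(rowMeans((ghat_mat - G0)^2))      # mean integrated squared error
```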

Table 1 reports the results. It can be seen that all three penalized estimators

Table 1. Results of Model A under cases (i)-(iii) with different sample sizes and criteria (×10⁻²).

have smaller IABIAS and MISE than the unpenalized B-spline estimator, which indicates the benefits of the proposed method. As the sample size increases, the differences between them decrease and the estimators become comparable. The model degrees of freedom selected by GACV are almost always larger than those selected by BIC and GCV in all settings, and those selected by BIC are the smallest, which was also observed in [20]. This implies that GACV tends to select more complex models, BIC tends to select simpler models, and GCV makes a compromise between them. Figure 1 displays the average estimated curves and their 95% point-wise confidence bands under case (i) with sample size $n = 300$ for Model A. All estimated curves are close to the true one, but the B-spline estimate deviates drastically near the boundaries. Figure 2 shows the boxplots of the estimates of $g(z)$ at $z = 15, 30, 45$, respectively. Most estimates are centered around the true values $(3, 1, -1)$ with small deviations, although the BS estimator has a slightly larger standard deviation. The corresponding QQ-plots are presented in Figure 3 and suggest that the resulting estimators are asymptotically normal. Estimated curves under the other situations are similar; they are not displayed but are available in the supplementary materials.

Figure 1. Estimate of $g(Z)$ for Model A under case (i) with sample size 300. The solid thick line (black) and solid thin line (red) correspond to the true and estimated curves, respectively. The dashed lines (blue) are the 95% point-wise confidence bands.

Figure 2. Box-plots of the estimates of $g(Z)$ at $Z = 15, 30, 45$, respectively, for Model A under case (i) with sample size 300.

3.2. Model B and Results

Model B. We generated $(Y, Z)$ from the following model

$Y = \exp\left(\frac{|Z - 25|^3}{5000}\right)\varepsilon$, (11)

where $Z \sim \mathrm{Unif}(0, 50)$ is independent of the random error $\varepsilon$. The other settings are the same as in Model A. We report the results in Table 2 and display the average estimated curves and their 95% point-wise confidence bands under case (i) with sample size $n = 300$ for Model B in Figure 4. Results similar to those presented in Table 1 and Figures 1-3 are again found. More related figures are available in the supplementary materials.

4. Real Data Analysis

In this section, we analyze the ethanol data to illustrate the proposed method. The data, available in the R package lattice, concern the relationship between the emissions of nitrogen oxides, denoted by Y, and various settings of the engine compression ratio and the equivalence ratio when ethanol fuel is burned in a single-cylinder engine; 88 observations were collected. [13] analyzed this data set in the framework of the varying coefficient multiplicative regression model based on the LPRE criterion and the local linear approximation technique. His results demonstrate that the settings of the engine compression ratio have little effect on Y, as illustrated on page 280 therein. Therefore, we only consider the equivalence ratio, denoted by Z, in the following model.

Figure 3. QQ-plots of the estimates of $g(Z)$ at $Z = 15, 30, 45$, respectively, for Model A under case (i) with sample size 300.

Table 2. Results of Model B under cases (i)-(iii) with different sample sizes and criteria (×10⁻²).

Figure 4. Estimate of $g(Z)$ for Model B under case (i) with sample size 300. The solid thick line (black) and solid thin line (red) correspond to the true and estimated curves, respectively. The dashed lines (blue) are the 95% point-wise confidence bands.

$Y = \exp(g(Z))\,\varepsilon$.

The estimated curves of g from quadratic P-splines with 8 and 20 internal knots are plotted in Figure 5; they show trends similar to the plot in [13], up to a vertical shift. All four curves are almost identical, which may be explained by how well the data fit the above model. The scatter plot of Y and the corresponding estimated curves of $\exp(g(Z))$ are presented in Figure 6, which indicates that the proposed method fits the data well. The number of knots has some effect on the estimated curves but does not affect the performance substantially, which is also supported by the mean and median of the absolute prediction errors $|Y_i - \hat{Y}_i|$ and squared prediction errors $(Y_i - \hat{Y}_i)^2$ given in Table 3, where $\hat{Y}_i = \exp(\hat{g}(Z_i))$.
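For reference, a hypothetical replication script, reusing the irls_lpre() sketch from Section 2.2 (the lattice::ethanol columns NOx and E are as documented in that package; the value of lambda here is a placeholder), might be:

```r
library(lattice)   # ethanol data set
library(splines)

data(ethanol)
y  <- ethanol$NOx                                   # positive response Y
z  <- ethanol$E                                     # equivalence ratio Z
Bn <- bs(z, df = 11, degree = 2, intercept = TRUE)  # quadratic basis, 8 interior knots
D  <- diff(diag(ncol(Bn)), differences = 2)
fit  <- irls_lpre(y, Bn, D, lambda = 1)             # lambda would be chosen by BIC/GACV/GCV
ghat <- as.vector(Bn %*% fit$alpha)
ghat <- ghat - mean(log(y) - ghat)                  # centering correction of Eq. (3)

plot(z, y, xlab = "Equivalence ratio", ylab = "NOx")
lines(sort(z), exp(ghat)[order(z)], col = "red")    # fitted curve exp(g_hat(Z))
```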

Figure 5. Estimated curves of $g(Z)$ for the ethanol data. The solid (red) and dashed (blue) curves correspond to the estimated function curves with 8 and 20 internal knots, respectively.

Table 3. Prediction errors and degrees of freedom for the ethanol data.

Note: kn: the number of interior knots; MAPE: mean absolute prediction error; MDAPE: median absolute prediction error; MSPE: mean squared prediction error; MASPE: median squared prediction error.

Figure 6. Scatter plot of Y and estimated curves of $\exp(g(Z))$ for the ethanol data. The solid (red) and dashed (blue) curves correspond to the estimated function curves with 8 and 20 internal knots, respectively.

5. Conclusion and Discussions

In this study, we used the penalized spline method to estimate the nonparametric function in model (1) based on the LPRE loss function. Inspired by the iteratively reweighted least squares algorithm, we defined an effective model degree of freedom and proposed three smoothing parameter selection methods. Some asymptotic results were established. Furthermore, numerical simulation studies and a real data analysis showed that the proposed approach works well in several settings. As indicated in Section 1, the approach proposed in this paper may be adapted to the partially linear, additive, single index or varying coefficient multiplicative regression models. Our future work will also consider extensions to settings with covariate measurement errors, censored data or longitudinal data, which are meaningful for practitioners.

Data Availability Statement

The data set used in the real data analysis is publicly available from the R package lattice.

Acknowledgements

This work was partly supported by a grant from the Natural Science Foundation of Jiangsu Province under project ID BK20210889 and the start-up fund for doctoral research of Jiangsu University of Science and Technology. The authors also thank lecturer Feng-Ling Ren, School of Computer and Engineering, Xinjiang University of Finance & Economics, for helpful discussions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Chen, K., Guo, S., Lin, Y. and Ying, Z. (2010) Least Absolute Relative Error Estimation. Journal of the American Statistical Association, 105, 1104-1112.
https://doi.org/10.1198/jasa.2010.tm09307
[2] Khoshgoftaar, T.M., Bhattacharyya, B.B. and Richardson, G.D. (1992) Predicting Software Errors, during Development, Using Nonlinear Regression Models: A Comparative Study. IEEE Transactions on Reliability, 41, 390-395.
https://doi.org/10.1109/24.159804
[3] Narula, S.C. and Wellington, J.F. (1977) Prediction, Linear Regression and the Minimum Sum of Relative Errors. Technometrics, 19, 185-190.
https://doi.org/10.1080/00401706.1977.10489526
[4] Park, H. and Stefanski, L.A. (1998) Relative-Error Prediction. Statistics and Probability Letters, 40, 227-236. https://doi.org/10.1016/S0167-7152(98)00088-1
[5] Chen, K., Lin, Y., Wang, Z. and Ying, Z. (2016) Least Product Relative Error Estimation. Journal of Multivariate Analysis, 144, 91-98.
https://doi.org/10.1016/j.jmva.2015.10.017
[6] Zhang, Q. and Wang, Q. (2013) Local Least Absolute Relative Error Estimating Approach for Partially Linear Multiplicative Model. Statistica Sinica, 23, 1091-1116.
https://doi.org/10.5705/ss.2012.133
[7] Hirose, K. and Masuda, H. (2018) Robust Relative Error Estimation. Entropy, 20, Article 632. https://doi.org/10.3390/e20090632
[8] Zhang, J., Feng, Z. and Peng, H. (2018) Estimation and Hypothesis Test for Partial Linear Multiplicative Models. Computational Statistics & Data Analysis, 128, 87-103.
https://doi.org/10.1016/j.csda.2018.06.017
[9] Chen, Y. and Liu, H. (2021) A New Relative Error Estimation for Partially Linear Multiplicative Model. Communications in Statistics-Simulation and Computation, 52, 4962-4980.
[10] Liu, H. and Xia, X. (2018) Estimation and Empirical Likelihood for Single-Index Multiplicative Models. Journal of Statistical Planning and Inference, 193, 70-88.
https://doi.org/10.1016/j.jspi.2017.08.003
[11] Zhang, J., Zhu, J. and Feng, Z. (2019) Estimation and Hypothesis Test for Single-Index Multiplicative Models. Test, 28, 242-268.
https://doi.org/10.1007/s11749-018-0586-2
[12] Zhang, J., Cui, X. and Peng, H. (2020) Estimation and Hypothesis Test for Partial Linear Single-Index Multiplicative Models. Annals of the Institute of Statistical Mathematics, 72, 699-740. https://doi.org/10.1007/s10463-019-00706-6
[13] Hu, D.H. (2019) Local Least Product Relative Error Estimation for Varying Coefficient Multiplicative Regression Model. Acta Mathematicae Applicatae Sinica, English Series, 35, 274-286. https://doi.org/10.1007/s10255-018-0794-2
[14] Chen, Y., Liu, H. and Ma, J. (2022) Local Least Product Relative Error Estimation for Single-Index Varying-Coefficient Multiplicative Model with Positive Responses. Journal of Computational and Applied Mathematics, 415, Article ID: 114478.
https://doi.org/10.1016/j.cam.2022.114478
[15] Ming, H., Liu, H. and Yang, H. (2022) Least Product Relative Error Estimation for Identification in Multiplicative Additive Models. Journal of Computational and Applied Mathematics, 404, Article ID: 113886.
https://doi.org/10.1016/j.cam.2021.113886
[16] Kalogridis, I. and Van Aelst, S. (2021) M-Type Penalized Splines with Auxiliary Scale Estimation. Journal of Statistical Planning and Inference, 212, 97-113.
https://doi.org/10.1016/j.jspi.2020.09.004
[17] Kalogridis, I. and Van Aelst, S. (2021) Robust Penalized Spline Estimation with Difference Penalties. Econometrics and Statistics, 29, 169-188.
[18] Ruppert, D., Wand, M.P. and Carroll, R.J. (2003) Semiparametric Regression. Cambridge University Press, New York.
https://doi.org/10.1017/CBO9780511755453
[19] Schumaker, L. (2007) Spline Functions: Basic Theory. Cambridge University Press, New York. https://doi.org/10.1017/CBO9780511618994
[20] Wu, C. and Yu, Y. (2014) Partially Linear Modeling of Conditional Quantiles Using Penalized Splines. Computational Statistics & Data Analysis, 77, 170-187.
https://doi.org/10.1016/j.csda.2014.02.020
