Partial Time-Varying Coefficient Regression and Autoregressive Mixed Model

Hui Li^{1}, Zhiqiang Cao^{2*}

^{1}School of Statistics, Beijing Normal University, Beijing, China.

^{2}Department of Mathematics, College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China.

**DOI: **10.4236/ojs.2023.134026

Regression and autoregressive mixed models are classical tools for analyzing the relationship between a time series response variable and other covariates. The coefficients in traditional regression and autoregressive mixed models are constants; however, for complex data, the effects of covariates may change with time. In this article, we propose a partial time-varying coefficient regression and autoregressive mixed model and obtain locally weighted least-squares estimators of the coefficient functions via the local polynomial technique. The asymptotic normality of the estimators is derived under regularity conditions, and simulation studies are conducted to examine their finite-sample performance. Finally, we use real data on Lake Shasta inflow to illustrate the application of the proposed model.

Keywords

Regression and Autoregressive, Time Series, Partial Time-Varying Coefficient, Local Polynomial

Share and Cite:

Li, H. and Cao, Z. (2023) Partial Time-Varying Coefficient Regression and Autoregressive Mixed Model. *Open Journal of Statistics*, **13**, 514-533. doi: 10.4236/ojs.2023.134026.

1. Introduction

Regression and autoregressive mixed models are widely used to study the relationship between a time series response and its influencing factors. The model takes the form

${Y}_{i}={\alpha}_{0}+{\alpha}_{1}{X}_{1i}+\cdots +{\alpha}_{s}{X}_{si}+{\beta}_{1}{Y}_{i-1}+\cdots +{\beta}_{p}{Y}_{i-p}+{\epsilon}_{i},$ (1)

where the time series $\left\{{Y}_{i}\right\}$ , $i=p+1,\cdots ,n$ , is the dependent variable; the time series $\left\{{X}_{ji}\right\},j=1,\cdots ,s$ , represent *s* covariates with ${\alpha}_{j}$ as their regression coefficients; ${\alpha}_{0}$ is the intercept term; ${\beta}_{j},j=1,\cdots ,p$ , are *p* autoregressive coefficients satisfying the stationarity condition; and $\left\{{\epsilon}_{i}\right\}$ is a white noise series, which is independent of both the covariates $\left\{{X}_{ji}\right\}$ and the response variable $\left\{{Y}_{i}\right\}$ . Model (1) has been studied extensively in classical textbooks [1] [2] . It can also be regarded as a special case of an autoregressive model with exogenous variables or an autoregressive distributed lag model [3] , which has a wide range of applications in finance, econometrics and biomedicine.

The unknown parameters in model (1) are assumed to be constant, meaning that the effects of the covariates on the dependent variable do not change with observation time. In practice, however, complex nonlinear relationships between the dependent variable and the covariates are sometimes encountered. The varying-coefficient (or functional coefficient) time series model is an effective tool for handling data with such characteristics. In a varying-coefficient model, the parameters are not constants but functions of some variables, such as observation time or some lagged components of the time series. If the functional form of the coefficients is known, the model is called a parametric varying-coefficient model; otherwise, it is nonparametric. It is well known that a correctly specified parametric model fits better than a nonparametric one. However, if the coefficient functions are misspecified, the estimates can be seriously biased and even misleading; in this case, the nonparametric model has the advantage. Nonparametric varying-coefficient models in time series have been studied extensively; see [4] - [16] . In addition, there is a long history of Bayesian approaches to this issue, in which the time-variability of coefficients is (semi-)automatically shrunk toward constancy in various ways [17] [18] [19] .

Many scholars have extended the time-varying coefficient model (*i.e.*, a model whose coefficients are functions of time) proposed by Robinson [4] ; see [20] - [29] . Since the model of Robinson [4] is the basis of many studies, and the model proposed in this article extends the time-varying coefficient models of Robinson [4] and Cai [9] , we first give its expression:

${Y}_{i}={\beta}_{0}\left({t}_{i}\right)+{\beta}_{1}\left({t}_{i}\right){X}_{1i}+\cdots +{\beta}_{d}\left({t}_{i}\right){X}_{di}+{\epsilon}_{i},\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=1,\cdots ,n,$ (2)

where the time series $\left\{{Y}_{i}\right\}$ is the dependent variable; the time series $\left\{{X}_{ji}\right\},j=1,\cdots ,d$ , represent *d* covariates; ${\beta}_{j}(\cdot )$ , $j=0,\cdots ,d$ , are $d+1$ coefficient functions of observation time whose forms are completely unknown and satisfy a certain degree of smoothness; and $\left\{{\epsilon}_{i}\right\}$ is an i.i.d. (independent and identically distributed) series that is independent of $\left\{{X}_{ji}\right\}$ . Note that the argument of the coefficient functions is ${t}_{i}=i/n$ rather than *i*. This constraint is needed to derive the large-sample properties of nonparametric smooth estimators of the coefficients in a time-varying coefficient model; the specific reasons can be found in [4] [5] [9] .

To avoid the “curse of dimensionality”, many researchers have studied nonparametric estimation methods for varying-coefficient models. For example, Robinson [4] proposed a pseudo-local maximum likelihood estimation method to study the local estimates of functional coefficients in the time-varying coefficient time series model (2). Chen and Tsay [6] and Hastie and Tibshirani [30] proposed a kernel smooth nonparametric estimation method for functional coefficients in autoregressive models. Ramsay and Silverman [31] and Brumback and Rice [32] estimated the parameters of the varying-coefficient regression model by using the B-spline method. Cai *et al*. [8] studied local polynomial estimates for functional-coefficient regression models. Huang *et al*. [33] proposed a B-spline estimation for a varying-coefficient regression model based on repeated measures data. Aneiros-Perez and Vieu [34] used Nadaraya-Watson-type weights to estimate the semi-functional partial linear regression model (SFPLR model) and derived the corresponding asymptotic properties of the estimators. Li *et al*. [20] and Fan *et al*. [22] applied local polynomial expansion techniques to explore parameter estimation for time series models with partial time dependencies. Liu *et al*. [10] used a local linear approach to estimate the nonparametric trend and seasonal effect functions. Cai *et al*. [11] adopted a nonparametric generalized method of moments to estimate a new class of semiparametric dynamic panel data models. Chen *et al*. [23] applied a local linear method with cross-validation bandwidth selection to estimate the time-varying coefficients of the heterogeneous autoregressive model. By combining B-spline smoothing and the local linear method, Hu *et al*. [27] proposed a two-step estimation method for a time-varying additive model. Tu and Wang [14] applied adaptive kernel weighted least squares to estimate functional coefficient cointegration models. Li *et al*. [28] used classical kernel smoothing methods to estimate the coefficient functions in nonlinear cointegrating models. Karmakar *et al*. [29] applied local linear M-estimation to estimate the time-varying coefficients of a general class of nonstationary time series, among others.

For time series data with complex dependence between components, sometimes neither the constant coefficient regression and autoregressive mixed model (1) nor the time-varying coefficient regression model (2) fits the data well. Motivated by this, we introduce the idea of time-varying coefficients into the simple mixed model (1) and propose a partial time-varying coefficient regression and autoregressive mixed model. The proposed model can estimate not only the constant effects of some covariates on the dependent variable but also the non-constant effects of others, which greatly increases the flexibility and scope of models (1) and (2). To the best of our knowledge, explicitly introducing some lagged terms of the dependent variable into time-varying regression models as covariates has been studied less actively. However, for time series with strong dependence between components, introducing lagged terms of the dependent variable as part of the covariates can make full use of the data and improve the model fit. In this article, we study this problem following the approach of Cai [9] , although the parallel Bayesian stream could also be incorporated and put into perspective.

The remaining parts of this article are organized as follows. In Section 2, we introduce a partial time-varying coefficient regression and autoregressive mixed model, give the estimation method of model parameters and derive the large sample properties of the proposed estimators. We conduct a simulation study in Section 3 to examine the finite-sample performances of the proposed estimators. In Section 4, we use Shasta Lake inflow data to illustrate the application value of the model. Finally, we conclude with a discussion in Section 5.

2. Model and Estimation

2.1. Proposed Model

As mentioned in Section 1, many researchers have studied different types of varying-coefficient regression models, but partial time-varying coefficient regression and autoregressive mixed models have received little attention. To fill this gap and to study the effects on the dependent variable of both covariates and some of its own lagged components, we propose the following partial time-varying coefficient regression and autoregressive mixed model:

${Y}_{i}={\alpha}_{1}^{\text{T}}\left({t}_{i}\right){X}_{i,{s}_{1}}+{\alpha}_{2}^{\text{T}}{X}_{i,{s}_{2}}+{\beta}_{1}^{\text{T}}\left({t}_{i}\right){Y}_{i,{p}_{1}}+{\beta}_{2}^{\text{T}}{Y}_{i,{p}_{2}}+{\epsilon}_{i},$ (3)

where ${\alpha}_{1}(\cdot )={\left\{{\alpha}_{1,1}(\cdot ),\cdots ,{\alpha}_{1,{s}_{1}}(\cdot )\right\}}^{\text{T}}$ with the superscript T denoting transposition, ${t}_{i}=i/n$ with $i={p}_{1}+{p}_{2}+1,\cdots ,n$ , ${\alpha}_{2}={\left({\alpha}_{2,1},\cdots ,{\alpha}_{2,{s}_{2}}\right)}^{\text{T}}$ , ${\beta}_{1}(\cdot )={\left\{{\beta}_{1,1}(\cdot ),\cdots ,{\beta}_{1,{p}_{1}}(\cdot )\right\}}^{\text{T}}$ , ${\beta}_{2}={\left({\beta}_{2,1},\cdots ,{\beta}_{2,{p}_{2}}\right)}^{\text{T}}$ , ${X}_{i,{s}_{1}}={\left({X}_{i,1},\cdots ,{X}_{i,{s}_{1}}\right)}^{\text{T}}$ , ${X}_{i,{s}_{2}}={\left({X}_{i,\left({s}_{1}+1\right)},\cdots ,{X}_{i,\left({s}_{1}+{s}_{2}\right)}\right)}^{\text{T}}$ , ${Y}_{i,{p}_{1}}={\left({Y}_{i-{i}_{1}},\cdots ,{Y}_{i-{i}_{{p}_{1}}}\right)}^{\text{T}}$ , and ${Y}_{i,{p}_{2}}={\left({Y}_{i-{i}_{{p}_{1}+1}},\cdots ,{Y}_{i-{i}_{{p}_{1}+{p}_{2}}}\right)}^{\text{T}}$ , where ${i}_{1},\cdots ,{i}_{{p}_{1}+{p}_{2}}$ is a permutation of the time indices $1,\cdots ,{p}_{1}+{p}_{2}$ . For convenience of derivation, in this article we set ${i}_{1},\cdots ,{i}_{{p}_{1}+{p}_{2}}$ to be the natural order from 1 to $\left({p}_{1}+{p}_{2}\right)$ . Here $s={s}_{1}+{s}_{2}$ and $p={p}_{1}+{p}_{2}$ denote the number of regression covariates and the order of the autoregression, respectively, and $\left\{{\epsilon}_{i}\right\}$ is assumed to be a white noise series. Model (3) is an iterative formula for the sequence $\left\{{Y}_{i}\right\}$ . For the time series $\left\{{Y}_{i}\right\}$ to have a solution, certain constraints on the model parameters are required, such as the autoregressive coefficients satisfying the stationarity condition and $E\left({\epsilon}_{i}{Y}_{j}\right)=0$ for $j<i$ . Since the autoregressive coefficients in model (3) are not necessarily constant, stricter constraints are needed; for example, the time series in the model satisfies the *α*-mixing condition. The required conditions are given in Section 2.3.

Model (3) enhances the generalizability of time series regression models and contains many existing statistical models as special cases. For example, when ${\alpha}_{1}(\cdot )=0$ , ${\beta}_{1}(\cdot )=0$ , ${\beta}_{2}=0$ , model (3) is the traditional linear regression model; when ${\alpha}_{2}=0$ , ${\beta}_{1}(\cdot )=0$ , ${\beta}_{2}=0$ , model (3) reduces to the time-varying coefficient linear regression model; when ${\beta}_{1}(\cdot )=0$ , ${\beta}_{2}=0$ , model (3) is a partial time-varying coefficient linear regression model; when ${\alpha}_{1}(\cdot )=0$ , ${\alpha}_{2}=0$ , ${\beta}_{1}(\cdot )=0$ , model (3) is the autoregressive time series model; when ${\alpha}_{1}(\cdot )=0$ , ${\alpha}_{2}=0$ , ${\beta}_{2}=0$ , model (3) is the time-varying coefficient autoregressive time series model; when ${\alpha}_{1}(\cdot )=0$ , ${\beta}_{1}(\cdot )=0$ , model (3) becomes the traditional constant coefficient regression and autoregressive mixed model.

Model (3) is a semi-parametric model since some of its parameters are unknown functions, and we need to apply non-parametric estimation methods, such as local polynomial expansion methods [35] and spline techniques [36] . In this article, we combine the local polynomial expansion technique with the least squares estimation method to estimate unknown model parameters.

2.2. Estimation Method

First, we briefly introduce the local polynomial expansion method. The main idea is that a function of completely unknown form that satisfies certain smoothness conditions at a fixed point can be approximated, via Taylor expansion, by a polynomial around that point. The estimation of the function is thus transformed into a parametric estimation problem for the local polynomial coefficients. In this article, we present the estimation method using the first-order Taylor expansion; a higher-order expansion only increases the number of parameters to be estimated, with no essential difference in the algorithm. Specifically, the coefficient functions ${\alpha}_{1}(\cdot )$ and ${\beta}_{1}(\cdot )$ in model (3) are assumed to be twice continuously differentiable, with derivatives denoted by ${{\alpha}^{\prime}}_{1}(\cdot )$ , ${{\alpha}^{\u2033}}_{1}(\cdot )$ , ${{\beta}^{\prime}}_{1}(\cdot )$ and ${{\beta}^{\u2033}}_{1}(\cdot )$ , respectively. For any ${t}_{0}\in \left(0,1\right)$ , Taylor expansion gives

${\alpha}_{1}\left({t}_{i}\right)\approx {\alpha}_{1}\left({t}_{0}\right)+{{\alpha}^{\prime}}_{1}\left({t}_{0}\right)\left({t}_{i}-{t}_{0}\right),\text{\hspace{1em}}{\beta}_{1}\left({t}_{i}\right)\approx {\beta}_{1}\left({t}_{0}\right)+{{\beta}^{\prime}}_{1}\left({t}_{0}\right)\left({t}_{i}-{t}_{0}\right).$

Then, in the local neighborhood of ${t}_{0}$ , model (3) can be approximated as follows:

$\begin{array}{c}{Y}_{i}\approx {X}_{i,{s}_{1}}^{\text{T}}{\alpha}_{1}\left({t}_{0}\right)+{Y}_{i,{p}_{1}}^{\text{T}}{\beta}_{1}\left({t}_{0}\right)+{X}_{i,{s}_{2}}^{\text{T}}{\alpha}_{2}\left({t}_{0}\right)+{Y}_{i,{p}_{2}}^{\text{T}}{\beta}_{2}\left({t}_{0}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}+{X}_{i,{s}_{1}}^{\text{T}}{{\alpha}^{\prime}}_{1}\left({t}_{0}\right)\left({t}_{i}-{t}_{0}\right)+{Y}_{i,{p}_{1}}^{\text{T}}{{\beta}^{\prime}}_{1}\left({t}_{0}\right)\left({t}_{i}-{t}_{0}\right)+{\epsilon}_{i}\\ ={\gamma}^{\text{T}}{Z}_{i}+{\epsilon}_{i},\text{\hspace{1em}}i=p+1,\cdots ,n,\end{array}$

where

$\gamma =\gamma \left({t}_{0}\right)={\left\{{\alpha}_{1}^{\text{T}}\left({t}_{0}\right)\mathrm{,}{\beta}_{1}^{\text{T}}\left({t}_{0}\right)\mathrm{,}{\alpha}_{2}^{\text{T}}\left({t}_{0}\right)\mathrm{,}{\beta}_{2}^{\text{T}}\left({t}_{0}\right)\mathrm{,}{\left({{\alpha}^{\prime}}_{1}\left({t}_{0}\right)\right)}^{\text{T}}\mathrm{,}{\left({{\beta}^{\prime}}_{1}\left({t}_{0}\right)\right)}^{\text{T}}\right\}}^{\text{T}}\mathrm{,}$

and

${Z}_{i}={Z}_{i}\left({t}_{0}\right)={\left\{{X}_{i,{s}_{1}}^{\text{T}},{Y}_{i,{p}_{1}}^{\text{T}},{X}_{i,{s}_{2}}^{\text{T}},{Y}_{i,{p}_{2}}^{\text{T}},{X}_{i,{s}_{1}}^{\text{T}}\left({t}_{i}-{t}_{0}\right),{Y}_{i,{p}_{1}}^{\text{T}}\left({t}_{i}-{t}_{0}\right)\right\}}^{\text{T}}.$

We can obtain a locally weighted least-squares estimate of the coefficient functions in model (3) by minimizing the following weighted sum of squared errors:

${\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}{K}_{h}\left({t}_{i}-{t}_{0}\right){\left({Y}_{i}-{\gamma}^{\text{T}}{Z}_{i}\right)}^{2},$

where $K(\cdot )$ is the kernel function, ${K}_{h}(\cdot )=K(\cdot /h)/h$ , and *h* is the bandwidth. Using least squares theory, it is not difficult to obtain the estimator of $\gamma $ , denoted as $\stackrel{^}{\gamma}$ , which has the following expression:

$\stackrel{^}{\gamma}=\stackrel{^}{\gamma}\left({t}_{0}\right)={\left({Z}^{\text{T}}KZ\right)}^{-1}{Z}^{\text{T}}KY,$ (4)

where $K=\text{diag}\left\{{K}_{h}\left({t}_{p+1}-{t}_{0}\right),\cdots ,{K}_{h}\left({t}_{n}-{t}_{0}\right)\right\}$ , $Y={\left({Y}_{p+1},\cdots ,{Y}_{n}\right)}^{\text{T}}$ , and $Z=Z\left({t}_{0}\right)$ is an $\left(n-p\right)\times \left(p+s+{s}_{1}+{p}_{1}\right)$ matrix whose *k*th row is ${\left\{{Z}_{k+p}\left({t}_{0}\right)\right\}}^{\text{T}},k=1,\cdots ,n-p$ .

For the residual term ${\epsilon}_{i}$ in model (3), its variance can be estimated locally as:

${\stackrel{^}{\sigma}}^{2}={\stackrel{^}{\sigma}}^{2}\left({t}_{0}\right)=\frac{1}{n-p}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}{\left({Y}_{i}-{\stackrel{^}{\gamma}}^{\text{T}}{Z}_{i}\right)}^{2}.$
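The estimator (4) and the local residual variance above can be sketched in a few lines of NumPy. In the sketch below, the caller is assumed to have already stacked the time-varying-coefficient regressors (here, $X_{i,s_1}$ together with the lags $Y_{i,p_1}$) and the constant-coefficient regressors into two matrices; the function names and argument layout are illustrative, not from the paper.

```python
import numpy as np

def gauss_kernel(u):
    """Standard Gaussian kernel K(u)."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def local_wls(t0, h, t, y, x_tv, x_const):
    """Locally weighted least-squares estimator (4) at a point t0.

    t       : (n,) observation times t_i = i/n
    x_tv    : (n, q1) regressors with time-varying coefficients
    x_const : (n, q2) regressors with constant coefficients
    Returns gamma_hat(t0), ordered as (tv coefs, const coefs, tv derivatives),
    and the local residual variance estimate.
    """
    dt = (t - t0)[:, None]
    z = np.hstack([x_tv, x_const, x_tv * dt])    # rows Z_i(t0)
    w = gauss_kernel((t - t0) / h) / h           # K_h(t_i - t0)
    ztk = z.T * w                                # Z^T K
    gamma = np.linalg.solve(ztk @ z, ztk @ y)    # (Z^T K Z)^{-1} Z^T K Y
    resid = y - z @ gamma
    return gamma, np.mean(resid**2)
```

With constant true coefficients, the local fit at any interior $t_0$ should recover them closely; the derivative block of the returned vector is then near zero.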

2.3. Asymptotic Normality

For convenience of description, we introduce the following definitions and notation. Let $H$ be a diagonal matrix of order $\left(p+s+{p}_{1}+{s}_{1}\right)$ whose first $p+s$ main-diagonal elements are 1 and whose remaining ${p}_{1}+{s}_{1}$ main-diagonal elements are *h*. Let ${\mu}_{j}={\displaystyle \int {u}^{j}K\left(u\right)\text{d}u}$ and ${\nu}_{j}={\displaystyle \int {u}^{j}{K}^{2}\left(u\right)\text{d}u}$ for $j=0,1,2,3$ . Denote ${G}_{i}={\left({X}_{i,{s}_{1}}^{\text{T}},{Y}_{i,{p}_{1}}^{\text{T}},{X}_{i,{s}_{2}}^{\text{T}},{Y}_{i,{p}_{2}}^{\text{T}}\right)}^{\text{T}}$ , ${V}_{i}={\left({X}_{i,{s}_{1}}^{\text{T}},{Y}_{i,{p}_{1}}^{\text{T}}\right)}^{\text{T}}$ , $i=p+1,\cdots ,n$ , and ${\Omega}_{1}={\Omega}_{1}\left({t}_{0}\right)=E\left\{{G}_{i}{G}_{i}^{\text{T}}|t={t}_{0}\right\}$ , ${\Omega}_{2}={\Omega}_{2}\left({t}_{0}\right)=E\left\{{G}_{i}{V}_{i}^{\text{T}}|t={t}_{0}\right\}$ , ${\Omega}_{3}={\Omega}_{3}\left({t}_{0}\right)=E\left\{{V}_{i}{V}_{i}^{\text{T}}|t={t}_{0}\right\}$ . Define

$D=D\left({t}_{0}\right)=\left(\begin{array}{cc}{D}_{1}& {D}_{2}\\ \text{\hspace{0.05em}}{D}_{2}^{\text{T}}& {D}_{3}\end{array}\right)\mathrm{,}\text{\hspace{1em}}\Sigma =\Sigma \left({t}_{0}\right)=\left(\begin{array}{cc}{\Sigma}_{1}& {\Sigma}_{2}\\ {\Sigma}_{2}^{\text{T}}& {\Sigma}_{3}\end{array}\right)\mathrm{,}$

where ${D}_{i}={D}_{i}\left({t}_{0}\right)={\mu}_{i-1}{\Omega}_{i}\left({t}_{0}\right)\mathrm{,}i=\mathrm{1,2,3}$ , and the definition of ${\Sigma}_{i}={\Sigma}_{i}\left({t}_{0}\right)$ is provided in the Appendix.

Throughout the derivation, we impose the following conditions:

C1 The measurable functions ${\alpha}_{1,i}(\cdot )\left(i=1,\cdots ,{s}_{1}\right)$ and ${\beta}_{1,j}(\cdot )\left(j=1,\cdots ,{p}_{1}\right)$ are twice continuously differentiable on the interval $\left[0,1\right]$ .

C2 The kernel function $K(\cdot )$ is symmetric and bounded, with compact support on the interval $\left[-1,1\right]$ .

C3 $\left({Y}_{i},{X}_{i,{s}_{1}},{X}_{i,{s}_{2}},i=1,\cdots ,n\right)$ is a strictly stationary *α*-mixing process; there exists $\delta >0$ such that $E{\left|{Y}_{i}\right|}^{2\left(2+\delta \right)}<\infty $ , and the mixing coefficient satisfies $\alpha \left(i\right)=O\left({i}^{-\tau}\right)$ , where $\tau =\left(2+\delta \right)\left(1+\delta \right)/\delta $ .

C4 The bandwidth satisfies $n{h}^{1+4/\delta}\to \infty $ .

C5 For ${t}_{0}\in \left(\mathrm{0,1}\right)$ , the matrix $D\left({t}_{0}\right)$ is full rank.

C6 For ${t}_{0}\in \left(\mathrm{0,1}\right)$ , the matrix $\Sigma \left({t}_{0}\right)$ is positive definite.

Note that C1, C2 and C4 are the conditions that the local polynomial expansion method needs to satisfy, C3 is the condition required for the moment of random variables, and C5 and C6 are requirements to be satisfied by the large sample properties of the estimators. Conditions C1-C4 are similar to those in Robinson [4] and Cai [9] .

Theorem 1. *Under Conditions *C1-C6*, when $h\to 0$ and $nh\to \infty $ , we have*

$\sqrt{nh}H\left(\stackrel{^}{\gamma}-{\gamma}_{0}-\frac{{h}^{2}}{2}b\right)\stackrel{\mathcal{D}}{\to}N\left(0,{\Sigma}_{\gamma}\right),$

where $b=b\left({t}_{0}\right)=\left(\begin{array}{c}{\Omega}_{1}^{-1}{\Omega}_{2}{\mu}_{2}{\eta}^{\u2033}\left({t}_{0}\right)\\ 0\end{array}\right)$ with ${\eta}^{\u2033}\left({t}_{0}\right)=\left(\begin{array}{c}{{\alpha}^{\u2033}}_{1}\left({t}_{0}\right)\\ {{\beta}^{\u2033}}_{1}\left({t}_{0}\right)\end{array}\right)$ , and ${\Sigma}_{\gamma}={\Sigma}_{\gamma}\left({t}_{0}\right)={D}^{-1}\Sigma {D}^{-1}$ .

Therefore, we obtain

$\sqrt{nh}\left\{\stackrel{^}{\xi}\left({t}_{0}\right)-\xi \left({t}_{0}\right)-\frac{{h}^{2}}{2}{\Omega}_{1}^{-1}{\Omega}_{2}{\mu}_{2}{\eta}^{\u2033}\left({t}_{0}\right)\right\}\stackrel{\mathcal{D}}{\to}N\left(0,{\Sigma}_{\xi}\right),$

where ${\xi}_{0}=\xi \left({t}_{0}\right)={\left\{{\alpha}_{1}^{\text{T}}\left({t}_{0}\right)\mathrm{,}{\alpha}_{2}^{\text{T}}\left({t}_{0}\right)\mathrm{,}{\beta}_{1}^{\text{T}}\left({t}_{0}\right)\mathrm{,}{\beta}_{2}^{\text{T}}\left({t}_{0}\right)\right\}}^{\text{T}}$ and ${\Sigma}_{\xi}={\Sigma}_{\xi}\left({t}_{0}\right)={D}_{1}^{-1}{\Sigma}_{1}{D}_{1}^{-1}$ .

In practice, the variance-covariance matrix of $\left(\stackrel{^}{\xi}\left({t}_{0}\right)-{\xi}_{0}\right)$ can be calculated by using the following formula:

${\left(nh\right)}^{-1}{D}_{n\mathrm{,1}}^{-1}{\Sigma}_{n\mathrm{,1}}{D}_{n\mathrm{,1}}^{-1}\mathrm{,}$ (5)

where

${D}_{n\mathrm{,1}}={D}_{n\mathrm{,1}}\left({t}_{0}\right)=\frac{1}{n-p}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{G}_{i}{G}_{i}^{\text{T}}{K}_{h}\left({t}_{i}-{t}_{0}\right)\mathrm{,}$

${\Sigma}_{n\mathrm{,1}}=\frac{h}{\left(n-p\right){\nu}_{0}}\left({\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{^}{\epsilon}}_{i}{G}_{i}{K}_{h}\left({t}_{i}-{t}_{0}\right)\right){\left({\displaystyle \underset{j=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{^}{\epsilon}}_{j}{G}_{j}{K}_{h}\left({t}_{j}-{t}_{0}\right)\right)}^{\text{T}}\mathrm{,}$

and ${\stackrel{^}{\epsilon}}_{i},i=p+1,\cdots ,n$ , are residuals of model (3). The proof of Theorem 1 is provided in the Appendix.
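A direct transcription of the sandwich formula (5) is straightforward. In the sketch below, the design rows $G_i$, the residuals, and $\nu_0$ are assumed to be supplied by the caller (for the Gaussian kernel, $\nu_0 = 1/(2\sqrt{\pi})$), and $n$ is used in place of $n-p$ in the averages for brevity; the function name is illustrative.

```python
import numpy as np

def sandwich_cov(t0, h, t, g, resid, nu0):
    """Covariance estimate (5): (nh)^{-1} D_{n,1}^{-1} Sigma_{n,1} D_{n,1}^{-1}.

    g     : (n, q) rows G_i = (X_{i,s1}^T, Y_{i,p1}^T, X_{i,s2}^T, Y_{i,p2}^T)
    resid : (n,) residuals eps_hat_i of the fitted model
    nu0   : integral of K(u)^2 du for the chosen kernel
    """
    n = len(t)
    w = np.exp(-0.5 * ((t - t0) / h) ** 2) / (np.sqrt(2.0 * np.pi) * h)  # Gaussian K_h
    d = (g.T * w) @ g / n                 # D_{n,1}
    u = g.T @ (w * resid)                 # sum_i eps_hat_i G_i K_h(t_i - t0)
    s = (h / (n * nu0)) * np.outer(u, u)  # Sigma_{n,1}
    d_inv = np.linalg.inv(d)
    return d_inv @ s @ d_inv / (n * h)
```

Since $\Sigma_{n,1}$ is a scaled outer product, the returned matrix is symmetric and positive semi-definite by construction.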

For the constant coefficients in model (3), the corresponding estimators obtained from (4) are not $\sqrt{n}$ -consistent. To improve their convergence rate, we use the following formula (6) to obtain a global estimator, that is,

$\left(\begin{array}{c}{\stackrel{\u02dc}{\alpha}}_{2}\\ {\stackrel{\u02dc}{\beta}}_{2}\end{array}\right)={\displaystyle {\int}_{\mathcal{T}}\Gamma \left({t}_{0}\right)}\left(\begin{array}{c}{\stackrel{^}{\alpha}}_{2}\left({t}_{0}\right)\\ {\stackrel{^}{\beta}}_{2}\left({t}_{0}\right)\end{array}\right)\text{d}{t}_{0}\mathrm{,}$ (6)

where the weight matrix $\Gamma \left({t}_{0}\right)$ satisfies ${\displaystyle {\int}_{\mathcal{T}}\Gamma \left({t}_{0}\right)\text{d}{t}_{0}}=I$ , with $I$ the identity matrix of order $\left({s}_{2}+{p}_{2}\right)$ . Typically, we choose $\Gamma \left({t}_{0}\right)$ to be the standardized inverse covariance matrix [37] of ${\stackrel{^}{\alpha}}_{2}\left({t}_{0}\right)$ and ${\stackrel{^}{\beta}}_{2}\left({t}_{0}\right)$ , which can be obtained from (5). Then, the finite sample variances of the global estimators ${\left({\stackrel{\u02dc}{\alpha}}_{2}^{\text{T}},{\stackrel{\u02dc}{\beta}}_{2}^{\text{T}}\right)}^{\text{T}}$ can be calculated by the following formula (7):

$\frac{1}{n-p}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\Gamma \left({t}_{i}\right)A{D}_{n}^{-1}\left({t}_{i}\right){\Sigma}_{n}\left({t}_{i}\right){D}_{n}^{-1}\left({t}_{i}\right){A}^{\text{T}}{\Gamma}^{\text{T}}\left({t}_{i}\right),$ (7)

where $A$ is a $\left({s}_{2}+{p}_{2}\right)\times \left(p+s+{s}_{1}+{p}_{1}\right)$ matrix with ${a}_{ij}$ as the $\left(i\mathrm{,}j\right)$ th element of $A$ ; furthermore, ${a}_{ij}=1$ for $i=\mathrm{1,}\cdots \mathrm{,}{s}_{2}+{p}_{2}$ , $j=i+{s}_{1}+{p}_{1}$ ; otherwise, ${a}_{ij}=0$ . Definitions of ${D}_{n}$ and ${\Sigma}_{n}$ can be seen in the Appendix.
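On a finite grid of points $t_k$, the integral in (6) becomes a weighted average; choosing $\Gamma(t_k)$ proportional to the inverse of the local covariance, normalized so the weights sum to the identity, gives the following sketch. The names and the grid-based discretization are illustrative, not prescribed by the paper.

```python
import numpy as np

def global_const_estimator(local_ests, local_covs):
    """Global estimator (6) on a grid: precision-weighted average of the
    local estimates of the constant coefficients (alpha_2, beta_2).

    local_ests : list of (q,) local estimates at grid points t_k
    local_covs : list of (q, q) covariance estimates from (5)
    The weights Gamma(t_k) = (sum_j C_j^{-1})^{-1} C_k^{-1} sum to the
    identity, the discrete analogue of the constraint on Gamma in (6).
    """
    precisions = [np.linalg.inv(c) for c in local_covs]
    total_inv = np.linalg.inv(np.sum(precisions, axis=0))
    return sum(total_inv @ p @ x for p, x in zip(precisions, local_ests))
```

In the scalar case this reduces to the familiar inverse-variance-weighted mean, which is why averaging over the grid improves on any single local estimate.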

2.4. Selecting the Best Bandwidth and Order

When using estimator (4) to obtain $\stackrel{^}{\gamma}$ , the bandwidth *h* and order *p* ( $p={p}_{1}+{p}_{2}$ ) must be determined. Cheng *et al*. [26] proposed a Bayesian approach to bandwidth selection for local constant estimators of time-varying coefficients, and Chen *et al*. [23] applied cross-validation (CV) bandwidth selection to estimate the time-varying coefficient heterogeneous autoregressive model. Inspired by the average mean squared (AMS) criterion proposed by Cai *et al*. [8] , we slightly modify this criterion to select the best bandwidth *h* and the optimal order *p* of model (3) simultaneously. The procedure is as follows. First, we divide the sample data into *m* groups and denote the sample size of the *q*th group by ${n}_{q}$ , $q=1,\cdots ,m$ . Second, for given values of *h* and *p*, and based on the data with the *q*th group removed, we estimate $\gamma $ by (4), denoted as ${\stackrel{^}{\gamma}}_{q}$ . Third, we calculate the estimated mean squared error for the *q*th group, denoted as ${\text{AMS}}_{q}\left(h,p\right)$ , and sum over $q=1,\cdots ,m$ to obtain the total mean squared error. Finally, the optimal *h* (*i.e.*, ${h}_{\text{opt}}$ ) and *p* (*i.e.*, ${p}_{\text{opt}}$ ) minimize the AMS error:

${\left({h}_{\text{opt}},{p}_{\text{opt}}\right)}^{\text{T}}=\underset{h,p}{\mathrm{arg}\mathrm{min}}{\displaystyle \underset{q=1}{\overset{m}{\sum}}}{\text{AMS}}_{q}\left(h,p\right),$

where for $q=\mathrm{1,}\cdots \mathrm{,}m$ ,

${\text{AMS}}_{q}\left(h,p\right)=\frac{1}{{n}_{q}-p}{\displaystyle \underset{i=\left(q-1\right){n}_{q}+p+1}{\overset{q{n}_{q}}{\sum}}}{\left\{{Y}_{i}-{\stackrel{^}{\gamma}}_{q}^{\text{T}}{Z}_{i}\right\}}^{2}.$

Note that in the second step, instead of using all covariates in ${X}_{i,{s}_{1}}$ and ${X}_{i,{s}_{2}}$ , we can also use subsets of ${X}_{i,{s}_{1}}$ and ${X}_{i,{s}_{2}}$ in ${Z}_{i}$ when estimating $\gamma $ by (4). Different subsets of the covariates $X$ may yield different best combinations of *h* and *p*; we take the combination of *h* and *p* corresponding to the minimum AMS value over all subsets of interest as the optimal bandwidth and autoregressive order. Therefore, the AMS criterion can also be used to conduct variable selection for model (3).

3. Simulation Studies

We perform simulation studies to examine the finite-sample performances of the proposed estimators and consider the following partial time-varying coefficient regression and autoregressive mixed model:

${Y}_{i}={\alpha}_{1}\left({t}_{i}\right){X}_{i,1}+{\alpha}_{2}{X}_{i,2}+{\beta}_{1}\left({t}_{i}\right){Y}_{i-1}+{\beta}_{2}{Y}_{i-2}+{\epsilon}_{i},\text{\hspace{1em}}i=3,4,\cdots ,n,$ (8)

where ${\alpha}_{1}\left({t}_{i}\right)=\mathrm{sin}\left({t}_{i}\right)$ , ${\alpha}_{2}=0.3$ , ${\beta}_{1}\left({t}_{i}\right)=0.5\mathrm{exp}\left(-{t}_{i}\right)$ , and ${\beta}_{2}=-0.1$ ; ${X}_{i,1}$ follows an autoregressive moving average model with $p=1$ and $q=1$ , *i.e.*, ARMA(1,1); and ${X}_{i,2}$ is an autoregressive (AR) model of order 1, *i.e.*, AR(1). Specifically, ${X}_{i,1}=0.1{X}_{i-1,1}+{e}_{i,1}+0.3{e}_{i-1,1}$ and ${X}_{i,2}=0.2{X}_{i-1,2}+{e}_{i,2}$ . Both ${e}_{i,1}$ and ${e}_{i,2}$ are standard white noise series with mean 0 and variance 1, *i.e.*, ${e}_{i,1}\sim WN\left(0,1\right)$ and ${e}_{i,2}\sim WN\left(0,1\right)$ . Also, ${\epsilon}_{i}\sim WN\left(0,0.04\right)$ and is independent of both ${X}_{i,1}$ and ${X}_{i,2}$ . We run 500 Monte Carlo simulations and examine the averages of the parameter estimates of model (8) at 9 grid points ${t}_{0}=0.1,\cdots ,0.9$ in the time interval $\left[0,1\right]$ . We apply the Gaussian kernel with standard deviation 2 to the estimation of $\gamma $ . Once $\stackrel{^}{\gamma}$ is obtained by (4), ${\stackrel{^}{\xi}}_{0}$ is immediately known, and its variance estimate can be calculated by (5). For the constant parameters ${\alpha}_{2}$ and ${\beta}_{2}$ , we obtain their global estimators ${\stackrel{\u02dc}{\alpha}}_{2}$ and ${\stackrel{\u02dc}{\beta}}_{2}$ by formula (6) and compute the corresponding variance estimates by (7). For each simulation, we consider sample sizes $n=300$ and $n=800$ . Based on the simulation results under both sample sizes, we find that the estimates perform well when the bandwidth is between 0.02 and 0.08; thus, three bandwidths are chosen, that is, $h=0.03,0.05$ and 0.07. Finally, for each *h* and *n*, we report the following quantities for ${\stackrel{^}{\xi}}_{0}$ in Table 1 & Table 2: the Monte Carlo average of the estimates, the Monte Carlo standard deviation of the estimates (ESD), the Monte Carlo average of the estimated standard errors (ASE) and the coverage probability of nominal 95% confidence intervals (CP).

Table 1. Estimation results for time-varying coefficient ${\alpha}_{1}\left(t\right)=\mathrm{sin}\left(t\right)$ and ${\beta}_{1}\left(t\right)=0.5{\text{e}}^{-t}$ under different scenarios.

Table 1 shows that the estimators of ${\alpha}_{1}(\cdot )$ and ${\beta}_{1}(\cdot )$ are both close to their true values. The ASEs are close to the ESDs, which demonstrates the good performance of the variance estimation based on Theorem 1, and the 95% confidence interval coverage probabilities are reasonably accurate, matching the nominal level. According to the estimation results in Table 2, the biases of ${\stackrel{\u02dc}{\alpha}}_{2}$ and ${\stackrel{\u02dc}{\beta}}_{2}$ are negligible, the ASEs remain close to the ESDs, and the CPs of both estimators are close to 95%, indicating good variance estimation for the global estimators ${\stackrel{\u02dc}{\alpha}}_{2}$ and ${\stackrel{\u02dc}{\beta}}_{2}$ . Furthermore, when the bandwidth *h* increases from 0.03 to 0.07, as expected, the biases of ${\stackrel{^}{\alpha}}_{1}(\cdot )$ and ${\stackrel{^}{\beta}}_{1}(\cdot )$ tend to be larger, and the corresponding ASEs and ESDs tend to be smaller. When the sample size *n* increases from 300 to 800, the ASEs and ESDs of all estimators decrease, which indicates that increasing the sample size improves the proposed estimators.

Figure 1 & Figure 2 present the true and estimated time-varying coefficients

Table 2. Estimation results for constant coefficients ${\alpha}_{2}=0.3$ and ${\beta}_{2}=-0.1$ under different scenarios.

Figure 1. Estimated curves of ${\alpha}_{1}\left(t\right)=\mathrm{sin}\left(t\right)$ with bandwidth $h=0.05$ . (a) Sample size $n=300$ ; (b) sample size $n=800$ . The solid lines are the true functions of ${\alpha}_{1}\left(t\right)$ , the dashed lines are the estimates of ${\alpha}_{1}\left(t\right)$ , and the dash-dotted lines are the 95% confidence intervals.

Figure 2. Estimated curves of ${\beta}_{1}\left(t\right)=0.5{\text{e}}^{-t}$ with bandwidth $h=0.05$ . (a) Sample size $n=300$ ; (b) sample size $n=800$ . The solid lines are the true functions of ${\beta}_{1}\left(t\right)$ , the dashed lines are the estimates of ${\beta}_{1}\left(t\right)$ , and the dash-dotted lines are the 95% confidence intervals.

of model (8), together with 95% pointwise confidence intervals. The estimated curves of ${\stackrel{^}{\alpha}}_{1}(\cdot )$ and ${\stackrel{^}{\beta}}_{1}(\cdot )$ lie very close to the true curves, which again confirms the good performance of the proposed estimators.
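The local weighted least-squares step behind these estimates can be illustrated in a simplified setting. The sketch below assumes a single time-varying coefficient model with true coefficient sin(t), not the full mixed model with constant parts, and uses an Epanechnikov kernel as an illustrative choice:

```python
import numpy as np

# Epanechnikov kernel K(u) = 0.75(1 - u^2) on [-1, 1]
def epanechnikov(u):
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

# Local linear estimate of a(t0) in the toy model Y_i = a(t_i) X_i + e_i:
# expand a(t) ~ a(t0) + a'(t0)(t - t0) and solve kernel-weighted normal
# equations at the point t0.
def local_linear(t, x, y, t0, h):
    u = (t - t0) / h
    w = epanechnikov(u) / h                    # weights K_h(t_i - t0)
    D = np.column_stack([x, x * (t - t0)])     # design [X_i, X_i (t_i - t0)]
    Dw = D * w[:, None]
    beta = np.linalg.solve(Dw.T @ D, Dw.T @ y)
    return beta[0]                             # local estimate of a(t0)

rng = np.random.default_rng(2)
n = 2000
t = np.linspace(0, 1, n)
x = rng.normal(1.0, 0.5, n)
y = np.sin(t) * x + 0.05 * rng.normal(size=n)  # true coefficient a(t) = sin(t)
est = local_linear(t, x, y, t0=0.5, h=0.08)
# est should be close to sin(0.5)
```

Repeating this over a grid of points t0 traces out the estimated coefficient curve shown in the figures.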

4. Real Data Analysis

In this section, we use the Lake Shasta inflow data [38] to illustrate the practical value of our proposed model. The data include 454 monthly measurements of several climatic variables: air temperature (Temp), dew point (DewPt), cloud cover (CldCvr), wind speed (WndSpd), precipitation (Precip), and inflow (Inflow) at Lake Shasta, California, USA. We are interested in building models to predict Lake Shasta inflow from these climate variables. We treat the lake inflow (Inflow) as the time series dependent variable; the Box-Ljung autocorrelation test and the Lagrange multiplier test show that it exhibits both autocorrelation and heteroscedasticity. Following the suggestion of Shumway and Stoffer [38] , it is better to model this variable after a logarithmic transformation. Therefore, we set ${Y}_{i}=\mathrm{log}\left({\text{Inflow}}_{i}\right)\mathrm{,}i=\mathrm{1,}\cdots \mathrm{,454}$ and then use the proposed partial time-varying coefficient regression and autoregressive mixed model (3) to model the relationship between inflow and the other five climate variables.
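The autocorrelation check mentioned above can be illustrated with a minimal numpy implementation of the Box-Ljung portmanteau statistic; the series below are simulated for illustration rather than taken from the inflow data:

```python
import numpy as np

# Box-Ljung statistic Q = n(n+2) * sum_{k=1}^{m} rho_k^2 / (n - k), compared
# with a chi-square critical value (18.307 is the 95th percentile of the
# chi-square distribution with 10 degrees of freedom).
def box_ljung(x, lags=10):
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    denom = np.sum(x ** 2)
    q = 0.0
    for k in range(1, lags + 1):
        rho_k = np.sum(x[k:] * x[:-k]) / denom   # lag-k sample autocorrelation
        q += rho_k ** 2 / (n - k)
    return n * (n + 2) * q

rng = np.random.default_rng(1)
wn = rng.normal(size=500)        # white noise: no autocorrelation
ar = np.zeros(500)               # AR(1) series: strong autocorrelation
for i in range(1, 500):
    ar[i] = 0.7 * ar[i - 1] + rng.normal()

CHI2_95_DF10 = 18.307            # chi-square(10) 95% critical value
# box_ljung(ar) greatly exceeds the critical value; box_ljung(wn) typically does not
```

In practice a standard statistical package would be used for these diagnostics, but the computation is exactly this comparison of Q against a chi-square quantile.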

Before using (4) to obtain estimators, we need to determine the optimal bandwidth *h*, the autoregressive order *p*, and which covariates have time-varying coefficient effects versus constant coefficient effects. In other words, we need to conduct variable selection. As mentioned in Section 2.4, the AMS criterion can not only determine the optimal bandwidth *h* and the autoregressive order *p*, but also conduct variable selection. In this analysis, when choosing a combination of covariates, *i.e.*, which variables should be put into
${X}_{i\mathrm{,}{s}_{1}}$ and which variables are put into
${X}_{i\mathrm{,}{s}_{2}}$ , as well as which lag terms of the dependent variable are put into
${Y}_{i\mathrm{,}{p}_{1}}$ and
${Y}_{i\mathrm{,}{p}_{2}}$ , respectively, one rule we follow is that the vectors
${X}_{i\mathrm{,}{s}_{1}}$ and
${Y}_{i\mathrm{,}{p}_{1}}$ cannot be empty simultaneously; otherwise, model (3) reduces to the traditional regression and autoregressive mixed model. The autoregressive order *p* starts at 1 and increases sequentially, with the rough criterion that the model residuals exhibit no autocorrelation or heteroscedasticity. When *p* is given, we determine the optimal combination of the bandwidth *h* and the covariates according to the value of AMS. If the estimator of a constant-effect covariate selected by the AMS is insignificant, we delete this covariate and recompute the AMS. This strategy guarantees that all covariates with constant coefficients selected by the AMS are significant.
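The selection strategy above amounts to a grid search. The following is a hypothetical skeleton of that loop, in which `compute_ams` is a stand-in for the multifold cross-validation AMS criterion of Cai *et al*. [8]; model fitting is assumed to happen inside it, and the rules about non-empty time-varying parts and significant constant coefficients are omitted for brevity:

```python
import itertools

# Grid search over the autoregressive order p, the bandwidth h, and the
# split of candidate covariates into a time-varying group (tv) and a
# constant-coefficient group (const); keep the combination minimizing AMS.
def select_model(covariates, p_values, h_values, compute_ams):
    best = None
    for p in p_values:
        for h in h_values:
            for r in range(len(covariates) + 1):
                for tv in itertools.combinations(covariates, r):
                    const = tuple(c for c in covariates if c not in tv)
                    ams = compute_ams(p, h, tv, const)
                    if best is None or ams < best[0]:
                        best = (ams, p, h, tv, const)
    return best

# toy criterion, minimized at p = 4, h = 0.15, tv = ('CldCvr', 'WndSpd'),
# mimicking the combination selected in the text
def toy_ams(p, h, tv, const):
    return abs(p - 4) + abs(h - 0.15) + (0.0 if tv == ('CldCvr', 'WndSpd') else 1.0)

best = select_model(['CldCvr', 'WndSpd', 'Precip'], [1, 2, 3, 4],
                    [0.10, 0.15, 0.20], toy_ams)
```

The toy criterion is only there to exercise the search; the real AMS requires refitting model (3) for every combination.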

The procedure of using the proposed model to analyze real data is as follows: First, we let Temp, DewPt, CldCvr, WndSpd and Precip be five candidate covariates. When calculating AMS, according to the suggestion of Cai *et al*. [8] , we choose
$m=4$ and
${n}_{q}=\left[0.1n\right]$ , where [ ] denotes a rounding function, and the optimal bandwidth satisfies
$h\propto {n}^{-1/5}$ . Since the sample size of the Shasta Lake inflow data is 454, we set the selection range of the bandwidth *h* from 0.08 to 0.50. We also apply the rules mentioned in the previous paragraph: the
${X}_{i\mathrm{,}{s}_{1}}$ and
${Y}_{i\mathrm{,}{p}_{1}}$ cannot be empty simultaneously, and the estimators of the covariates selected with constant coefficients must be significant. We then consider models (3) containing 1, 2, 3, 4 or 5 of the candidate covariates and calculate the corresponding AMS values for each given *p* and *h*. By comparing the AMS over all combinations, we find that the minimum value of AMS occurs at
$h=0.15$ ,
$p=4$ , and the covariate combination is
${X}_{i\mathrm{,}{s}_{1}}={\left({\text{CldCvr}}_{i}\mathrm{,}{\text{WndSpd}}_{i}\right)}^{\text{T}}$ ,
${X}_{i\mathrm{,}{s}_{2}}={\text{Precip}}_{i}$ ,
${Y}_{i\mathrm{,}{p}_{1}}={\left(\mathrm{log}\left({\text{Inflow}}_{i-1}\right)\mathrm{,}\mathrm{log}\left({\text{Inflow}}_{i-2}\right)\mathrm{,}\mathrm{log}\left({\text{Inflow}}_{i-3}\right)\right)}^{\text{T}}$ and
${Y}_{i\mathrm{,}{p}_{2}}=\mathrm{log}\left({\text{Inflow}}_{i-4}\right)$ . Thus,
$\left({s}_{1}\mathrm{,}{s}_{2}\mathrm{,}{p}_{1}\mathrm{,}{p}_{2}\right)=\left(\mathrm{2,1,3,1}\right)$ . Figure 3(a) shows how the AMS changes with different values of *h* under this combination in model (3).

Therefore, the final model to be estimated is:

$\begin{array}{c}{Y}_{i}={\alpha}_{\mathrm{1,1}}\left({t}_{i}\right){\text{CldCvr}}_{i}+{\alpha}_{\mathrm{1,2}}\left({t}_{i}\right){\text{WndSpd}}_{i}+{\alpha}_{\mathrm{2,1}}{\text{Precip}}_{i}+{\beta}_{\mathrm{1,1}}\left({t}_{i}\right){Y}_{i-1}\\ \text{\hspace{0.17em}}\text{\hspace{0.05em}}+{\beta}_{\mathrm{1,2}}\left({t}_{i}\right){Y}_{i-2}+{\beta}_{\mathrm{1,3}}\left({t}_{i}\right){Y}_{i-3}+{\beta}_{\mathrm{2,1}}{Y}_{i-4}+{\epsilon}_{i}\mathrm{,}\end{array}$ (9)

where ${Y}_{i}=\mathrm{log}\left({\text{Inflow}}_{i}\right)$ . Once ${X}_{i\mathrm{,}{s}_{1}}$ , ${X}_{i\mathrm{,}{s}_{2}}$ , ${Y}_{i\mathrm{,}{p}_{1}}$ and ${Y}_{i\mathrm{,}{p}_{2}}$ are determined, we use (4) to obtain the local estimators and (6) to obtain the global estimators of the constant coefficients. To evaluate the prediction performance of model (9), we use the first 451 observations to estimate the unknown parameters and obtain ${\stackrel{^}{\epsilon}}_{i}~WN\left(\mathrm{0,0.049}\right)$ . The estimated time-varying coefficients ${\stackrel{^}{\alpha}}_{\mathrm{1,1}}(\cdot )$ , ${\stackrel{^}{\alpha}}_{\mathrm{1,2}}(\cdot )$ , ${\stackrel{^}{\beta}}_{\mathrm{1,1}}(\cdot )$ , ${\stackrel{^}{\beta}}_{\mathrm{1,2}}(\cdot )$ and ${\stackrel{^}{\beta}}_{\mathrm{1,3}}(\cdot )$ are presented in Figure 3(b), and the estimated constant coefficients ${\stackrel{\u02dc}{\alpha}}_{\mathrm{2,1}}$ and ${\stackrel{\u02dc}{\beta}}_{\mathrm{2,1}}$ are reported in Table 3. Interestingly, the time-varying coefficient estimates suggest that the prediction

Figure 3. (a) AMS results under different bandwidths *h* for model (9); (b) time-varying coefficient estimation results of model (9).

Table 3. Constant coefficient estimation results of model (9).

effect of lag order 2 of
${Y}_{i}$ (*i.e.*,
${Y}_{i-2}$ ) increases with time
${t}_{i}$ while the lag order 1 of
${Y}_{i}$ (*i.e.*,
${Y}_{i-1}$ ) shows no such pattern, and the effect of the lag order 3 of
${Y}_{i}$ (*i.e.*,
${Y}_{i-3}$ ) is close to zero. The time-varying effects from both
${\text{CldCvr}}_{i}$ and
${\text{WndSpd}}_{i}$ decrease with
${t}_{i}$ .

The last three sample values are used to calculate the relative prediction error (RPE) defined as:

$\text{RPE}=\left|\frac{{\stackrel{^}{Y}}_{i}-{Y}_{i}}{{Y}_{i}}\right|\mathrm{,}$

where ${\stackrel{^}{Y}}_{i}=\mathrm{log}\left({\stackrel{^}{\text{Inflow}}}_{i}\right)\left(i=\mathrm{452,453,454}\right)$ are predicted by model (9).
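The RPE formula translates directly into code; the predictions and observations below are placeholders, not the paper's values:

```python
# Relative prediction error |(Y_hat - Y) / Y| for each forward step,
# averaged over the prediction horizon.
def rpe(y_hat, y):
    return abs((y_hat - y) / y)

preds = [5.1, 4.8, 5.3]   # hypothetical predicted log-inflows
obs = [5.0, 5.0, 5.2]     # hypothetical observed log-inflows
errors = [rpe(p, o) for p, o in zip(preds, obs)]
avg_rpe = sum(errors) / len(errors)
```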

Table 4 shows the relative prediction errors of model (9) for the three forward steps. The average RPE of model (9) is only 2.6%. For comparison, Table 4 also shows the 3-step-ahead prediction results for the lake inflow based on the FAR(p) model. According to the AMS criterion of Cai *et al*. [8] , the optimal fitted FAR(p) model is obtained when
$h=0.17$ ,
$p=3$ and
$d=1$ , that is,

${Y}_{i}={\stackrel{^}{\alpha}}_{0}\left({Y}_{i-1}\right)+{\stackrel{^}{\alpha}}_{1}\left({Y}_{i-1}\right){Y}_{i-1}+{\stackrel{^}{\alpha}}_{2}\left({Y}_{i-1}\right){Y}_{i-2}+{\stackrel{^}{\alpha}}_{3}\left({Y}_{i-1}\right){Y}_{i-3}\mathrm{.}$ (10)

In addition, we also apply the classical regression and autoregressive model to analyze the lake inflow and find that the residuals exhibit heteroskedasticity. Therefore, we combine the AR(p) model with the autoregressive conditional

Table 4. 3-step forward relative prediction error of $\mathrm{log}\left({\text{Inflow}}_{t}\right)$ under different models.

heteroskedasticity (ARCH) model, *i.e.*, AR(p)-ARCH(q) model, to fit the residuals. The final optimal AR(p)-ARCH(q) model is:

$\begin{array}{c}{Y}_{i}=1.335+0.460{Y}_{i-1}+0.159{Y}_{i-2}+0.415{\text{CldCvr}}_{i}\\ \text{\hspace{0.17em}}\text{\hspace{0.05em}}+0.212{\text{WndSpd}}_{i}+0.002{\text{Precip}}_{i}+{R}_{i}\mathrm{,}\end{array}$ (11)

${R}_{i}=-0.093{R}_{i-1}-0.088{R}_{i-2}-0.161{R}_{i-3}+{\delta}_{i},$

${\delta}_{i}={\varphi}_{i}{\eta}_{i}\mathrm{,}\text{\hspace{1em}}{\eta}_{i}~WN\left(\mathrm{0,1}\right)\mathrm{,}$

${\varphi}_{i}^{2}=0.040+0.078{\delta}_{i-1}^{2}.$
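The fitted residual dynamics can be simulated directly from the equations above. The sketch below implements the AR(3) recursion with ARCH(1) innovations, where (as is standard for ARCH(1)) the conditional variance depends on the previous shock:

```python
import numpy as np

# Simulate the residual process of the fitted AR(3)-ARCH(1) model: R_i
# follows the AR(3) recursion, and the innovations delta_i = phi_i * eta_i
# have conditional variance phi_i^2 = 0.040 + 0.078 * delta_{i-1}^2.
def simulate_ar_arch(n, seed=0):
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    delta_prev = 0.0
    for i in range(3, n):
        phi2 = 0.040 + 0.078 * delta_prev ** 2   # conditional variance
        delta = np.sqrt(phi2) * rng.normal()     # delta_i = phi_i * eta_i
        r[i] = -0.093 * r[i-1] - 0.088 * r[i-2] - 0.161 * r[i-3] + delta
        delta_prev = delta
    return r

res = simulate_ar_arch(5000)
# with these coefficients the unconditional variance of delta_i is
# 0.040 / (1 - 0.078), roughly 0.043
```

Since the ARCH coefficient 0.078 is well below 1, the simulated process is stationary with small, mildly clustered residual variance, consistent with the fitted values.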

By comparing the prediction results of the three models in Table 4, our proposed model (9) performs better than the other two. Compared to model (10), model (9) takes into account the effects of covariates on lake inflow; compared to model (11), model (9) exploits the advantages of time-varying coefficients. Since our proposed model combines the strengths of the FAR(p) model and the classical regression and autoregressive mixed model, it has advantages in modeling time series data with complex correlations between sample components.

5. Conclusions and Future Works

In this article, we propose a partial time-varying coefficient regression and autoregressive mixed model, which can be regarded as an extension of both the traditional regression and autoregressive mixed model and the time-varying coefficient regression model. The proposed model is very flexible: it can handle both complex correlations between time series components and the effects of other covariates, so it can improve model fitting when building relationships between complex time series and covariates. We apply the local polynomial expansion technique and the least-squares estimation method to obtain local estimates of the coefficient functions, and we also propose a global estimation algorithm for the constant coefficients. In addition, we derive the asymptotic normality of the proposed estimators and conduct simulation studies to examine their finite-sample performances, finding that the proposed estimators perform well. Finally, we apply the model to analyze the relationship between the water inflow of Shasta Lake and five climatic variables.

As mentioned in Section 1, the local polynomial technique is widely used to estimate varying-coefficient models, including time-varying coefficient time series models. One advantage of this method is its solid theoretical foundation, which allows us to derive explicit biases of the corresponding estimators. Furthermore, the local polynomial method is easy to implement. Based on the local estimators, we can obtain global estimators for the constant coefficients by using formula (6). Note that the local polynomial method requires selecting the best bandwidth. However, if we apply other smoothing methods, such as the spline approach, to estimate the time-varying coefficients in (3), we also need to determine tuning parameters, e.g., knots and degree.

Although we can apply the local polynomial expansion technique and the least-squares estimation method to obtain the local estimators and their variances, how to test the estimated time-varying coefficients is another important problem. For example, the AMS criterion suggests that model (9) includes five time-varying coefficients, namely ${\alpha}_{\mathrm{1,1}}(\cdot )$ , ${\alpha}_{\mathrm{1,2}}(\cdot )$ , ${\beta}_{\mathrm{1,1}}(\cdot )$ , ${\beta}_{\mathrm{1,2}}(\cdot )$ and ${\beta}_{\mathrm{1,3}}(\cdot )$ , but Figure 3(b) shows that some estimated curves appear flat across time. This raises the question of whether those estimated time-varying coefficients can be treated as constants. One way to assess this is to construct confidence bands to see whether a time-varying coefficient is a genuine function of time or a constant; this may be realized by obtaining weak Bahadur representations of the estimators [29] . Another way is to develop a more objective, theoretically justified test following the simulation-assisted hypothesis testing procedure proposed by Zhang and Wu [21] . This will be our future research work.

Acknowledgements

Dr. Cao’s research is supported by Natural Science Foundation of Top Talent of SZTU (grant no. GDRC202135).

Appendix: Proof of Theorem 1

We first introduce some notation for convenience. Denote

${D}_{n}={D}_{n}\left({t}_{0}\right)=\left(\begin{array}{cc}{D}_{n\mathrm{,1}}& {D}_{n\mathrm{,2}}\\ {D}_{n\mathrm{,2}}^{\text{T}}& {D}_{n\mathrm{,3}}\end{array}\right)\mathrm{,}\text{\hspace{1em}}{\Sigma}_{n}={\Sigma}_{n}\left({t}_{0}\right)=\left(\begin{array}{cc}{\Sigma}_{n\mathrm{,1}}& {\Sigma}_{n\mathrm{,2}}\\ {\Sigma}_{n\mathrm{,2}}^{\text{T}}& {\Sigma}_{n\mathrm{,3}}\end{array}\right)\mathrm{,}$

where

${D}_{n,1}={D}_{n,1}\left({t}_{0}\right)=\frac{1}{n-p}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{G}_{i}{G}_{i}^{\text{T}}{K}_{h}\left({t}_{i}-{t}_{0}\right),$

${D}_{n,2}={D}_{n,2}\left({t}_{0}\right)=\frac{1}{n-p}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{G}_{i}{V}_{i}^{\text{T}}\left(\frac{{t}_{i}-{t}_{0}}{h}\right){K}_{h}\left({t}_{i}-{t}_{0}\right),$

${D}_{n,3}={D}_{n,3}\left({t}_{0}\right)=\frac{1}{n-p}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{V}_{i}{V}_{i}^{\text{T}}{\left(\frac{{t}_{i}-{t}_{0}}{h}\right)}^{2}{K}_{h}\left({t}_{i}-{t}_{0}\right),$

${\Sigma}_{n,1}={\Sigma}_{n,1}\left({t}_{0}\right)=\frac{h}{\left(n-p\right){\nu}_{0}}\left({\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{^}{\epsilon}}_{i}{G}_{i}{K}_{h}\left({t}_{i}-{t}_{0}\right)\right)\left({\displaystyle \underset{j=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{^}{\epsilon}}_{j}{G}_{j}^{\text{T}}{K}_{h}\left({t}_{j}-{t}_{0}\right)\right),$

${\Sigma}_{n,2}={\Sigma}_{n,2}\left({t}_{0}\right)=\frac{h}{\left(n-p\right){\nu}_{0}}\left({\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{^}{\epsilon}}_{i}{G}_{i}{K}_{h}\left({t}_{i}-{t}_{0}\right)\right)\left({\displaystyle \underset{j=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{^}{\epsilon}}_{j}{V}_{j}^{\text{T}}\left(\frac{{t}_{j}-{t}_{0}}{h}\right){K}_{h}\left({t}_{j}-{t}_{0}\right)\right),$

${\Sigma}_{n,3}={\Sigma}_{n,3}\left({t}_{0}\right)=\frac{h}{\left(n-p\right){\nu}_{0}}\left({\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{^}{\epsilon}}_{i}{V}_{i}\left(\frac{{t}_{i}-{t}_{0}}{h}\right){K}_{h}\left({t}_{i}-{t}_{0}\right)\right)\left({\displaystyle \underset{j=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{^}{\epsilon}}_{j}{V}_{j}^{\text{T}}\left(\frac{{t}_{j}-{t}_{0}}{h}\right){K}_{h}\left({t}_{j}-{t}_{0}\right)\right).$

Define

${T}_{n}={T}_{n}\left({t}_{0}\right)=\left(\begin{array}{c}{T}_{n,0}\left({t}_{0}\right)\\ {T}_{n,1}\left({t}_{0}\right)\end{array}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.17em}}{T}_{n}^{\ast}={T}_{n}^{\ast}\left({t}_{0}\right)=\left(\begin{array}{c}{T}_{n,0}^{\ast}\left({t}_{0}\right)\\ {T}_{n,1}^{\ast}\left({t}_{0}\right)\end{array}\right),$

where

${T}_{n,0}={T}_{n,0}\left({t}_{0}\right)=\frac{1}{n-p}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}{G}_{i}{K}_{h}\left({t}_{i}-{t}_{0}\right){Y}_{i},$

${T}_{n,1}={T}_{n,1}\left({t}_{0}\right)=\frac{1}{n-p}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{V}_{i}\left(\frac{{t}_{i}-{t}_{0}}{h}\right){K}_{h}\left({t}_{i}-{t}_{0}\right){Y}_{i},$

${T}_{n,0}^{\ast}={T}_{n,0}^{\ast}\left({t}_{0}\right)=\frac{1}{n-p}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{G}_{i}{K}_{h}\left({t}_{i}-{t}_{0}\right){\epsilon}_{i},$

${T}_{n,1}^{\ast}={T}_{n,1}^{\ast}\left({t}_{0}\right)=\frac{1}{n-p}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{V}_{i}\left(\frac{{t}_{i}-{t}_{0}}{h}\right){K}_{h}\left({t}_{i}-{t}_{0}\right){\epsilon}_{i}.$

Noting the definitions of ${T}_{n}$ and ${T}_{n}^{\ast}$ , it is easy to see that

${T}_{n\mathrm{,0}}-{T}_{n\mathrm{,0}}^{\ast}={D}_{n\mathrm{,1}}{\xi}_{0}+h{D}_{n\mathrm{,2}}{\eta}_{0}+\frac{{h}^{2}}{2}{D}_{n\mathrm{,1}}^{\ast}{{\eta}^{\prime}}_{0}+{o}_{p}\left({h}^{2}\right)\mathrm{,}$

${T}_{n\mathrm{,1}}-{T}_{n\mathrm{,1}}^{\ast}={D}_{n\mathrm{,2}}^{\text{T}}{\xi}_{0}+h{D}_{n\mathrm{,3}}{\eta}_{0}+\frac{{h}^{2}}{2}{D}_{n\mathrm{,2}}^{\ast}{{\eta}^{\prime}}_{0}+{o}_{p}\left({h}^{2}\right)\mathrm{,}$

${\xi}_{0}=\xi \left({t}_{0}\right)={\left\{{\alpha}_{1}^{\text{T}}\left({t}_{0}\right)\mathrm{,}{\alpha}_{2}^{\text{T}}\left({t}_{0}\right)\mathrm{,}{\beta}_{1}^{\text{T}}\left({t}_{0}\right)\mathrm{,}{\beta}_{2}^{\text{T}}\left({t}_{0}\right)\right\}}^{\text{T}}\mathrm{,}$

${\eta}_{0}=\eta \left({t}_{0}\right)={\left({\left({{\alpha}^{\prime}}_{1}\left({t}_{0}\right)\right)}^{\text{T}}\mathrm{,}{\left({{\beta}^{\prime}}_{1}\left({t}_{0}\right)\right)}^{\text{T}}\right)}^{\text{T}}\mathrm{,}$

${{\eta}^{\prime}}_{0}={\eta}^{\prime}\left({t}_{0}\right)={\left({\left({{\alpha}^{\u2033}}_{1}\left({t}_{0}\right)\right)}^{\text{T}}\mathrm{,}{\left({{\beta}^{\u2033}}_{1}\left({t}_{0}\right)\right)}^{\text{T}}\right)}^{\text{T}}\mathrm{,}$

${D}_{n,1}^{\ast}={D}_{n,1}^{\ast}\left({t}_{0}\right)=\frac{1}{n-p}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{G}_{i}{V}_{i}^{\text{T}}{\left(\frac{{t}_{i}-{t}_{0}}{h}\right)}^{2}{K}_{h}\left({t}_{i}-{t}_{0}\right),$

${D}_{n,2}^{\ast}={D}_{n,2}^{\ast}\left({t}_{0}\right)=\frac{1}{n-p}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{V}_{i}{V}_{i}^{\text{T}}{\left(\frac{{t}_{i}-{t}_{0}}{h}\right)}^{3}{K}_{h}\left({t}_{i}-{t}_{0}\right).$

Combining the above expressions, we obtain

${T}_{n}-{T}_{n}^{\ast}={D}_{n}H{\gamma}_{0}+\frac{{h}^{2}}{2}\left(\begin{array}{c}{D}_{n\mathrm{,1}}^{\ast}{\eta}^{\prime}\left({t}_{0}\right)\\ {D}_{n\mathrm{,2}}^{\ast}{\eta}^{\prime}\left({t}_{0}\right)\end{array}\right)+{o}_{p}\left({h}^{2}\right)\mathrm{.}$ (A.1)

Note that the equivalent form of formula (4) is:

$\stackrel{^}{\gamma}={H}^{-1}{D}_{n}^{-1}{T}_{n}\mathrm{.}$ (A.2)

By (A.1) and (A.2), it is easy to see that

$H\left(\stackrel{^}{\gamma}-{\gamma}_{0}\right)={D}_{n}^{-1}{T}_{n}^{\ast}+\frac{{h}^{2}}{2}{D}_{n}^{-1}\left(\begin{array}{c}{D}_{n\mathrm{,1}}^{\ast}{\eta}^{\prime}\left({t}_{0}\right)\\ {D}_{n\mathrm{,2}}^{\ast}{\eta}^{\prime}\left({t}_{0}\right)\end{array}\right)+{o}_{p}\left({h}^{2}\right)\mathrm{.}$ (A.3)

Next, we discuss the large sample properties of ${T}_{n}^{\ast}$ . To facilitate the derivation, the following notation is introduced: ${\sigma}^{2}\left({t}_{0}\mathrm{,}g\right)=\text{Var}\left(Y\mathrm{|}t={t}_{0}\mathrm{,}G=g\right)$ , ${u}_{t}={\epsilon}_{t}/\sigma \left(t\mathrm{,}{G}_{t}\right)$ . For $p+1\le k\mathrm{,}l\le n$ , $t\mathrm{,}s\in \left[\mathrm{0,1}\right]$ , let

${R}_{l,k}^{\left(1\right)}\left(t,s\right)=\text{Cov}\left({G}_{l}{u}_{l}\sigma \left(t,{G}_{l}\right),{G}_{k}{u}_{k}\sigma \left(s,{G}_{k}\right)\right),$

${R}_{l,k}^{\left(2\right)}\left(t,s\right)=\text{Cov}\left({G}_{l}{u}_{l}\sigma \left(t,{G}_{l}\right),{V}_{k}{u}_{k}\sigma \left(s,{G}_{k}\right)\right),$

${R}_{l,k}^{\left(3\right)}\left(t,s\right)=\text{Cov}\left({V}_{l}{u}_{l}\sigma \left(t,{G}_{l}\right),{V}_{k}{u}_{k}\sigma \left(s,{G}_{k}\right)\right).$

For a stationary process, ${R}_{l,l+k}^{\left(i\right)}\left(t,t\right)={R}_{k}^{\left(i\right)}\left(t\right)$ , $i=1,2,3$ . Following a derivation similar to that of Cai [9] , we obtain

$\begin{array}{l}nh\text{Var}\left({T}_{n,0}^{\ast}\left({t}_{0}\right)\right)\\ =\frac{nh}{{\left(n-p\right)}^{2}}{\displaystyle \underset{p+1\le i,j\le n}{\sum}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{Cov}\left({G}_{i}{u}_{i}\sigma \left({t}_{i},{G}_{i}\right),{G}_{j}{u}_{j}\sigma \left({t}_{j},{G}_{j}\right)\right){K}_{h}\left({t}_{i}-{t}_{0}\right){K}_{h}\left({t}_{j}-{t}_{0}\right)\\ \approx \frac{h}{n-p}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}E\left({G}_{i}{G}_{i}^{\text{T}}{u}_{i}^{2}{\sigma}^{2}\left({t}_{i},{G}_{i}\right)\right){K}_{h}^{2}\left({t}_{i}-{t}_{0}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.17em}}+2\frac{h}{n-p}{\displaystyle \underset{p+1\le i<j\le n}{\sum}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{Cov}\left({G}_{i}{u}_{i}\sigma \left({t}_{i},{G}_{i}\right),{G}_{j}{u}_{j}\sigma \left({t}_{j},{G}_{j}\right)\right){K}_{h}\left({t}_{i}-{t}_{0}\right){K}_{h}\left({t}_{j}-{t}_{0}\right)\end{array}$

$\begin{array}{l}\approx \frac{h}{n-p}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{R}_{i,i}^{\left(1\right)}\left({t}_{i},{t}_{i}\right){K}_{h}^{2}\left({t}_{i}-{t}_{0}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}+\frac{2h}{n-p}{\displaystyle \underset{p+1\le i<j\le n}{\sum}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{R}_{i,j}^{\left(1\right)}\left({t}_{i},{t}_{j}\right){K}_{h}\left({t}_{i}-{t}_{0}\right){K}_{h}\left({t}_{j}-{t}_{0}\right)\\ \approx {\nu}_{0}\text{\hspace{0.05em}}{R}_{0}^{\left(1\right)}\left({t}_{0}\right)+2{\nu}_{0}{\displaystyle \underset{k=1}{\overset{\infty}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{R}_{k}^{\left(1\right)}\left({t}_{0}\right).\end{array}$ (A.4)

Thus, we have $nh\text{Var}\left({T}_{n,0}^{\ast}\left({t}_{0}\right)\right)\to {\Sigma}_{1}\left({t}_{0}\right)=\left({R}_{0}^{\left(1\right)}\left({t}_{0}\right)+2{\displaystyle {\sum}_{k=1}^{\infty}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{R}_{k}^{\left(1\right)}\left({t}_{0}\right)\right){\nu}_{0}$ . Similarly, we obtain

$nh\text{Var}\left({T}_{n,1}^{\ast}\left({t}_{0}\right)\right)\to {\Sigma}_{3}\left({t}_{0}\right)=\left({R}_{0}^{\left(3\right)}\left({t}_{0}\right)+2{\displaystyle \underset{k=1}{\overset{\infty}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{R}_{k}^{\left(3\right)}\left({t}_{0}\right)\right){\nu}_{0},$

$nh\text{Cov}\left({T}_{n,0}^{\ast}\left({t}_{0}\right),{T}_{n,1}^{\ast}\left({t}_{0}\right)\right)\to {\Sigma}_{2}\left({t}_{0}\right)=\left({R}_{0}^{\left(2\right)}\left({t}_{0}\right)+2{\displaystyle \underset{k=1}{\overset{\infty}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{R}_{k}^{\left(2\right)}\left({t}_{0}\right)\right){\nu}_{0}.$ (A.5)

Next, we derive the properties of ${D}_{n}\left({t}_{0}\right)$ . By approximating the sums with Riemann integrals, we have, for $0<{t}_{0}<1$ ,

${D}_{n\mathrm{,1}}\left({t}_{0}\right)=\frac{1}{n-p}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{G}_{i}{G}_{i}^{\text{T}}{K}_{h}\left({t}_{i}-{t}_{0}\right)\approx {\Omega}_{1}{\displaystyle {\int}_{-{t}_{0}/h}^{\left(1-{t}_{0}\right)/h}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}K\left(u\right)\text{d}u\approx {\Omega}_{1}{\mu}_{0}\mathrm{,}$

${D}_{n\mathrm{,2}}\left({t}_{0}\right)=\frac{1}{n-p}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{G}_{i}{V}_{i}^{\text{T}}\left(\frac{{t}_{i}-{t}_{0}}{h}\right){K}_{h}\left({t}_{i}-{t}_{0}\right)\approx {\Omega}_{2}{\displaystyle {\int}_{-{t}_{0}/h}^{\left(1-{t}_{0}\right)/h}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}uK\left(u\right)\text{d}u\approx {\Omega}_{2}{\mu}_{1}\mathrm{,}$

${D}_{n\mathrm{,3}}\left({t}_{0}\right)=\frac{1}{n-p}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{V}_{i}{V}_{i}^{\text{T}}{\left(\frac{{t}_{i}-{t}_{0}}{h}\right)}^{2}{K}_{h}\left({t}_{i}-{t}_{0}\right)\approx {\Omega}_{3}{\displaystyle {\int}_{-{t}_{0}/h}^{\left(1-{t}_{0}\right)/h}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{u}^{2}K\left(u\right)\text{d}u\approx {\Omega}_{3}{\mu}_{2}\mathrm{,}$

${D}_{n\mathrm{,1}}^{\ast}\left({t}_{0}\right)=\frac{1}{n-p}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{G}_{i}{V}_{i}^{\text{T}}{\left(\frac{{t}_{i}-{t}_{0}}{h}\right)}^{2}{K}_{h}\left({t}_{i}-{t}_{0}\right)\approx {\Omega}_{2}{\displaystyle {\int}_{-{t}_{0}/h}^{\left(1-{t}_{0}\right)/h}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{u}^{2}K\left(u\right)\text{d}u\approx {\Omega}_{2}{\mu}_{2}\mathrm{,}$

${D}_{n\mathrm{,2}}^{\ast}\left({t}_{0}\right)=\frac{1}{n-p}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{V}_{i}{V}_{i}^{\text{T}}{\left(\frac{{t}_{i}-{t}_{0}}{h}\right)}^{3}{K}_{h}\left({t}_{i}-{t}_{0}\right)\approx {\Omega}_{3}{\displaystyle {\int}_{-{t}_{0}/h}^{\left(1-{t}_{0}\right)/h}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{u}^{3}K\left(u\right)\text{d}u\approx {\Omega}_{3}{\mu}_{3}\mathrm{.}$ (A.6)

Since the kernel function $K(\cdot )$ is a symmetric density, ${\mu}_{0}=1$ and ${\mu}_{1}={\mu}_{3}=0$ . Therefore,

${B}_{n}\left({t}_{0}\right)=\frac{{h}^{2}}{2}{D}_{n}^{-1}\left(\begin{array}{c}{D}_{n\mathrm{,1}}^{\ast}{\eta}^{\prime}\left({t}_{0}\right)\\ {D}_{n\mathrm{,2}}^{\ast}{\eta}^{\prime}\left({t}_{0}\right)\end{array}\right)\to \frac{{h}^{2}}{2}\left(\begin{array}{c}{\Omega}_{1}^{-1}{\Omega}_{2}{\mu}_{2}{\eta}^{\prime}\left({t}_{0}\right)\\ 0\end{array}\right)=b=b\left({t}_{0}\right)\mathrm{,}$ (A.7)

as $n\to \infty $ . Combining the results of (A.3)-(A.7), we obtain the conclusion of Theorem 1.
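The kernel moment identities used in the proof can be checked numerically. The sketch below takes the symmetric Epanechnikov kernel as an illustrative choice (the argument only requires a symmetric kernel density):

```python
import numpy as np

# Numerical check of the kernel moments mu_j = integral of u**j * K(u) du
# for the Epanechnikov kernel K(u) = 0.75(1 - u^2) on [-1, 1]: mu_0 = 1
# because K is a density, symmetry forces mu_1 = mu_3 = 0, and mu_2 = 0.2.
u = np.linspace(-1.0, 1.0, 200001)
K = 0.75 * (1.0 - u ** 2)
du = u[1] - u[0]

def integral(f):
    # trapezoidal rule on the fixed symmetric grid
    return float(np.sum(0.5 * (f[1:] + f[:-1])) * du)

mu = [integral(u ** j * K) for j in range(4)]
# mu is approximately [1.0, 0.0, 0.2, 0.0]
```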

Conflicts of Interest

The authors declare no conflicts of interest.

[1] | Hamilton, J.D. (1994) Time Series Analysis. Princeton University Press, Princeton. |

[2] |
Box, G.E.P., Jenkins, G.M. and Reinsel, G.C. (2008) Time Series Analysis: Forecasting and Control. 4th Edition, John Wiley & Sons, New York.
https://doi.org/10.1002/9781118619193 |

[3] | Gujarati, D.N. and Porter, D.C. (2010) Essentials of Econometrics. 4th Edition, McGraw-Hill/Irwin, New York. |

[4] |
Robinson, P.M. (1989) Nonparametric Estimation of Time-Varying Parameters. In: Hackl, P., Eds., Statistical Analysis and Forecasting of Economic Structural Change, Springer-Verlag, Berlin, 253-264. https://doi.org/10.1007/978-3-662-02571-0_15 |

[5] |
Robinson, P.M. (1989) Time-Varying Nonlinear Regression. In: Hackl, P. and Westlund, A.H., Eds., Economic Structure Change, Springer-Verlag, Berlin, 179-190.
https://doi.org/10.1007/978-3-662-06824-3_13 |

[6] |
Chen, R. and Tsay, R.S. (1993) Functional-Coefficient Autoregressive Models. Journal of the American Statistical Association, 88, 298-308.
https://doi.org/10.1080/01621459.1993.10594322 |

[7] |
Hoover, D.R., Rice, J.A., Wu, C.O. and Yang, L.P. (1998) Nonparametric Smoothing Estimates of Time-Varying Coefficient Models with Longitudinal Data. Biometrika, 85, 809-822. https://doi.org/10.1093/biomet/85.4.809 |

[8] |
Cai, Z., Fan, J. and Yao, Q. (2000) Functional-Coefficients Regression Models for Nonlinear Time Series. Journal of the American Statistical Association, 95, 941-956.
https://doi.org/10.1080/01621459.2000.10474284 |

[9] |
Cai, Z. (2007) Trending Time-Varying Coefficient Time Series Models with Serially Correlated Errors. Journal of Econometrics, 136, 163-188.
https://doi.org/10.1016/j.jeconom.2005.08.004 |

[10] |
Liu, X., Cai, Z. and Chen, R. (2015) Functional Coefficient Seasonal Time Series Models with an Application of Hawaii Tourism Data. Computational Statistics, 30, 719-744. https://doi.org/10.1007/s00180-015-0574-x |

[11] |
Cai, Z., Chen, L. and Fang, Y. (2015) Semiparametric Estimation of Partially Varying-Coefficient Dynamic Panel Data Models. Econometric Reviews, 34, 695-719.
https://doi.org/10.1080/07474938.2014.956569 |

[12] |
Cai, Z., Fang, Y. and Tian, D. (2018) Assessing Tail Risk Using Expectile Regressions with Partially Varying Coefficients. Journal of Management Science and Engineering, 3, 183-213. https://doi.org/10.3724/SP.J.1383.304011 |

[13] |
Chen, Y., Chua, W.S. and Koch, T. (2018) Forecasting Day-Ahead High-Resolution Natural-Gas Demand and Supply in Germany. Applied Energy, 228, 1091-1100.
https://doi.org/10.1016/j.apenergy.2018.06.137 |

[14] |
Tu, Y. and Wang, Y. (2019) Functional Coefficient Cointegration Models Subject to Time-Varying Volatility with an Application to the Purchasing Power Parity. Oxford Bulletin of Economics and Statistics, 86, 1401-1423.
https://doi.org/10.1111/obes.12309 |

[15] |
Cai, Z., Fang, Y., Lin, M. and Su, J. (2019) Inferences for a Partially Varying Coefficient Model with Endogenous Regressors. Journal of Business & Economic Statistics, 37, 158-170. https://doi.org/10.1080/07350015.2017.1294079 |

[16] |
Yousuf, K. and Ng, S. (2021) Boosting High Dimensional Predictive Regressions with Time Varying Parameters. Journal of Econometrics, 224, 60-87.
https://doi.org/10.1016/j.jeconom.2020.08.003 |

[17] |
Kalli, M. and Griffin, J.E. (2014) Time-Varying Sparsity in Dynamic Regression Models. Journal of Econometrics, 178, 779-793.
https://doi.org/10.1016/j.jeconom.2013.10.012 |

[18] | Feldkircher, M., Huber, F. and Kastner, G. (2017) Sophisticated and Small versus Simple and Sizeable: When Does It Pay off to Introduce Drifting Coefficients in Bayesian VARs? Department of Economics Working Paper Series 260, WU Vienna University of Economics and Business. |

[19] |
Bitto, A. and Fruhwirth-Schnatter, S. (2019) Achieving Shrinkage in a Time-Varying Parameter Model Framework. Journal of Econometrics, 210, 75-97.
https://doi.org/10.1016/j.jeconom.2018.11.006 |

[20] Li, D., Chen, J. and Lin, Z. (2011) Statistical Inference in Partially Time-Varying Coefficient Models. Journal of Statistical Planning and Inference, 141, 995-1013. https://doi.org/10.1016/j.jspi.2010.09.004

[21] Zhang, T. and Wu, W.B. (2012) Inference of Time-Varying Regression Models. Annals of Statistics, 40, 1376-1402. https://doi.org/10.1214/12-AOS1010

[22] Fan, G.L., Liang, H.Y. and Wang, J.F. (2013) Statistical Inference for Partially Time-Varying Coefficient Errors-in-Variables Models. Journal of Statistical Planning and Inference, 143, 505-519. https://doi.org/10.1016/j.jspi.2012.08.017

[23] Chen, X.B., Gao, J., Li, D. and Silvapulle, P. (2018) Nonparametric Estimation and Forecasting for Time-Varying Coefficient Realized Volatility Models. Journal of Business & Economic Statistics, 36, 88-100. https://doi.org/10.1080/07350015.2016.1138118

[24] Kim, S., Zhao, Z. and Xiao, Z. (2018) Efficient Estimation for Time-Varying Coefficient Longitudinal Models. Journal of Nonparametric Statistics, 30, 680-702. https://doi.org/10.1080/10485252.2018.1467415

[25] Cuaresma, J.C., Doppelhofer, G., Feldkircher, M. and Huber, F. (2019) Spillovers from US Monetary Policy: Evidence from a Time Varying Parameter Global Vector Autoregressive Model. Journal of the Royal Statistical Society Series A: Statistics in Society, 182, 831-861. https://doi.org/10.1111/rssa.12439

[26] Cheng, T., Gao, J. and Zhang, X. (2019) Bayesian Bandwidth Estimation in Nonparametric Time-Varying Coefficient Models. Journal of Business & Economic Statistics, 37, 1-12. https://doi.org/10.1080/07350015.2016.1255216

[27] Hu, L., Huang, T. and You, J. (2019) Two-Step Estimation of Time-Varying Additive Model for Locally Stationary Time Series. Computational Statistics & Data Analysis, 130, 94-110. https://doi.org/10.1016/j.csda.2018.08.023

[28] Li, D., Phillips, P.C.B. and Gao, J. (2019) Kernel-Based Inference in Time-Varying Coefficient Cointegrating Regression. Journal of Econometrics, 215, 607-632. https://doi.org/10.1016/j.jeconom.2019.10.005

[29] Karmakar, S., Richter, S. and Wu, W.B. (2022) Simultaneous Inference for Time-Varying Models. Journal of Econometrics, 227, 408-428. https://doi.org/10.1016/j.jeconom.2021.03.002

[30] Hastie, T. and Tibshirani, R. (1993) Varying-Coefficient Models. Journal of the Royal Statistical Society: Series B (Methodological), 55, 757-796. https://doi.org/10.1111/j.2517-6161.1993.tb01939.x

[31] Ramsay, J.O. and Silverman, B.W. (1997) Functional Data Analysis. Springer-Verlag, New York. https://doi.org/10.1007/978-1-4757-7107-7

[32] Brumback, B.A. and Rice, J.A. (1998) Smoothing Spline Models for the Analysis of Nested and Crossed Samples of Curves. Journal of the American Statistical Association, 93, 961-976. https://doi.org/10.1080/01621459.1998.10473755

[33] Huang, J., Wu, C.O. and Zhou, L. (2004) Polynomial Spline Estimation and Inference for Varying Coefficient Models with Longitudinal Data. Statistica Sinica, 14, 763-788.

[34] Aneiros-Perez, G. and Vieu, P. (2008) Nonparametric Time Series Prediction: A Semi-Functional Partial Linear Modeling. Journal of Multivariate Analysis, 99, 834-857. https://doi.org/10.1016/j.jmva.2007.04.010

[35] Fan, J. and Gijbels, I. (1996) Local Polynomial Modelling and Its Applications. Chapman & Hall, London.

[36] De Boor, C. (2001) A Practical Guide to Splines. Springer-Verlag, New York.

[37] Tian, L., Zucker, D. and Wei, L.J. (2005) On the Cox Model with Time-Varying Regression Coefficients. Journal of the American Statistical Association, 100, 172-183. https://doi.org/10.1198/016214504000000845

[38] Shumway, R.H. and Stoffer, D. (2017) Time Series Analysis and Its Applications. 4th Edition, Springer-Verlag, New York. https://doi.org/10.1007/978-3-319-52452-8

Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.