
Selection of Heteroscedastic Models: A Time Series Forecasting Approach


ABSTRACT

To overcome the weaknesses of in-sample model selection, this study adopted the out-of-sample model selection approach for selecting models with improved forecasting accuracy and performance. Daily closing share prices were obtained for Diamond Bank and Fidelity Bank, as listed on the Nigerian Stock Exchange, spanning January 3, 2006 to December 30, 2016. Thus, a total of 2713 observations were explored and divided into two portions. The first, ranging from January 3, 2006 to November 24, 2016 and comprising 2690 observations, was used for model formulation. The second, ranging from November 25, 2016 to December 30, 2016 and consisting of 23 observations, was used for evaluating out-of-sample forecasting performance. Combined linear (ARIMA) and nonlinear (GARCH-type) models were applied to the returns series under the normal and Student-t distributions. The findings revealed that the ARIMA(2,1,1)-EGARCH(1,1)-norm and ARIMA(1,1,0)-EGARCH(1,1)-norm models, selected on the basis of minimum predictive errors through the out-of-sample approach, outperformed the ARIMA(2,1,1)-GARCH(2,0)-std and ARIMA(1,1,0)-EGARCH(1,1)-std models chosen through the in-sample approach. Therefore, it could be deduced that the out-of-sample model selection approach is suitable for selecting models with improved forecasting accuracy and performance.

Cite this paper

Moffat, I. and Akpan, E. (2019) Selection of Heteroscedastic Models: A Time Series Forecasting Approach. Applied Mathematics, 10, 333-348. doi: 10.4236/am.2019.105024.

1. Introduction

Model selection is the act of choosing a model from a class of candidate models as a quest for the true model, the best forecasting model, or both (see also [1] [2] [3]). There are often several competing models that can be used for forecasting a particular time series, so selecting an appropriate forecasting model is of considerable practical importance [4] [5]. Selecting the model that provides the best fit to historical data generally does not result in a forecasting method that produces the best forecasts of new data. Concentrating too much on the model that produces the best historical fit often leads to overfitting, that is, including too many parameters or terms. The better approach is to select the model that yields the smallest standard deviation or mean squared error of the one-step-ahead forecast errors when the model is applied to a data set that was not used in the fitting process [4].

There are two approaches to model selection in time series: in-sample model selection and out-of-sample model selection. In-sample model selection is targeted at selecting a model for inference, which according to [1] is intended to identify the best model for the data and to provide a reliable characterization of the sources of uncertainty for scientific insight and interpretation. The in-sample model selection criteria include the Akaike information criterion, AIC [6], the Schwarz information criterion, SIC [7], and the Hannan and Quinn information criterion, HQIC [8]. As captured in [9], AIC considers a discrepancy between the true model and a candidate, BIC approximates the posterior model probabilities in a Bayesian framework, and Hannan and Quinn proposed a related criterion with a smaller penalty than BIC that still retains the strong consistency property (for more details on information criteria, see [10] [11] [12] [13] [14]). However, the major drawbacks of the in-sample model selection criteria are that they are unstable, that minimizing them over a class of candidate models leads to a selection procedure that is conservative or over-consistent in parameter settings [2] [9], and that they are unable to inform directly about the quality of the model [3].

On the other hand, the out-of-sample model selection procedure is applied to achieve the best predictive performance, essentially aiming at characterizing future observations without necessarily considering the choice of the true model; rather, the attention is shifted to choosing the model with the smallest predictive errors [1] [2] [15] [16]. An out-of-sample forecast is accomplished when the data used for constructing the model differ from the data used for forecast evaluation. That is, the data are divided into two portions: the first portion is used for model construction, and the second is used for evaluating forecasting performance, with the possibility of forecasting new future observations which can be checked against what is actually observed ( [11] [16] [17]). Yet the choice between in-sample and out-of-sample model selection criteria is not without contention, and such contention is well handled in [1] [15] [18] [19] [20].
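As a rough illustration of the out-of-sample procedure described above, the sketch below divides a series into a fitting portion and a hold-out portion, re-fits a candidate model as each hold-out point arrives, and accumulates the one-step-ahead forecast errors. This is a minimal Python sketch using statsmodels, not the authors' code; the 23-observation hold-out and the candidate ARIMA orders follow the study, while the data handling and function name are assumptions.

```python
# Minimal sketch of out-of-sample (one-step-ahead) forecast evaluation.
# Assumes `returns` is a pandas Series of daily returns (see Equation (1) below).
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def out_of_sample_mse(returns: pd.Series, order, holdout_len: int = 23) -> float:
    fit_part, holdout = returns[:-holdout_len], returns[-holdout_len:]
    history, errors = fit_part.copy(), []
    for t in holdout.index:
        res = ARIMA(history, order=order).fit()
        forecast = res.forecast(steps=1).iloc[0]           # one-step-ahead forecast
        errors.append(holdout.loc[t] - forecast)           # forecast error e_i
        history = pd.concat([history, holdout.loc[[t]]])   # expand the fitting window
    return float(np.mean(np.square(errors)))

# Candidate models are then ranked by predictive error, not by historical fit, e.g.:
# for order in [(1, 1, 0), (2, 1, 1)]:
#     print(order, out_of_sample_mse(returns, order))
```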

With respect to heteroscedastic processes (or nonlinear time series), details regarding model selection are available in the studies of [21] - [27]. Meanwhile, in Nigeria, model selection for heteroscedastic processes is mainly based on in-sample criteria. For instance, the studies of [28] - [33] relied on the in-sample procedure to select the best-fitting model. Hence, this study seeks to improve on the work of [28], who applied in-sample model selection criteria to choose the best-fitted heteroscedastic models, by adopting an out-of-sample forecasting approach to select heteroscedastic models that best describe the accuracy and precision of future observations.

The rest of this work is organized as follows: materials and methods are treated in Section 2, results and discussion are covered in Section 3, and Section 4 presents the conclusion.

2. Materials and Methods

2.1. Return

The return series $R_t$ can be obtained given that $P_t$ is the price of a unit share at time $t$ and $P_{t-1}$ is the share price at time $t-1$:

$$R_t = \nabla \ln P_t = (1 - B)\ln P_t = \ln P_t - \ln P_{t-1} \quad (1)$$

The $R_t$ in Equation (1) is regarded as a transformed series of the share price $P_t$, meant to attain stationarity, that is, both the mean and variance of the series are stable [29]. The letter $B$ denotes the backshift operator.
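For concreteness, Equation (1) translates directly into a one-line transformation of the price series; the sketch below assumes the prices are held in a pandas Series.

```python
import numpy as np
import pandas as pd

def log_returns(prices: pd.Series) -> pd.Series:
    """Equation (1): R_t = (1 - B) ln P_t = ln P_t - ln P_{t-1}."""
    return np.log(prices).diff().dropna()
```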

2.2. Information Criteria

There are several information criteria available for determining the order, $p$, of an AR($p$) process and the order, $q$, of an MA($q$) process, and all of them are likelihood based. The well-known Akaike information criterion (AIC) [6] is defined as

$$\text{AIC} = \frac{-2}{T}\ln(\text{likelihood}) + \frac{2}{T} \times (\text{number of parameters}), \quad (2)$$

where the likelihood function is evaluated at the maximum likelihood estimates and $T$ is the sample size. For a Gaussian AR($p$) model, AIC reduces to

$$\text{AIC}(p) = \ln\left(\hat{\sigma}_p^2\right) + \frac{2p}{T} \quad (3)$$

where $\hat{\sigma}_p^2$ is the maximum likelihood estimate of $\sigma_a^2$, the variance of $a_t$, and $T$ is the sample size. The first term of the AIC in Equation (3) measures the goodness of fit of the AR($p$) model to the data, whereas the second term is called the penalty function of the criterion because it penalizes a chosen model by the number of parameters used. Different penalty functions result in different information criteria.

The next commonly used criterion function is the Schwarz information criterion (SIC) [7]. For a Gaussian AR($p$) model, the criterion is

$$\text{SIC}(p) = \ln\left(\hat{\sigma}_p^2\right) + \frac{p\ln(T)}{T} \quad (4)$$

Another commonly used criterion function is the Hannan-Quinn information criterion (HQIC) [8]. For a Gaussian AR($p$) model, the criterion is

$$\text{HQIC}(p) = \ln\left(\hat{\sigma}_p^2\right) + \frac{p\ln\{\ln(T)\}}{T} \quad (5)$$

The penalty for each parameter used is 2 for AIC, ln(T) for SIC, and ln{ln(T)} for HQIC. These penalty functions help to ensure the selection of parsimonious models and to avoid choosing models with too many parameters.
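Since Equations (3)-(5) differ only in their per-parameter penalties, they are easy to compute side by side from a fitted Gaussian AR($p$) model. The sketch below is an assumed hand computation from the residual variance estimate; statsmodels also reports AIC/BIC/HQIC directly on its results objects.

```python
import numpy as np

def information_criteria(sigma2_hat: float, p: int, T: int) -> dict:
    """AIC, SIC and HQIC for a Gaussian AR(p) model, Equations (3)-(5).

    sigma2_hat: maximum likelihood estimate of the innovation variance.
    p:          autoregressive order (number of penalized parameters).
    T:          sample size.
    """
    log_fit = np.log(sigma2_hat)                     # goodness-of-fit term
    return {
        "AIC":  log_fit + 2 * p / T,                 # penalty 2 per parameter
        "SIC":  log_fit + p * np.log(T) / T,         # penalty ln(T) per parameter
        "HQIC": log_fit + p * np.log(np.log(T)) / T  # penalty ln(ln(T)) per parameter
    }
```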

The AIC criterion asymptotically overestimates the order with positive probability, whereas the SIC and HQIC criteria estimate the order consistently under fairly general conditions ( [11] [17]). Moreover, an in-sample model selection criterion is consistent if, when the true model is among those considered, it chooses the true model with probability approaching unity as the sample size becomes large, and if the true model is not among those considered, it selects the best approximation with probability approaching unity as the sample size becomes large [3]. The AIC is considered inconsistent in that it does not penalize the inclusion of additional parameters heavily enough; as such, relying on this criterion can lead to overfitting. Meanwhile, the SIC and HQIC criteria are consistent in that their penalties grow with the sample size. Consistency, however, is not sufficiently informative, since the true model and any reasonable approximation to it may be very complex. An asymptotically efficient model selection criterion chooses, as the sample size gets larger, a sequence of models for which the one-step-ahead forecast error variances approach the one-step-ahead forecast error variance of the true model at least as fast as under any other criterion [3]. The AIC is asymptotically efficient, while the SIC and HQIC are not. However, one major drawback of in-sample criteria is their inability to evaluate a candidate model's potential predictive performance.

2.3. Model Evaluation Criteria

It is tempting to evaluate performance on the basis of the fit of the forecasting or time series model to historical data [3]. The best way to evaluate a candidate model's predictive performance is to apply the out-of-sample forecast technique, which provides a direct estimate of the one-step-ahead forecast error variance and thus supports an efficient model selection criterion. The methods of forecast evaluation based on the forecast error include the Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE). These criteria measure forecast accuracy. Forecast bias is measured by the Mean Error (ME).

The measures are computed as follows:

$$\text{MSE} = \frac{1}{n}\sum_{i=1}^{n} e_i^2 \quad (6)$$

$$\text{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} e_i^2} \quad (7)$$

$$\text{MAE} = \frac{1}{n}\sum_{i=1}^{n} |e_i| \quad (8)$$

$$\text{ME} = \frac{1}{n}\sum_{i=1}^{n} e_i \quad (9)$$

where $e_i$ is the forecast error and $n$ is the number of forecast errors. Also, it should be noted that in this work the forecasts of the returns are used as proxies for the volatilities, as the latter are not directly observable [34].
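The four measures in Equations (6)-(9) amount to a few lines of array arithmetic; the sketch below assumes two aligned arrays of actual and forecast values.

```python
import numpy as np

def forecast_performance(actual, forecast) -> dict:
    """MSE, RMSE, MAE (accuracy) and ME (bias) from Equations (6)-(9)."""
    e = np.asarray(actual) - np.asarray(forecast)    # forecast errors e_i
    return {
        "MSE":  float(np.mean(e ** 2)),
        "RMSE": float(np.sqrt(np.mean(e ** 2))),
        "MAE":  float(np.mean(np.abs(e))),
        "ME":   float(np.mean(e)),                   # sign reveals over- or under-forecasting
    }
```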

2.4. Autoregressive Integrated Moving Average (ARIMA) Model

[10] considered the extension of the ARMA model to deal with homogeneous non-stationary time series in which $X_t$ itself is non-stationary but its $d$th difference follows a stationary ARMA model. Denoting the $d$th difference of $X_t$ by $\nabla^d X_t$, the model can be written as

$$\varphi(B) X_t = \phi(B) \nabla^d X_t = \theta(B) \varepsilon_t, \quad (10)$$

where $\varphi(B)$ is the nonstationary autoregressive operator such that $d$ of the roots of $\varphi(B) = 0$ are unity and the remainder lie outside the unit circle, and $\phi(B)$ is a stationary autoregressive operator.
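In practice, the ARIMA model in Equation (10) can be fitted and used for out-of-sample forecasting with statsmodels. The sketch below uses one of the orders reported in the study; the placeholder data and variable names are assumptions, not the authors' procedure.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Assumed placeholder data; replace with the actual return series.
rng = np.random.default_rng(0)
series = pd.Series(rng.normal(size=500))

model = ARIMA(series, order=(1, 1, 0))   # phi(B)(1 - B)^d X_t = theta(B) eps_t
result = model.fit()
print(result.summary())                  # parameter estimates plus in-sample AIC/BIC/HQIC
forecasts = result.forecast(steps=23)    # forecasts over a 23-observation horizon
```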

2.5. Heteroscedastic Models

Autoregressive Conditional Heteroscedastic (ARCH) Model: The first model that provides a systematic framework for modeling heteroscedasticity is the ARCH model of [35]. Specifically, an ARCH($q$) model assumes that

$$R_t = \mu_t + a_t, \quad a_t = \sigma_t e_t,$$

$$\sigma_t^2 = \omega + \alpha_1 a_{t-1}^2 + \cdots + \alpha_q a_{t-q}^2, \quad (11)$$

where $\{e_t\}$ is a sequence of independent and identically distributed (i.i.d.) random variables with mean zero, that is $E(e_t) = 0$, and variance one, that is $E(e_t^2) = 1$; $\omega > 0$; and $\alpha_1, \ldots, \alpha_q \geq 0$ [36]. The coefficients $\alpha_i$, for $i > 0$, must satisfy some regularity conditions to ensure that the unconditional variance of $a_t$ is finite.
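A hedged sketch of estimating the ARCH($q$) specification in Equation (11) is given below, using the Python arch package (an assumption; the study does not state its software). The package estimates $\omega$ and the $\alpha_i$ by maximum likelihood under the stated non-negativity constraints.

```python
import numpy as np
import pandas as pd
from arch import arch_model

# Assumed placeholder returns; replace with the actual return series R_t.
rng = np.random.default_rng(1)
returns = pd.Series(rng.normal(size=1000))

# ARCH(2): sigma_t^2 = omega + alpha_1 * a_{t-1}^2 + alpha_2 * a_{t-2}^2
arch_fit = arch_model(returns, mean="Constant", vol="ARCH", p=2, dist="normal").fit(disp="off")
print(arch_fit.summary())
```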

Generalized Autoregressive Conditional Heteroscedastic (GARCH) Model: Although the ARCH model is simple, it often requires many parameters to adequately describe the volatility process of a share price return, so alternative models must be sought. [37] proposed a useful extension known as the generalized ARCH (GARCH) model. For a return series $R_t$, let $a_t = R_t - \mu_t$ be the innovation at time $t$. Then $a_t$ follows a GARCH($q, p$) model if

$$a_t = \sigma_t e_t,$$

$$\sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i a_{t-i}^2 + \sum_{j=1}^{p} \beta_j \sigma_{t-j}^2, \quad (12)$$

where again $e_t$ is a sequence of i.i.d. random variables with mean 0 and variance 1, $\omega > 0$, $\alpha_i \geq 0$, $\beta_j \geq 0$, and $\sum_{i=1}^{\max(p,q)} (\alpha_i + \beta_i) < 1$ (see [38]).

Here, it is understood that $\alpha_i = 0$ for $i > q$ and $\beta_j = 0$ for $j > p$. The latter constraint on $\alpha_i + \beta_i$ implies that the unconditional variance of $a_t$ is finite, whereas its conditional variance $\sigma_t^2$ evolves over time.
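The GARCH recursion in Equation (12) adds lagged conditional variances to the ARCH terms. A sketch of estimating a GARCH(1,1) with an AR(1) mean using the arch package (again an assumption about software) is shown below; the placeholder data are not the study's series.

```python
import numpy as np
import pandas as pd
from arch import arch_model

# Assumed placeholder returns; replace with the actual return series R_t.
rng = np.random.default_rng(2)
returns = pd.Series(rng.normal(size=1000))

# GARCH(1,1): sigma_t^2 = omega + alpha_1 * a_{t-1}^2 + beta_1 * sigma_{t-1}^2
garch_fit = arch_model(returns, mean="AR", lags=1, vol="GARCH",
                       p=1, q=1, dist="normal").fit(disp="off")
print(garch_fit.summary())   # check that alpha_1 + beta_1 < 1 for a finite unconditional variance
```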

Exponential Generalized Autoregressive Conditional Heteroscedastic (EGARCH) Model: The EGARCH model represents a major shift from the ARCH and GARCH models [39]. Rather than modeling the variance directly, EGARCH models the natural logarithm of the variance, and so no parameter restrictions are required to ensure that the conditional variance is positive. The EGARCH($q, p$) model is defined as

$$R_t = \mu_t + a_t, \quad a_t = \sigma_t e_t,$$

$$\ln \sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i \left| \frac{a_{t-i}}{\sigma_{t-i}} \right| + \sum_{k=1}^{r} \gamma_k \left( \frac{a_{t-k}}{\sigma_{t-k}} \right) + \sum_{j=1}^{p} \beta_j \ln \sigma_{t-j}^2, \quad (13)$$

where again $e_t$ is a sequence of i.i.d. random variables with mean 0 and variance 1, and $\gamma_k$ is the asymmetry (leverage) parameter.
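Because EGARCH models $\ln \sigma_t^2$, positivity of the conditional variance is automatic and the asymmetry term $\gamma_k$ can be estimated freely. The study's combined ARIMA-EGARCH models under normal and Student-t errors can be approximated with the arch package as below; the AR(1) mean is a simplification of the full ARIMA mean and the placeholder data are assumptions, so this is a sketch rather than the authors' exact procedure.

```python
import numpy as np
import pandas as pd
from arch import arch_model

# Assumed placeholder returns; replace with the actual return series R_t.
rng = np.random.default_rng(3)
returns = pd.Series(rng.normal(size=1000))

# EGARCH(1,1) with an AR(1) mean, under normal and Student-t innovations.
for dist in ("normal", "t"):
    fit = arch_model(returns, mean="AR", lags=1, vol="EGARCH",
                     p=1, o=1, q=1, dist=dist).fit(disp="off")
    print(dist, "in-sample AIC:", fit.aic)
    print(fit.forecast(horizon=1).variance.iloc[-1])   # one-step-ahead variance forecast
```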

  
