Averaged-Calibration-Length Prediction for Currency Exchange Rates by a Time-Dependent Vasicek Model

Tomasz Serafin^{1}, Anna Michalak^{2}, Łukasz Bielak^{3}, Agnieszka Wyłomańska^{1*}

^{1}Faculty of Pure and Applied Mathematics, Hugo Steinhaus Center, Wroclaw University of Science and Technology, Wroclaw, Poland.

^{2}Faculty of Geoengineering, Mining and Geology, Wroclaw University of Science and Technology, Wroclaw, Poland.

^{3}KGHM Polska Miedz S.A., Lubin, Poland.

DOI: 10.4236/tel.2020.103037

The mining business is extremely sensitive to market factors, especially to price behavior. One of the main market risk factors at KGHM, one of the biggest mining companies in the world, is currency exchange rates. Thus, one of the main problems from the market risk management perspective is to properly predict the dynamics of currency exchange rate data over a long-term horizon. In this paper, we propose to model the data by the so-called extended Vasicek model, a natural generalization of the classical Vasicek model, also known as the Ornstein-Uhlenbeck process. The classical model is very popular in financial data modeling; however, it does not capture possible changes in the long-term mean and long-term variance. The extended model takes into consideration the fact that the dynamics of the data may change over time by using time-varying coefficients. Applying the extended Vasicek model, we examine the problem of long-term prediction and propose a new approach in this context, based on averaging the predictions obtained from different calibration sample lengths.

Keywords

Modeling, Vasicek Model, Time-Dependent Model, Long-Term Prediction, Calibration, Estimation

Share and Cite:

Serafin, T., Michalak, A., Bielak, Ł. and Wyłomańska, A. (2020) Averaged-Calibration-Length Prediction for Currency Exchange Rates by a Time-Dependent Vasicek Model. *Theoretical Economics Letters*, **10**, 579-599. doi: 10.4236/tel.2020.103037.

1. Introduction

The mining business is extremely sensitive to market factors, especially to price behavior. Long-term price forecasting plays a critical role in strategic planning, including scheduling of production, verification of investment portfolios, and preparation of long-term financial plans. Usually, mining companies prepare a base-case forecast scenario, which is used to build the basic production and financial plan, together with alternative optimistic and pessimistic scenarios, which help to prepare the company for significant price movements. Mining companies usually prepare forecasts for base and precious metals prices, FX rates, oil prices, and interest rates over horizons of even more than a dozen years.

Long-term price modeling is very important for the preparation of mining companies' business plans, but at the same time it creates a high risk for planning. The continuously changing macroeconomic situation means that models are not stable and have to be frequently adjusted. Therefore, for prediction purposes, different approaches are being used, such as fundamental analysis, econometric modeling, or methods based on price probability distributions such as stochastic process simulations. The use of the last method in particular has been growing in the mining industry in recent years, given the high volatility of the metals market, which increases the propensity to use more flexible forecasting methods.

Over the past decades, the literature dedicated to the forecasting problem has been developing dynamically, constantly introducing new, faster, and more precise forecasting techniques and algorithms (Adcock & Gradojevic, 2019; Dai, Zhu, & Dong, 2020; Fu, Li, Sun, & Li, 2019; Pezzulli et al., 2006; Göb, Lurz, & Pievatolo, 2013; Li, 2019; Contino & Gerlach, 2017; Skiadas, Papagiannakis, & Mourelatos, 1997; Clements, Hurn, & White, 2006; Zhao, Xie, & West, 2016; Reikard, 2006; Hawkes & Date, 2007; Herwartz & Reimers, 2002).

The most common approach to forecasting is to choose a sufficient portion of historical data, calibrate the forecasting model on the selected sample, and predict future values. The length of the historical data sample is most often chosen in an "ad-hoc" fashion; usually, authors decide to use as many historical values as possible. However, it has been shown in the literature on short-term forecasting that, using different approaches, the forecasting errors can be significantly lowered. The first popular method involves averaging predictions obtained from different models calibrated to the same historical data sample (Öğünç et al., 2013; Pesaran & Timmermann, 2007). It turns out that the averaged forecast usually outperforms each of the single predictions; however, this approach does not resolve the problem of the optimal calibration sample length. Another technique that brings gains in terms of forecasting accuracy involves averaging forecasts obtained from the same model but calibrated to historical samples of different lengths (Pesaran & Pick, 2011; Marcjasz, Serafin, & Weron, 2018; Hubicka, Marcjasz, & Weron, 2018; Welfe, 2011). The rationale behind this approach is that, when calibrating the model to a longer sample, we capture the long-term trends, whereas, using a short calibration sample, we take into consideration only the recent, short-term behavior of the price process.

The approaches mentioned above have proved to be effective and are extensively used in the context of short-term forecasting, but to the best of our knowledge, the problem of selecting the optimal calibration sample length remains practically overlooked when it comes to long-term predictions. In the existing literature on long-term forecasting, authors most often choose the largest possible historical data sample and then perform forecasting (Jorion & Sweeney, 1996; Garratt & Mise, 2014). Although some researchers argue that averaging long-term predictions obtained from different models clearly improves forecasting performance (Kapetanios, Labhard, & Price, 2008; Hull & White, 1990), the problem of selecting the optimal calibration length still remains unaddressed.

In this paper, we apply the generalized Vasicek model with time-dependent parameters to the description of currency exchange rate data for long-term prediction. The analyzed currency exchange rates are among the main risk factors at KGHM, one of the biggest mining companies in the world. KGHM extracts and processes natural resources (copper ore, molybdenum, nickel, gold, palladium, and platinum). The company possesses a geographically diversified portfolio of mining projects and owns production plants on three continents: Europe, South America, and North America. Thus, as mentioned above, from the risk management perspective of KGHM, it is crucial to correctly predict the dynamics of the exchange rates in the long-term horizon.

The generalized Vasicek model is a natural extension of the classical Vasicek process, which was originally used to model interest rate data (Vasicek, 1977). This process is also known as the Ornstein-Uhlenbeck model, introduced by Uhlenbeck and Ornstein (Uhlenbeck & Ornstein, 1930) as a model for the velocity process in Brownian diffusion. The Ornstein-Uhlenbeck process is often considered in theoretical studies in physics and mathematics. Moreover, it has also been used in many applications (Zhang, Xiao, Zhang, & Niu, 2014; Chaiyapo & Phewchean, 2017; Obuchowski & Wylomanska, 2013). It exhibits the so-called mean reversion property, which means that over time the process tends to drift towards its long-term mean.

However, analysis of real data indicates that the classical Ornstein-Uhlenbeck process (or Vasicek model) is often insufficient. In the literature one can find different modifications of the classical model (Brockwell, 2014; Brockwell, 2001; Brockwell, Davis, & Yang, 2011; Janczura, Orzeł, & Wyłomańska, 2011; Wyłomańska, 2011). In this paper, we analyze the extended Vasicek model obtained by replacing the constant coefficients in the classical process with time-dependent coefficients. The considered process is well suited for modeling the analyzed exchange rate data under the condition that they are homogeneous. However, in the analyzed data, we observe that some of their characteristics change in time; thus, in the long-term prediction problem, this behavior needs to be taken into consideration. In general, as mentioned, one may expect that the longer the calibration length, the better the prediction performance. This statement is true, but only in the case of homogeneous data. This problem is highlighted in the paper. We demonstrate that the analyzed vectors of observations cannot be modeled by the same process; thus, in the problem of long-term prediction, the non-homogeneous character of the time series should be taken into consideration.

The paper is structured as follows: In Section 2, we introduce the generalized Vasicek model and show its main properties. Moreover, we also present how to estimate the parameters of the model. Next, in Section 3, we formulate the problem for simulated data with non-homogeneous behavior. The main part of the paper is Section 4, where we present a deep analysis of two sets of observations corresponding to the currency exchange rates EUR/USD and USD/PLN. In the next section, we try to explain the results obtained in Section 4 by demonstrating the non-homogeneous character of the data with a Markov regime switching model. The last section concludes the paper.

2. The Time-Dependent Vasicek Model

2.1. The Classical Vasicek Model

The classical Vasicek model was introduced in 1977 by Vasicek (Vasicek, 1977). It is one of the earliest financial models describing the evolution of interest rates, currency exchange rates, and commodity prices. This diffusion process $\left\{{X}_{t}\right\}$ (also called the Ornstein-Uhlenbeck process) has a mean-reverting property and is defined by the following stochastic differential equation (SDE):

$\text{d}{X}_{t}=\theta \left(\mu -{X}_{t}\right)\text{d}t+\sigma \text{d}{B}_{t},$ (1)

where $\mu \in \mathbb{R}$, $\theta >0$, and $\sigma >0$ are the model parameters, and $\left\{{B}_{t}\right\}$ is a standard Brownian motion.

The $\mu $ parameter represents the long-term mean level, $\theta $ corresponds to the speed of mean reversion, and $\sigma $ is the volatility. This means that all future trajectories of the $\left\{{X}_{t}\right\}$ process will oscillate around $\mu $, and $\theta $ defines the speed at which these trajectories regroup around the long-term mean.

The unique solution of SDE (1) is defined as:

${X}_{t}={X}_{s}{\text{e}}^{-\theta \left(t-s\right)}+\mu \left(1-{\text{e}}^{-\theta \left(t-s\right)}\right)+\sigma {\displaystyle {\int}_{s}^{t}{\text{e}}^{-\theta \left(t-u\right)}\text{d}{B}_{u}}.$ (2)

For any $s<t$ the process $\left\{{X}_{t}\right\}$ has a conditional Gaussian distribution with conditional expectation ${X}_{s}{\text{e}}^{-\theta \left(t-s\right)}+\mu \left(1-{\text{e}}^{-\theta \left(t-s\right)}\right)$ and conditional variance $\frac{{\sigma}^{2}}{2\theta}\left(1-{\text{e}}^{-2\theta \left(t-s\right)}\right)$. When $t$ tends to infinity, the expectation tends to $\mu $ and the variance to $\frac{{\sigma}^{2}}{2\theta}$.

The simulation and estimation procedures for model (1) are based on the Euler-Maruyama method, one of the popular approaches for approximating the numerical solution of an SDE. The scheme is given as follows:

${X}_{t+\Delta t}={X}_{t}+\theta \left(\mu -{X}_{t}\right)\Delta t+\sigma \sqrt{\Delta t}{\epsilon}_{t},$ (3)

where $\text{\Delta}t$ is the discretization step, ${\epsilon}_{t}$ for $t=0,\text{\Delta}t,2\text{\Delta}t,\cdots ,T$ is a sequence of independent and identically distributed standard normal random variables (denoted $N\left(0,1\right)$), and $T$ is the time horizon. Notice that if $\text{\Delta}t$ is equal to 1, Equation (3) becomes a discrete-time autoregressive model of order 1 (AR(1)) (Musiela & Rutkowski, 1998).
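To illustrate, the Euler-Maruyama scheme (3) can be sketched in a few lines of Python; this is an illustrative implementation under the stated assumptions (standard normal innovations, equally spaced grid), not the authors' code, and the parameter values below are hypothetical:

```python
import numpy as np

def simulate_vasicek(x0, theta, mu, sigma, dt, n_steps, seed=None):
    """Simulate one path of the classical Vasicek model (1)
    via the Euler-Maruyama recursion (3)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    eps = rng.standard_normal(n_steps)  # i.i.d. N(0, 1) innovations
    for t in range(n_steps):
        x[t + 1] = x[t] + theta * (mu - x[t]) * dt + sigma * np.sqrt(dt) * eps[t]
    return x

# Mean reversion: a path started far above mu drifts back toward it.
path = simulate_vasicek(x0=5.0, theta=2.0, mu=1.0, sigma=0.1,
                        dt=1 / 252, n_steps=2520, seed=0)
```

With $\text{\Delta}t=1$ the same recursion reproduces the AR(1) representation mentioned above.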

2.2. The Vasicek Model with Time-Dependent Parameters

The extended Vasicek model (also called the generalized Vasicek model) with time-dependent parameters belongs to the group of time-inhomogeneous models (Fan, Jiang, Zhang, & Zhou, 2003). Hull and White extended the classical model by allowing both the drift and the variance coefficients to be time-varying; the corresponding SDE is given by the formula (Fan, Jiang, Zhang, & Zhou, 2003):

$\text{d}{X}_{t}=\left\{{\alpha}_{0}\left(t\right)+{\alpha}_{1}\left(t\right){X}_{t}\right\}\text{d}t+{\beta}_{0}\left(t\right)\text{d}{B}_{t},$ (4)

where ${\alpha}_{0}\left(t\right),{\alpha}_{1}\left(t\right),{\beta}_{0}\left(t\right)$ are deterministic functions of time. The process given by (1) is a special case of model (4). However, in practical applications, it is more reasonable to expect that at least some of the parameters of the extended model are time-dependent.

The explicit solution of the formula (4) is given by (Musiela & Rutkowski, 1998):

${X}_{t}={\text{e}}^{-l\left(t\right)}\left({X}_{s}+{\displaystyle {\int}_{s}^{t}{\text{e}}^{l\left(u\right)}{\alpha}_{0}\left(u\right)\text{d}u}+{\displaystyle {\int}_{s}^{t}{\text{e}}^{l\left(u\right)}{\beta}_{0}\left(u\right)\text{d}{B}_{u}}\right),$ (5)

where $l\left(t\right)=-{\displaystyle {\int}_{s}^{t}{\alpha}_{1}\left(u\right)\text{d}u}$. The conditional mean of ${X}_{t}$ (with known ${X}_{s}$ ) is given by ${\text{e}}^{-l\left(t\right)}\left({X}_{s}+{\displaystyle {\int}_{s}^{t}{\text{e}}^{l\left(u\right)}{\alpha}_{0}\left(u\right)\text{d}u}\right)$, and the conditional variance is ${\text{e}}^{-2l\left(t\right)}{\displaystyle {\int}_{s}^{t}{\text{e}}^{2l\left(u\right)}{\beta}_{0}^{2}\left(u\right)\text{d}u}$. For ${\alpha}_{0}\left(t\right)=\theta \mu$, ${\alpha}_{1}\left(t\right)=-\theta$, and ${\beta}_{0}\left(t\right)=\sigma$, these expressions reduce to the solution (2) and the conditional moments of the classical model.

2.3. Estimation

For the estimation of the time-dependent parameters of model (4), we assume that ${\alpha}_{0}\left(t\right),{\alpha}_{1}\left(t\right),{\beta}_{0}\left(t\right)$ are twice continuously differentiable functions. Denote the data sample $\left\{{X}_{{t}_{i}},i=1,\cdots ,n+1\right\}$ at the discrete time points ${t}_{1}<\cdots <{t}_{n+1}$. In the presented application, the time points are equally spaced with ${\text{\Delta}}_{{t}_{i}}={t}_{i+1}-{t}_{i}$. The discretized version of (4) can be expressed as:

${Y}_{i}=\left({\alpha}_{0}\left({t}_{i}\right)+{\alpha}_{1}\left({t}_{i}\right){X}_{{t}_{i}}\right){\Delta}_{{t}_{i}}+{\beta}_{0}\left({t}_{i}\right)\sqrt{{\Delta}_{{t}_{i}}}{\epsilon}_{{t}_{i}},$ (6)

where ${Y}_{i}={X}_{{t}_{i+1}}-{X}_{{t}_{i}}$, and ${\epsilon}_{{t}_{i}}$ are independent standard normal random variables with mean 0 and variance 1.

The form of the functions corresponding to the time-dependent parameters is not specified in general. For the sake of simplicity, we approximate them locally at each point by a constant, obtaining a set of points which can later be used for curve fitting. In this way, we obtain a continuous approximation of the time-dependent parameters.

We assume that, for a given point ${t}_{0}$ and a time point $t$ in a small neighborhood $h$ of ${t}_{0}$, the approximation is given by:

${\alpha}_{i}\left(t\right)\approx {\alpha}_{i}\left({t}_{0}\right),{\beta}_{0}\left(t\right)\approx {\beta}_{0}\left({t}_{0}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=0,1,$ (7)

where ${\alpha}_{i}\left({t}_{0}\right)=\text{const}$ for $i=0,1$ and ${\beta}_{0}\left({t}_{0}\right)=\text{const}$. We denote the local estimators of ${\alpha}_{i}\left({t}_{0}\right),i=0,1$, by:

${\stackrel{^}{\alpha}}_{0}\left({t}_{0}\right)\approx a,\text{\hspace{0.17em}}{\stackrel{^}{\alpha}}_{1}\left({t}_{0}\right)\approx b.$ (8)

Let $K(\cdot )$ be a nonnegative weight function, called a kernel function. Using the local regression technique (Fan, 2018), the estimates of ${\alpha}_{i}\left({t}_{0}\right),i=0,1$, can be found via the following weighted least-squares criterion: with respect to the parameters $a$ and $b$, we minimize

$\underset{i=1}{\overset{n}{{\displaystyle \sum}}}{\left[\frac{{Y}_{i}}{{\Delta}_{{t}_{i}}}-a-b{X}_{{t}_{i}}\right]}^{2}{K}_{h}\left({t}_{i}-{t}_{0}\right),$ (9)

where ${K}_{h}(\cdot )=K\left(\cdot /h\right)/h$. In our studies we assume that $K(\cdot )$ is the one-sided Epanechnikov kernel (Fan, Jiang, Zhang, & Zhou, 2003; Hart & Wehrly, 1986):

$K\left(t\right)=\frac{3}{4}\left(1-{t}^{2}\right)I\left(-1<t<0\right).$ (10)

In this way, we use only the data observed in the time interval $\left[{t}_{0}-h,{t}_{0}\right)$, which ensures that only historical data are used. When it comes to bandwidth selection, in this paper we take $h=20$ (which corresponds to the approximate number of trading days during a month), as the optimal bandwidth selection problem is not the main subject of our concern. The problem of optimal bandwidth selection is discussed in (Fan, 2018).

The explicit formulas for $\stackrel{^}{a}$ and $\stackrel{^}{b}$ can be presented as:

$\stackrel{^}{a}=\frac{{{\displaystyle \sum}}_{i=1}^{n}\frac{{Y}_{i}{C}_{i}}{{\Delta}_{{t}_{i}}}-\frac{{{\displaystyle \sum}}_{i=1}^{n}\frac{{Y}_{i}{X}_{{t}_{i}}{C}_{i}}{{\Delta}_{{t}_{i}}}{{\displaystyle \sum}}_{i=1}^{n}{X}_{{t}_{i}}{C}_{i}}{{{\displaystyle \sum}}_{i=1}^{n}{X}_{{t}_{i}}^{2}{C}_{i}}}{{{\displaystyle \sum}}_{i=1}^{n}{C}_{i}-\frac{{\left({{\displaystyle \sum}}_{i=1}^{n}{X}_{{t}_{i}}{C}_{i}\right)}^{2}}{{{\displaystyle \sum}}_{i=1}^{n}{X}_{{t}_{i}}^{2}{C}_{i}}},$ (11)

and

$\stackrel{^}{b}=\frac{{{\displaystyle \sum}}_{i=1}^{n}\frac{{X}_{{t}_{i}}{Y}_{i}{C}_{i}}{{\Delta}_{{t}_{i}}}-\stackrel{^}{a}{{\displaystyle \sum}}_{i=1}^{n}{X}_{{t}_{i}}{C}_{i}}{{{\displaystyle \sum}}_{i=1}^{n}{X}_{{t}_{i}}^{2}{C}_{i}},$ (12)

where ${C}_{i}={K}_{h}\left({t}_{i}-{t}_{0}\right)$. Both formulas follow from the normal equations of the weighted least-squares problem (9).

Let $\stackrel{^}{\mu}\left(t,{X}_{t}\right)={\stackrel{^}{\alpha}}_{0}\left(t\right)+{\stackrel{^}{\alpha}}_{1}\left(t\right){X}_{t}$ be the estimator of the drift of the process and let:

${\stackrel{^}{E}}_{t}^{2}={\left(\frac{{Y}_{t}-\stackrel{^}{\mu}\left(t,{X}_{t}\right){\Delta}_{t}}{\sqrt{{\Delta}_{t}}}\right)}^{2}.$ (13)

According to the formulas (6) and (13) we obtain:

${\stackrel{^}{E}}_{t}^{2}\approx {\beta}_{0}^{2}\left(t\right){\epsilon}_{t}^{2}.$ (14)

Next, by maximizing the pseudo-likelihood function, we obtain the estimator of ${\beta}_{0}^{2}\left({t}_{0}\right)$ (Fan, Jiang, Zhang, & Zhou, 2003):

${\stackrel{^}{\beta}}_{0}^{2}\left({t}_{0}\right)=\frac{{{\displaystyle \sum}}_{i=1}^{n}\text{\hspace{0.05em}}{C}_{i}{\stackrel{^}{E}}_{{t}_{i}}^{2}}{{{\displaystyle \sum}}_{i=1}^{n}\text{\hspace{0.05em}}{C}_{i}}.$ (15)
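The whole local estimation step, i.e., the one-sided kernel (10), the weighted least-squares solution of (9), and the residual-based variance estimate (13)-(15), can be sketched as follows. This is an illustrative Python implementation under the stated assumptions (equally spaced data, local-constant approximation), not the authors' code:

```python
import numpy as np

def epanechnikov_left(t):
    # One-sided Epanechnikov kernel (10), supported on -1 < t < 0.
    return 0.75 * (1.0 - t**2) * ((t < 0) & (t > -1))

def local_estimates(x, t0, h=20, dt=1.0):
    """Local-constant estimates of alpha0(t0), alpha1(t0) and beta0(t0)^2
    from an equally spaced series x, via the weighted least-squares
    criterion (9) and the kernel-weighted residual average (15)."""
    t = np.arange(len(x) - 1)                 # times t_i of the increments
    y = np.diff(x)                            # Y_i = X_{t_{i+1}} - X_{t_i}
    z = y / dt                                # responses Y_i / Delta_{t_i}
    c = epanechnikov_left((t - t0) / h) / h   # weights C_i = K_h(t_i - t0)
    # Weighted least squares for z_i ~ a + b * x_i (normal equations).
    S0, Sx, Sxx = c.sum(), (c * x[:-1]).sum(), (c * x[:-1] ** 2).sum()
    Sz, Sxz = (c * z).sum(), (c * x[:-1] * z).sum()
    det = S0 * Sxx - Sx**2
    a = (Sz * Sxx - Sxz * Sx) / det           # estimate of alpha0(t0)
    b = (S0 * Sxz - Sx * Sz) / det            # estimate of alpha1(t0)
    # Residual-based variance estimate, Equations (13)-(15).
    e2 = (y - (a + b * x[:-1]) * dt) ** 2 / dt
    beta0_sq = (c * e2).sum() / c.sum()
    return a, b, beta0_sq
```

On data simulated with constant parameters, the three estimates should recover ${\alpha}_{0}$, ${\alpha}_{1}$, and ${\beta}_{0}^{2}$ up to sampling noise.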

In the analysis of the currency exchange rate data, for simplicity, we assume that the functions ${\alpha}_{0}\left(t\right)$ and ${\alpha}_{1}\left(t\right)$ are described by the three-term Fourier model (16), and ${\beta}_{0}\left(t\right)$ by the two-term Fourier model (17). Namely:

${\alpha}_{i}\left(t\right)={a}_{0}+\underset{j=1}{\overset{3}{{\displaystyle \sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{a}_{j}\mathrm{cos}\left(jt\omega \right)+{b}_{j}\mathrm{sin}\left(jt\omega \right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=0,1,$ (16)

${\beta}_{0}\left(t\right)={a}_{0}+\underset{j=1}{\overset{2}{{\displaystyle \sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{a}_{j}\mathrm{cos}\left(jt\omega \right)+{b}_{j}\mathrm{sin}\left(jt\omega \right).$ (17)

The estimated functions are finally applied to the prediction. More precisely, after estimating the functions ${\alpha}_{0}(\cdot ),{\alpha}_{1}(\cdot )$ and ${\beta}_{0}(\cdot )$ from the historical data, we use them later in the prediction period, as they represent the deterministic components of the model.
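A Fourier model such as (16)-(17) can be fitted to the locally estimated parameter points by least squares. In the sketch below the fundamental frequency $\omega$ is fixed in advance (one full period over the sample span), an assumption made here so that the fit stays linear; MATLAB-style Fourier fits also treat $\omega$ as a free parameter, so this is a simplified illustration rather than the authors' procedure:

```python
import numpy as np

def fit_fourier(t, y, n_terms=3, omega=None):
    """Least-squares fit of an n-term Fourier model
    a0 + sum_j [a_j cos(j w t) + b_j sin(j w t)], as in (16)-(17).
    omega is fixed (default: one period over the span) => linear fit."""
    t = np.asarray(t, dtype=float)
    if omega is None:
        omega = 2 * np.pi / (t[-1] - t[0])
    cols = [np.ones_like(t)]
    for j in range(1, n_terms + 1):
        cols.append(np.cos(j * omega * t))
        cols.append(np.sin(j * omega * t))
    A = np.column_stack(cols)                      # design matrix
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # [a0, a1, b1, a2, b2, ...]
    return coef, omega, A @ coef                   # coefficients, frequency, fitted curve
```

The locally estimated points ${\stackrel{^}{\alpha}}_{0}\left({t}_{i}\right),{\stackrel{^}{\alpha}}_{1}\left({t}_{i}\right),{\stackrel{^}{\beta}}_{0}\left({t}_{i}\right)$ then play the role of `y` above, with `n_terms=3` for the drift functions and `n_terms=2` for the volatility function.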

3. Problem Formulation

In the econometric literature, the vast majority of papers regarding forecasting and modeling focus on developing more complex and better performing statistical models that more precisely represent the features of the underlying price process. Obviously, each of these methods first involves calibrating the model to a portion of historical data.

Interestingly, hardly any authors consider the selection of the optimal calibration sample length; usually the choice is made in an "ad-hoc" fashion, and the most common approach is to choose the longest training sample possible. However, some researchers argue that the quality of the forecasts strongly depends on the choice of the calibration window. Pesaran & Pick (Pesaran & Pick, 2011) focus on the presence of structural breaks (rapid and unexpected changes in the underlying process) and show that, for the futures market, averaging predictions from the autoregressive model over different estimation windows lowers the root mean square forecast error. Marcjasz et al. (Marcjasz, Serafin, & Weron, 2018) and Hubicka et al. (Hubicka, Marcjasz, & Weron, 2018) conduct a study for the electricity market and conclude that averaging forecasts over a number of carefully selected calibration window lengths yields significantly better forecasting accuracy than the single "optimal" (selected ex-post) calibration window.

In this study, we focus on the problem of parameter estimation for the time-varying Vasicek model using different lengths of the historical data calibration sample. We then forecast future values by simulating a number of trajectories from the extended Vasicek model with the estimated parameters, and we investigate how the length of the calibration sample impacts the quality of our predictions. Finally, we try to find the "optimal" calibration window length. Our general methodology consists of several steps, which are presented as a flowchart in Figure 1.

In order to illustrate the formulated problem, we first present our approach on a simulated data set. As our exemplary data, we consider a simulated trajectory (Figure 2) of the model defined in (4) with piecewise constant coefficients. We use the first 252 × 9 = 2268 observations (on the left-hand side of the dashed vertical line) as a training period for the estimation of the model parameters. Note that in the simulated trajectory we can distinguish three different regimes, i.e., the trajectory consists of three realizations of the Vasicek process, simulated with three different sets of parameters (we denote these three parts ${X}_{1},{X}_{2},{X}_{3}$, respectively). The values of these parameters are shown in Table 1.

The next step requires the selection of a calibration period for the parameter estimation—a training sample consisting of historical data on which we estimate the parameters of the Vasicek model (to later use the estimated model to forecast future values). A common belief is that the longer the calibration sample, the better the results. However, in our case, as can be seen from Figure 2, the data that we want to forecast, i.e., observations from 2269 to 2520 (on the right-hand side of the vertical dashed line), were generated using the same set of parameters as the last 3 × 252 observations of the calibration sample (observations from 1513 to 2268). Therefore, with this knowledge, intuitively, taking a long calibration period that covers the whole calibration sample should bring less satisfactory results than calibrating the model to the last 3 × 252 observations. In our study we consider different lengths of the calibration period (calibration windows), ranging from 3 × 252 up to 9 × 252 observations for the simulated case. Note that all training samples are left-truncated so that they end on the same observation of the training period, i.e., when considering a 3 × 252 observation calibration window we take observations from 1513 to 2268, and for a 7 × 252 calibration sample we take observations from 505 to 2268.

Once the parameters of the Vasicek model are estimated from the historical data, we predict future values by simulating 10,000 trajectories (each containing 252 observations) from the calibrated model. Next, we evaluate the performance of our predictions using the two following measures: the mean absolute percentage error (MAPE) and a novel technique introduced in (Sikora, Michalak, Bielak, Miśta, & Wyłomańska, 2019), called the validation factor (VF).

The first measure, MAPE, is calculated in the following way:

$\text{MAPE}\left(cal\right)=\frac{1}{NT}\underset{n=1}{\overset{N}{{\displaystyle \sum}}}\text{\hspace{0.17em}}\underset{t=1}{\overset{T}{{\displaystyle \sum}}}\frac{\left|{\stackrel{^}{X}}_{t,n}^{cal}-{X}_{t}\right|}{{X}_{t}},$ (18)

where ${\stackrel{^}{X}}_{t,n}^{cal}$ is the predicted value at prediction time $t$ from the $n$-th trajectory generated for the validated period, obtained from the Vasicek model calibrated to a training sample of length $cal$; ${X}_{t}$ is the real value at time $t$; $N$ is the number of generated trajectories; and $T$ is the length of each trajectory. We take $N=10000$ and $T=252$.
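As an illustration, MAPE (18) over a set of simulated trajectories reduces to a single averaged ratio (a minimal sketch, assuming the predictions are stored as an $N \times T$ array):

```python
import numpy as np

def mape(pred, real):
    """MAPE of Equation (18): pred is an (N, T) array of N simulated
    trajectories of length T; real is the length-T vector of true values."""
    pred, real = np.asarray(pred), np.asarray(real)
    return np.mean(np.abs(pred - real) / real)

# Toy example: two flat trajectories, each 10% away from a flat real path.
pred = np.array([[110.0, 110.0], [90.0, 90.0]])
real = np.array([100.0, 100.0])
err = mape(pred, real)  # every entry contributes |±10| / 100 = 0.1
```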

Table 1. Parameters of the Vasicek model for each part of the trajectory in Figure 2.

Figure 1. The schematic algorithm of the applied methodology.

Figure 2. A sample simulated trajectory with three distinguishable regimes X_{1}, X_{2} and X_{3} (marked with different colors). The dashed, vertical line indicates the end of the training period.

In the next step of assessing the performance, we follow (Sikora, Michalak, Bielak, Miśta, & Wyłomańska, 2019) and evaluate the quality of our predictions with a novel method based on quantile lines computed from the 10,000 simulated trajectories. The quantile lines $Q\left(cal\right)$ are obtained by calculating the empirical quantiles of the simulated values at each time point:

$Q\left(cal\right)=\left[\begin{array}{ccc}{Q}_{0.05}^{cal}\left({t}_{1}\right)& \cdots & {Q}_{0.05}^{cal}\left({t}_{252}\right)\\ {Q}_{0.1}^{cal}\left({t}_{1}\right)& \cdots & {Q}_{0.1}^{cal}\left({t}_{252}\right)\\ \vdots & & \vdots \\ {Q}_{0.9}^{cal}\left({t}_{1}\right)& \cdots & {Q}_{0.9}^{cal}\left({t}_{252}\right)\\ {Q}_{0.95}^{cal}\left({t}_{1}\right)& \cdots & {Q}_{0.95}^{cal}\left({t}_{252}\right)\end{array}\right],$ (19)

where ${Q}_{q}^{cal}\left(t\right)$ is the $q$-th quantile of the simulated prices from a certain calibration window $cal$, calculated from the values of the 10,000 simulations at time $t$.
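The matrix (19) is simply the set of per-time-point empirical quantiles of the simulated trajectories; a minimal sketch, assuming the quantile levels 0.05, 0.10, …, 0.95 shown in (19):

```python
import numpy as np

def quantile_lines(trajs, qs=None):
    """Quantile-line matrix Q(cal) of Equation (19): empirical quantiles
    of the simulated values at each time point; trajs has shape (N, T)."""
    if qs is None:
        qs = np.arange(0.05, 0.96, 0.05)     # 0.05, 0.10, ..., 0.95
    return np.quantile(trajs, qs, axis=0)    # shape (len(qs), T)

# 101 "trajectories" taking the values 0, 1, ..., 100 at every time point.
trajs = np.repeat(np.arange(101.0)[:, None], 3, axis=1)
Q = quantile_lines(trajs)
```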

The second error measure, VF, is calculated in the following way:

$\text{VF}\left(cal\right)=\frac{1}{\#P}\underset{p\in P}{{\displaystyle \sum}}{\left|\varphi \left(p\right)-p\right|}^{2},$ (20)

where $P=\left\{0.1,0.2,\cdots ,0.9\right\}$ is the set of possible values of $p$, $\varphi \left(p\right)$ is the fraction of real data from the validated period lying between the calculated quantile lines $\left(1-p\right)/2$ and $p+\left(1-p\right)/2$, and $\#P$ is the size of the set $P$. When evaluating our results with this factor, we pay particular attention to the amount of real data placed between certain quantile lines—if the model is correctly estimated, 90% of the data should lie between the 0.05 and 0.95 quantile lines, 50% between the 0.25 and 0.75 quantile lines, etc. Similarly to MAPE, the lower the value of VF, the better the results of the model parameter estimation.
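The VF computation (20) can be sketched directly from the simulated trajectories; this is an illustrative implementation (quantile lines recomputed on the fly), not the authors' code:

```python
import numpy as np

def validation_factor(trajs, real):
    """VF of Equation (20): for each p in {0.1, ..., 0.9}, phi(p) is the
    fraction of the real path lying between the (1-p)/2 and p+(1-p)/2
    empirical quantile lines; trajs has shape (N, T), real shape (T,)."""
    ps = np.arange(0.1, 0.91, 0.1)
    vf = 0.0
    for p in ps:
        lo = np.quantile(trajs, (1 - p) / 2, axis=0)
        hi = np.quantile(trajs, p + (1 - p) / 2, axis=0)
        phi = np.mean((real >= lo) & (real <= hi))
        vf += (phi - p) ** 2
    # A perfectly calibrated model gives phi(p) ~ p, hence VF ~ 0.
    return vf / len(ps)
```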

The results of the evaluation for the simulated data case are presented in Table 2. It turns out that, indeed, taking the shortest, 3-year (3 × 252 observations) calibration sample produced the best results in terms of both MAPE and VF. This simple analysis is a motivation and a starting point for the real data considerations.

4. Real Data Analysis

In this section, we present the results of our investigation concerning the selection of the calibration window for the estimation of the generalized Vasicek model parameters for real data, and we propose a new approach to the considered issue. We consider daily EUR/USD and USD/PLN currency exchange rates, both spanning from 2 January 1997 up to 2 January 2015. We use the first 10 years of data, i.e., from 2 January 1997 to 29 December 2006 (the dashed, vertical line in Figure 3 and Figure 4 marks the end of this period), as the initial period for the model calibration. The data are freely available (https://stooq.pl/, 2020). For each currency pair, we examine the influence of the length of the training sample on the quality of our 1-year forecasts—the methodology is similar to the one used in the case of the simulated data. For each test-year (from 2007 to 2014), we first calibrate the generalized Vasicek model's parameters on different lengths of historical data and then generate 10,000 exchange rate trajectories for the corresponding year. Then we use the real data to evaluate the predictions in terms of both MAPE and VF—the results are shown in Table 3 and Table 4. The columns are labeled from 2 to 10 and correspond to the length (in years) of the historical data sample used to estimate the generalized Vasicek model's parameters.

For each row (which corresponds to a certain test-year), we mark the best performing calibration window, i.e., the calibration window length for which the generated trajectories produced the best results in terms of a certain error measure (MAPE or VF). This allows us to clearly depict which calibration window length was the "optimal" one in a certain year.

4.1. Results

As can be seen from Table 3, for the EUR/USD pair, no clear pattern emerges from the evaluation of our predictions, either in terms of MAPE or in terms of VF. None of the calibration windows is consistently chosen as the best one; their performance is highly unpredictable and changes significantly across test-years.

Figure 3. USD/PLN daily exchange rates from 2 January 1997 to 2 January 2015. The dashed, vertical line indicates the end of the initial 10-year calibration period.

Figure 4. EUR/USD daily exchange rates from 2 January 1997 to 2 January 2015. The dashed, vertical line indicates the end of the initial 10-year calibration period.

Table 2. Results in terms of MAPE and VF for the simulated data case. Columns refer to lengths of the model calibration window. The best performing model is marked in blue.

Table 3. Results in terms of MAPE and VF for the EUR/USD data. Columns refer to lengths of the model calibration window (in years). The best performing model for each year is marked in blue.

Table 4. Results in terms of MAPE and VF for the USD/PLN data. Columns refer to lengths of the model calibration window (in years). The best performing model for each year is marked in blue.

Somewhat surprisingly, and in contradiction with the common belief, the longest 10-year window is not a top performer. It is the optimal choice only in the years 2009 and 2010 when looking at MAPE, and it is never selected as the best model by the VF criterion.

When looking at the results for USD/PLN in Table 4, a similar conclusion as for EUR/USD can be drawn. Neither MAPE nor VF confirms that one particular window length outperforms the rest. Looking at MAPE for USD/PLN (Table 4), we notice that almost every year a different calibration window length was chosen as the optimal one. Similarly, when looking at the VF criterion, it is impossible to recommend just one calibration sample length. It is also worth noting that the optimal calibration windows for a certain year were almost always different for the two data sets.

4.2. Averaging Predictions across Calibration Windows

As we have seen in the previous subsection, the choice of just one optimal calibration window length for estimating the extended Vasicek model parameters is, in our case, a cumbersome and hardly feasible task. The performance of particular windows changes significantly over time, and an inappropriate choice can bring extremely disappointing effects by significantly lowering the accuracy of the predictions. It is also worth mentioning that the selection of the optimal calibration window was made ex-post; we would not be able to indicate the best calibration sample length in advance.

Therefore, in order to tackle this issue, analogously to (Hubicka, Marcjasz, & Weron, 2018) and (Marcjasz, Serafin, & Weron, 2018), we examine several combinations of different calibration window lengths and average the predictions obtained from these windows. We then evaluate the accuracy of the averaged forecasts in terms of MAPE and VF. We consider five combinations of calibration windows, denoted by Avg(·), where the argument describes the windows chosen for averaging. We use MATLAB notation to refer to a given combination: e.g., Avg(2:10) refers to averaging predictions from all window lengths from 2 to 10 years, while Avg(2:4, 8:10) refers to averaging predictions from the three shortest (2-, 3-, 4-year) and three longest (8-, 9-, 10-year) windows.
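The Avg(·) notation boils down to plain lists of window lengths whose forecasts are averaged. A minimal Python sketch of the five combinations considered in this paper (the variable names are ours, for illustration only):

```python
# The Avg(.) combinations as lists of calibration window lengths (in years).
# MATLAB's a:b colon notation is half-inclusive in Python's range(a, b + 1).
combinations = {
    "Avg(2:4)":       list(range(2, 5)),                       # short windows
    "Avg(5:7)":       list(range(5, 8)),                       # medium windows
    "Avg(8:10)":      list(range(8, 11)),                      # long windows
    "Avg(2:4, 8:10)": list(range(2, 5)) + list(range(8, 11)),  # short + long mix
    "Avg(2:10)":      list(range(2, 11)),                      # all windows
}
print(combinations["Avg(2:4, 8:10)"])  # [2, 3, 4, 8, 9, 10]
```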

We use two different approaches to average the predictions; the choice depends on the evaluation measure.

4.2.1. Averaging Approach When Evaluating MAPE

When evaluating the forecasts in terms of MAPE, we take each simulated trajectory from a given calibration window length and average its values with the corresponding trajectories from the other windows in the chosen combination.

As an example, take Avg(2:4): we average the exchange rate predictions obtained from the extended Vasicek model calibrated to 2-, 3-, and 4-year historical data samples. For each calibration window length, we obtain 10,000 simulated trajectories of the underlying's price. We take the first trajectory from each calibration window length and average their values, obtaining a new, averaged trajectory of the same (1-year) length. This procedure is repeated for all trajectories and for each test-year. Then, MAPE is calculated for the 10,000 new, averaged trajectories in the same way as shown in Section 3.
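The trajectory-wise averaging described above can be sketched as follows. This is an illustrative implementation with toy numbers in place of the paper's 10,000 simulated trajectories; the function and variable names are ours, not from the paper's code.

```python
def average_trajectories(trajectories_by_window):
    """Average the i-th trajectory across all calibration windows.

    trajectories_by_window maps a window length (in years) to a list of
    simulated price trajectories, each a list of daily prices.
    """
    windows = list(trajectories_by_window)
    n_traj = len(trajectories_by_window[windows[0]])
    averaged = []
    for i in range(n_traj):
        # pair up day-t values of the i-th trajectory from every window
        paired = zip(*(trajectories_by_window[w][i] for w in windows))
        averaged.append([sum(vals) / len(windows) for vals in paired])
    return averaged

def mape(forecast, actual):
    """Mean absolute percentage error of one trajectory, in percent."""
    return 100.0 * sum(abs((a - f) / a) for f, a in zip(forecast, actual)) / len(actual)

# Toy Avg(2:4) example: two trajectories per window, 3-day horizon.
trajectories_by_window = {
    2: [[1.10, 1.11, 1.12], [1.09, 1.10, 1.11]],
    3: [[1.12, 1.13, 1.14], [1.11, 1.12, 1.13]],
    4: [[1.14, 1.15, 1.16], [1.13, 1.14, 1.15]],
}
avg = average_trajectories(trajectories_by_window)
# avg[0] is approximately [1.12, 1.13, 1.14] (element-wise mean of the first trajectories)
actual = [1.12, 1.13, 1.14]
print(mape(avg[0], actual))
```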

4.2.2. Averaging Approach When Evaluating VF

The second error measure, the validation factor (VF), assesses the performance of predictions based on the quantile lines obtained from the simulated trajectories. When averaging predictions from a given combination of windows, we calculate, for each calibration window length, the values of the quantile lines from the generated samples. We then average their values across window lengths for each of the quantiles 0.05, …, 0.95 separately, obtaining new, averaged quantile lines. Finally, we evaluate the performance of a given combination by calculating the VF of the averaged quantile lines. An example of the procedure of averaging quantile lines for two calibration windows is shown in Figure 5.
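A minimal sketch of the quantile-line averaging step, again with illustrative names and toy data. For each window we compute, day by day, the empirical quantiles of the simulated prices; the averaged quantile line is then the mean of these per-window lines. A nearest-rank quantile is used here for brevity; the paper does not specify the interpolation scheme.

```python
def quantile_line(trajectories, q):
    """Empirical q-quantile of the simulated prices at each day."""
    line = []
    for day_vals in zip(*trajectories):
        s = sorted(day_vals)
        # simple nearest-rank quantile; a production version would interpolate
        idx = min(int(q * len(s)), len(s) - 1)
        line.append(s[idx])
    return line

def averaged_quantile_lines(trajectories_by_window, quantiles):
    """Average each quantile line across the calibration windows."""
    out = {}
    for q in quantiles:
        lines = [quantile_line(trajs, q) for trajs in trajectories_by_window.values()]
        out[q] = [sum(vals) / len(lines) for vals in zip(*lines)]
    return out

# Toy example: two windows, three trajectories each, 2-day horizon.
trajectories_by_window = {
    8: [[1.0, 1.1], [1.2, 1.3], [1.4, 1.5]],
    9: [[1.1, 1.2], [1.3, 1.4], [1.5, 1.6]],
}
lines = averaged_quantile_lines(trajectories_by_window, [0.05, 0.95])
print(lines[0.05], lines[0.95])
```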

4.2.3. Evaluation

The performance of the averaged predictions is shown in Table 5 and Table 6. We tested five combinations of windows: only short (Avg(2:4)), medium-length (Avg(5:7)), and long (Avg(8:10)) windows; a mix of short and long calibration windows (Avg(2:4, 8:10)); and an average across all calibration window lengths (Avg(2:10)).

Figure 5. Method of obtaining averaged quantile lines, shown only for 0.05 (lower curves) and 0.95 quantiles (upper curves). Red, dotted lines, which represent averaged quantile lines, are obtained by averaging values of three other quantile lines.

Table 5. Results in terms of MAPE and VF for the EUR/USD data. Columns refer to averages across different calibration windows. The best performing model for each year is marked in blue.

Table 6. Results in terms of MAPE and VF for the USD/PLN data. Columns refer to averages across different calibration windows. The best performing model for each year is marked in blue.

Looking at Table 5 and Table 6, a certain pattern can be observed: the combination of long windows, Avg(8:10), is a strong performer under both measures. When evaluating the forecasts in terms of MAPE, for both EUR/USD and USD/PLN the average across all calibration windows, Avg(2:10), performs well (excluding the year 2012 for EUR/USD and 2007 for USD/PLN, where the 2-year and 3-year windows produced very poor results).

It is also worth noting that, in terms of MAPE, for both the EUR/USD and USD/PLN pairs the best averaged model was outperformed by the best single calibration window in hardly any test-year (only 2014 for EUR/USD). The opposite usually held for more than one combination of windows, not only for the best one. In some cases, e.g., the year 2013 for USD/PLN, averaging predictions from all calibration windows, Avg(2:10), instead of choosing an "optimal" single calibration window, would have produced predictions roughly 30% more accurate in terms of MAPE.

5. Discussion

The presented results indicate that the analyzed data can be modeled using the generalized Vasicek process defined by the stochastic differential Equation (4). However, when analyzing real time series, one needs to take into consideration the specific behavior of the data. One may expect the data to be non-homogeneous, meaning that over the historical period the model parameters may change in time. This is the case considered in this paper. For the analyzed currency exchange rates, this regime-change behavior can be explained. Exchange rate markets have always been characterized by high dynamics and a tendency to shift when external shocks occur. Taking into account central bank activity in recent years, growing geopolitical tensions including free-trade restrictions, and accelerating changes in capital markets, the stability of the models used is under constant pressure.

In order to confirm that the analyzed real data exhibit non-homogeneous behavior, we present a short analysis in the context of their regime-change behavior.

We have applied the Markov regime switching model (Janczura & Weron, 2012; Hamilton, 1990; Hamilton, 2016), which assumes that the analyzed data constitute a sample of independent random variables with the same distribution, although the parameters of that distribution may change over time. Because the currency exchange rate time series themselves are not independent, we applied this methodology to their logarithmic returns. In the Markov regime switching model, we assume the data follow Student's t distribution, whose parameter (the number of degrees of freedom) may change over time. For simplicity, we assume two regimes (R1 and R2) corresponding to two numbers of degrees of freedom of Student's t distribution, v1 and v2, respectively (Dueker, 1997). It is worth highlighting that the Markov regime switching methodology is applied here only to detect significant changes in the logarithmic returns of the analyzed data. In the algorithm, the estimation uses the second partial derivatives of the log-likelihood function (i.e., the Hessian matrix). As a result of the procedure, we obtain a vector of probabilities (of the same length as the analyzed vector of observations) of being in regime R1 (corresponding to Student's t distribution with v1 degrees of freedom). We assume that a regime change occurs at a given time t when the corresponding estimated probability is greater than or equal to 0.5.
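The final classification step described above can be sketched as follows. The regime-probability estimation itself (the fitted Markov regime switching model) is not implemented here; the probability values below are made up for illustration, and the function name is ours.

```python
def classify_regimes(p_r1, threshold=0.5):
    """Label each observation R1/R2 by the 0.5 rule and find switch points.

    p_r1 is the estimated probability of being in regime R1 at each time t,
    as produced by a fitted Markov regime switching model.
    """
    labels = ['R1' if p >= threshold else 'R2' for p in p_r1]
    # a regime change occurs wherever the label differs from the previous one
    switches = [t for t in range(1, len(labels)) if labels[t] != labels[t - 1]]
    return labels, switches

p_r1 = [0.9, 0.8, 0.6, 0.3, 0.2, 0.7, 0.9]  # hypothetical estimated probabilities
labels, switches = classify_regimes(p_r1)
print(labels)    # ['R1', 'R1', 'R1', 'R2', 'R2', 'R1', 'R1']
print(switches)  # [3, 5]
```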

In Figure 6 (top panel) we show the logarithmic returns of the EUR/USD exchange rate with the two regimes, corresponding to Student's t distributions with different numbers of degrees of freedom, marked; Figure 7 (top panel) shows the result for USD/PLN. In the bottom panels, we show the raw time series with the regimes marked. This simple analysis using the Markov regime switching model explains the results obtained in the previous sections. Because of the non-homogeneous behavior of the analyzed real data, we cannot conclude that a longer calibration length yields better prediction performance; one needs to take into consideration the fact that the character of the data may change in time along with market changes.

Figure 6. The logarithmic returns of the EUR/USD currency exchange rates (top panel) and the raw time series (bottom panel) with marked two regimes.

Figure 7. The logarithmic returns of the USD/PLN currency exchange rates (top panel) and the raw time series (bottom panel) with marked two regimes.

6. Conclusion

In this paper, we have considered the problem of selecting the calibration window length for long-term prediction of currency exchange rate data and have proposed a new approach. The USD/PLN and EUR/USD exchange rates are among the main market risk factors of the KGHM mining company, so proper long-term prediction of their dynamics is crucial from the risk management perspective. We propose to model the real data with the extended Vasicek process with time-varying coefficients, a natural generalization of the classical Vasicek model, in which the coefficients are constant in time. We define the extended model and demonstrate how to estimate its parameters. The problem is first formulated on simulated time series: the simulated data come from the Vasicek model, but with some of the model's parameters changing in time. On this basis, we demonstrated how to select the optimal calibration window length and showed that a longer span of historical data is not the optimal choice for the estimation period, because of the non-homogeneous behavior of the data. Finally, we presented a deep analysis of the real time series of the mentioned currency exchange rates in the context of long-term prediction. We demonstrated that, depending on the test-year (defined in the previous section), one may obtain different results. We explained this behavior by a simple analysis of the logarithmic returns of the real data, which showed that the time series exhibit regime-change behavior. For the analyzed data, we propose a new approach to long-term prediction based on averaging the predictions obtained from different calibration sample lengths of the extended Vasicek model. The analyzed problem is not specific to the data considered in this paper; it is more extensive and complex.
Thus, it is worth emphasizing that, in long-term prediction, the specific non-homogeneous behavior of the time series should be taken into consideration. Possible extensions of the results presented here include the application of a more general model in place of the time-dependent Vasicek one, with specific functions applied in the model. We also see great potential in the application of non-Gaussian models. In most cases, real financial data, especially currency exchange rate data, exhibit non-Gaussian behavior, and the assumption of Brownian motion in the considered model is only a simplification. In further study, we plan to extend the considered model to the non-Gaussian case.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Adcock, R., & Gradojevic, N. (2019). Non-Fundamental, Non-Parametric Bitcoin Forecasting. Physica A: Statistical Mechanics and Its Applications, 531, Article ID: 121727. https://doi.org/10.1016/j.physa.2019.121727

[2] Brockwell, P. J. (2001). Levy-Driven CARMA Processes. Annals of the Institute of Statistical Mathematics, 53, 113-124. https://doi.org/10.1023/A:1017972605872

[3] Brockwell, P. J. (2014). Recent Results in the Theory and Applications of CARMA Processes. Annals of the Institute of Statistical Mathematics, 66, 647-685. https://doi.org/10.1007/s10463-014-0468-7

[4] Brockwell, P. J., Davis, R. A., & Yang, Y. (2011). Estimation for Non-Negative Levy-Driven CARMA Processes. Journal of Business & Economic Statistics, 29, 250-259. https://doi.org/10.1198/jbes.2010.08165

[5] Chaiyapo, N., & Phewchean, N. (2017). An Application of Ornstein-Uhlenbeck Process to Commodity Pricing in Thailand. Advances in Difference Equations, 2017, Article No. 179. https://doi.org/10.1186/s13662-017-1234-y

[6] Clements, A. E., Hurn, S., & White, S. I. (2006). Mixture Distribution-Based Forecasting Using Stochastic Volatility Models. Applied Stochastic Models in Business and Industry, 22, 547-557. https://doi.org/10.1002/asmb.647

[7] Contino, C., & Gerlach, R. H. (2017). Bayesian Tail-Risk Forecasting Using Realized GARCH. Applied Stochastic Models in Business and Industry, 33, 213-236. https://doi.org/10.1002/asmb.2237

[8] Dai, Z., Zhu, H., & Dong, X. (2020). Forecasting Chinese Industry Return Volatilities with RMB/USD Exchange Rate. Physica A: Statistical Mechanics and Its Applications, 539, Article ID: 122994. https://doi.org/10.1016/j.physa.2019.122994

[9] Dueker, M. J. (1997). Markov Switching in GARCH Processes and Mean-Reverting Stock-Market Volatility. Journal of Business & Economic Statistics, 15, 26-34. https://doi.org/10.1080/07350015.1997.10524683

[10] Fan, J. (2018). Local Polynomial Modelling and Its Applications: Monographs on Statistics and Applied Probability 66. New York: Routledge. https://doi.org/10.1201/9780203748725

[11] Fan, J., Jiang, J., Zhang, C., & Zhou, Z. (2003). Time-Dependent Diffusion Models for Term Structure Dynamics. Statistica Sinica, 13, 965-992.

[12] Fu, S., Li, Y., Sun, S., & Li, H. (2019). Evolutionary Support Vector Machine for RMB Exchange Rate Forecasting. Physica A: Statistical Mechanics and Its Applications, 521, 692-704. https://doi.org/10.1016/j.physa.2019.01.026

[13] Garratt, A., & Mise, E. (2014). Forecasting Exchange Rates Using Panel Model and Model Averaging. Economic Modelling, 37, 32-40. https://doi.org/10.1016/j.econmod.2013.10.017

[14] Gob, R., Lurz, K., & Pievatolo, A. (2013). Electrical Load Forecasting by Exponential Smoothing with Covariates. Applied Stochastic Models in Business and Industry, 29, 629-645. https://doi.org/10.1002/asmb.2008

[15] Hamilton, J. D. (1990). Analysis of Time Series Subject to Changes in Regime. Journal of Econometrics, 45, 39-70. https://doi.org/10.1016/0304-4076(90)90093-9

[16] Hamilton, J. D. (2016). The New Palgrave Dictionary of Economics. London: Palgrave Macmillan.

[17] Hart, J. D., & Wehrly, T. E. (1986). Kernel Regression Estimation Using Repeated Measurements Data. Journal of the American Statistical Association, 81, 1080-1088. https://doi.org/10.1080/01621459.1986.10478377

[18] Hawkes, R., & Date, P. (2007). Medium-Term Horizon Volatility Forecasting: A Comparative Study. Applied Stochastic Models in Business and Industry, 23, 465-481. https://doi.org/10.1002/asmb.684

[19] Herwartz, H., & Reimers, H. E. (2002). Empirical Modelling of the DEM/USD and DEM/JPY Foreign Exchange Rate: Structural Shifts in GARCH-Models and Their Implications. Applied Stochastic Models in Business and Industry, 18, 3-22. https://doi.org/10.1002/asmb.451

[20] Hubicka, K., Marcjasz, G., & Weron, R. (2018). A Note on Averaging Day-Ahead Electricity Price Forecasts across Calibration Windows. IEEE Transactions on Sustainable Energy, 10, 321-323. https://doi.org/10.1109/TSTE.2018.2869557

[21] Hull, J., & White, A. (1990). Pricing Interest-Rate-Derivative Securities. The Review of Financial Studies, 3, 573-592. https://doi.org/10.1093/rfs/3.4.573

[22] Janczura, J., & Weron, R. (2012). Efficient Estimation of Markov Regime-Switching Models: An Application to Electricity Spot Prices. Advances in Statistical Analysis, 96, 385-407. https://doi.org/10.1007/s10182-011-0181-2

[23] Janczura, J., Orzel, S., & Wylomanska, A. (2011). Subordinated α-Stable Ornstein-Uhlenbeck Process as a Tool for Financial Data Description. Physica A: Statistical Mechanics and Its Applications, 390, 4379-4387. https://doi.org/10.1016/j.physa.2011.07.007

[24] Jorion, P., & Sweeney, R. J. (1996). Mean Reversion in Real Exchange Rates: Evidence and Implications for Forecasting. Journal of International Money and Finance, 15, 535-550. https://doi.org/10.1016/0261-5606(96)00020-4

[25] Kapetanios, G., Labhard, V., & Price, S. (2008). Forecast Combination and the Bank of England's Suite of Statistical Forecasting Models. Economic Modelling, 25, 772-792. https://doi.org/10.1016/j.econmod.2007.11.004

[26] Li, T. H. (2019). Hierarchical Nonparametric Survival Modeling for Demand Forecasting with Fragmented Categorical Covariates. Applied Stochastic Models in Business and Industry, 35, 1185-1201. https://doi.org/10.1002/asmb.2459

[27] Marcjasz, G., Serafin, T., & Weron, R. (2018). Selection of Calibration Windows for Day-Ahead Electricity Price Forecasting. Energies, 11, 2364. https://doi.org/10.3390/en11092364

[28] Musiela, M., & Rutkowski, M. (1998). Martingale Methods in Financial Modelling: Theory and Applications. Berlin: Springer. https://doi.org/10.1007/978-3-662-22132-7

[29] Obuchowski, J., & Wylomanska, A. (2013). Ornstein-Uhlenbeck Process with Non-Gaussian Structure. Acta Physica Polonica B, 44, 1123-1136. https://doi.org/10.5506/APhysPolB.44.1123

[30] Ogünc, F., Akdogan, K., Baser, S., Chadwick, M. G., Ertug, D., Hülagü, T., & Tekatli, N. (2013). Short-Term Inflation Forecasting Models for Turkey and a Forecast Combination Analysis. Economic Modelling, 33, 312-325. https://doi.org/10.1016/j.econmod.2013.04.001

[31] Pesaran, M. H., & Pick, A. (2011). Forecast Combination across Estimation Windows. Journal of Business & Economic Statistics, 29, 307-318. https://doi.org/10.1198/jbes.2010.09018

[32] Pesaran, M. H., & Timmermann, A. (2007). Selection of Estimation Window in the Presence of Breaks. Journal of Econometrics, 137, 134-161. https://doi.org/10.1016/j.jeconom.2006.03.010

[33] Pezzulli, S., Frederic, P., Majithia, S., Sabbagh, S., Black, E., Sutton, R., & Stephenson, D. (2006). The Seasonal Forecast of Electricity Demand: A Hierarchical Bayesian Model with Climatological Weather Generator. Applied Stochastic Models in Business and Industry, 22, 113-125. https://doi.org/10.1002/asmb.622

[34] Reikard, G. E. (2006). Simultaneity and Non-Linear Variability in Financial Markets: Simulation and Forecasting. Applied Stochastic Models in Business and Industry, 22, 371-383. https://doi.org/10.1002/asmb.632

[35] Sikora, G., Michalak, A., Bielak, L., Mista, P., & Wylomanska, A. (2019). Stochastic Modeling of Currency Exchange Rates with Novel Validation Techniques. Physica A: Statistical Mechanics and Its Applications, 523, 1202-1215. https://doi.org/10.1016/j.physa.2019.04.098

[36] Skiadas, C. H., Papagiannakis, L., & Mourelatos, A. (1997). Application of Diffusion Models to Forecast Industrial Energy Demand. Applied Stochastic Models in Business and Industry, 13, 357-367. https://doi.org/10.1002/(SICI)1099-0747(199709/12)13:3/4<357::AID-ASM329>3.0.CO;2-O

[37] Uhlenbeck, G. E., & Ornstein, L. S. (1930). On the Theory of the Brownian Motion. Physical Review, 36, 823. https://doi.org/10.1103/PhysRev.36.823

[38] Vasicek, O. (1977). An Equilibrium Characterization of the Term Structure. Journal of Financial Economics, 5, 177-188. https://doi.org/10.1016/0304-405X(77)90016-2

[39] Welfe, W. (2011). Long-Term Macroeconometric Models: The Case of Poland. Economic Modelling, 28, 741-753. https://doi.org/10.1016/j.econmod.2010.05.002

[40] Wylomanska, A. (2011). Measures of Dependence for Ornstein-Uhlenbeck Process with Tempered Stable Distribution. Acta Physica Polonica B, 42, 2049-2062. https://doi.org/10.5506/APhysPolB.42.2049

[41] Zhang, P., Xiao, W. L., Zhang, X. L., & Niu, P. Q. (2014). Parameter Identification for Fractional Ornstein-Uhlenbeck Processes Based on Discrete Observation. Economic Modelling, 36, 198-203. https://doi.org/10.1016/j.econmod.2013.09.004

[42] Zhao, Z. Y., Xie, M., & West, M. (2016). Dynamic Dependence Networks: Financial Time Series Forecasting and Portfolio Decisions. Applied Stochastic Models in Business and Industry, 32, 311-332. https://doi.org/10.1002/asmb.2161


Copyright © 2023 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.