A Panel Bounds Testing Procedure

Abstract

We propose a bounds testing procedure (BTP) with a battery of tests for the existence of a non-degenerate co-integrating relationship in levels for long panels. It is a natural extension to panel data of the time-series approach of Pesaran, Shin, and Smith (2001), as extended by Bertsatos, Sakellaris, and Tsionas (2022). Simulations suggest that standard inference is not valid for at least one of the tests in our proposed panel BTP. A computer code that generates sample-specific critical values is provided. We demonstrate the proposed BTP by applying alternative model specifications for systemic banks’ market value of equity, extending Rhodes-Kropf, Robinson, and Viswanathan (2005).


1. Introduction

We introduce a bounds testing procedure (BTP), in the spirit of Pesaran, Shin, and Smith (2001, hereafter PSS) and Bertsatos, Sakellaris, and Tsionas (2022, hereafter BST), for the existence of a non-degenerate co-integrating relationship, in levels, between the dependent variable and its forcing variables in “large N, large T” panel datasets.

The dynamic fixed effects (DFE) estimator is easy to run and has been utilized in many papers (see e.g. Pesaran et al., 1999; Clements et al., 2019) as a benchmark or main model to analyze panel datasets and derive long-run multipliers from an implied panel autoregressive distributed lag model (henceforth PARDL). However, the long-run effects based on the DFE estimator may be spurious if there is no co-integration or if there is a degenerate co-integrating relationship. In such cases, the derived long-run coefficients could be misleading. Therefore, based on the (one-way or two-way) DFE estimator, we propose a bounds testing procedure similar to that of PSS and BST in the time-series environment.

To demonstrate the panel BTP proposed in this paper, we apply modified model specifications of Rhodes-Kropf, Robinson, and Viswanathan (2005, henceforth RKRV) and alternative estimation strategies, relative to the original paper, with unrestricted error-correction models (UECMs) for a dataset of listed systemic banks. Results show that there is a stable and non-degenerate co-integrating relationship running from the book value of equity and net income to the market value of equity of the examined banks.

The rest of the paper is organized as follows: Section 2 presents the framework of unrestricted error-correction models required for the bounds testing procedure, and Section 3 describes the simulations used and presents the results. Section 4 presents an empirical application and Section 5 concludes. Finally, an Online Appendix contains tables and results of the employed simulations and supplementary material related to this paper.

2. The Unrestricted Error-Correction Framework

In this section, we present the unrestricted error-correction models to be estimated before executing the BTP. We state the assumptions and pose the associated tests. We work with panel datasets, where the number of cross sections, N, and time periods, T, are large1.

We employ PARDL models in unrestricted error-correction form, as PSS and BST do at the time-series level. UECMs are attractive since they permit efficient one-step examination of both short-run and long-run multipliers, as well as of the transitional dynamics towards the steady-state equilibrium (if any)2. Furthermore, they allow the regressors to be a mixture of stationary variables, I(0) processes, and variables with a unit root, I(1) processes (see e.g. Pesaran et al., 1999, PSS; Chudik et al., 2016), and allow for one co-integrating relationship that runs from the forcing variables to the dependent variable3.

Regarding panel co-integration tests, our paper is related to Westerlund (2007), who also employs unrestricted error-correction models (UECMs). Westerlund focuses on the error-correction coefficient and proposes four different tests based on UECMs. If the null hypothesis of no error correction is rejected, then that of no co-integration is rejected as well. However, even though these tests are robust to several error specifications, they are limited to I(1) variables and are subject to problems related to pretesting for unit roots.

2.1. Basic Setup

A PARDL (p, q, …, q) model in levels is:

Y_{i,t} = c + a t + b t^{2} + \sum_{j=1}^{p} \lambda_{j} Y_{i,t-j} + \sum_{j=0}^{q} \delta_{j} X_{i,t-j} + d_{i} + e_{i,t} \quad (1)

Our methodology for the panel BTP is built on the ARDL (p − 1, q − 1, …, q − 1) model in unrestricted error-correction form that is equivalent to the ARDL (p, q, …, q) model in levels4:

\Delta Y_{i,t} = c + a t + b t^{2} + \Big( -1 + \sum_{j=1}^{p} \lambda_{j} \Big) Y_{i,t-1} + \Big( \sum_{j=0}^{q} \delta_{j} \Big) X_{i,t-1} + \sum_{j=1}^{p-1} \Big( -\sum_{m=j+1}^{p} \lambda_{m} \Big) \Delta Y_{i,t-j} + \delta_{0} \Delta X_{i,t} + \sum_{j=1}^{q-1} \Big( -\sum_{m=j+1}^{q} \delta_{m} \Big) \Delta X_{i,t-j} + d_{i} + e_{i,t}

\Leftrightarrow \quad \Delta Y_{i,t} = c + a t + b t^{2} + \varphi_{y} Y_{i,t-1} + \gamma_{x} X_{i,t-1} + \sum_{j=1}^{p-1} \lambda_{j}^{*} \Delta Y_{i,t-j} + \sum_{j=0}^{q-1} \delta_{j}^{*} \Delta X_{i,t-j} + d_{i} + e_{i,t} \quad (2)

where Y is the main variable of interest, φ_y is the error-correction parameter, X is a column vector of k stochastic regressors (forcing variables) and γ_x is the row vector with the associated k coefficients, c is the constant term, a is the coefficient of the linear trend, b is the coefficient of the quadratic trend, d_i are the cross-sectional fixed effects, p is the number of lags of the dependent variable, and q is the number of lags of the stochastic regressors. The error term e_{i,t} has to be free of serial and cross-sectional correlation. For ease of exposition, we assume that the lag order of all forcing variables equals q; in practice, the lag orders of the stochastic regressors may vary and need not be the same.

We focus on Equation (2) to execute the panel BTP, as we present next. In practice, there may be fixed cross-sectional and/or time effects, and practitioners and researchers have to purge them. In this study, we suggest using the DFE estimator prior to the execution of our BTP to remove the fixed effects5.

Finally, Equation (2) could be expanded in the spirit of Shin et al. (2014), so that non-linear responses of Y to its covariates are allowed6. The resulting model would be a panel non-linear ARDL, or PNARDL.
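For reference, a minimal sketch of such a PNARDL, following the partial-sum decomposition of Shin et al. (2014) with a single threshold at zero (the notation below is ours and is only illustrative), is:

X_{i,t}^{+} = \sum_{s=1}^{t} \max(\Delta X_{i,s}, 0), \qquad X_{i,t}^{-} = \sum_{s=1}^{t} \min(\Delta X_{i,s}, 0)

\Delta Y_{i,t} = c + a t + b t^{2} + \varphi_{y} Y_{i,t-1} + \gamma_{x}^{+} X_{i,t-1}^{+} + \gamma_{x}^{-} X_{i,t-1}^{-} + \sum_{j=1}^{p-1} \lambda_{j}^{*} \Delta Y_{i,t-j} + \sum_{j=0}^{q-1} \big( \delta_{j}^{+} \Delta X_{i,t-j}^{+} + \delta_{j}^{-} \Delta X_{i,t-j}^{-} \big) + d_{i} + e_{i,t}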

2.2. Cases and Tests of the Bounds Testing Procedures

In this subsection, we focus on the unrestricted models of Equation (2) and highlight eleven cases following BST. In particular, these cases are based on the specification of the deterministic components, i.e. the constant term, the linear trend and the quadratic trend7. We present for each case the involved tests of the BTP and their hypotheses in Section II of the Online Appendix. Table 1 summarizes the aforementioned cases for the panel BTP.

To sum up, there are four common steps involved in the proposed panel BTP.

Step 1: If the F-statistic of the Fyx-test is greater than the simulated critical bounds, move to the next step. If, on the other hand, it is lower than the simulated critical bounds, there is no co-integrating relationship with this specification under the selected case. The Fyx-test tests whether the lagged dependent variable in levels, the lagged stochastic regressors in levels and any deterministic factor (intercept, linear trend, quadratic trend) are jointly zero.

Step 2: If the t-statistic of the ty-test is smaller than the simulated critical values, we move to the next step. If there is a positive or a small negative t-statistic, which is greater than the simulated critical bounds, there is no long-run relationship with the examined specification under the selected case. The ty-test tests whether the lagged dependent variable in levels is zero.

Step 3: If the F-statistic of the Fx-test is greater than the simulated critical bounds, the BTP is successfully passed in this specification under the selected case and the long-run, or co-integrating, multipliers θ = −γ_x/φ_y can be assessed with the delta method, since the sample size is large (both N and T are large); a delta-method sketch follows the four steps. The Fx-test tests whether the lagged stochastic regressors in levels and any deterministic factor (intercept, linear trend, quadratic trend) are jointly zero.

Step 4: If the Fyx-test and ty-test reject their null hypotheses, the Fx-test does not reject its null, and the γ’s are individually significant (tx-tests), i.e. H_0: γ_j = 0 is rejected against H_A: γ_j ≠ 0 for j = 1, …, k, then there is a long-run relationship running from the X’s to Y and the long-run coefficients can be evaluated with the delta method. The tx-tests test whether the lagged stochastic regressors in levels are individually equal to zero. Furthermore, if both the Fx-test (Step 3) and the tx-tests (Step 4) do not reject their null hypotheses, this is a degenerate case of co-integration (Figure 1 below illustrates these steps).
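For completeness, the delta-method standard error mentioned in Steps 3 and 4 can be sketched for a single long-run multiplier; this is standard delta-method algebra rather than a result taken from the original tables:

\hat{\theta}_{j} = -\frac{\hat{\gamma}_{j}}{\hat{\varphi}_{y}}, \qquad \widehat{\mathrm{Var}}(\hat{\theta}_{j}) \approx \frac{\widehat{\mathrm{Var}}(\hat{\gamma}_{j})}{\hat{\varphi}_{y}^{2}} + \frac{\hat{\gamma}_{j}^{2}\,\widehat{\mathrm{Var}}(\hat{\varphi}_{y})}{\hat{\varphi}_{y}^{4}} - \frac{2\hat{\gamma}_{j}\,\widehat{\mathrm{Cov}}(\hat{\gamma}_{j},\hat{\varphi}_{y})}{\hat{\varphi}_{y}^{3}}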

Table 1. Cases for the panel bounds testing procedure.

Notes: This table shows all the cases of the panel bounds testing procedure we examine. Regarding the generation of the simulated critical values (SCVs), we run the model in Equation (3) with OLS. The SCVs are the empirical quantiles of the computed test statistics. The tests involved in each case are: 1) Φ = 0 (Fyx-test), where Φ = (φ_y, φ_x) includes the coefficients of z_{i,t−1} according to this table and φ_y is the coefficient of y_{i,t−1}; 2) φ_y = 0 (ty-test); 3) φ_x = 0 (Fx-test), where φ_x includes the coefficients of z_{i,t−1} except for that of y_{i,t−1}; and 4) φ_j = 0 (tx-test), for j = 1 to k. Note that the ty-test is the same for Cases II and III (we keep Case III), Cases IV to VII (we keep Case V), and Cases VII to XI (we keep Case XI). The same holds for the tx-test. The ty-test is a left-tailed test and the tx-test is a two-tailed test.

The first two steps (the Fyx- and ty-tests) are the same as in PSS and BST. The Fx- and tx-tests guide the path to co-integration, as in BST, and filter out degenerate cases. After the rejection of the null hypotheses of the Fyx-test and ty-test, there are two paths towards establishing non-degenerate co-integration. The first is through joint testing with the Fx-test; if this leads to a dead end, there is an alternative through individual testing with the tx-tests.

Figure 1. Graphical illustration of the tests involved in the bounds testing procedure.

If we reject the joint null hypothesis of interest, given by H_0 = H_0^{Fyx} ∩ H_0^{ty} ∩ H_0^{Fx}, against the alternative that at least one of the three null hypotheses cannot be rejected, then we find evidence that a non-degenerate co-integrating relationship in levels between Y and the forcing X exists. If, on the other hand, we fail to reject only the null of the Fx-test, we can focus on the joint null H_0 = H_0^{Fyx} ∩ H_0^{ty} ∩ H_0^{tx}, whose rejection also implies the existence of non-degenerate co-integration between Y and the forcing X. Moreover, if we fail to reject the null hypothesis of the Fyx-test or that of the ty-test, there is no evidence of co-integration for the examined sample and model specification. Finally, if the Fx-test and tx-tests fail to reject their null hypotheses, we argue this is a degenerate case of co-integration. Graphically, our suggested BTP is depicted in Figure 1.
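To make the sequence of decisions concrete, a minimal sketch of the decision logic in Figure 1 follows. The function and the structure of its critical-value inputs are our own illustration for a single case and significance level; they are not part of the code distributed with the paper.

def panel_btp(F_yx, t_y, F_x, t_x, cv):
    """Classify the outcome of the panel bounds testing procedure.

    F_yx, t_y, F_x : scalar test statistics from the estimated UECM.
    t_x            : iterable of t-statistics on the lagged stochastic regressors in levels.
    cv             : dict of simulated critical values with keys 'F_yx', 't_y', 'F_x'
                     (scalars) and 't_x' (a two-sided bound), all at the chosen
                     significance level for, say, the I(1) case.
    """
    # Step 1: joint significance of all lagged levels (and deterministic terms).
    if F_yx <= cv['F_yx']:
        return "no co-integration (F_yx null not rejected)"
    # Step 2: the error-correction coefficient must be significantly negative.
    if t_y >= cv['t_y']:                      # cv['t_y'] is a negative, left-tail critical value
        return "no co-integration (t_y null not rejected)"
    # Step 3: joint significance of the lagged forcing variables in levels.
    if F_x > cv['F_x']:
        return "non-degenerate co-integration (via the Fx-test)"
    # Step 4: fall back to individual significance of the lagged levels of X.
    if all(abs(t) > cv['t_x'] for t in t_x):
        return "non-degenerate co-integration (via the tx-tests)"
    return "degenerate co-integration"

In practice, each statistic would be compared with both the I(0) and I(1) simulated bounds, with the region between them treated as inconclusive, as in PSS.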

3. Monte Carlo Simulations and Results

We run stochastic simulations with 50,000 replications, using OLS regressions, to obtain our test statistics. The regression required for the simulations is shown in Equation (3):

\Delta y_{i,t} = \Phi z_{i,t-1} + a\, w_{i,t} + \xi_{i,t}, \quad i = 1, 2, \ldots, N \ \& \ t = 1, 2, \ldots, T \quad (3)

where z_{i,t−1} = (y_{i,t−1}, x'_{i,t−1})', x_{i,t} = (x_{1,i,t}, …, x_{k,i,t})', w_{i,t} = [1, t, t²]', the variables y_{i,t} and x_{i,t} are generated from y_{i,t} = y_{i,t−1} + ε_{1,i,t} and x_{i,t} = P x_{i,t−1} + ε_{2,i,t} with y_{i,0} = 0 and x_{i,0} = 0, and ε_{i,t} = (ε_{1,i,t}, ε'_{2,i,t})' is drawn as (k + 1) independent standard normal variables. T is the number of observations (periods) for each of the N cross sections. If x_{i,t} is purely I(1), then P = I_k, while if x_{i,t} is purely I(0), then P = 0. The row vectors in Equation (3) are Φ = (φ_y, φ_x) = (φ_y, φ_1, …, φ_k) and a = (c, a, b), where c is the constant term, a is the coefficient of the linear trend and b is the coefficient of the quadratic trend. Moreover, we study “large N large T” datasets. Specifically, we study combinations of N and T, where N takes the values of 50, 100, 200, 300, 400, 500 and 1000, T equals 50, 100 and 1000, while k (the number of stochastic regressors in x) ranges between 0 and 13.
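As an illustration, the following minimal sketch (our own, not the EViews code accompanying the paper) generates simulated critical values for the Fyx- and ty-statistics for a specification with an unrestricted intercept, no trends and purely I(1) regressors, using a reduced number of replications:

import numpy as np

def simulate_scv(N=50, T=50, k=2, reps=2000, alpha=0.05, seed=0):
    """Simulate critical values for the Fyx- and ty-statistics (intercept only, I(1) regressors)."""
    rng = np.random.default_rng(seed)
    F_stats, t_stats = [], []
    for _ in range(reps):
        # Independent random walks: y_{i,t} = y_{i,t-1} + e1, x_{i,t} = x_{i,t-1} + e2 (P = I_k).
        eps = rng.standard_normal((N, T + 1, k + 1))
        levels = eps.cumsum(axis=1)                 # y_{i,0} = x_{i,0} = 0 implicitly
        y, x = levels[:, :, 0], levels[:, :, 1:]
        dy = np.diff(y, axis=1).reshape(-1)         # stacked Delta y_{i,t}
        z = np.concatenate([y[:, :-1, None], x[:, :-1, :]], axis=2).reshape(-1, k + 1)
        X = np.column_stack([np.ones(dy.size), z])  # intercept plus z_{i,t-1}
        beta = np.linalg.lstsq(X, dy, rcond=None)[0]
        resid = dy - X @ beta
        sigma2 = resid @ resid / (dy.size - X.shape[1])
        XtX_inv = np.linalg.inv(X.T @ X)
        # Wald F-statistic for Phi = (phi_y, phi_x) = 0 and t-statistic for phi_y = 0.
        w = beta[1:]
        F_stats.append(w @ np.linalg.solve(XtX_inv[1:, 1:], w) / ((k + 1) * sigma2))
        t_stats.append(beta[1] / np.sqrt(sigma2 * XtX_inv[1, 1]))
    return np.quantile(F_stats, 1 - alpha), np.quantile(t_stats, alpha)

# 5% simulated critical values for N = 50, T = 50 and k = 2 forcing variables:
F_cv, t_cv = simulate_scv()

The I(0) bound would be obtained analogously by setting P = 0, i.e. drawing the regressors as independent white noise instead of random walks.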

According to Table 1, we obtain the simulated critical values (SCVs) for the following test statistics. First, for the F-statistics (Fyx and Fx) for testing: i) Φ = 0 (Fyx-test), where Φ = (φ_y, φ_x) includes the coefficients of z_{i,t−1} and φ_y is the coefficient of y_{i,t−1}; and ii) φ_x = 0 (Fx-test), where φ_x includes the coefficients of z_{i,t−1} except for that of y_{i,t−1}. Second, for the t-statistics (ty and tx) for testing φ_y = 0 (ty-test) and φ_j = 0 (tx-test), for j = 1 to k.

A set of critical values is obtained when the stochastic regressors x are stationary and another one when they contain a unit root. These are the bounds of the critical values from the simulations we run, and the case when the stochastic regressors are mutually co-integrated (see PSS) is also included. The SCVs are the empirical quantiles of the computed test-statistics8.

To see whether SCVs are required for the tests in the BTP, we compare them with the typical or conventional critical values (CCVs). Specifically, we calculate the absolute percentage deviations (APDs) of the SCVs from the CCVs, relative to the CCVs, for the four most commonly used significance levels (1%, 2.5%, 5% and 10%) and for all the cases (11 for the F-tests and 4 for the t-tests).

\mathrm{APD} = \left| \frac{\mathrm{SCV} - \mathrm{CCV}}{\mathrm{CCV}} \right| \times 100\% \quad (4)

We observe that, most of the time, the APDs for a given significance level or for a given case are very close to each other. So, for simplicity and to save space, we calculate the average absolute percentage deviations (AAPDs) across the aforementioned cases and significance levels9. We consider a threshold of 5%: any AAPD greater than that suggests that the SCVs should be employed for the proposed panel BTP and that standard inference could lead to misleading results. We do not use a lower or a higher threshold because a higher value (e.g. the liberal 10%) could bias our results towards the use of the CCVs, whilst a lower value (e.g. the conservative 1%) may favor our SCVs against the use of the CCVs.
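A toy illustration of this screening rule, with made-up critical values rather than those reported in the Online Appendix:

import numpy as np

# Hypothetical SCVs and CCVs for one test at the 1%, 2.5%, 5% and 10% levels.
scv = np.array([3.74, 3.25, 2.91, 2.56])
ccv = np.array([3.32, 3.02, 2.79, 2.52])

apd = np.abs((scv - ccv) / ccv) * 100   # Equation (4), in percent
aapd = apd.mean()                       # averaged over significance levels (and cases, in the paper)
use_scv = aapd > 5.0                    # 5% threshold: prefer the SCVs when True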

We examine combinations of the cross-sectional dimension, N, and the time-series dimension, T, where N equals 50, 100, 200, 300, 400, 500 and 1000, while T equals 50, 100 and 1000, after the estimation process. We find that when N is equal to or greater than 100, the CCVs should be used for the F-tests and the tx-tests. However, when N is 50, and more generally when N ∈ [50, 100), the SCVs can be useful for the inference of these tests. Yet, regarding the ty-test, we show that the CCVs are not appropriate and that the SCVs should be employed for every pair of N and T we examine10.

Lag Augmentation

To economize on space and for presentation reasons, we

1) Used Equation (3) for the calculation of the critical values, where no lagged values of Δy, and contemporaneous or lagged values of Δx, are taken into account, and

2) Calculated the average deviations of the simulated critical values from the conventional critical values (see Equation (4)).

In this way, we provide notable evidence that there are several circumstances where the conventional critical values exhibit great divergences from the simulated critical values.

The number of observations after the estimation of Equation (3) ranges from 2500 (when both N and T equal 50) to 1,000,000 (when both N and T equal 1000), leading to large degrees of freedom. Therefore, lag augmentation could have a weak effect, if any at all, on the generation of the simulated critical values. To alleviate any concern, we developed a “lag and stochastic regressor” specific code for greater accuracy of the bounds testing procedure, as well as for completeness of our work11. Strictly speaking, this computer code generates sample-specific critical values, unless there are extra variables that affect only the short-run or long-run path of the dependent variable in the empirical model examined by researchers and practitioners12.
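For a lag order (p − 1, q − 1, …, q − 1) in error-correction form, the simulated regression would be augmented accordingly; a sketch of the augmented design, in the notation of Equations (2) and (3) and with illustrative short-run coefficients ψ and π introduced here only for exposition, is:

\Delta y_{i,t} = \Phi z_{i,t-1} + a\, w_{i,t} + \sum_{j=1}^{p-1} \psi_{j} \Delta y_{i,t-j} + \sum_{j=0}^{q-1} \pi_{j} \Delta x_{i,t-j} + \xi_{i,t}

with the y and x processes generated as before and the critical values again taken as the empirical quantiles of the resulting F- and t-statistics.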

4. Empirical Application

We demonstrate the BTP proposed in this paper by applying modified model specifications in the spirit of Rhodes-Kropf, Robinson and Viswanathan (2005, RKRV), estimating at the same time both the short-run and long-run responses of the market value of equity.

4.1. Motivation

The goal of RKRV is to discover the drivers of mergers and acquisitions by decomposing the market-to-book (MB) ratio, and to test theories predicting that misvaluation affects merger activity. RKRV decompose the MB ratio into market-to-true value and true-to-book value, where the true value of a firm comes from a valid valuation model; otherwise, the decomposition of the MB ratio is inaccurate and misleading. The true value can be seen as the long-run fundamental value. RKRV also expand the MB decomposition into three parts involving sector valuation13. Particularly, market-to-sector value reflects firm-specific deviations from the industry, sector-to-true value reflects deviations of sector valuations from long-run valuations, and true-to-book value reflects deviations of long-run value from book value14.

Instead of running cross-sectional regressions as RKRV, we employ a setup of dynamic modelling and utilize all information and observations by estimating PARDL models. In such a framework, the RKRV-type sector value could be treated as the short-run fundamental value incorporating temporary or short-run loading factors, whilst the long-run fundamental value is the steady-state equilibrium value calculated with the long-run multipliers or permanent loading factors.

As a result, the ARDL technique, which permits simultaneous estimation of both short-run and long-run responses of the dependent variable while allowing for endogenous regressors, is naturally suited to extending the RKRV value decomposition. To put it differently, the sector value and the long-run value could be treated in the ARDL environment as the forecast (or best guess) of the market value of equity one period and w → ∞ periods ahead, respectively. This is in line with Bertsatos et al. (2023), who utilize forward substitution of the estimated price-to-book (PB) equation to visualize the transition dynamics of PB towards equilibrium for the examined sample of systemic banks. Such an extension of the RKRV value decomposition opens many avenues for future research; in this paper, however, we focus our attention on the BTP part of the PARDLs.

4.2. Data

We download quarterly data from Datastream for all the listed systemic banks according to the Basel Committee on Banking Supervision (BCBS) and the Financial Stability Board (FSB). The covered period is 1998:Q1 to 2018:Q1. After the exclusion of banks with fewer than 20 observations in the variables of interest, the final sample consists of 77 listed banks over 81 quarters. Table IV.1 in the Online Appendix presents the names of the banks in the final sample.

Next, we discuss the employed variables in this empirical application with their Datastream codes in square brackets. Specifically, we download data for the market value of equity [MV], the book value of common equity [WC03501A], and net income available to common shareholders [WC01751A]. We also drop 10 observations with negative values in book equity. Since all three variables are expressed in local currency, we convert them into USD with nominal exchange rates that are calculated as the quarterly averages of daily exchange rates (Table 2).

4.3. Estimation Strategy and Results

Model Specifications

We apply model specifications with MVE as the dependent variable, first with BVE as the only explanatory variable and then with both BVE and NI as explanatory variables. Equation (5) shows Model 1 and Equation (6) shows Model 2. These specifications are like those of Model 1 and Model 2 in RKRV; however, in this empirical application, we use variables in levels and not in logs. One lag of each variable is employed in both PARDL Model 1 and PARDL Model 2. Therefore, the lag structure in levels is (1, 1) for Model 1 and (1, 1, 1) for Model 2.

\Delta MVE_{i,t} = c + c_{i} + c_{t} + \varphi\, MVE_{i,t-1} + a_{1} BVE_{i,t-1} + \beta_{0} \Delta BVE_{i,t} + e_{1,i,t} \quad (5)

\Delta MVE_{i,t} = c + c_{i} + c_{t} + \varphi\, MVE_{i,t-1} + a_{1} BVE_{i,t-1} + a_{2} NI_{i,t-1} + \beta_{0} \Delta BVE_{i,t} + \gamma_{0} \Delta NI_{i,t} + e_{2,i,t} \quad (6)

where φ is the coefficient of the error-correction term, −φ is the speed of adjustment, MVE is the market value of equity, BVE is the book value of equity, NI is net income, c_i are bank fixed effects, c_t are time fixed effects and e is the error term.
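For reference, the long-run coefficients LR_BVE and LR_NI reported later (Table 3) follow from Equation (6) by setting the first differences to zero; a sketch, suppressing the deterministic terms and fixed effects for readability:

\mathrm{LR\_BVE} = -\frac{a_{1}}{\varphi}, \qquad \mathrm{LR\_NI} = -\frac{a_{2}}{\varphi}, \qquad MVE^{*}_{i,t} \approx \mathrm{LR\_BVE} \cdot BVE_{i,t} + \mathrm{LR\_NI} \cdot NI_{i,t}

where MVE* denotes the steady-state market value of equity implied by the model.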

Estimation

We estimate Models 1 and 2 with the two-way dynamic fixed effects estimator (Table 3).
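As an illustration of the estimation step, a minimal sketch of a two-way dynamic fixed effects (within) estimator for the Model 2 UECM is given below. The column names, the plain double-demeaning implementation (which is exact only for balanced panels) and the absence of standard errors are our own simplifications, not the procedure used for the reported estimates.

import numpy as np
import pandas as pd

def twoway_dfe_uecm(df):
    """Two-way dynamic fixed effects estimate of the Model 2 UECM in Equation (6).

    df: a pandas panel with columns ['bank', 'quarter', 'MVE', 'BVE', 'NI'],
        one row per bank-quarter (the column names are hypothetical).
    """
    df = df.sort_values(['bank', 'quarter']).copy()
    g = df.groupby('bank')
    df['dMVE'], df['dBVE'], df['dNI'] = g['MVE'].diff(), g['BVE'].diff(), g['NI'].diff()
    df['MVE_l1'], df['BVE_l1'], df['NI_l1'] = g['MVE'].shift(1), g['BVE'].shift(1), g['NI'].shift(1)
    df = df.dropna()
    cols = ['dMVE', 'MVE_l1', 'BVE_l1', 'NI_l1', 'dBVE', 'dNI']
    # Two-way within transformation (double demeaning; exact for balanced panels).
    dm = (df[cols]
          - df.groupby('bank')[cols].transform('mean')
          - df.groupby('quarter')[cols].transform('mean')
          + df[cols].mean())
    y = dm['dMVE'].to_numpy()
    X = dm[['MVE_l1', 'BVE_l1', 'NI_l1', 'dBVE', 'dNI']].to_numpy()
    phi, a1, a2, beta0, gamma0 = np.linalg.lstsq(X, y, rcond=None)[0]
    return {'phi': phi, 'a1': a1, 'a2': a2, 'beta0': beta0, 'gamma0': gamma0,
            'LR_BVE': -a1 / phi, 'LR_NI': -a2 / phi}

In practice, one would also compute the covariance matrix of the estimates in order to run the Fyx-, ty-, Fx- and tx-tests against the simulated critical values.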

Before analyzing the models’ estimates, we test for the existence of a non-degenerate co-integrating relationship in levels (running from the book value of equity to the market value of equity in Model 1, and from the book value of equity and net income to the market value of equity in Model 2) employing the BTP proposed in this paper15. However, for the rest of the analysis and

Table 2. Descriptive statistics.

Notes: This table shows descriptive statistics for market value of equity (MVE), book value of common equity (BVE), and net income available to common shareholders (NI), expressed in billions of USD and current prices.

Table 3. Estimates of the Models 1 and 2.

Notes: This table shows the estimates of Models 1 and 2 in the first row and standard errors in the second row of each cell. The two-way dynamic fixed effects estimator is employed with bank and time fixed effects. LR_BVE and LR_NI are the long-run coefficients of the book value of equity and net income, respectively. We use simulated critical values for testing the statistical significance of the lagged levels of the market value of equity (ty-test), the book value of equity (tx-test) and net income (tx-test). Details about the simulated critical values for Model 2 are given below and in Table IV.2 in the Online Appendix, while those for Model 1 are unreported to economize on space. *** denotes statistical significance at 1%, ** at 5% and * at 10%.

ease of exposition, we focus our attention on Model 2 that involves both book value of equity and net income as explanatory variables for the market value of equity. Moreover, as the aim of this paper is to demonstrate the panel BTP, we skip presenting many robustness checks with alternative estimates and specifications.

Bounds Testing Procedure

We run the code to generate sample-specific critical values for 77 cross sections, 80 time periods, 2 stochastic regressors and a lag order in error-correction form of (0, 0, 0), using 5000 replications, first with normally distributed errors and then with a Student-t distribution with 5 degrees of freedom to account for fat tails. Moreover, for conservatism, we employ the larger critical value, in absolute terms, between that for the I(0) case and that for the I(1) case for the tx-tests involved in the BTP.

Performing the Fyx-test, we get F-statistics of about 69 and 90 for Cases II and III, respectively, which are larger than the associated critical values at the 1% level (3.291 for Case II and 3.748 for Case III for the normal distribution, and 3.460 for Case II and 4.020 for Case III for the t5 distribution; see Table IV.2 in the Online Appendix). The ty-test of the lagged term of MVE (i.e. the coefficient of the error-correction term, ECT) exhibits a t-statistic of almost −16, which is smaller than the associated critical value (−2.518 for the normal distribution, and −2.489 for the t5 distribution; see Table IV.2 in the Online Appendix). Consequently, these tests lend overwhelming support to a co-integrating relationship running from BVE and NI to MVE in the mean environment.

Next, performing the Fx-test, we find F-statistics of around 89 and 57 for Cases II and III, respectively. These statistics are larger than the associated critical values at the 1% level (3.713 for Case II and 4.564 for Case III for the normal distribution, and 3.955 for Case II and 5.095 for Case III for the t5 distribution; see Table IV.2 in the Online Appendix). Finally, using the critical values for the tx-tests for the lagged terms of BVE and NI, we find statistical significance at the 1% level, as the corresponding t-statistics of 3.3 and 9.4 lie outside the interval of the associated critical values at 1% (−2.665 to 2.588 for the normal distribution, and −2.708 to 2.671 for the t5 distribution; see Table IV.2 in the Online Appendix). Therefore, combining the Fx- and tx-tests, we verify that the co-integrating relationship of MVE is non-degenerate, and given the magnitude and statistical significance of the error-correction term (−0.104, which lies within the ±2 range), the steady-state equilibrium relationship of MVE with BVE and NI is also stable.

5. Discussion

Having established a stable and non-degenerate co-integration, we now elaborate on the results. The speed of adjustment (SOA) is 10.4% per quarter on average, denoting a sluggish convergence to equilibrium. Such a SOA implies that it takes almost 42 quarters to close 99% of any gap from the steady-state value.
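The 42-quarter figure follows from the geometric closing of the gap implied by the estimated error-correction coefficient:

(1 - 0.104)^{n} = 0.01 \;\Rightarrow\; n = \frac{\ln(0.01)}{\ln(0.896)} \approx 41.9 \text{ quarters}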

The marginal effects of BVE and NI on MVE could be seen, respectively, as the implied price-to-book (PB) ratio and price-to-earnings (PE) ratio16. We find that the implied short-run PB ratio is almost 0.40 (the coefficient of ΔBVE), whereas the corresponding long-run ratio is about 0.20 (the long-run coefficient of BVE). Both coefficients of BVE are statistically significant at the 1% level.

Regarding the effect of NI on MVE, we document that the average implied PE ratio in the short run is estimated at about 0.6 and the respective long-run one at almost 11.5. Both coefficients of NI are statistically significant at the 1% level. Finally, testing whether the short-run response of MVE to BVE is equal to the respective long-run one, we find that the null of equality is rejected at the 1% significance level; this is also the case for the responses of MVE to NI.

Endogeneity Concerns

ARDLs can deal with endogeneity issues, and specifically with reverse causality, through the error-projection technique popular in this framework (PSS; Pesaran & Shin, 1999; Shin et al., 2014; Cho et al., 2015; Bertsatos et al., 2023). This parametric correction is equivalent to augmenting the initial ARDL model with extra lags, i.e. estimating a new ARDL with a richer lag order. Table IV.3 in the Online Appendix presents the newly estimated models. We find that the results are robust, as estimates of Model 2 with lag orders (1, 1, 1) and (2, 2, 2) are quite similar. This is also the case for estimates of Model 1 with lag structures (1, 1) and (2, 2).

6. Concluding Remarks

In this paper, we propose, for large panel datasets, a bounds testing procedure for the existence of a non-degenerate co-integrating relationship running from the forcing variables to the main variable of interest. Previous findings based on the dynamic fixed effects estimator for studying long-run multipliers may contain spurious long-run results, as there could be a degenerate co-integrating relationship or no co-integration at all. Such cases can be detected with our proposed bounds testing procedure. To this end, a computer code is provided for generating sample-specific critical values; moreover, for one cross section this code extends the OLS-based code of Bertsatos, Sakellaris, and Tsionas (2022).

To demonstrate our panel bounds testing procedure, we employ modified model specifications of Rhodes-Kropf et al. (2005) for the market value of equity of systemic banks. Specifically, we estimate simultaneously short-run and long-run multipliers. There is overwhelming support in favor of a stable and non-degenerate co-integrating relationship running from the book value of equity and net income to the market value of equity. Finally, future research could explore further the decomposition of Rhodes-Kropf et al. (2005) to extract sector values and long-run values by employing panel ARDL models.

Funding

The publication is financed by the Centre of Planning and Economic Research (KEPE).

Data Availability

The raw data of the empirical application of this paper were downloaded from Datastream. Access restrictions apply to these data, which were downloaded and used under a license to the Laboratory of Financial Applications of Athens University of Economics and Business.

Appendix

Please follow the hyperlink below for supplementary material related to this paper:

https://drive.google.com/drive/folders/11xEsoG5az2-XRg39-ht8RatrJ0QSpO7V?usp=drive_link

NOTES

1For dynamic modelling in panels involving a large number of cross sections and a small number of time periods, generalized method of moments (GMM) estimators have been proposed (see e.g. Arellano & Bond, 1991 ).

2The use of lags in ARDL (autoregressive distributed lag) models alleviates endogeneity concerns (see e.g. PSS, and Clements et al., 2019 ). Particularly, short-run reverse causality is resolved with the error-projection technique given that regressors are represented by finite-order autoregressive processes (see Pesaran & Shin, 1999 ; Pesaran et al., 1999 , PSS; Shin et al., 2014 ; Cho et al., 2015 ).

3However, there are some caveats, as with the family of (P)MG estimators (see e.g. Pesaran & Smith, 1995; Pesaran et al., 1999). Variables that are I(2) or of a higher order of integration are not supported, and if the dependent variable affects some of the forcing variables in the long run, ARDL models may yield spurious results.

4We should note that an ARDL in unrestricted error-correction form is not the same as a first-difference model, since the error term and the deterministic factors remain unchanged. BST discuss this issue thoroughly.

5In Section III of the Online Appendix, we discuss the inclusion of fixed effects in the specification and how it alters the interpretation of the coefficients. The interpretation is like that in the time-series case when cross-sectional fixed effects are added, whilst with fixed time effects the interpretation is like that in the cross-sectional case. Regarding two-way fixed effects, the interpretation is more complicated (see Kropko & Kubinec, 2018); however, we derive a much easier interpretation.

6When the threshold is known a priori, either with zero or non-zero value, and there are no large differences in the regime probabilities then, no estimation or inference issues arise (see e.g. Greenwood-Nimmo & Shin, 2013 ).

7BST discuss in detail the null and alternative hypotheses for all cases.

8We have created a code in EViews (9th edition) that generates simulated critical values.

9Should we employ the median instead of the average, results do not change.

10In Section I of the Online Appendix, Tables I.1 to I.7 contain the AAPDs for every (N, T) we examine.

11This code can also be run for one cross section (N = 1), in which case it extends the BST time-series codes. Furthermore, besides a standard normal distribution, the user can also select a Student-t distribution with 5 degrees of freedom to account for fat tails. This code, which accounts for the sample size, the number of regressors and their lagged terms, and the errors’ distribution, is available on the journal’s website along with a readme file for the user.

12Therefore, we suggest two options along with the use of cross-sectional fixed effects. First, the use of fixed time effects and second, the model augmentation with macro variables that affect the dependent variable along the path towards the steady-state equilibrium relationship.

13 Elliott et al. (2008) employ the residual income model to estimate long-run fundamental value and decompose book-to-market (BM) ratio into 2 elements: book-to-fundamental value reflecting a decreasing amount of growth opportunities and fundamental-to-market value reflecting an increasing degree of mispricing, where the fundamental value comes from a 2-stage residual income model (RIM).

14RKRV express the MB decomposition using logged variables and thus, they have deviations instead of ratios.

15Having executed alternative unit-root tests (with or without an intercept, and with both a linear trend and a constant term), we confirm that none of the employed variables is I(2) or of a higher order of integration and thus the PARDL models are applicable. Results of the unit-root tests are unreported to save space.

16 Cho et al. (2015) interpret the coefficient of earnings in the dividend equation as payout ratio.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Arellano, M., & Bond, S. (1991). Some Tests of Specification for Panel Data: Monte Carlo Evidence and an Application to Employment Equations. The Review of Economic Studies, 58, 277-297.
https://doi.org/10.2307/2297968
[2] Bertsatos, G., Pagratis, S., & Sakellaris, P. (2023). Public Sector Corruption and Price-to-Book Ratios of Systemically Important Banks. SSRN.
https://doi.org/10.2139/ssrn.4058298
[3] Bertsatos, G., Sakellaris, P., & Tsionas, M. G. (2022). Extensions of the Pesaran, Shin and Smith (2001) Bounds Testing Procedure. Empirical Economics, 62, 605-634.
https://doi.org/10.1007/s00181-021-02041-3
[4] Cho, J. S., Kim, T., & Shin, Y. (2015). Quantile Cointegration in the Autoregressive Distributed-Lag Modeling Framework. Journal of Econometrics, 188, 281-300.
https://doi.org/10.1016/j.jeconom.2015.05.003
[5] Chudik, A., Mohaddes, K., Pesaran, M. H., & Raissi, M. (2016). Long-Run Effects in Large Heterogeneous Panel Data Models with Cross-Sectionally Correlated Errors. In Essays in Honor of Aman Ullah (Advances in Econometrics, Vol. 36) (pp. 85-135). Emerald Group Publishing Limited.
https://doi.org/10.1108/S0731-905320160000036013
[6] Clements, B., Gupta, S., & Khamidova, S. (2019). Is Military Spending Converging across Countries? An Examination of Trends and Key Determinants (p. 21). International Monetary Fund, WP 19/196.
https://doi.org/10.5089/9781513509877.001
[7] Elliott, W. B., Koeter-Kant, J., & Warr, R. S. (2008). Market Timing and the Debt-Equity Choice. Journal of Financial Intermediation, 17, 175-197.
https://doi.org/10.1016/j.jfi.2007.05.002
[8] Greenwood-Nimmo, M., & Shin, Y. (2013). Taxation and the Asymmetric Adjustment of Selected Retail Energy Prices in the UK. Economics Letters, 121, 411-416.
https://doi.org/10.1016/j.econlet.2013.09.020
[9] Kropko, J., & Kubinec, R. (2018). Why the Two-Way Fixed Effects Model Is Difficult to Interpret, and What to Do about It. SSRN.
http://dx.doi.org/10.2139/ssrn.3062619
[10] Pesaran, M. H., & Shin, Y. (1999). Chapter 11. An Autoregressive Distributed Lag Modelling Approach to Cointegration Analysis. In S. Strom (Ed.), Econometrics and Economic Theory in the 20th Century: The Ragnar Frisch Centennial Symposium (pp. 371-413). Cambridge University Press.
https://doi.org/10.1017/CCOL521633230.011
[11] Pesaran, M. H., & Smith, R. (1995). Estimating Long-Run Relationships from Dynamic Heterogeneous Panels. Journal of Econometrics, 68, 79-113.
https://doi.org/10.1016/0304-4076(94)01644-F
[12] Pesaran, M. H., Shin, Y., & Smith R. J. (2001). Bounds Testing Approaches to the Analysis of Level Relationships. Journal of Applied Econometrics, 16, 289-326.
https://doi.org/10.1002/jae.616
[13] Pesaran, M. H., Shin, Y., & Smith, R. P. (1999). Pooled Mean Group Estimation of Dynamic Heterogeneous Panels. Journal of the American Statistical Association, 94, 621-634.
https://doi.org/10.1080/01621459.1999.10474156
[14] Rhodes-Kropf, M., Robinson, D. T., & Viswanathan, S. (2005). Valuation Waves and Merger Activity: The Empirical Evidence. Journal of Financial Economics, 77, 561-603.
https://doi.org/10.1016/j.jfineco.2004.06.015
[15] Shin, Y., Yu, B., & Greenwood-Nimmo, M. J. (2014). Modelling Asymmetric Cointegration and Dynamic Multipliers in a Nonlinear ARDL Framework. In W. C. Horrace, & R. C. Sickles (Eds.), Festschrift in Honor of Peter Schmidt (pp. 281-314). Springer Science & Business Media.
https://doi.org/10.1007/978-1-4899-8008-3_9
[16] Westerlund, J. (2007). Testing for Error Correction in Panel Data. Oxford Bulletin of Economics and Statistics, 69, 709-748.
https://doi.org/10.1111/j.1468-0084.2007.00477.x

Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.