Application of Equality Test of Coefficients of Variation to the Heteroskedasticity Test

Josoa Michel Tovohery^{1}, André Totohasina^{2}, Feno Daniel Rajaonasy^{3}

^{1}Thematic Doctoral School “Science, Culture, Society and Development” of the University of Toamasina, Toamasina, Madagascar.

^{2}Department of Mathematics and Computer Science, école Normale Supérieure pour l’Enseignement Technique, University of Antsiranana, Antsiranana, Madagascar.

^{3}Department of Mathematics, Computer Science and Applications, University of Toamasina, Toamasina, Madagascar.

**DOI: **10.4236/ajcm.2020.101005

The presence of heteroskedasticity in a regression model may bias the standard deviations of the parameters obtained by the Ordinary Least Squares (OLS) method. In that case, several hypothesis tests on the model may in turn be biased, for example Chow's coefficient stability test (or structural change test), Student's t-test and Fisher's F-test. Most heteroskedasticity tests in the literature are based on a comparison of variances. Despite the many equality tests of coefficients of variation (CVs) that have appeared in the literature, to our knowledge the first and only use of the coefficient of variation in the detection of heteroskedasticity was offered by Li and Yao in 2017. This paper therefore offers an approach that detects the existence of heteroskedasticity through a test of equality of coefficients of variation. A Monte Carlo study of robustness and performance suggests that our method compares favorably with several tests in the literature. The results of this study contribute to the exploitation of the CV as a measure of statistical dispersion. They help economists and technicians to better verify their hypotheses before making decisions based on forecasts, and thereby to contribute effectively to the economic and sustainable development of a company or enterprise.

Keywords

Heteroskedasticity Tests, Equality Test, Coefficients of Variation, Ordinary Least Square (OLS) Method, Linear Regression, Analysis of Variance (ANOVA)


Tovohery, J.M., Totohasina, A. and Rajaonasy, F.D. (2020) Application of Equality Test of Coefficients of Variation to the Heteroskedasticity Test. *American Journal of Computational Mathematics*, **10**, 73-89. doi: 10.4236/ajcm.2020.101005.

1. Introduction

The Gauss-Markov theorem states that the least squares estimator is BLUE, the Best Linear Unbiased Estimator, in the sense that it provides the lowest variances among linear unbiased estimators ( [1], p. 53). However, the presence of heteroskedasticity in a regression model may bias the standard deviations of the parameters obtained by the Ordinary Least Squares (OLS) method ( [2], p. 31). In that case, several hypothesis tests on the model may in turn be biased, for example Chow's coefficient stability test (or structural change test) ( [3], p. 25), Student's t-test and Fisher's F-test. Heteroskedasticity tests are already available in the literature; examples include the Levene test, the Goldfeld-Quandt test, the White test, the Gleisjer test and the Breusch-Pagan test. Most of these tests are based on the comparison of variances.

Several tests for comparing Coefficients of Variation (CVs) have appeared in the literature. Examples include Curto's test [4], the application of the Rényi divergence proposed by Pardo (1999) [5], the numerical approach of Gokpinar (2015) [6], Forkman's test [7], and McKay and Miller's statistics [8].

To our knowledge, the first use of the coefficient of variation in the detection of heteroskedasticity was offered by Li and Yao (2017) [9]. Thus, the question is: “is it possible to find an application of these CV equality tests to detect the existence of heteroskedasticity?”

The rest of this article is organized as follows: Section 2 states our problem; Section 3 presents a state of the art on heteroskedasticity tests; Section 4 proposes an approach that uses a CV equality test to detect heteroskedasticity; and a conclusion is given at the end.

2. Position of the Problem

We have a simple linear regression model

${y}_{t}={a}_{0}+{a}_{1}{x}_{t}+{\epsilon}_{t},\text{\hspace{0.17em}}t=\overline{1,n}$ (1)

such that the ${\epsilon}_{t}$ are the errors made when applying the model. We want to check whether the variance of the errors is constant for t ranging from 1 to n; that is, we want to test whether the model is homoskedastic or heteroskedastic. Figure 1 shows an example of a homoskedastic model, and Figures 2-4 show three examples of

Figure 1. Homoskedastic model ( ${\sigma}_{\epsilon}^{2}=\text{constant}$ ).

Figure 2. Heteroskedastic model ( ${\sigma}_{\epsilon}^{2}$ increases with the exogenous variable).

Figure 3. Heteroskedastic model ( ${\sigma}_{\epsilon}^{2}$ decreases with the exogenous variable).

Figure 4. Heteroskedastic model ( ${\sigma}_{\epsilon}^{2}$ follows a concave pattern).

heteroskedastic models. We note that these four models all have the same regression line equation: $y=x+2$.

3. State of the Art on the Homoskedasticity Test

We consider the general linear regression model $Y=Xa+\u03f5$. The various tests, which we will mention below, consist in testing the following hypothesis:

$\begin{cases}\text{null hypothesis } H_0: \sigma_{\epsilon_t}=\sigma,\ t=\overline{1,n} \quad (\text{constant});\\ \text{alternative hypothesis } H_1: \text{there exist } t_1 \text{ and } t_2 \text{ such that } \sigma_{t_1}\ne \sigma_{t_2}.\end{cases}$

3.1. Breusch-Pagan Test

The Breusch-Pagan test assumes that the squared errors ${\epsilon}_{t}^{2}$ are related to the dependent variable Y. According to Leblond (2003) ( [2], p. 31), the Breusch-Pagan test is done in four steps:

1) Recover the residuals ${\epsilon}_{t}$ of the regression;

2) Compute the squared residuals ( ${\epsilon}_{t}^{2}$ );

3) Regress the squared residuals on the dependent variable of the original regression ( ${\epsilon}_{t}^{2}={\stackrel{^}{a}}_{0}+{\stackrel{^}{a}}_{1}{y}_{t}$, where ${\stackrel{^}{a}}_{0}$ and ${\stackrel{^}{a}}_{1}$ are to be determined);

4) Test whether the coefficients are jointly significant (perform the F-test):

$F=\frac{{R}^{2}/k}{\left(1-{R}^{2}\right)/\left(n-k-1\right)}$ (2)

where k is the number of explanatory variables ${x}_{i}$, n is the sample size and ${R}^{2}$ is the coefficient of determination of the regression of ${\epsilon}^{2}$ on Y.

Decision-making: We accept the null hypothesis ${H}_{0}$ at the confidence level $\left(1-\alpha \right)\times \mathrm{100\%}$, if $F<{F}_{k;n-k-1}^{\alpha}$, where ${F}_{k\mathrm{;}n-k-1}^{\alpha}$ is the critical value of F-distribution at risk $\alpha $, at k and $n-k-1$ degrees of freedom.
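As a minimal sketch of these four steps in Python for a single explanatory variable (synthetic data assumed; `breusch_pagan` is our own illustrative name, not a library routine):

```python
import numpy as np

def breusch_pagan(x, y):
    """Breusch-Pagan test as described above: regress the squared OLS
    residuals on the dependent variable and compute the F statistic
    of Equation (2)."""
    n = len(y)
    # Step 1: OLS of y on x, recover the residuals
    X = np.column_stack([np.ones(n), x])
    a_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ a_hat
    # Steps 2-3: regress the squared residuals on y
    e2 = e ** 2
    Z = np.column_stack([np.ones(n), y])
    b_hat = np.linalg.lstsq(Z, e2, rcond=None)[0]
    fitted = Z @ b_hat
    # Coefficient of determination of the auxiliary regression
    r2 = 1 - np.sum((e2 - fitted) ** 2) / np.sum((e2 - e2.mean()) ** 2)
    k = 1  # one explanatory variable in the auxiliary regression
    return (r2 / k) / ((1 - r2) / (n - k - 1))
```

The resulting F value is then compared to the critical value of the F-distribution with k and n − k − 1 degrees of freedom, as in the decision rule above.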

3.2. Goldfeld-Quandt Test

The Goldfeld-Quandt test assumes that there is an explanatory variable ${X}_{i}$ that influences the variance of the errors, such that $E\left({\epsilon}^{2}|{X}_{i}\right)={\sigma}^{2}+h\left({X}_{i}\right)$, where h is an increasing function ( [10], p. 103). The test is summarized as follows:

1) Sort the observation values according to the increasing or decreasing values of the explanatory variable ${X}_{i}$ suspected of being the source of heteroskedasticity.

2) Divide the observations into two groups:

${Y}_{1}=\left(\begin{array}{c}{y}_{1}\\ {y}_{2}\\ \vdots \\ {y}_{{n}_{1}}\end{array}\right)\mathrm{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{Y}_{2}=\left(\begin{array}{c}{y}_{{n}_{2}+1}\\ {y}_{{n}_{2}+2}\\ \vdots \\ {y}_{n}\end{array}\right),$

${X}_{1}=\left(\begin{array}{cccc}1& {x}_{1,1}& \cdots & {x}_{1,k}\\ 1& {x}_{2,1}& \cdots & {x}_{2,k}\\ \vdots & \vdots & \ddots & \vdots \\ 1& {x}_{{n}_{1},1}& \cdots & {x}_{{n}_{1},k}\end{array}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}{X}_{2}=\left(\begin{array}{cccc}1& {x}_{{n}_{2}+1,1}& \cdots & {x}_{{n}_{2}+1,k}\\ 1& {x}_{{n}_{2}+2,1}& \cdots & {x}_{{n}_{2}+2,k}\\ \vdots & \vdots & \ddots & \vdots \\ 1& {x}_{n,1}& \cdots & {x}_{n,k}\end{array}\right)$

where ${n}_{1}=n/3$ and ${n}_{2}=2n/3$.

3) Calculate the error variance estimators for each sub-sample:

${\stackrel{^}{\sigma}}_{1}^{2}=\left[{\left({Y}_{1}-{X}_{1}\stackrel{^}{a}\right)}^{\prime}\times \left({Y}_{1}-{X}_{1}\stackrel{^}{a}\right)\right]/\left({n}_{1}-k-1\right)=\left({\displaystyle {\sum}_{i=1}^{{n}_{1}}{e}_{i}^{2}}\right)/\left({n}_{1}-k-1\right)$ (3)

$\begin{array}{c}{\stackrel{^}{\sigma}}_{2}^{2}=\left[{\left({Y}_{2}-{X}_{2}\stackrel{^}{a}\right)}^{\prime}\times \left({Y}_{2}-{X}_{2}\stackrel{^}{a}\right)\right]/\left(n-{n}_{2}-k-1\right)\\ =\left({\displaystyle {\sum}_{i={n}_{2}+1}^{n}{e}_{i}^{2}}\right)/\left(n-{n}_{2}-k-1\right)\end{array}$ (4)

where $\stackrel{^}{a}$ is the estimator of the parameter a by the least squares method, ${e}_{i}={y}_{i}-\left({\stackrel{^}{a}}_{0}+{\stackrel{^}{a}}_{1}{x}_{i1}+\cdots +{\stackrel{^}{a}}_{k}{x}_{ik}\right)$ and k is the number of explanatory variables of the model.

4) Calculate the Goldfeld-Quandt statistic:

$GQ=\frac{{\stackrel{^}{\sigma}}_{1}^{2}}{{\stackrel{^}{\sigma}}_{2}^{2}}$ (5)

The $GQ$ statistic follows the F-distribution at ${n}_{1}-k-1$ and $n-{n}_{2}-k-1$ degrees of freedom, noted as ${F}_{{n}_{1}-k-\mathrm{1;}n-{n}_{2}-k-1}$.

Decision-making: The null hypothesis ${H}_{0}$ is rejected at confidence level $\left(1-\alpha \right)\times \mathrm{100\%}$, if $GQ>{F}_{{n}_{1}-k-1;n-{n}_{2}-k-1;\alpha}$.
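A sketch of the procedure for a single explanatory variable, assuming each third is refitted by OLS with its own parameters (the function name `goldfeld_quandt` is ours):

```python
import numpy as np

def goldfeld_quandt(x, y):
    """Goldfeld-Quandt statistic: sort by the suspected variable,
    fit OLS on the first and last thirds, and return the ratio of
    the two residual-variance estimates (Equation (5))."""
    order = np.argsort(x)          # step 1: sort by the suspected variable
    x, y = x[order], y[order]
    n = len(y)
    n1, n2 = n // 3, 2 * n // 3    # step 2: keep first and last thirds

    def sigma2(xs, ys):
        # Residual variance estimate of a simple OLS fit (k = 1)
        X = np.column_stack([np.ones(len(xs)), xs])
        a = np.linalg.lstsq(X, ys, rcond=None)[0]
        e = ys - X @ a
        return np.sum(e ** 2) / (len(xs) - 2)

    # Steps 3-4: ratio of the two variance estimates
    return sigma2(x[:n1], y[:n1]) / sigma2(x[n2:], y[n2:])
```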

3.3. Gleisjer’s Test

The Gleisjer test can detect both heteroskedasticity and the form that this heteroskedasticity takes ( [1], p. 150). It assumes that there is a relationship between the error $\epsilon$ of the model and the variable ${X}_{i}$ assumed to be the cause of heteroskedasticity. The steps of the test are summarized as follows:

Step 1: Determination of the residues generated by the suspected variable ${X}_{i}$.

1) Regress Y on ${X}_{i}$. This gives the simple regression model ${Y}_{k}=a{X}_{ki}+b+{\epsilon}_{k},\ k=\overline{1,n}$.

2) Calculate the estimators of a and b using the Ordinary Least Squares method: $\stackrel{^}{a}$ and $\stackrel{^}{b}$.

3) Estimate the model’s residues ${\epsilon}_{k}$ by its estimators: ${e}_{k}={Y}_{k}-\left(\stackrel{^}{a}{X}_{ki}+\stackrel{^}{b}\right),\text{\hspace{0.17em}}k=\stackrel{\xaf}{1,n}$.

Thus, the vector of residues ${e}_{k}$ is known.

Step 2: Proposal of possible forms of existing heteroskedasticity.

Gleisjer suggests testing different forms of possible relationships between $\left|e\right|$ and ${X}_{i}$, for example:

1) Type 1:

$\left|{e}_{k}\right|={a}_{0}+{a}_{1}{X}_{ki}+{v}_{k},\text{\hspace{0.17em}}k=\stackrel{\xaf}{1,n},$ (6)

where ${v}_{k}$ is the residue of this model. This relationship generates the type of heteroskedasticity ${\stackrel{^}{\sigma}}_{{e}_{k}}^{2}={c}^{2}{X}_{ki}^{2}$, where c is a non-zero real constant. Thus, the variance of errors is a function of the squares of the suspected explanatory variable ${X}_{i}$.

2) Type 2:

$\left|{e}_{k}\right|={a}_{0}+{a}_{1}\sqrt{{X}_{ki}}+{v}_{k},\text{\hspace{0.17em}}k=\stackrel{\xaf}{1,n}.$ (7)

This relationship generates the type of heteroskedasticity ${\stackrel{^}{\sigma}}_{{e}_{k}}^{2}={c}^{2}{X}_{ki}$. In this case, the variance of the errors is proportional to the values of the suspected explanatory variable ${X}_{i}$

3) Type 3:

$\left|{e}_{k}\right|={a}_{0}+{a}_{1}\frac{1}{{X}_{ki}}+{v}_{k},\text{\hspace{0.17em}}k=\stackrel{\xaf}{1,n}.$ (8)

This relationship leads to heteroskedasticity of type ${\stackrel{^}{\sigma}}_{{e}_{k}}^{2}=\frac{{c}^{2}}{{X}_{ki}^{2}}$.

Step 3: Detection of heteroskedasticity

Significance test of the regression coefficient ${a}_{1}$ :

${t}^{*}=\frac{\left|{\stackrel{^}{a}}_{1}\right|}{{\stackrel{^}{\sigma}}_{{\stackrel{^}{a}}_{1}}},$ (9)

with

${\stackrel{^}{\sigma}}_{{\stackrel{^}{a}}_{1}}=\sqrt{\frac{{\stackrel{^}{\sigma}}_{v}^{2}}{{\displaystyle {\sum}_{k=1}^{n}}{\left(h\left({X}_{ki}\right)-\stackrel{\xaf}{h\left({X}_{i}\right)}\right)}^{2}}}$

and

${\stackrel{^}{\sigma}}_{v}^{2}=\frac{1}{n-2}{\displaystyle {\sum}_{k=1}^{n}}{\left[\left|{e}_{k}\right|-\left({\stackrel{^}{a}}_{0}+{\stackrel{^}{a}}_{1}h\left({X}_{ki}\right)\right)\right]}^{2}$,

where

$h\left(x\right)=\{\begin{array}{l}x\mathrm{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for a type 1 relationship;}\\ \sqrt{x}\mathrm{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for a type 2 relationship;}\\ \frac{1}{x}\mathrm{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for a type 3 relationship}\text{.}\end{array}$

${t}^{\mathrm{*}}$ follows the t-distribution at $n-2$ degrees of freedom.

Decision-making: The null hypothesis ${H}_{0}$ is rejected at confidence level $\left(1-\alpha \right)\times \mathrm{100\%}$, if there is a ${t}^{\mathrm{*}}$, such that ${t}^{*}>{t}_{n-2;\alpha}$.

If the existence of heteroskedasticity is validated, then the relationship with the highest ${t}^{\mathrm{*}}$ represents the form of existing heteroskedasticity.
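The three steps might be sketched as follows, assuming a single, strictly positive explanatory variable so that all three forms h(x) are defined; following Step 3, the largest of the three t statistics is returned (`gleisjer_t` is an illustrative name):

```python
import numpy as np

def gleisjer_t(x, y):
    """Gleisjer test sketch: regress |residuals| on h(x) for the three
    candidate forms and return the largest t statistic of a1
    (Equation (9))."""
    n = len(y)
    # Step 1: OLS residuals of y on x
    X = np.column_stack([np.ones(n), x])
    a = np.linalg.lstsq(X, y, rcond=None)[0]
    abs_e = np.abs(y - X @ a)
    best = 0.0
    # Step 2: the three candidate forms h(x) = x, sqrt(x), 1/x
    for h in (lambda v: v, np.sqrt, lambda v: 1.0 / v):
        hx = h(x)
        Z = np.column_stack([np.ones(n), hx])
        b = np.linalg.lstsq(Z, abs_e, rcond=None)[0]
        v = abs_e - Z @ b
        # Step 3: t statistic of the slope a1
        s2 = np.sum(v ** 2) / (n - 2)
        se = np.sqrt(s2 / np.sum((hx - hx.mean()) ** 2))
        best = max(best, abs(b[1]) / se)
    return best
```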

3.4. White’s Test

White’s test consists in testing the existence of a relationship between the square of the residue and one or more explanatory variables or its squares. The test procedures can be summarized as follows:

Step 1: Determination of model’s residues.

1) When the parameters of the model $Y=Xa+\epsilon$ are estimated, we have the estimated residuals: $e=Y-X\stackrel{^}{a}$.

2) Regress ${e}^{2}$ on ${x}_{1},{x}_{1}^{2},\cdots ,{x}_{k}$ and ${x}_{k}^{2}$, and validate.

3) We consider the model:

${e}_{i}^{2}={a}_{1}{x}_{i1}+{b}_{1}{x}_{i1}^{2}+{a}_{2}{x}_{i2}+{b}_{2}{x}_{i2}^{2}+\cdots +{a}_{k}{x}_{ik}+{b}_{k}{x}_{ik}^{2}+{a}_{0}+{v}_{i}\mathrm{,}\text{\hspace{0.17em}}i=\stackrel{\xaf}{\mathrm{1,}n}\mathrm{,}$ (10)

what can be written in matrix form: $E=Wu+v$, where

$E=\left(\begin{array}{c}{e}_{1}^{2}\\ {e}_{2}^{2}\\ \vdots \\ {e}_{n}^{2}\end{array}\right),\text{\hspace{0.17em}}u=\left(\begin{array}{c}{a}_{0}\\ {a}_{1}\\ {b}_{1}\\ \vdots \\ {a}_{k}\\ {b}_{k}\end{array}\right),\text{\hspace{0.17em}}v=\left(\begin{array}{c}{v}_{1}\\ {v}_{2}\\ \vdots \\ {v}_{n}\end{array}\right)$ and $W=\left(\begin{array}{cccccc}1& {x}_{11}& {x}_{11}^{2}& \cdots & {x}_{1k}& {x}_{1k}^{2}\\ 1& {x}_{21}& {x}_{21}^{2}& \cdots & {x}_{2k}& {x}_{2k}^{2}\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 1& {x}_{n1}& {x}_{n1}^{2}& \cdots & {x}_{nk}& {x}_{nk}^{2}\end{array}\right)$

4) The estimator of u is: $\stackrel{^}{u}={\left({W}^{\prime}\cdot W\right)}^{-1}\cdot {W}^{\prime}\cdot E$

5) Calculate the variance of the errors: ${\stackrel{^}{\sigma}}_{\epsilon}^{2}=\frac{{\displaystyle {\sum}_{i=1}^{n}}{\stackrel{^}{v}}_{i}^{2}}{n-k-1}$, with $\stackrel{^}{v}=E-W\stackrel{^}{u}$.

6) Calculate the variance-covariance matrix of the parameters ${a}_{i}$ and ${b}_{i}$ : ${\stackrel{^}{\Omega}}_{u}={\stackrel{^}{\sigma}}_{\epsilon}^{2}{\left({W}^{\prime}\cdot W\right)}^{-1}$.

In this case, the variance of the i-th element of the vector u is ${\stackrel{^}{\sigma}}_{{\stackrel{^}{u}}_{i}}^{2}$, the i-th element of the diagonal of ${\stackrel{^}{\Omega}}_{u}$.

7) Significance test of parameters ${a}_{1}\mathrm{,}{b}_{1}\mathrm{,}\cdots \mathrm{,}{a}_{k}\mathrm{,}{b}_{k}$ : We calculate: ${t}_{{a}_{i}}^{*}=\frac{\left|{\stackrel{^}{a}}_{i}\right|}{{\stackrel{^}{\sigma}}_{{\stackrel{^}{a}}_{i}}}$ and ${t}_{{b}_{i}}^{\mathrm{*}}=\frac{\left|{\stackrel{^}{b}}_{i}\right|}{{\stackrel{^}{\sigma}}_{{\stackrel{^}{b}}_{i}}}$, $i=\stackrel{\xaf}{\mathrm{1,}k}$.

The statistics ${t}_{{a}_{i}}^{\mathrm{*}}$ and ${t}_{{b}_{i}}^{\mathrm{*}}$ follow the t-distribution at $n-k-1$ degrees of freedom.

Decision-making: The null hypothesis ${H}_{0}$ is rejected at the confidence level $\left(1-\alpha \right)\times \mathrm{100\%}$, if there is a ${t}_{{u}_{i}}^{*}$, such that ${t}_{{u}_{i}}^{*}>{t}_{n-k-1;\alpha}$. That means, the null hypothesis ${H}_{0}$ is rejected if there is a parameter ${u}_{i}$ significantly different from 0.
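For a single regressor, the procedure reduces to regressing the squared residuals on x and x²; a sketch under that assumption (illustrative names, normal equations solved directly):

```python
import numpy as np

def white_t_stats(x, y):
    """White test sketch for one regressor: regress the squared OLS
    residuals on x and x^2 and return the t statistics of the two
    slope parameters."""
    n = len(y)
    # Residuals of the original regression
    X = np.column_stack([np.ones(n), x])
    a = np.linalg.lstsq(X, y, rcond=None)[0]
    e2 = (y - X @ a) ** 2
    # Auxiliary regression E = W u + v, with W = [1, x, x^2]
    W = np.column_stack([np.ones(n), x, x ** 2])
    u = np.linalg.solve(W.T @ W, W.T @ e2)     # normal equations
    v = e2 - W @ u
    s2 = np.sum(v ** 2) / (n - 3)              # n - k - 1 with k = 2
    omega = s2 * np.linalg.inv(W.T @ W)        # variance-covariance matrix
    se = np.sqrt(np.diag(omega))
    return np.abs(u[1:]) / se[1:]              # t statistics of a1 and b1
```

Each returned statistic is then compared to the critical value of the t-distribution with n − k − 1 degrees of freedom.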

3.5. ANOVA Methods

^{1}Maurice Stevenson Bartlett (June 18, 1910-January 8, 2002).

In order to determine the existence of heteroskedasticity, researchers have proposed the method of analysis of variance, commonly called ANOVA. According to the application example presented in ( [1], p. 147-148), applying ANOVA consists in dividing the observations into several classes of values. Following this same example by R. Bourbonnais, we propose the following steps:

1) Order the observations according to the increasing values of the explanatory variable ${X}_{i}$ suspected to be the source of heteroskedasticity.

2) Group the values of the variable ${X}_{i}$ into z classes of values. To determine z, one of the following expressions from ( [11], p. 33) can be used:

a) $z=Int\left(\sqrt{n}\right)$, where n is the total number of observations, and $Int\left(\mathrm{.}\right)$ is the integer part function;

b) Sturges’ formula: $z=Int\left(1+3.3\,{\mathrm{log}}_{10}\left(n\right)\right)$ ;

c) Yule’s formula: $z=Int\left(2.5\sqrt[4]{n}\right)$.

3) Group the values of the variable to be explained Y according to their corresponding classes ( ${y}_{i}$ in the class corresponding to ${x}_{i}$ ). Thus, we obtain z samples of Y.

4) Apply the ANOVA test to the z samples of Y, then draw a conclusion.

In the following subsections, we will present some ANOVA tests that can be done in step 4.
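The grouping steps above can be sketched as follows; splitting into z groups of equal size after sorting by x is our reading of "classes of values" (equal-width intervals would be an equally valid one), and the function name is illustrative:

```python
import numpy as np

def group_by_classes(x, y, rule="sqrt"):
    """Grouping step for the ANOVA-based tests: sort by x, choose the
    number of classes z with one of the three formulas, and split the
    y values into z groups."""
    n = len(x)
    if rule == "sqrt":
        z = int(np.sqrt(n))                  # z = Int(sqrt(n))
    elif rule == "sturges":
        z = int(1 + 3.3 * np.log10(n))       # Sturges' formula
    else:
        z = int(2.5 * n ** 0.25)             # Yule's formula
    order = np.argsort(x)                    # sort by the suspected variable
    return z, np.array_split(y[order], z)    # z samples of Y
```

The z samples returned here are the input of the ANOVA tests described in the next subsections.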

3.5.1. Bartlett’s Test

Bartlett’s statistic^{1} is defined as follows:

$B=\frac{Q}{L}$ (11)

where

$Q=\left(n-z\right)\mathrm{ln}\left({\displaystyle {\sum}_{i=1}^{z}}\frac{{n}_{i}-1}{n-z}{s}_{i}^{2}\right)-{\displaystyle {\sum}_{i=1}^{z}}\left({n}_{i}-1\right)\mathrm{ln}\left({s}_{i}^{2}\right)$,

$L=1+\frac{1}{3\left(z-1\right)}\left({\displaystyle {\sum}_{i=1}^{z}}\frac{1}{{n}_{i}-1}-\frac{1}{n-z}\right)$,

$n={\displaystyle {\sum}_{i=1}^{z}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{n}_{i}$ and ${n}_{i}$ is the number of observations belonging to the i-th class, $i=\stackrel{\xaf}{\mathrm{1,}z}$ ( [12], p. 273).

Remark: Bartlett’s statistic B follows the chi-square distribution with $z-1$ degrees of freedom, noted as ${\chi}_{z-1}^{2}$, if the residuals ${\epsilon}_{i}$ are independent and follow the standard normal distribution $\mathcal{N}\left(\mathrm{0,1}\right)$.

Decision-making: The homoskedasticity hypothesis ${H}_{0}$ is rejected at confidence level $\left(1-\alpha \right)\times \mathrm{100\%}$, if $B\ge {\chi}_{z-\mathrm{1;1}-\alpha}^{2}$.
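A direct transcription of Equation (11), assuming each class sample has at least two observations (`bartlett_statistic` is our illustrative name):

```python
import numpy as np

def bartlett_statistic(groups):
    """Bartlett's B = Q / L, computed from a list of samples, one per
    class obtained in the grouping step."""
    z = len(groups)
    n_i = np.array([len(g) for g in groups])
    s2 = np.array([np.var(g, ddof=1) for g in groups])   # class variances
    n = n_i.sum()
    pooled = np.sum((n_i - 1) * s2) / (n - z)            # pooled variance
    Q = (n - z) * np.log(pooled) - np.sum((n_i - 1) * np.log(s2))
    L = 1 + (np.sum(1 / (n_i - 1)) - 1 / (n - z)) / (3 * (z - 1))
    return Q / L
```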

3.5.2. Levene’s Test

The statistic proposed by Howard Levene in 1960 ( [13], p. 4) is defined as follows:

$F=\frac{n-z}{z-1}\times \frac{{\displaystyle \underset{i=1}{\overset{z}{\sum}}}{n}_{i}{\left({\stackrel{\xaf}{d}}_{i.}-{\stackrel{\xaf}{d}}_{\mathrm{..}}\right)}^{2}}{{\displaystyle \underset{i=1}{\overset{z}{\sum}}}{\displaystyle \underset{j=1}{\overset{{n}_{i}}{\sum}}}{\left({d}_{ij}-{\stackrel{\xaf}{d}}_{i.}\right)}^{2}}$ (12)

where,

• z is the number of groups or value categories obtained,

• ${n}_{i}$ is the number of observations belonging to the i-th class, and $n={\displaystyle {\sum}_{i=1}^{z}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{n}_{i}$,

• ${d}_{ij}=\left|{y}_{ij}-{\stackrel{\xaf}{y}}_{i\mathrm{.}}\right|$,

• ${\stackrel{\xaf}{d}}_{i.}=\frac{1}{{n}_{i}}{\displaystyle {\sum}_{j=1}^{{n}_{i}}}{d}_{ij}$ (average of the ${d}_{ij}$ in class i),

• ${\stackrel{\xaf}{d}}_{\mathrm{..}}=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{z}}{\displaystyle {\sum}_{j=1}^{{n}_{i}}}{d}_{ij}$ (average of all the ${d}_{ij}$ ).

Remark: Levene’s F statistic follows the F-distribution with $z-1$ and $n-z$ degrees of freedom, noted ${F}_{z-\mathrm{1;}n-z}$. Bartlett’s test is not robust if the normality assumption on the ${\epsilon}_{i}$ is not verified; the Levene test, however, remains stable even without this assumption.

Decision making: The null hypothesis ${H}_{0}$ is rejected at the confidence level $\left(1-\alpha \right)\times \mathrm{100\%}$ if $F>{F}_{z-1;n-z;\alpha}$.

3.5.3. Brown-Forsythe’s Test

The Brown-Forsythe test is an improvement on the Levene test. To get the Brown-Forsythe statistic, just change ${d}_{ij}=\left|{y}_{ij}-{\stackrel{\xaf}{y}}_{i\mathrm{.}}\right|$ to ${d}_{ij}=\left|{y}_{ij}-m{e}_{i}\right|$, where $m{e}_{i}$ is the median of the i-th group of values. Brown-Forsythe’s statistic is more robust than Levene’s.
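Both statistics can share one implementation, since the Brown-Forsythe variant only replaces the group mean by the group median in the definition of ${d}_{ij}$; a sketch (illustrative name, and using Equation (12) with ${n}_{i}$ weighting the between-group term):

```python
import numpy as np

def levene_statistic(groups, center=np.mean):
    """Levene's F (Equation (12)); passing center=np.median gives the
    Brown-Forsythe variant."""
    z = len(groups)
    n_i = np.array([len(g) for g in groups])
    n = n_i.sum()
    # d_ij = |y_ij - center(group i)|
    d = [np.abs(g - center(g)) for g in groups]
    d_bar_i = np.array([di.mean() for di in d])       # per-class averages
    d_bar = np.concatenate(d).mean()                  # overall average
    num = np.sum(n_i * (d_bar_i - d_bar) ** 2)
    den = np.sum([np.sum((di - di.mean()) ** 2) for di in d])
    return (n - z) / (z - 1) * num / den
```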

3.5.4. Hartley’s Test

We define Hartley’s statistic ( [14], p. 14) by:

$H=\frac{{s}_{\mathrm{max}}^{2}}{{s}_{\mathrm{min}}^{2}}$ (13)

where ${s}_{\mathrm{max}}^{2}=\mathrm{max}\left\{{s}_{1}^{2};\cdots ;{s}_{z}^{2}\right\}$, ${s}_{\mathrm{min}}^{2}=\mathrm{min}\left\{{s}_{1}^{2};\cdots ;{s}_{z}^{2}\right\}$ and ${s}_{i}^{2}$ is the variance of the Y values of the i-th group, for $i=1,\mathrm{2,}\cdots \mathrm{,}z$.

Remark: The Hartley test cannot be used if the group sizes ${n}_{i}$ are not equal. The critical values of the H statistic are tabulated in the Hartley table.

Decision-making: We reject the null hypothesis ${H}_{0}$ at the confidence level $\left(1-\alpha \right)\times \mathrm{100\%}$ if $H>{H}_{\text{critical}}\left(\alpha \right)$.

3.5.5. Cochran’s Test

Cochran’s statistic is defined as follows:

$C=\frac{{s}_{\mathrm{max}}^{2}}{{\displaystyle {\sum}_{i=1}^{z}{s}_{i}^{2}}}$ (14)

Remarks: The Cochran’s test cannot be used if the group sizes ${n}_{i}$ are not equal. The critical values of the C statistic are tabulated in the Cochran’s table.

Decision-making: We reject the null hypothesis ${H}_{0}$ at the confidence level $\left(1-\alpha \right)\times \mathrm{100\%}$ if $C>{C}_{critical}\left(\alpha \right)$.

3.6. Zhaoyuan Li and Jianfeng Yao Test

Zhaoyuan Li and Jianfeng Yao [9] proposed two measures to detect heteroskedasticity in a multivariate linear model.

1) Test based on the likelihood ratio:

${T}_{1}=\mathrm{ln}\left(\frac{\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}{\stackrel{^}{\epsilon}}_{i}^{2}}}{{\left({\displaystyle {\prod}_{i=1}^{n}{\stackrel{^}{\epsilon}}_{i}^{2}}\right)}^{1/n}}\right)$ (15)

where $\stackrel{^}{\epsilon}=Y-X\stackrel{^}{a}$ and $\stackrel{^}{a}={\left({X}^{\prime}X\right)}^{-1}{X}^{\prime}Y$ ( [9], p. 9).

${Z}_{1}=\frac{\sqrt{n}\left({T}_{1}-\left[\mathrm{ln}\left(2\right)+\gamma \right]\right)}{\sqrt{\frac{{\pi}^{2}}{2}-2}}$ follows the standard normal distribution $\mathcal{N}\left(\mathrm{0,1}\right)$, where $\gamma \approx 0.5772$ is Euler’s constant ( [9], p. 10).

Decision making: the ${H}_{0}$ assumption is rejected at the confidence level $\left(1-\alpha \right)\times \mathrm{100\%}$, if ${Z}_{1}>{z}_{\alpha /2}$, where ${z}_{\alpha /2}$ is the quantile of $\mathcal{N}\left(\mathrm{0,1}\right)$ at the risk threshold $\alpha $. For $\alpha =0.05$, we have ${z}_{\alpha /2}=1.96$.

2) Coefficient of variation test:

${T}_{2}=\frac{\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}{\left({\stackrel{^}{\epsilon}}_{i}^{2}-\stackrel{\xaf}{\epsilon}\right)}^{2}}}{{\stackrel{\xaf}{\epsilon}}^{2}}$ (16)

where $\stackrel{\xaf}{\epsilon}=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}}{\stackrel{^}{\epsilon}}_{i}^{2}$.

${Z}_{2}=\frac{\sqrt{n}\left({T}_{2}-2\right)}{\sqrt{24}}$ follows the standard normal distribution $\mathcal{N}\left(\mathrm{0,1}\right)$ ( [9], p. 11).

Decision making: the ${H}_{0}$ assumption is rejected at the confidence level $\left(1-\alpha \right)\times \mathrm{100\%}$, if ${Z}_{2}>{z}_{\alpha /2}$.
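A sketch of the two statistics for a simple regression, standardizing with the asymptotic variances $\pi^2/2-2$ and 24 (dividing by their square roots is our reading of the standardization; `li_yao_statistics` is an illustrative name):

```python
import numpy as np

def li_yao_statistics(x, y):
    """Li-Yao T1 (Equation (15)) and T2 (Equation (16)) with their
    standardized versions Z1 and Z2."""
    n = len(y)
    # Squared OLS residuals
    X = np.column_stack([np.ones(n), x])
    a = np.linalg.lstsq(X, y, rcond=None)[0]
    e2 = (y - X @ a) ** 2
    # T1 = log of (arithmetic mean / geometric mean) of squared residuals
    t1 = np.log(e2.mean()) - np.mean(np.log(e2))
    # T2 = squared coefficient of variation of the squared residuals
    ebar = e2.mean()
    t2 = np.mean((e2 - ebar) ** 2) / ebar ** 2
    gamma = 0.5772156649                       # Euler's constant
    z1 = np.sqrt(n) * (t1 - (np.log(2) + gamma)) / np.sqrt(np.pi ** 2 / 2 - 2)
    z2 = np.sqrt(n) * (t2 - 2) / np.sqrt(24)
    return z1, z2
```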

This last test illustrates a trend toward using the coefficient of variation in the detection of heteroskedasticity.

4. Application of the Equality Test of Coefficients of Variation to the Heteroskedasticity Test

4.1. Our Approach

In this section, we will show that the test of equality of coefficients of variation allows us to detect the existence of heteroskedasticity. The steps of our approach can be summarized as follows:

1) Estimate the parameter a of the regression model of Y to X, noted as $\stackrel{^}{a}$.

2) Estimate the model’s residuals: $\stackrel{^}{\epsilon}=Y-X\stackrel{^}{a}$.

3) Calculate the squared residuals: ${\stackrel{^}{\epsilon}}^{2}$.

4) As in the Goldfeld-Quandt method, divide the squared residuals into two groups:

${\stackrel{^}{\epsilon}}_{1}^{2}=\left(\begin{array}{c}{e}_{1}^{2}\\ {e}_{2}^{2}\\ \vdots \\ {e}_{{n}_{1}}^{2}\end{array}\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\stackrel{^}{\epsilon}}_{2}^{2}=\left(\begin{array}{c}{e}_{{n}_{2}+1}^{2}\\ {e}_{{n}_{2}+2}^{2}\\ \vdots \\ {e}_{n}^{2}\end{array}\right)$

where ${n}_{1}=n/3$ and ${n}_{2}=2n/3$.

5) Calculate Johannes Forkman’s statistic ( [7], p. 10):

${F}^{\mathrm{*}}=\frac{{c}_{1}^{2}/\left[1+{c}_{1}^{2}\left({n}_{1}-1\right)/{n}_{1}\right]}{{c}_{2}^{2}/\left[1+{c}_{2}^{2}\left({n}_{2}-1\right)/{n}_{2}\right]}$ (17)

where ${c}_{i}={s}_{i}/{m}_{i}$ for $i=\stackrel{\xaf}{1,2}$,

${m}_{1}=\frac{1}{{n}_{1}}{\displaystyle {\sum}_{i=1}^{{n}_{1}}}{e}_{i}^{2}$, ${m}_{2}=\frac{1}{n-{n}_{2}}{\displaystyle {\sum}_{i={n}_{2}+1}^{n}}{e}_{i}^{2}$,

${s}_{1}=\sqrt{\frac{1}{{n}_{1}-1}{\displaystyle {\sum}_{i=1}^{{n}_{1}}}{\left({e}_{i}^{2}-{m}_{1}\right)}^{2}}$ and ${s}_{2}=\sqrt{\frac{1}{n-{n}_{2}-1}{\displaystyle {\sum}_{i={n}_{2}+1}^{n}}{\left({e}_{i}^{2}-{m}_{2}\right)}^{2}}$.

Decision-making: if ${F}^{*}<{F}_{\left({n}_{1}-1,n-{n}_{2}-1,\alpha \right)}$, then we accept ${H}_{0}$ at the confidence level $\left(1-\alpha \right)\times \mathrm{100\%}$, where ${F}_{\left({n}_{1}-1,n-{n}_{2}-1,\alpha \right)}$ is the critical value at level $\alpha$ of the F-distribution with ${n}_{1}-1$ and $n-{n}_{2}-1$ degrees of freedom.

We chose Forkman’s statistic because ${F}^{\mathrm{*}}$ is stable for all ${n}_{i}\ge 3$, where $i=\stackrel{\xaf}{\mathrm{1,2}}$ ( [7], p. 11).
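The five steps of our approach might be sketched as follows for a simple regression. The data are sorted by x first, as in the Goldfeld-Quandt method (an assumption, since the steps above leave the ordering implicit), and the actual sizes of the two thirds are used in Forkman's correction factors; the function name is illustrative:

```python
import numpy as np

def forkman_heteroskedasticity(x, y):
    """Proposed approach: split the squared OLS residuals into the
    first and last thirds and compare their coefficients of variation
    with Forkman's F* (Equation (17))."""
    n = len(y)
    order = np.argsort(x)                 # sort by the suspected variable
    x, y = x[order], y[order]
    # Steps 1-3: OLS fit and squared residuals
    X = np.column_stack([np.ones(n), x])
    a = np.linalg.lstsq(X, y, rcond=None)[0]
    e2 = (y - X @ a) ** 2
    # Step 4: first and last thirds
    n1, n2 = n // 3, 2 * n // 3
    g1, g2 = e2[:n1], e2[n2:]
    # Step 5: Forkman's F* from the two coefficients of variation
    c1 = g1.std(ddof=1) / g1.mean()
    c2 = g2.std(ddof=1) / g2.mean()
    k1, k2 = len(g1), len(g2)
    num = c1 ** 2 / (1 + c1 ** 2 * (k1 - 1) / k1)
    den = c2 ** 2 / (1 + c2 ** 2 * (k2 - 1) / k2)
    return num / den
```

The returned value is then compared to the F critical value in the decision rule above.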

4.2. Monte Carlo Simulation

Now, we will test the robustness and performance of the measures proposed in the literature and of our own proposal.

4.2.1. Methodology

Like the Gleisjer method, our simulation consists of generating two variables X and Y of size $n\in \left\{\mathrm{15;30;40;50;80;100}\right\}$, such that $Y=b+aX+\epsilon$ and $\epsilon={a}_{0}+{a}_{1}\cdot h\left(X\right)$ (see Section 3.3). Thus, we consider 3 forms of heteroskedasticity: 1) $h\left(X\right)=X$, 2) $h\left(X\right)=\sqrt{X}$ and 3) $h\left(X\right)=1/X$ ( [1], p. 151).

Moreover, in order to enrich the forms of heteroskedasticity studied, we also take the three other forms considered by Li and Yao: 4) $h\left(X\right)={g}_{i}\,\mathrm{exp}\left(c{X}_{i}\right)$, 5) $h\left(X\right)={g}_{i}{\left(1+c\,\mathrm{sin}\left(10{X}_{i}\right)\right)}^{2}$ and 6) $h\left(X\right)={g}_{i}{\left(1+c{X}_{i}\right)}^{2}$, where ${g}_{i}$ is a random variable following the standard normal distribution $\mathcal{N}\left(\mathrm{0,1}\right)$ ( [9], p. 15).

In this simulation, we consider only the simple regression model. We repeat this test $m=100$ times and count the number k of times the test rejects the ${H}_{0}$ hypothesis at the 95% confidence level. Then, the probability $p=k/m$ is calculated.

As p is a random variable, we repeat these procedures 1000 times, then calculate $\stackrel{\xaf}{p}=\left({\displaystyle {\sum}_{i=1}^{1000}}{p}_{i}\right)/1000$. We place ourselves in the case where the error is significantly non-negligible (value of ${a}_{1}$ sufficiently different from 0).

So, if $\stackrel{\xaf}{p}>0.05$, then the test is considered robust. In addition, the measure with the highest $\stackrel{\xaf}{p}$ is considered the most sensitive to the type of error i considered ( $i=\stackrel{\xaf}{\mathrm{1,6}}$ ).

As we want to test the robustness of each test, it is best to check whether the test in question detects small variations. In our simulations, we took $a=3$, $b=2$, ${a}_{0}=2$ and ${a}_{1}=c=1$. We took ${a}_{1}=1$ because it is different from 0 while remaining a deliberately low value.
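The simulation loop can be sketched as below, reading $\epsilon = a_0 + a_1 h(X)$ as standard-normal noise scaled by $a_0 + a_1 h(X)$ (the random factor is left implicit in the text, so this is an assumption); `rejection_rate` and its `test` callback are illustrative names:

```python
import numpy as np

def rejection_rate(test, h, n=50, m=100, seed=0):
    """Monte Carlo sketch: generate Y = b + aX + eps with noise scaled
    by a0 + a1*h(X), run `test` m times and return the rejection
    fraction p = k/m. `test(x, y)` returns True when H0 is rejected."""
    rng = np.random.default_rng(seed)
    a, b, a0, a1 = 3.0, 2.0, 2.0, 1.0     # parameter values used in the paper
    k = 0
    for _ in range(m):
        x = rng.uniform(1.0, 10.0, n)     # assumed design for X
        eps = (a0 + a1 * h(x)) * rng.standard_normal(n)
        y = b + a * x + eps
        if test(x, y):
            k += 1
    return k / m
```

Averaging this rate over 1000 replications (with different seeds) gives the $\stackrel{\xaf}{p}$ used in the comparison.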

In Table 1, the probabilities ${p}_{1}$, ${p}_{2}$, ${p}_{3}$, ${p}_{4}$, ${p}_{5}$, ${p}_{6}$, ${p}_{7}$ and ${p}_{8}$ correspond respectively to the rejection probabilities of the null hypothesis ${H}_{0}$ for the Breusch-Pagan, Goldfeld-Quandt, Gleisjer, White, Bartlett, Levene, and Li and Yao tests, and for our proposal.

4.2.2. Simulation Results

From Table 1, we obtain the classifications in Tables 2-6.

4.3. Discussion

First of all, these simulations show clearly that the Levene test is the most robust and the most sensitive of all the tests considered in this study.

However, these results show that, among the six forms of heteroskedasticity proposed, our proposal can detect four for $n<50$ and five for $n\ge 50$.

The only form of heteroskedasticity that our proposal fails to detect is $h\left(X\right)=g{\left(1+c\,\mathrm{sin}\left(10X\right)\right)}^{2}$, whether for $n<50$ or $n\ge 50$.

Furthermore, it is the second-best test for detecting heteroskedasticity of type $h\left(X\right)=1/X$ for $n\ge 50$.

In addition, our proposal appears better than the Li and Yao test, which is, to our knowledge, the first attempt to use the coefficient of variation to detect heteroskedasticity.

Table 1. Results of Monte Carlo simulations.

Table 2. Classification of tests in ascending order according to their numbers of wrong acceptances of H_{0}.

Table 3. Classification of tests in ascending order according to their sensitivities to the three types of heteroskedasticity proposed by Gleisjer, for $n<50$.

Table 4. Classification of tests in ascending order according to their sensitivities to the three types of heteroskedasticity proposed by Gleisjer, for $n\ge 50$.

Table 5. Classification of tests in ascending order according to their sensitivities to the three types of heteroskedasticity considered by Li and Yao, for $n<50$.

Table 6. Classification of tests in ascending order according to their sensitivities to the three types of heteroskedasticity considered by Li and Yao, for $n\ge 50$.

Finally, these results help to confirm the weakness of Bartlett's test. Indeed, we see from these results that this test is less robust than our proposal.

5. Conclusions

In this paper, we proposed a technique to detect the existence of heteroskedasticity by an equality test of coefficients of variation. To set out the state of the art, we first recalled some tests for detecting heteroskedasticity existing in the literature, such as the Breusch-Pagan test, the Goldfeld-Quandt test, the Gleisjer test, the White test and some heteroskedasticity tests based on an analysis of variance (ANOVA): Bartlett's test, Levene's test, Brown-Forsythe's test, Hartley's test and Cochran's test.

Next, we also presented the heteroskedasticity test of Zhaoyuan Li and Jianfeng Yao which, to the best of our knowledge, was the first attempt to use coefficients of variation to determine the existence of heteroskedasticity.

Among the equality tests of coefficients of variation available in the literature, we considered Forkman's test to illustrate our approach, as it is robust and stable for samples of size $n\ge 3$. Our performance tests have shown that our approach can detect five of the six types of heteroskedasticity considered in this paper.

At the end of this analysis, we affirm that the equality test of coefficients of variation allows us to detect possible heteroskedasticity in a simple regression model. Thus, our study contributes to the reuse of several equality tests of coefficients of variation that have already appeared in the literature.

Acknowledgements

We thank the Editor and the referee for their comments and assistance.

Conflicts of Interest

The authors declare that there is no conflict of interests regarding the publication of this paper.

[1] Bourbonnais, R. (2015) Économétrie: Cours et exercices corrigés. 9th Edition, Dunod, Paris. https://docplayer.fr/66598708-Econometrie-cours-et-exercices-corriges-regis-bourbonnais-9-e-edition.html

[2] Leblond, S. (2003) Guide d’économétrie appliquée. Université de Montréal, Montréal. http://www2.cirano.qc.ca/~mccauslw/ECN3949/GuideEconometrie.pdf

[3] Hamisultane, H. (2002) Économétrie. Licence course notes, France. https://halshs.archives-ouvertes.fr/cel-01261163

[4] Curto, J.D. and Pinto, J.C. (2009) The Coefficient of Variation Asymptotic Distribution in the Case of Non-IID Random Variables. Journal of Applied Statistics, 36, 21-32. https://doi.org/10.1080/02664760802382491

[5] Pardo, M.C. and Pardo, J.A. (2000) Use of Rényi Divergence to Test for the Equality of the Coefficients of Variation. Journal of Computational and Applied Mathematics, 116, 93-104. https://doi.org/10.1016/S0377-0427(99)00312-X

[6] Gokpinar, E. and Gokpinar, F. (2015) A Computational Approach for Testing Equality of Coefficients of Variation in k Normal Populations. Hacettepe Journal of Mathematics and Statistics, 44, 1197-1213. https://doi.org/10.15672/HJMS.2014317482

[7] Krishnamoorthy, K. and Lee, M. (2013) Improved Tests for the Equality of Normal Coefficients of Variation. Computational Statistics, Springer, New York. https://doi.org/10.1007/s00180-013-0445-2

[8] Banik, S., Kibria, B.M.G. and Sharma, D. (2012) Testing the Population Coefficient of Variation. Journal of Modern Applied Statistical Methods, 11, 325-335. http://digitalcommons.wayne.edu/jmasm/vol11/iss2/5 https://doi.org/10.22237/jmasm/1351742640

[9] Li, Z. and Yao, J. (2017) Testing for Heteroscedasticity in High-Dimensional Regressions. Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong. https://arxiv.org/abs/1510.00097 https://doi.org/10.1016/j.ecosta.2018.01.001

[10] Crépon, B. (2003) Économétrie linéaire. http://www.crest.fr/ckfinder/userfiles/files/Pageperso/crepon/poly20052006.pdf

[11] Chekroun, A. (2017) Statistiques descriptives et exercices. Université Abou Bekr Belkaid, Tlemcen, Algérie. https://www.coursehero.com/file/27893347/chekroun-statistiquespdf/

[12] Bertoneche, M. (1979) Existence d’hétéroscédasticité dans le modèle de marché appliqué aux bourses européennes de valeurs mobilières. Journal de la société statistique de Paris, 120, 270-276. http://www.numdam.org/item/JSFS_1979__120_4_270_0

[13] Gastwirth, J.L., Gel, Y.R. and Miao, W. (2009) The Impact of Levene’s Test of Equality of Variances on Statistical Theory and Practice. Statistical Science, 24, 343-360. https://doi.org/10.1214/09-STS301

[14] Vessereau, A. (1974) Essais interlaboratoires pour l’estimation de la fidélité des méthodes d’essais. Revue de statistique appliquée, 22, 5-48. http://www.numdam.org/item/RSA_1974__22_1_5_0/

