Confidence Intervals for the Mean of Non-Normal Distribution: Transform or Not to Transform

Abstract

In many areas of applied statistics, confidence intervals for the mean of the population are of interest. Confidence intervals are typically constructed assuming normality, although non-normally distributed data are a common occurrence in practice. Given a large enough sample size, confidence intervals for the mean can be constructed by applying the Central Limit Theorem or by the bootstrap method. Another commonly used method in practice is the back-transformation method, which involves the following three steps. First, apply a transformation to the data such that the transformed data are normally distributed. Second, obtain confidence intervals for the transformed mean in the usual manner, which assumes normality. Third, apply the back-transformation to obtain confidence intervals for the mean of the original, non-transformed distribution. The parametric Wald method and a small-sample likelihood-based third order method, which can address non-normality, are also reviewed in this paper. Our simulation results suggest that common approaches such as back-transformation give erroneous and misleading results even when the sample size is large. However, the likelihood-based third order method gives extremely accurate results even when the sample size is small.

Share and Cite:

Pek, J., Wong, A. and Wong, O. (2017) Confidence Intervals for the Mean of Non-Normal Distribution: Transform or Not to Transform. Open Journal of Statistics, 7, 405-421. doi: 10.4236/ojs.2017.73029.

1. Introduction

In the last two decades, there has been a push in psychological science to improve research reporting with an emphasis on effect size and confidence interval reporting (see American Educational Research Association [1]; Cumming [2]; Wilkinson and the Task Force on Statistical Inference [3]). Effect sizes communicate the magnitude and direction of a practically important effect (e.g., treatment decreased depression scores by 13%), and confidence intervals communicate the precision of this effect estimate. The importance of confidence intervals, their basic construction, and interpretation have thus been the focus of several influential pedagogical articles (e.g., see Cumming and Fidler [4]; Cumming and Finch [5]; Greenland et al. [6]).

Most, if not all, modern introductory statistics textbooks review and describe the construction of confidence intervals (e.g., see Moore et al. [7]). Let $(x_1, \ldots, x_n)$ be a sample obtained from a normally distributed population with mean $\mu$ and variance $\sigma^2$. Then a $(1-\alpha)100\%$ confidence interval for $\mu$ can be calculated by

$$\left(\bar{x} - t^* \frac{s_x}{\sqrt{n}},\; \bar{x} + t^* \frac{s_x}{\sqrt{n}}\right) \tag{1}$$

where

$$\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}, \qquad s_x^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n-1},$$

and $t^*$ is the $(1-\alpha/2)100^{\text{th}}$ percentile of the Student $t$ distribution with $(n-1)$ degrees of freedom. Moreover, when the sample size $n$ is large (usually stated as $n$ larger than 30), the $(1-\alpha)100\%$ confidence interval for $\mu$ can still be obtained from (1) except that $t^*$ is replaced by $z^*$, the $(1-\alpha/2)100^{\text{th}}$ percentile of the standard normal distribution.
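As a quick illustration, interval (1) can be computed directly in R; the data and the choice of $\alpha$ below are ours and purely illustrative.

```r
# Hypothetical normally distributed sample; alpha = 0.05 gives a 95% interval
set.seed(1)
x <- rnorm(25, mean = 10, sd = 2)
alpha <- 0.05
n <- length(x)
tstar <- qt(1 - alpha / 2, df = n - 1)   # (1 - alpha/2)100th percentile of t with n-1 df
c(mean(x) - tstar * sd(x) / sqrt(n),
  mean(x) + tstar * sd(x) / sqrt(n))     # matches t.test(x)$conf.int
```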

The fundamental assumption underlying the construction of this confidence interval is that the data are normally distributed. However, collected data are usually non-normally distributed in practice (for examples in psychology, see Cain et al. [8]; Micceri [9]). In public health research, Bland and Altman [10] reported that serum triglyceride measurements are distributed with positive skewness. In biology, McDonald [11] reported that the number of Eastern mudminnows in Maryland streams is non-normally distributed.

In this paper, we compare various methods for constructing confidence intervals when data are non-normally distributed. Three of the most popular and commonly used methods, the method based on the Central Limit Theorem, the bootstrap method, and the back-transformation method, are reviewed in Section 2. The parametric Wald method and the likelihood-based third order method are also discussed in Section 2. Note that the popular back-transformation method requires the existence of a transformation such that the transformed data are normally distributed. The selection of such a transformation by the Box-Cox transformation and Tukey's ladder of power transformation is briefly discussed in Section 2. Two empirical examples are presented in Section 3 to illustrate that confidence intervals based on the different methods discussed in Section 2 can be vastly different. Simulation results are presented in Section 4 to compare the accuracy of the methods discussed in this paper. They illustrate that the likelihood-based third order method gives extremely accurate coverage probability even when the sample size is small; that the Wald method, the Central Limit Theorem method, and the bootstrap method all perform poorly when the sample size is small, with performance improving as the sample size increases; and that the popular back-transformation method should not be used because it does not construct the confidence interval for the correct parameter. Finally, some concluding remarks are given in Section 5.

2. Methodology

This section reviews four commonly used methods, namely the Central Limit Theorem, bootstrap, back-transformation, and Wald methods, for obtaining a confidence interval for the mean of a non-normal distribution. A very accurate likelihood-based method is also introduced in this section.

2.1. Central Limit Theorem Method

Let $(x_1, \ldots, x_n)$ be a sample from a non-normal distribution with mean $\psi$. When the sample size $n$ is large, the Central Limit Theorem gives

$$\frac{\bar{X} - \psi}{\sqrt{\operatorname{var}(\bar{X})}} \sim N(0,1)$$

where $\bar{X} = \frac{\sum_{i=1}^{n} X_i}{n}$ and $\operatorname{var}(\bar{X}) = \frac{\operatorname{var}(X)}{n}$. Since $\bar{x}$ and $s_x^2$ are the unbiased estimates of $\psi$ and $\operatorname{var}(X)$, respectively, by the Central Limit Theorem, an approximate $(1-\alpha)100\%$ confidence interval for $\psi$ is

$$\left(\bar{x} - z^* \frac{s_x}{\sqrt{n}},\; \bar{x} + z^* \frac{s_x}{\sqrt{n}}\right) \tag{2}$$

where $z^*$ is the $(1-\alpha/2)100^{\text{th}}$ percentile of the standard normal distribution.
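The same computation applies to interval (2); the skewed sample below is simulated by us for illustration only.

```r
# Hypothetical positively skewed (lognormal) sample and a 95% CLT interval
set.seed(2)
x <- rlnorm(100, meanlog = 0, sdlog = 0.5)
zstar <- qnorm(0.975)          # (1 - alpha/2)100th percentile of the standard normal
c(mean(x) - zstar * sd(x) / sqrt(length(x)),
  mean(x) + zstar * sd(x) / sqrt(length(x)))
```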

2.2. Bootstrap Method

The bootstrap method is a popular non-parametric method, which does not require any distributional assumptions. Efron and Tibshirani [12] provide a detailed review of the bootstrap method. The following is an algorithmic approach to obtaining a $(1-\alpha)100\%$ percentile bootstrap confidence interval for the population mean, $\psi$.

Sample: $(x_1, \ldots, x_n)$

Step 1: Resample the observed sample with replacement and calculate the sample mean for this bootstrap sample.

Step 2: Repeat Step 1 $B$ times, where, typically, $B \geq 200$.

Step 3: Sort the $B$ bootstrapped sample means; the $(\alpha/2)100^{\text{th}}$ and $(1-\alpha/2)100^{\text{th}}$ percentiles give the $(1-\alpha)100\%$ percentile bootstrap confidence interval for the population mean.

Note that as with the Central Limit Theorem method, the bootstrap method requires the observed sample size to be large so as to be representative of the population.
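A compact R sketch of Steps 1-3 follows; the sample and the number of resamples are illustrative choices of ours.

```r
# Percentile bootstrap interval for the mean of a hypothetical skewed sample
set.seed(3)
x <- rlnorm(100, meanlog = 0, sdlog = 0.5)
B <- 5000
boot_means <- replicate(B, mean(sample(x, replace = TRUE)))  # Steps 1 and 2
quantile(boot_means, probs = c(0.025, 0.975))                # Step 3: 95% interval
```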

2.3. Back-Transformation Method

Recall that $X$ is a non-normally distributed random variable with mean $\psi$. Assume there exists a transformation $g(\cdot)$ such that $Y = g(X)$ is normally distributed with mean $\mu$ and variance $\sigma^2$. By the delta method,

$$\psi = E(X) = E\left[g^{-1}(Y)\right] \approx g^{-1}(\mu),$$

and an approximate $(1-\alpha)100\%$ confidence interval for $\psi$ from (1) is

$$\left(g^{-1}\!\left(\bar{y} - t^* \frac{s_y}{\sqrt{n}}\right),\; g^{-1}\!\left(\bar{y} + t^* \frac{s_y}{\sqrt{n}}\right)\right). \tag{3}$$

It is important to note that (3) could be misleading because $g^{-1}(\mu)$ can be very different from $\psi$. For example, if $X$ follows a lognormal$(\mu, \sigma^2)$ distribution, then $Y = \log(X)$ is distributed as $N(\mu, \sigma^2)$. It follows that the delta method gives $\psi = E(X) \approx \exp(\mu)$. However, as shown in Table 1, $\psi = \exp(\mu + \sigma^2/2)$, which can be quite different from $\exp(\mu)$, especially when $\sigma^2$ is large. Consider another example where $Y = \sqrt{X}$ follows a $N(\mu, \sigma^2)$ distribution. Here, the delta method gives $\psi \approx \mu^2$. However, Table 1 shows that $\psi = \mu^2 + \sigma^2$, which can be quite different from $\mu^2$, especially when $\sigma^2$ is large.
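A quick numerical check of the lognormal example (our illustration, with arbitrary parameter values) makes the gap concrete.

```r
# For X ~ lognormal(mu, sigma2): back-transformation targets exp(mu),
# while the mean of X is exp(mu + sigma2/2)
mu <- 0; sigma2 <- 1
exp(mu)                # 1.00 -- what back-transforming the mean of log(X) estimates
exp(mu + sigma2 / 2)   # about 1.65 -- the actual mean of X
mean(rlnorm(1e6, meanlog = mu, sdlog = sqrt(sigma2)))  # Monte Carlo check, about 1.65
```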

The rest of this subsection provides a systematic way of choosing the transformation $g(\cdot)$. In practice, the most common simple transformations are the logarithmic transformation and the square root transformation. Box and Cox [13] proposed a more complicated transformation, which requires the determination of a power parameter. Similarly, Tukey [14] suggested a ladder of power transformation, which also requires the determination of a power parameter. We review Tukey's method below. With an observed sample $(x_1, \ldots, x_n)$, our aim is to obtain confidence intervals for $\psi$. In this paper, focus is placed on the two most commonly used transformations in practice: the logarithmic transformation and the square root transformation. Note that the transformation methods discussed can be generalized to any known transformation in theory (cf. the Box-Cox or Tukey transformations).

When observed data are non-normally distributed, a common approach is to first apply a transformation such that the transformed data become approximately normally distributed. In the statistical literature, two very similar families of transformations are frequently discussed: the Box-Cox transformation and Tukey's ladder of power transformation. In particular, Osborne [15] gives a detailed discussion of the application of the Box-Cox transformation. Mathematically, the Box-Cox transformation and Tukey's ladder of power transformation are very similar. Because Tukey's ladder of power transformation is easier to interpret than the Box-Cox transformation, we review the ladder of power transformation below and suggest criteria for choosing an appropriate transformation to address non-normally distributed data.

Table 1. Transformation and the parameter of interest.

Tukey’s ladder of power transformation takes the form

$$Y = \begin{cases} \log(X) & \text{if } \lambda = 0 \\ X^{\lambda} & \text{if } \lambda \neq 0 \end{cases}$$

where $\lambda$ is called the power parameter of this transformation and is chosen such that $Y$ is approximately normally distributed. Moreover, $\lambda$ should be chosen such that the power parameter is easy to interpret. Note that $\lambda = 1$ is equivalent to no transformation. In practice, the popular reciprocal transformation, logarithmic transformation, and square root transformation correspond to $\lambda = -1$, $0$, and $\frac{1}{2}$, respectively.

Table 1 presents the mean of the distribution prior to transformation, $\psi$, in terms of $\mu$ and $\sigma^2$ based on the type of transformation used. Since $\psi$ does not exist for the reciprocal transformation, this transformation is not considered in this paper.

With an observed sample, we suggest the choice of λ be based on three criteria:

1. de-trended normal quantile-quantile (Q-Q) plot,

2. p-value of the Shapiro-Wilk test of normality, and

3. skewness.

First, when the de-trended normal Q-Q plot deviates from the horizontal reference line, which indicates identical quantiles between the data and a theoretical normal distribution, the plot suggests that the data are likely non-normally distributed. Second, simulation studies by Razali and Wah [16] illustrate that the Shapiro-Wilk test is the most powerful among the statistical tests for normality they compared. Under the assumption of a normal distribution, the smaller the p-value associated with the Shapiro-Wilk test, the more evidence there is against the normality assumption. Thus, the transformation which gives the largest p-value of the Shapiro-Wilk test is associated with the least evidence against the transformed data being normally distributed. Finally, with regard to skewness, the normal distribution has skewness 0. In this vein, the transformation which results in a skewness value closest to 0 is most symmetric and would be the preferred transformation.
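The three criteria can be checked in a few lines of R; the data, the candidate powers, and the use of the e1071 package for skewness are our illustrative choices, not part of the paper.

```r
# Compare candidate Tukey powers by Shapiro-Wilk p-value and skewness
library(e1071)   # provides skewness()
set.seed(4)
x <- rlnorm(50, meanlog = 1, sdlog = 0.6)          # hypothetical positively skewed data

tukey <- function(x, lambda) if (lambda == 0) log(x) else x^lambda
for (lambda in c(1, 1/2, 0)) {                     # none, square root, logarithmic
  y <- tukey(x, lambda)
  cat(sprintf("lambda = %4.1f  Shapiro-Wilk p = %5.3f  skewness = %6.3f\n",
              lambda, shapiro.test(y)$p.value, skewness(y)))
}
# Prefer the lambda with the largest p-value and skewness closest to 0;
# a de-trended normal Q-Q plot of y provides the visual check.
```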

2.4. Wald Method

As in the previous subsection, we assume that $X$ is a non-normally distributed random variable with mean $\psi$ and that there exists a transformation $g(\cdot)$ such that $Y = g(X)$ is normally distributed with mean $\mu$ and variance $\sigma^2$. Moreover, $\psi = \psi(\mu, \sigma^2)$.

The log-likelihood function concerning Y can be written as

$$l(\mu, \sigma^2) = a - \frac{n}{2}\log\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - \mu)^2 \tag{4}$$

where $a$ is an additive constant. Without loss of generality, $a$ is set to zero hereafter. The overall maximum likelihood estimate (MLE), denoted by $(\hat{\mu}, \hat{\sigma}^2)$, can be obtained by solving

$$\left.\frac{\partial l(\mu, \sigma^2)}{\partial \mu}\right|_{\hat{\mu}, \hat{\sigma}^2} = \frac{1}{\hat{\sigma}^2}\sum_{i=1}^{n}(y_i - \hat{\mu}) = 0$$

$$\left.\frac{\partial l(\mu, \sigma^2)}{\partial \sigma^2}\right|_{\hat{\mu}, \hat{\sigma}^2} = -\frac{n}{2\hat{\sigma}^2} + \frac{1}{2\hat{\sigma}^4}\sum_{i=1}^{n}(y_i - \hat{\mu})^2 = 0.$$

Hence, we have

$$\hat{\mu} = \bar{y}, \qquad \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{\mu})^2 = \frac{(n-1)s_y^2}{n}.$$

The observed information matrix is the matrix of negative second derivatives of the log-likelihood function with respect to the parameters:

$$j(\mu, \sigma^2) = \begin{pmatrix} \dfrac{n}{\sigma^2} & \dfrac{1}{\sigma^4}\sum_{i=1}^{n}(y_i - \mu) \\[2ex] \dfrac{1}{\sigma^4}\sum_{i=1}^{n}(y_i - \mu) & -\dfrac{n}{2\sigma^4} + \dfrac{1}{\sigma^6}\sum_{i=1}^{n}(y_i - \mu)^2 \end{pmatrix}.$$

The variance-covariance matrix for $(\hat{\mu}, \hat{\sigma}^2)$ can be approximated by the inverse of Fisher's expected information matrix, $\{E[j(\mu, \sigma^2)]\}^{-1}$, which, in general, can be difficult to obtain in practice. Nevertheless, the variance-covariance matrix for $(\hat{\mu}, \hat{\sigma}^2)$ can be approximated by the inverse of the observed information evaluated at the MLE, $j^{-1}(\hat{\mu}, \hat{\sigma}^2)$, where

$$j(\hat{\mu}, \hat{\sigma}^2) = \begin{pmatrix} \dfrac{n}{\hat{\sigma}^2} & 0 \\[1ex] 0 & \dfrac{n}{2\hat{\sigma}^4} \end{pmatrix}.$$

It is well known that $(\hat{\mu}, \hat{\sigma}^2)$ is asymptotically distributed as normal with mean $(\mu, \sigma^2)$ and variance $j^{-1}(\hat{\mu}, \hat{\sigma}^2)$.

Recall that the parameter of interest is $\psi = \psi(\mu, \sigma^2)$, and we denote $\hat{\psi} = \psi(\hat{\mu}, \hat{\sigma}^2)$. By the delta method,

$$\widehat{\operatorname{var}}(\hat{\psi}) \approx \left(\frac{\partial \psi(\hat{\mu}, \hat{\sigma}^2)}{\partial(\mu, \sigma^2)}\right)' j^{-1}(\hat{\mu}, \hat{\sigma}^2) \left(\frac{\partial \psi(\hat{\mu}, \hat{\sigma}^2)}{\partial(\mu, \sigma^2)}\right)$$

where

$$\frac{\partial \psi(\hat{\mu}, \hat{\sigma}^2)}{\partial(\mu, \sigma^2)} = \left.\begin{pmatrix} \dfrac{\partial \psi(\mu, \sigma^2)}{\partial \mu} \\[2ex] \dfrac{\partial \psi(\mu, \sigma^2)}{\partial \sigma^2} \end{pmatrix}\right|_{(\hat{\mu}, \hat{\sigma}^2)}.$$

Thus, an approximate $(1-\alpha)100\%$ confidence interval for $\psi$ is

$$\left(\hat{\psi} - z^*\sqrt{\widehat{\operatorname{var}}(\hat{\psi})},\; \hat{\psi} + z^*\sqrt{\widehat{\operatorname{var}}(\hat{\psi})}\right). \tag{5}$$

For the case of the logarithmic transformation (i.e., Tukey's ladder of power transformation with $\lambda = 0$), the parameter of interest is $\psi = \exp(\gamma)$, where $\gamma = \mu + \sigma^2/2$. Therefore, by the Wald method, a $(1-\alpha)100\%$ confidence interval for $\gamma$ is

$$\left(\hat{\gamma} - z^*\sqrt{\widehat{\operatorname{var}}(\hat{\gamma})},\; \hat{\gamma} + z^*\sqrt{\widehat{\operatorname{var}}(\hat{\gamma})}\right)$$

where $\hat{\gamma} = \hat{\mu} + \hat{\sigma}^2/2$ and $\widehat{\operatorname{var}}(\hat{\gamma}) \approx \dfrac{\hat{\sigma}^2}{n} + \dfrac{\hat{\sigma}^4}{2n}$. Thus, an approximate $(1-\alpha)100\%$ confidence interval for $\psi$ is

$$\left(\exp\left\{\hat{\gamma} - z^*\sqrt{\widehat{\operatorname{var}}(\hat{\gamma})}\right\},\; \exp\left\{\hat{\gamma} + z^*\sqrt{\widehat{\operatorname{var}}(\hat{\gamma})}\right\}\right).$$

For the case of the square root transformation (i.e., Tukey's [14] ladder of power transformation with $\lambda = \frac{1}{2}$), the parameter of interest is $\psi = \mu^2 + \sigma^2$.

Therefore, an approximate $(1-\alpha)100\%$ confidence interval for $\psi$ is given by (5), where

$$\hat{\psi} = \hat{\mu}^2 + \hat{\sigma}^2 \quad \text{and} \quad \widehat{\operatorname{var}}(\hat{\psi}) = \frac{2\hat{\sigma}^2(2\hat{\mu}^2 + \hat{\sigma}^2)}{n}.$$
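Both Wald intervals can be written as short R functions; this is a minimal sketch of ours (the function names are illustrative), where y holds the log- or square-root-transformed sample.

```r
# Wald interval for psi = exp(mu + sigma^2/2) when y = log(x)
wald_ci_log <- function(y, conf = 0.95) {
  n <- length(y); zs <- qnorm(1 - (1 - conf) / 2)
  mu <- mean(y); s2 <- mean((y - mu)^2)            # MLEs of mu and sigma^2
  g  <- mu + s2 / 2                                # gamma-hat
  se <- sqrt(s2 / n + s2^2 / (2 * n))              # sqrt of var-hat(gamma-hat)
  exp(c(g - zs * se, g + zs * se))
}

# Wald interval for psi = mu^2 + sigma^2 when y = sqrt(x), using Equation (5)
wald_ci_sqrt <- function(y, conf = 0.95) {
  n <- length(y); zs <- qnorm(1 - (1 - conf) / 2)
  mu <- mean(y); s2 <- mean((y - mu)^2)
  psi <- mu^2 + s2
  se  <- sqrt(2 * s2 * (2 * mu^2 + s2) / n)        # sqrt of var-hat(psi-hat)
  c(psi - zs * se, psi + zs * se)
}
```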

2.5. Likelihood-Based Third Order Method

Both the Central Limit Theorem method and the Wald method have a theoretical rate of convergence of $O(n^{-1/2})$, and both the back-transformation method and the bootstrap method have no known rate of convergence. In recent years, many methods have been developed to improve the rate of convergence of existing asymptotic methods. In this subsection, we review the modified signed log-likelihood ratio method by Barndorff-Nielsen [17]. The modified signed log-likelihood ratio statistic is defined as

$$r^* = r^*(\psi) = r(\psi) - \frac{1}{r(\psi)}\log\frac{r(\psi)}{q(\psi)} \tag{6}$$

where

$$r(\psi) = \operatorname{sign}(\hat{\psi} - \psi)\left\{2\left[l(\hat{\mu}, \hat{\sigma}^2) - l(\hat{\mu}_\psi, \hat{\sigma}^2_\psi)\right]\right\}^{1/2} \tag{7}$$

is the signed log-likelihood ratio statistic, $(\hat{\mu}_\psi, \hat{\sigma}^2_\psi)$ is the constrained MLE obtained by maximizing the log-likelihood function for a given $\psi$ value, and $q(\psi)$ is a statistic based on the log-likelihood function given in (4). Barndorff-Nielsen [17] showed that $r^*$ is asymptotically distributed as a standard normal distribution with a rate of convergence of $O(n^{-3/2})$. Thus, a $(1-\alpha)100\%$ confidence interval obtained from $r^*$ is $(\psi_L, \psi_U)$ such that $\psi_L$ and $\psi_U$ satisfy $|r^*(\psi_L)| \leq z^*$, $|r^*(\psi_U)| \leq z^*$, and $\psi_L < \psi_U$.

If the model is an exponential family model and the parameter of interest $\psi$ is a component of the canonical parameter, Fraser [18] showed that $q(\psi)$ is the standardized MLE statistic. With this idea in mind, for a general model, Fraser and Reid [19] first approximate the model by a tangent exponential model to obtain a locally defined canonical parameter. They then express the parameter of interest in terms of the locally defined canonical parameter and also derive the variance of the estimated parameter of interest in this locally defined canonical parameter scale. Thus, $q(\psi)$ is the standardized MLE statistic expressed in the locally defined canonical parameter scale, and the modified signed log-likelihood ratio statistic can be used to obtain confidence intervals for $\psi$. Details of this algorithmic approach to obtaining $r^*$ are outlined below.

Notation: $l(\theta)$ is the log-likelihood function;

$\theta$ is a $k$-dimensional vector of parameters;

$\varphi = \varphi(\theta)$ is a $k$-dimensional vector of canonical parameters for the exponential family model;

$\psi = \psi(\theta)$ is a scalar parameter of interest;

$(x_1, \ldots, x_n)$ is the observed data.

Aim: Inference about $\psi$.

Step 1: From the log-likelihood function, obtain the overall MLE $\hat{\theta}$; from it, $\hat{\psi} = \psi(\hat{\theta})$, $l(\hat{\theta})$, and $\hat{j} = j_{\theta\theta}(\hat{\theta})$ can be obtained.

Step 2: Apply the Lagrange multiplier technique to obtain the constrained MLE at $\psi = \psi_0$. More specifically, maximize

$$H(\theta, \lambda) = l(\theta) + \lambda\left(\psi(\theta) - \psi_0\right)$$

with respect to $(\theta, \lambda)$, where $\lambda$ is the Lagrange multiplier. Denote the result of the maximization by $(\tilde{\theta}_{\psi_0}, \tilde{\lambda})$.

Step 3: Define the tilted log-likelihood function as

$$\tilde{l}(\theta) = l(\theta) + \tilde{\lambda}\left(\psi(\theta) - \psi\right)$$

where $\psi$ is a fixed value. Obtain the constrained MLE $\tilde{\theta}_\psi$ either from the tilted log-likelihood function or from Step 2, along with $l(\tilde{\theta}_\psi) = \tilde{l}(\tilde{\theta}_\psi)$ and $\tilde{j}_{\theta\theta}(\tilde{\theta}_\psi)$, the matrix of negative second derivatives of the tilted log-likelihood function.

Step 4: The signed log-likelihood ratio statistic is

$$r = \operatorname{sign}(\hat{\psi} - \psi)\left\{2\left[l(\hat{\theta}) - l(\tilde{\theta}_\psi)\right]\right\}^{1/2}.$$

Step 5: Define

$$\chi(\theta) = \psi_\theta(\tilde{\theta}_\psi)\, \varphi_\theta^{-1}(\tilde{\theta}_\psi)\, \varphi(\theta)$$

where $\psi_\theta(\theta)$ is the first derivative of $\psi(\theta)$ with respect to $\theta$, and $\varphi_\theta(\theta)$ is the first derivative of $\varphi(\theta)$ with respect to $\theta$. This quantity is a recalibration of the parameter of interest $\psi$ in the canonical parameter $\varphi$ space.

Step 6: The quantity $\left|\chi(\hat{\theta}) - \chi(\tilde{\theta}_\psi)\right|$ measures the departure of $\hat{\psi}$ from $\psi$ in the $\varphi$ space.

Step 7: The estimated variance for the departure in φ space is given by

$$\widehat{\operatorname{var}}\left(\chi(\hat{\theta}) - \chi(\tilde{\theta}_\psi)\right) = \psi_\theta(\tilde{\theta}_\psi)\, \tilde{j}_{\theta\theta}^{-1}(\tilde{\theta}_\psi)\, \psi_\theta'(\tilde{\theta}_\psi)\; \frac{\left|\tilde{j}_{\theta\theta}(\tilde{\theta}_\psi)\right|\left|\varphi_\theta(\tilde{\theta}_\psi)\right|^{-2}}{\left|j_{\theta\theta}(\hat{\theta})\right|\left|\varphi_\theta(\hat{\theta})\right|^{-2}}.$$

Step 8: The standardized MLE departure under the φ scale is given by

$$q = \operatorname{sign}(\hat{\psi} - \psi)\, \frac{\left|\chi(\hat{\theta}) - \chi(\tilde{\theta}_\psi)\right|}{\sqrt{\widehat{\operatorname{var}}\left(\chi(\hat{\theta}) - \chi(\tilde{\theta}_\psi)\right)}}.$$

Step 9: The modified signed log-likelihood ratio statistic is given by

$$r^* = r - \frac{1}{r}\log\frac{r}{q}.$$

Although the algorithm involves many steps, it can easily be implemented in algebraic or statistical software such as MATLAB, Maple and R.
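To make the steps concrete, the following is a minimal R sketch of ours (not the authors' code) for the logarithmic case, where $Y = \log(X) \sim N(\mu, \sigma^2)$ and $\psi = \exp(\mu + \sigma^2/2)$. It takes $\varphi(\theta) = (\mu/\sigma^2, -1/(2\sigma^2))$ as the canonical parameter of the normal model; the function and variable names are illustrative assumptions.

```r
rstar_ci_lognormal <- function(y, conf = 0.95) {
  n <- length(y)
  loglik <- function(mu, s2) -n / 2 * log(s2) - sum((y - mu)^2) / (2 * s2)

  ## Step 1: overall MLE
  mu_hat  <- mean(y)
  s2_hat  <- mean((y - mu_hat)^2)
  psi_hat <- exp(mu_hat + s2_hat / 2)
  l_hat   <- loglik(mu_hat, s2_hat)

  ## Canonical parameter of the normal model and the derivatives used in Steps 5-7
  phi    <- function(mu, s2) c(mu / s2, -1 / (2 * s2))
  phi_th <- function(mu, s2) rbind(c(1 / s2, -mu / s2^2),
                                   c(0,       1 / (2 * s2^2)))
  psi_th <- function(mu, s2) { p <- exp(mu + s2 / 2); c(p, p / 2) }
  j_th   <- function(mu, s2) rbind(
    c(n / s2,              sum(y - mu) / s2^2),
    c(sum(y - mu) / s2^2, -n / (2 * s2^2) + sum((y - mu)^2) / s2^3))

  rstar <- function(psi) {
    ## Step 2: constrained MLE under mu + s2/2 = log(psi)
    prof <- function(s2) loglik(log(psi) - s2 / 2, s2)
    up   <- 4 * (s2_hat + (log(psi) - mu_hat)^2) + 1
    s2_t <- optimize(prof, c(1e-8, up), maximum = TRUE)$maximum
    mu_t <- log(psi) - s2_t / 2
    ## Step 3: tilted observed information (Lagrange multiplier from the score)
    lam_t   <- -(sum(y - mu_t) / s2_t) / psi
    j_tilde <- j_th(mu_t, s2_t) - lam_t * psi * rbind(c(1, 1/2), c(1/2, 1/4))
    ## Step 4: signed log-likelihood ratio
    r <- sign(psi_hat - psi) * sqrt(2 * (l_hat - loglik(mu_t, s2_t)))
    ## Steps 5-6: recalibration chi and the departure in the phi space
    A   <- psi_th(mu_t, s2_t) %*% solve(phi_th(mu_t, s2_t))
    dep <- abs(drop(A %*% (phi(mu_hat, s2_hat) - phi(mu_t, s2_t))))
    ## Step 7: estimated variance of the departure
    v <- drop(psi_th(mu_t, s2_t) %*% solve(j_tilde) %*% psi_th(mu_t, s2_t)) *
      (det(j_tilde) / det(phi_th(mu_t, s2_t))^2) /
      (det(j_th(mu_hat, s2_hat)) / det(phi_th(mu_hat, s2_hat))^2)
    ## Steps 8-9: standardized departure q and the modified statistic r*
    q <- sign(psi_hat - psi) * dep / sqrt(v)
    r - (1 / r) * log(r / q)
  }

  ## Invert r*(psi) = +/- z* numerically to obtain (psi_L, psi_U)
  zs <- qnorm(1 - (1 - conf) / 2)
  lower <- uniroot(function(p) rstar(p) - zs, c(psi_hat * 1e-3, psi_hat * 0.999))$root
  upper <- uniroot(function(p) rstar(p) + zs, c(psi_hat * 1.001, psi_hat * 1e3))$root
  c(lower, upper)
}

## Example with simulated data: x is a hypothetical lognormal sample
# set.seed(6); x <- rlnorm(15, meanlog = 1, sdlog = 1); rstar_ci_lognormal(log(x))
```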

3. Empirical Examples

In this section, the different methods of constructing a confidence interval about the mean of non-normally distributed data are illustrated with two empirical examples. We demonstrate that the results obtained by the methods discussed in this paper can be very different.

3.1. Example 1: Serum Triglyceride Measurements

Bland and Altman [10] considered n = 278 serum triglyceride measurements, which had a positively skewed distribution with an average of 0.51 mmol/l and a standard deviation of 0.22 mmol/l. By applying a base 10 logarithm transformation to the data to obtain a less skewed distribution, the transformed distribution became bell-shaped with an average of −0.33 and a standard deviation of 0.17. By applying the Central Limit Theorem, they report a 95% confidence interval for the mean serum triglyceride measurements to be (0.48, 0.54). Using the back-transformation method, the corresponding interval is (0.45, 0.49). Table 2 presents the 95% confidence intervals for the mean serum triglyceride measurements for the alternative methods reviewed above. Note that for this example, the bootstrap method cannot be applied because the original data set is not available.

Table 2. 95% confidence interval for the mean serum triglyceride measurements.

Bland and Altman [10] noted that the interval obtained by the back-transformation method is actually the 95% confidence interval for the geometric mean of serum triglyceride measurements instead of the mean serum triglyceride measurements, where the latter is the parameter of interest. Stated differently, the back-transformation method does not provide information about the focal parameter of interest (i.e., the mean of the non-normal distribution). From Table 2, it can be observed that the results from the Central Limit Theorem method are different from those obtained by the Wald method and the third order method. Additionally, the Wald method and the third order method give results which agree up to the second decimal place. This observation is not surprising because these two methods theoretically converge to the same answer when the sample size goes to infinity. The only difference is that the third order method has a faster rate of convergence than the Wald method (i.e., $O(n^{-3/2})$ versus $O(n^{-1/2})$, respectively). The different rates of convergence are more formally illustrated in Section 4.
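The two intervals quoted from Bland and Altman [10] can be reproduced from the published summary statistics alone; this quick check is ours, since the raw data are not available.

```r
# Reproduce the CLT and back-transformation intervals from summary statistics
n <- 278
zs <- qnorm(0.975)
0.51 + c(-1, 1) * zs * 0.22 / sqrt(n)          # CLT interval, approximately (0.48, 0.54)
10^(-0.33 + c(-1, 1) * zs * 0.17 / sqrt(n))    # back-transformed interval, approx (0.45, 0.49)
```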

3.2. Example 2: Abundance of Eastern Mudminnows

McDonald [11] reported on data on the abundance of Eastern mudminnows in Maryland streams, which is reproduced below:

These data are non-normally distributed and McDonald [11] suggested that both the logarithmic and square root transformed data are suitable for analysis because they are more normally distributed compared to the original and other competing transformations. His final analysis makes use of the logarithmic transformed data.

Table 3 presents the 95% confidence intervals for the mean of the non-transformed distribution obtained by applying the Central Limit Theorem method and the bootstrap method with B = 5000 to the original data, and the back-transformation method, Wald method, and likelihood-based third order method to both the logarithmic transformed data and the square root transformed data.

Table 3. 95% confidence intervals for the mean of the abundance of Eastern mudminnows in Maryland streams.

The results obtained by the methods discussed in this paper are very different for different transformations. In particular, the logarithmic transformation results in a much larger upper bound of the interval compared to the square root transformation. Thus, it is essential to identify which transformation is more appropriate for a given set of data.

The de-trended normal Q-Q plots for the original data, logarithmic transformed data and square root transformed data are shown in Figure 1. From these plots, it is obvious that the original data are not normally distributed because the points deviate from the horizontal reference line, which indicates identical quantiles between the data and a theoretical normal distribution. The two sets of transformed data are more closely normally distributed because the points in the de-trended normal Q-Q plots lie closer to the reference line relative to the original data.

Figure 1. De-trended normal Q-Q plots for original and transformed data of the abundance of Eastern mudminnows.

The Shapiro-Wilk test on normality of the original data gives a p-value of 0.1091. The same test on the logarithmic transformed data gives a p-value of 0.5261, and it gives a p-value of 0.6479 on the square root transformed data. Consistent with the de-trended Q-Q plot, the p-values of the Shapiro-Wilk test similarly suggest that the two transformed data sets are more likely to be normally distributed. Additionally, the empirical skewness of the original data, logarithmic transformed data, and square root transformed data are 0.5864, −0.4886, and 0.1632, respectively. These quantifications of skewness imply that the square root transformed data are more symmetrical than the original data and logarithmic transformed data. Thus, based on the criteria discussed in Section 2.3, the square root transformation is recommended for these data.

4. Simulation Study

A simulation study was carried out to compare the accuracies of the methods discussed in this paper. R code for the simulation is available to the interested reader upon request. For each $(n, \mu, \sigma)$ combination, we generated 10,000 samples from $N(\mu, \sigma^2)$. These are our simulated transformed samples, and the non-transformed (i.e., original) samples can be obtained by applying the inverse transformation to the simulated data. The transformations examined are the natural logarithm and the square root. For each simulated sample, we computed a 95% confidence interval for the mean of the untransformed population from the five reviewed methods: Central Limit Theorem, bootstrap (B = 5000), back-transformation, Wald, and likelihood-based third order. The following quantities are recorded: the proportion of true means falling within the 95% confidence interval (coverage proportion), the proportion of true means less than the lower 95% confidence limit (lower error), and the proportion of true means greater than the upper 95% confidence limit (upper error). The nominal values of coverage, lower error, and upper error are 0.95, 0.025, and 0.025, respectively. We present only a small subset of the simulations we conducted to highlight several key points below, and other simulation results are available upon request.
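A condensed sketch of this design, written by us for the logarithmic case and the Central Limit Theorem interval only, illustrates how the coverage proportion is computed; the other methods plug into the same loop.

```r
# Empirical coverage of the 95% CLT interval when Y = log(X) ~ N(mu, sigma^2)
set.seed(5)
n <- 20; mu <- 1; sigma <- 1; nsim <- 10000
psi <- exp(mu + sigma^2 / 2)                     # true mean of the untransformed X
zs  <- qnorm(0.975)
covered <- replicate(nsim, {
  x  <- exp(rnorm(n, mu, sigma))                 # non-transformed sample
  lo <- mean(x) - zs * sd(x) / sqrt(n)
  hi <- mean(x) + zs * sd(x) / sqrt(n)
  psi >= lo && psi <= hi
})
mean(covered)                                    # compare with the nominal 0.95
```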

Table 4 presents results with the natural logarithmic transformed data being generated from $N(\mu, \sigma^2)$; the parameter of interest is $\exp(\mu + \sigma^2/2)$.

Table 4. 95% coverage probability for the logarithmic transformation case.

It can be observed that the likelihood-based third order method outperforms the other methods, especially when the sample size is small; coverage, lower and upper errors associated with the likelihood-based third order method are relatively closer to nominal rates compared to the alternative methods. Among the remaining methods, the Central Limit Theorem method and the bootstrap method give similar results. The Wald method seems to converge faster than the Central Limit Theorem and bootstrap methods. As discussed in Section 2, the back-transformation method gives unacceptable coverage probability because it is constructing confidence intervals about a parameter that is not of interest.


Table 5 presents results with the square root transformed data being generated from $N(\mu, \sigma^2)$; the parameter of interest is $\mu^2 + \sigma^2$.

Table 5. 95% coverage probability for the square root transformation case.

Similar to results in Table 4, we can observe that the likelihood-based third order method outperforms the other methods, especially when sample size is small. In this context, the Central Limit Theorem method and the bootstrap method give similar results and they seem to converge faster than the Wald method. The back-transformation method continues to give unacceptable coverage probability because it constructs confidence intervals about a parameter that is not of interest.

Based on these simulation results, the Central Limit Theorem method, bootstrap method and Wald method converge slowly relative to the likelihood-based third order method. Hence, we recommend using the likelihood-based third order method to obtain confidence intervals for the mean of the non-transformed distribution after applying a normalizing transformation to non-normal data, especially for small sample sizes or large departures from normality. It is important to note that researchers should not use the popular back-transformation method despite its simplicity, except for the special case where $\psi = E(X) = g^{-1}(\mu)$.

More simulations have been performed with the same pattern of results. They are not reported here, but are available upon request.

5. Conclusion

When interest is in constructing a confidence interval for the mean of a non-normal distribution, normalizing transformations are typically recommended as a first step. This paper recommends the use of de-trended normal Q-Q plots, the largest p-value of the Shapiro-Wilk test, and quantifications of skewness on the transformed data to determine the power parameter ($\lambda$) for Tukey's ladder of power transformation when the exact transformation is unavailable. Our results strongly advise against using the popular back-transformation approach in applied work because it does not construct confidence intervals about the parameter of interest (i.e., the mean of the original distribution). Instead, we recommend the likelihood-based third order method because of its superior performance in terms of its rate of convergence, coverage, and accuracy relative to the Central Limit Theorem, bootstrap and Wald methods, even when the sample size is small or the distribution is far from normal.

Acknowledgements

We thank the editor and the referee for their comments. This work was based on O.C.Y. Wong’s undergraduate honors thesis. J. Pek was supported by the Natural Sciences and Engineering Research Council of Canada Discovery Grant (RGPIN-04301-2014) and the Early Researcher Award by the Ontario Ministry of Research and Innovation (ER15-11-004). A.C.M. Wong was supported by the Natural Sciences and Engineering Research Council of Canada Discovery Grant (RGPIN-163597-2012).

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] American Educational Research Association (2006) Standards for Reporting on Empirical Social Science Research in AERA Publications. Educational Researcher, 35, 33-40.
https://doi.org/10.3102/0013189X035006033
[2] Cumming, G. (2014) The New Statistics: Why and How. Psychological Science, 25, 7-9.
https://doi.org/10.1177/0956797613504966
[3] Wilkinson, L. and the Task Force on Statistical Inference (1999) Statistical Methods in Psychology Journals: Guidelines and Explanations. American Psychologist, 54, 594-604.
https://doi.org/10.1037/0003-066X.54.8.594
[4] Cumming, G. and Fidler, F. (2009) Confidence Intervals: Better Answers to Better Questions. Journal of Psychology, 217, 15-26.
https://doi.org/10.1027/0044-3409.217.1.15
[5] Cumming, G. and Finch, S. (2001) A Primer on the Understanding, Use, and Calculation of Confidence Intervals that Are Based on Central and Noncentral Distributions. Educational and Psychological Measurement, 61, 532-574.
https://doi.org/10.1177/0013164401614002
[6] Greenland, S., Senn, S.J., Rothman, K.J., Carlin, J.B., Poole, C., Goodman, S.N. and Altman, D.G. (2016) Statistical Tests, P Values, Confidence Intervals, and Power: A Guide to Misinterpretations. European Journal of Epidemiology, 31, 337-350.
https://doi.org/10.1007/s10654-016-0149-3
[7] Moore, D.S., McCabe, G.P. and Craig, B.A. (2014) Introduction to the Practice of Statistics. 8th Edition, W.H. Freeman and Company, New York.
[8] Cain, M.K., Zhang, Z. and Yuan, K.H. (2016) Univariate and Multivariate Skewness and Kurtosis for Measuring Nonnormality: Prevalence, Influence and Estimation. Behavior Research Methods, 1-20.
https://doi.org/10.3758/s13428-016-0814-1
[9] Micceri, T. (1989) The Unicorn, the Normal Curve and Other Improbable Creatures. Psychological Bulletin, 105, 156-166.
https://doi.org/10.1037/0033-2909.105.1.156
[10] Bland, J.M. and Altman, D.G. (1996) Transformations, Means, and Confidence Intervals. British Medical Journal, 312, 1079.
https://doi.org/10.1136/bmj.312.7038.1079
[11] McDonald, J.H. (2014) Handbook of Biological Statistics. Sparky House, Maryland.
[12] Efron, B. and Tibshirani, R.J. (1994) An Introduction to the Bootstrap. Chapman and Hall, New York.
[13] Box, G.E. and Cox, D.R. (1964) An Analysis of Transformation (with Discussion). Journal of the Royal Statistical Society B, 26, 211-252.
[14] Tukey, J.W. (1977) Exploratory Data Analysis. Addison-Wesley, Massachusetts.
[15] Osborne, J.W. (2010) Improving Your Data Transformation: Applying the Box-Cox Transformation. Practical Assessment, Research & Evaluation, 15, Article 12.
[16] Razali, N. and Wah, Y. (2011) Power Comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling tests. Journal of Statistical Modeling and Analytics, 2, 21-33.
[17] Barndorff-Nielsen, O.E. (1991) Modified Signed Log Likelihood Ratio. Biometrika, 78, 557-561.
https://doi.org/10.1093/biomet/78.3.557
[18] Fraser, D.A.S. (1991) Statistical Inference: Likelihood to Significance. Journal of American Statistical Association, 86, 258-265.
https://doi.org/10.1080/01621459.1991.10475029
[19] Fraser, D.A.S. and Reid, N. (1995) Ancillaries and Third Order Significance. Utilitas Mathematica, 7, 33-53.
