Antithetic Power Transformed Random Variables in Computer Simulations: An Error Correction Mechanism

Abstract

A traditional method of Monte Carlo computer simulation is to obtain uniformly distributed random numbers on the interval from zero to one from a linear congruential generator (LCG) or other methods. Random variates can then be obtained by the inverse transformation technique applied to random numbers. The random variates can then be used as input to a computer simulation. A response variable is obtained from the simulation results. The response variable may be biased for various reasons. One reason may be the presence of small traces of serial correlation in the random numbers. The purpose of this paper is to introduce an alternative method of response variable acquisition by a power transformation applied to the response variable. The power transformation produces a new variable that is negatively correlated with the response variable. The response variable is then regressed on its power transformation to convert the units of the power transformed variable back to those of the original response variable. A weighted combination of these two variables gives the final estimate. The combined estimate is shown to have negligible bias. The correlations of various antithetic variates obtained from the power transformation are derived and illustrated to provide insights for this research and for future research into this method.

Share and Cite:

Ridley, D. and Ngnepieba, P. (2023) Antithetic Power Transformed Random Variables in Computer Simulations: An Error Correction Mechanism. Journal of Applied Mathematics and Physics, 11, 1707-1727. doi: 10.4236/jamp.2023.116111.

1. Introduction

A sequence of random numbers that is perfectly serially uncorrelated in its totality can contain what appear to be subsets of correlated sequences. Indeed, positively and negatively correlated subsets are a necessary part of true randomness. Traces of serial correlation in any subset can create a trend that biases the random numbers to the high or low side of the true value. An opposite trend can develop subsequently. But a very large number of random numbers can occur before the net bias is corrected. Proykova [1] describes random walk simulation experiments in which the walker is temporarily trapped in the vicinity of correlated numbers. Either way, apparent subsets of traces of serial correlation introduce unreliability into the simulation. Serial correlation in the random numbers used in Monte Carlo computer simulation experiments is known to cause bias in estimates of the response variable (Ferrenberg, Landau and Wong [2] ). Where possible, bias and variance reduction methods can be very useful for improving accuracy in some applications. For example, physics experiments can utilize as many as $10^{19}$ random numbers. Reducing the length of a simulation for the same level of accuracy will save considerable time. Depending on the application, bias-free accuracy is of paramount importance. Therefore, apart from their not repeating, the long run quality of the random numbers is irrelevant, since subsets will be correlated. Perhaps it is better to focus on methods that have the potential to improve the results obtained from random numbers that are deficient for any reason related to departure from randomness. Accordingly, some ideas are explored and discussed. The novelty of this paper is to present a new method of antithetic variables that eliminates bias error in the simulation response variable.

1.1. Background

Monte Carlo computer simulation is a widely used method for the study and analysis of complex systems that are not tractable by analytical mathematical formulae. Several methods of random number generation have been devised for use in computer simulation. For a long time the most common of these was the multiplicative congruential generator first suggested by Lehmer [3] . Various design issues have been identified by Marsaglia [4] and Park and Miller [5] . Pursuant to these issues and others associated with the mixed linear congruential generator, various more complicated methods have been suggested (Marsaglia [6] [7] ; Marsaglia & Tsang [8] ; Marsaglia & Tsay [9] ). Much attention has also been paid to the speed of generating random numbers (Marsaglia, MacLaren & Bray [10] ; Marsaglia & MacLaren [11] ; Marsaglia, Ananthanarayan & Paul [12] ; Marsaglia & Tsang [13] ; Leva [14] ). Kroese, Taimre & Botev [15] discuss the pros and cons of a wide variety of methods. The disconcerting outcome from the use of so-called better random numbers is that the bias found by Ferrenberg, Landau and Wong [2] is worse than that for the simpler linear congruential generator.

Inversely correlated random numbers were suggested by Hammersley and Morton [16] . They proposed a simple method for use in Monte Carlo computer simulation. In that method two simulations are conducted. One simulation uses uniformly distributed (0, 1) random numbers ${r}_{i}$ , $i=1,2,3,\cdots$ . The other uses the complementary random numbers $1-{r}_{i}$ . The output responses from the two simulations are then averaged. The method may produce an average response that has a smaller variance than the individual simulations. Aljeddani [17] uses Bayesian methodology in Monte Carlo simulation to improve estimates where time and cost constraints are prohibitive. In practice, the variance reduction is not guaranteed, and the variance may actually increase (Cheng [18] ; Kleijnen [19] ). Cheng [18] goes on to suggest an improvement. However, reducing variance does not correct bias error. It only makes the response variable more precisely wrong. Our aim in this paper is bias elimination.

Antithetic random variables obtained by power transformation of actual time series data were introduced by Ridley [20] . The Ridley [20] antithetic time series theorem states that “if ${X}_{t}>0$ , $t=1,2,3,\cdots$ is a discrete realization of a lognormal stochastic process, such that $\mathrm{ln}{X}_{t}~N\left(\mu ,{\sigma }^{2}\right)$ , then if the correlation between ${X}_{t}$ and ${X}_{t}^{p}$ is ρ, then ${\mathrm{lim}}_{\sigma \to 0,p\to {0}^{-}}\rho =-1$ .” Serially correlated antithetic random variables can be combined so as to generate a new random variable that is not serially correlated. Ridley [20] provides antithetic fitted function and antithetic fitted error variance function theorems for lognormally distributed random variables. Ridley [21] [22] [23] , Ridley, Ngnepieba & Duke [24] , and Ridley & Ngnepieba [25] [26] demonstrate applications in time series analysis and forecasting. At first thought, it appears that the creation of antithetic random variables by this method might be useful for computer simulation. However, the random numbers used in computer simulation are uniformly distributed, not lognormally distributed.

1.2. Proposed Research

The proposed research is concerned with artificial sequences of antithetic random numbers and variables as opposed to the analysis of antithetic time series obtained from recorded data. We define antithetic variables as follows.

Definition. Two random variables are antithetic if their linear correlation is negative. A bivariate collection of random variables is asymptotically antithetic if its correlation approaches minus one in the limit.

Definition. r(x,i) is an ensemble of random variables, where x belongs to a sample space and i belongs to an index set representing order, such that ${r}_{i}$ is a discrete realization of a stochastic process from the ensemble, ${r}_{i}~U\left(0,1\right)$ , and ${r}_{i},i=1,2,3,\cdots ,n$ are serially correlated.

The objective of the proposed research is not to create better random numbers. The objective is to reduce the effect of serial correlation in random numbers on bias in the estimate of the response variable from computer simulations, regardless of the particular method of random number generation. Combining antithetic computer simulations is investigated for this purpose. Therefore, the worse the random numbers are, the better the test of the efficacy of combining antithetic computer simulations. So, only the simple linear congruential generator (LCG) was used. If the random numbers, however generated, were ideal in every way, the expectation is that no improvement in the accuracy of the response variable would occur due to antithetic combining. We do not know ahead of time the distribution of the response variable. So, beyond what we know about the lognormal case from Ridley [20] , we begin by exploring the correlations between various random variables to establish the inverse correlations that are the basis of antithetic combining. This serves to improve our general understanding of antithetic combining and to gain insights into what might work best. Then we apply antithetic combining to a response variable from a computer simulation. After examining the results of the exploration, we hypothesize that the estimate of a response variable from a computer simulation can be combined with its antithetic counterpart to obtain a new response variable that has negligible bias.

The paper is organized as follows. In Section 2 we derive the analytic function for the correlation between a uniform random number and the random variate obtained from its pth exponent. In Section 3 we derive the analytic function for the correlation between antithetic exponential random variates obtained by power transformation. In Section 4 we derive the correlation between random variates from inverse and power transformations. In Section 5 we present the results of a computer simulation experiment. In Section 6 we explore the effect of combining a response variable from a Monte Carlo simulation, performed with random variates obtained by inverse transformation, with its power transformed antithetic counterpart. Section 7 contains conclusions and suggestions for future research.

2. Correlation between ${r}_{i}$ and ${r}_{i}^{p}$

Antithetic uniform random numbers theorem: If ${r}_{i}>0$ , $i=1,2,3,\cdots ,n$ is a discrete realization of a uniform stochastic process such that ${r}_{i}~U\left(a,b\right)$ on any interval (a, b), and ${\rho }_{r{r}^{p}}$ denotes the correlation between ${r}_{i}$ and ${r}_{i}^{p}$ , then

$\underset{p\to {0}^{-}}{\mathrm{lim}}{\rho }_{r{r}^{p}}=\underset{p\to {0}^{-}}{\mathrm{lim}}\frac{p|p+1|\sqrt{2p+1}\cdot \sqrt{12}}{2|p|\left(p+2\right)\left(p+1\right)}=-\frac{\sqrt{3}}{2}$

See Appendix A for the proof.

This correlation is illustrated in Table 1 and Figure 1 for ${r}_{i}~U\left(0,1\right)$ and values of p on and between −0.1 and −0.001. The correlations are averages calculated from 1000 samples of 500 numbers each, obtained from Matlab [27] . The correlation between ${r}_{i}$ and ${r}_{i}^{p}$ is seen to be negative, approaching $-\sqrt{3}/2\approx -0.8660$ .
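The limiting value $-\sqrt{3}/2$ can be checked numerically with a short stand-alone script. This is a sketch in Python rather than the Matlab used by the authors; the seed and sample size are illustrative:

```python
import math
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / math.sqrt(sxx * syy)

random.seed(1)
p = -0.001                                   # small negative exponent
r = [random.random() for _ in range(200_000)]
rp = [v ** p for v in r]                     # power-transformed counterpart

rho = pearson(r, rp)
print(rho)                                   # close to -sqrt(3)/2 = -0.866...
```

A single large sample suffices here; the paper instead averages correlations over many smaller samples.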

Unlike the case of the lognormal distribution, where the correlation between the random variable and its pth exponent approaches −1, here the correlation is $-\sqrt{3}/2$ . That is, ${r}_{i}$ and ${r}_{i}^{p}$ can only offer partial antithetic combining. In order to convert ${r}^{-p}$ into the units of ${r}^{p}$ , consider the regression of ${r}^{p}$ on ${r}^{-p}$ , ${r}^{p}=f\left({r}^{-p}\right)+\epsilon =\alpha +\beta {r}^{-p}+\epsilon$ . Since ${\mathrm{lim}}_{p\to {0}^{-}}Corr\left({r}^{p},{r}^{-p}\right)=-1$ (see Appendix B),

Table 1. Correlation between ${r}_{i}$ and ${r}_{i}^{p}$ vs p.

Figure 1. Correlation between ${r}_{i}$ and ${r}_{i}^{p}$ vs p.

that is, ${r}^{p}$ and ${r}^{-p}$ are perfectly negatively correlated, it follows that ${\mathrm{lim}}_{p\to {0}^{-}}{r}^{p}=\alpha +\beta {r}^{-p}+\epsilon$ where $\epsilon \to 0$ . That is, ${r}^{p}\approx \alpha +\beta {r}^{-p}$ . Let the fitted values of ${r}^{p}$ be denoted by ${\stackrel{^}{r}}^{p}$ , let $\stackrel{^}{\alpha }$ and $\stackrel{^}{\beta }$ denote the least squares estimates of $\alpha$ and $\beta$ respectively, and let s denote standard deviation. Then, ${\stackrel{^}{r}}^{p}=\stackrel{^}{\alpha }+\stackrel{^}{\beta }{r}^{-p}=\stackrel{¯}{{r}^{p}}+Corr\left({r}^{p},{r}^{-p}\right)\left({s}_{{r}^{p}}/{s}_{{r}^{-p}}\right)\left({r}^{-p}-\stackrel{¯}{{r}^{-p}}\right)$ . This random variable is negatively correlated with r. However, unlike r, the probability distribution of ${\stackrel{^}{r}}^{p}$ is not uniform. For purposes of comparison and uniformity, ${\stackrel{^}{r}}^{p}$ can be converted back to the original scale and distribution of r by computing the inverse transform ${r}^{\prime }={\left({\stackrel{^}{r}}^{p}\right)}^{1/p}$ . As it turns out, r and ${r}^{\prime }$ are positively correlated.
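The regression-based conversion described above can be sketched numerically. This uses the closed-form simple-regression fit; the seed, sample size, and p are illustrative:

```python
import math
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return sxy / math.sqrt(sum((a - mx) ** 2 for a in xs)
                           * sum((b - my) ** 2 for b in ys))

random.seed(2)
p = -0.001
r = [random.random() for _ in range(100_000)]
rp = [v ** p for v in r]
rmp = [v ** -p for v in r]

# Fitted values of the regression of r^p on r^(-p):
# rp_hat = mean(rp) + corr * (s_rp / s_rmp) * (rmp - mean(rmp))
n = len(r)
m1, m2 = sum(rp) / n, sum(rmp) / n
s1 = math.sqrt(sum((v - m1) ** 2 for v in rp) / (n - 1))
s2 = math.sqrt(sum((v - m2) ** 2 for v in rmp) / (n - 1))
c = pearson(rp, rmp)                          # close to -1 for small |p|
rp_hat = [m1 + c * (s1 / s2) * (v - m2) for v in rmp]

# Inverse transform back to the original scale of r
r_prime = [v ** (1.0 / p) for v in rp_hat]
print(pearson(r, r_prime))                    # positive, as stated in the text
```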

3. Antithetic Variates

Antithetic random variates theorem: If ${r}_{i}>0$ , $i=1,2,3,\cdots ,n$ is a discrete realization of a uniform stochastic process such that ${r}_{i}~U\left(0,1\right)$ , and ${\rho }_{{r}^{+p}{r}^{-p}}$ denotes the correlation between ${r}_{i}^{p}$ and ${r}_{i}^{-p}$ , then

${\mathrm{lim}}_{p\to 0}{\rho }_{{r}^{+p}{r}^{-p}}={\mathrm{lim}}_{p\to 0}-\sqrt{\left(2p+1\right)\left(-2p+1\right)}=-1$ .

See Appendix B for the proof.

This correlation is illustrated in Table 2 and Figure 2 for values of p on and between −0.1 and −0.001. The correlations are calculated from just one sample of 100 random numbers. These correlations converge very rapidly. The correlation between ${r}_{i}^{p}$ and ${r}_{i}^{-p}$ is seen to approach −1.
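The rapid convergence toward −1 can be checked with a short sketch; the seed, sample size, and choice of p are illustrative:

```python
import math
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return sxy / math.sqrt(sum((a - mx) ** 2 for a in xs)
                           * sum((b - my) ** 2 for b in ys))

random.seed(3)
p = -0.01
r = [random.random() for _ in range(100_000)]
rho = pearson([v ** p for v in r], [v ** -p for v in r])
print(rho)                # very close to -1 already at p = -0.01
```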

Histograms for ${r}_{i}~U\left(0,1\right)$ , ${\text{lim}}_{p\to {0}^{-}}{r}_{i}^{p}$ and ${\text{lim}}_{p\to {0}^{-}}{r}_{i}^{-p}$ based on 300 random numbers are shown in Figure 3. The histograms illustrate that in the limit as $p\to {0}^{-}$ , the distributions of ${r}_{i}^{p}$ and ${r}_{i}^{-p}$ are similar to negative

Table 2. Correlation between ${r}_{i}^{p}$ and ${r}_{i}^{-p}$ vs p.

Figure 2. Correlation between ${r}_{i}^{p}$ and ${r}_{i}^{-p}$ vs p.

Figure 3. Distributional characteristics $p\to {0}^{-}$ .

and positive exponential distributions, respectively.

Consider the possibility of combining response variables obtained from two computer simulations that are based on antithetic exponential random variates. Let the mean and standard deviation both be 1/λ. The exponential random variates can then be obtained from ${r}_{i}^{p}$ by converting the mean and standard deviation: subtract the sample average $\stackrel{¯}{{r}^{p}}$ , divide by the sample standard deviation ${s}_{{r}^{p}}$ , then multiply by the population standard deviation (1/λ) and add the population mean (1/λ) of interest. This suggests that computer simulations based on the antithetic pair $\left({r}_{i}^{p}-\stackrel{¯}{{r}^{p}}\right)/\left({s}_{{r}^{p}}\lambda \right)+1/\lambda$ and $\left({r}_{i}^{-p}-\stackrel{¯}{{r}^{-p}}\right)/\left({s}_{{r}^{-p}}\lambda \right)+1/\lambda$ might yield a combined response with improved statistics. However, if the distribution of the first member of the pair matches that of the actual problem variable, the distribution of the second will not match, and will not be a good representation. These random variates are perfectly negatively correlated.
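The standardized antithetic pair can be sketched as follows (λ, seed, and sample size are illustrative). Note that, by construction, each member has sample mean and sample standard deviation exactly 1/λ:

```python
import math
import random

def stats(xs):
    """Sample mean and sample standard deviation."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((v - m) ** 2 for v in xs) / (n - 1))
    return m, s

def standardize(ys, lam):
    """Rescale to sample mean 1/lam and sample standard deviation 1/lam."""
    m, s = stats(ys)
    return [(v - m) / (s * lam) + 1.0 / lam for v in ys]

random.seed(4)
p, lam = -0.001, 1 / 3
r = [random.random() for _ in range(50_000)]

yp = standardize([v ** p for v in r], lam)    # one member of the pair
ym = standardize([v ** -p for v in r], lam)   # its antithetic counterpart

print(stats(yp))     # mean and std both 1/lam = 3 (up to rounding)
print(stats(ym))
```

Since each member is a linear transform of $r^p$ or $r^{-p}$, the pair inherits the near-perfect negative correlation derived in Appendix B.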

4. Correlation between Random Variates from Inverse and Power Transformations

We established that, as $p\to {0}^{-}$ , the random variates ${r}_{i}^{p}$ are approximately exponentially distributed. Therefore, we explored the comparison between exponential random variates obtained from the traditional inverse transformation and those obtained from the proposed power transformation.

Consider the exponential random variable x with parameter λ. Its pdf is given by

$f\left(x\right)=\left\{\begin{array}{l}\lambda {\text{e}}^{-\lambda x}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}x\ge 0\\ 0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{elsewhere}\end{array}$

The corresponding cumulative distribution is given by

$F\left(x\right)=P\left(X\le x\right)=1-{\text{e}}^{-\lambda x},\text{\hspace{0.17em}}x\ge 0$

The exponential random variate is calculated by setting ${r}_{i}$ equal to the cumulative distribution

${r}_{i}=1-{\text{e}}^{-\lambda {x}_{i}}$

The exponential random variate obtained by inverse transformation is given by

${x}_{i}=-\left(1/\lambda \right)\mathrm{ln}\left(1-{r}_{i}\right)$
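The inverse-transform step can be sketched directly; an exponential sample should show mean and standard deviation both near 1/λ (the seed, λ, and sample size below are illustrative):

```python
import math
import random

random.seed(5)
lam = 0.5
n = 200_000

# x = -(1/lam) * ln(1 - r), r ~ U(0, 1); 1 - r lies in (0, 1], so log is safe
xs = [-(1.0 / lam) * math.log(1.0 - random.random()) for _ in range(n)]

mean = sum(xs) / n
var = sum((v - mean) ** 2 for v in xs) / (n - 1)
print(mean, math.sqrt(var))   # both near 1/lam = 2
```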

The exponential random variate obtained by power transformation is given by

${{x}^{\prime }}_{i}=\frac{{r}_{i}^{p}-\stackrel{¯}{{r}^{p}}}{{s}_{{r}^{p}}\lambda }+\frac{1}{\lambda }$

Substituting the expected values for $\stackrel{¯}{{r}^{p}}$ and ${s}_{{r}^{p}}$ from Appendix A,

${{x}^{\prime }}_{i}=\frac{{r}_{i}^{p}-1/\left(p+1\right)}{\frac{|p|}{|p+1|\sqrt{2p+1}}\cdot \lambda }+\frac{1}{\lambda }$ ,

where ${{x}^{\prime }}_{i}$ has the property that its mean and standard deviation are equal (both 1/λ), as for an exponential random variable.
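The power-transform variate with the population moments substituted can be sketched as follows; its sample mean and sample standard deviation should both be near 1/λ (parameters and seed illustrative):

```python
import math
import random

random.seed(6)
p, lam = -0.001, 0.5
n = 200_000

mu_rp = 1.0 / (p + 1)                                   # E[r^p], from (A.1)
sd_rp = abs(p) / (abs(p + 1) * math.sqrt(2 * p + 1))    # sigma_{r^p}, from (A.2)

r = [random.random() for _ in range(n)]
xprime = [(v ** p - mu_rp) / (sd_rp * lam) + 1.0 / lam for v in r]

mean = sum(xprime) / n
sd = math.sqrt(sum((v - mean) ** 2 for v in xprime) / (n - 1))
print(mean, sd)    # both near 1/lam = 2: equal mean and standard deviation
```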

Antithetic inverse and power transform random variates theorem: If ${r}_{i}~U\left(0,1\right)$ is a uniformly distributed random number, ${X}_{i}>0$ , $i=1,2,3,\cdots ,n$ is a discrete realization of an exponential stochastic process such that

${X}_{i}~E\left(\lambda \right)$ derived from the inverse transformation ${x}_{i}=-\left(1/\lambda \right)\mathrm{ln}\left(1-{r}_{i}\right)$ with parameter λ, ${{X}^{\prime }}_{i}~E\left(\lambda ,p\right)$ is derived from ${{x}^{\prime }}_{i}=\frac{{r}_{i}^{p}-1/\left(p+1\right)}{\frac{|p|}{|p+1|\sqrt{2p+1}}\cdot \lambda }+\frac{1}{\lambda }$ , and ${\rho }_{X,{X}^{\prime }}$ denotes the correlation between ${X}_{i}$ and ${{X}^{\prime }}_{i}$ , then

${\mathrm{lim}}_{p\to {0}^{-}}{\rho }_{X,{X}^{\prime }}=1-{\pi }^{2}/6$ .

See Appendix C for the proof.

This correlation is illustrated in Table 3 and Figure 4 for values of p on and between −0.1 and −0.001. The correlations are averages calculated from 2000 samples of 500 numbers each, obtained from Matlab [27] . The correlation between ${x}_{i}$ and ${{x}^{\prime }}_{i}$ is seen to be negative, approaching

Table 3. Correlation between ${x}_{i}$ and ${{x}^{\prime }}_{i}$ vs p.

approximately $1-{\pi }^{2}/6\approx -0.645$ . This correlation falls short of the ideal correlation of −1. That is, ${x}_{i}$ and ${{x}^{\prime }}_{i}$ can offer only partial antithetic combining.
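The partial negative correlation between the inverse-transform and power-transform variates can be checked against $1-\pi^2/6$ with a sketch (seed, sample size, and parameters illustrative):

```python
import math
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return sxy / math.sqrt(sum((a - mx) ** 2 for a in xs)
                           * sum((b - my) ** 2 for b in ys))

random.seed(7)
p, lam = -0.001, 1.0
rs = [random.random() for _ in range(200_000)]

x = [-(1.0 / lam) * math.log(1.0 - r) for r in rs]       # inverse transform
mu_rp = 1.0 / (p + 1)
sd_rp = abs(p) / (abs(p + 1) * math.sqrt(2 * p + 1))
xp = [(r ** p - mu_rp) / (sd_rp * lam) + 1.0 / lam
      for r in rs]                                       # power transform

rho = pearson(x, xp)
print(rho, 1 - math.pi ** 2 / 6)   # both near -0.645
```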

5. Computer Simulation Experiment

Consider a simple M/M/1 queue as depicted in Figure 5. There are many systems that can be modelled by queueing theory. They include waiting lines at a business establishment, library or other institution, jobs waiting to be processed by a computer central processing unit, automobile traffic, telephone calls, statistical particle physics, etc. The test system used in the following experiment is limited to a simple queue for which we know precisely the value of the response variable. The expectation is that if bias reduction is effective in the test system, it will also be effective in more complex systems where the answers are unknown and are to be determined. The schematic shows one queueing system and one Monte Carlo experiment. In both cases, entities arrive at the mean rate of λ per unit time and are served at the mean rate of μ per unit time. The time between arrivals is exponentially distributed with mean 1/λ. The service time is exponentially distributed with mean 1/μ. In the standard traditional experiment, the

Figure 4. Correlation between ${x}_{i}$ and ${{x}^{\prime }}_{i}$ vs p.

Figure 5. Simple queueing system.

experiment is conducted with an interarrival time denoted by ${x}_{a}$ derived from ${r}_{a}$ and a service time ${x}_{s}$ derived from ${r}_{s}$ . The LCG is ${z}_{i}=630360016{z}_{i-1}\mathrm{mod}\left({2}^{31}\right)$ (suggested by Harrell, Ghosh & Bowden [28] ).

In the simple queue, the source population for arriving entities is infinite. The queue is uncapacitated. There is a single server. The entity processing rule is first come, first served. The first actual simulation is based on a mean interarrival time of 3 minutes and a mean service time of 2.4 minutes. That is, λ = 1/3 per minute and μ = 1/2.4 per minute. The response variable chosen is the average time that an entity spends in the system. The average time that entities spend in the system is well known to be precisely ${T}_{theoretical}=1/\left(\mu -\lambda \right)=1/\left(1/2.4-1/3\right)=12$ minutes. Based on a simulation of 100,000 arrivals, the estimated time in the system is T = 11.65 (see Table 4). The bias is 12 − 11.65 = 0.35, or 2.92%. For most applications this level of accuracy may suffice. However, as the λ/μ ratio increases toward 1, the bias increases and the decline in accuracy is significant. Table 4 shows the results for various λ/μ ratios. For an example of high bias, consider μ = 1/2.4, 1/λ = 2.474, λ/μ = 0.97, for which ${T}_{theoretical}=1/\left(\mu -\lambda \right)=1/\left(1/2.4-1/2.474\right)=80.24$ minutes. The estimated simulation time in the system is T = 65.3807 (see Figure 6). The bias is 65.3807 − 80.24 = −14.8593, or −18.51%. This bias is unacceptable, as are many estimates in Table 4.
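The experiment can be reproduced in outline with a minimal single-server queue simulation. This is a sketch using the standard Lindley-style recursion rather than the authors' exact code; the function name, seed, and run length are illustrative:

```python
import math
import random

def mm1_avg_time_in_system(lam, mu, n, seed=0):
    """Average time in system for an M/M/1 FIFO queue, simulated with
    exponential interarrival (rate lam) and service (rate mu) times
    obtained by inverse transformation."""
    rng = random.Random(seed)
    expo = lambda rate: -math.log(1.0 - rng.random()) / rate
    arrival = 0.0
    prev_departure = 0.0
    total_time = 0.0
    for _ in range(n):
        arrival += expo(lam)                    # next arrival epoch
        start = max(arrival, prev_departure)    # waits if the server is busy
        prev_departure = start + expo(mu)       # departure epoch
        total_time += prev_departure - arrival  # this entity's time in system
    return total_time / n

lam, mu = 1 / 3, 1 / 2.4      # the paper's first test case
T = mm1_avg_time_in_system(lam, mu, 100_000)
print(T)                      # near 1/(mu - lam) = 12, with stochastic bias
```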

To reduce this bias the antithetic time in system is obtained from ${T}^{p}$ where p = −0.0001. Then T is regressed on ${T}^{p}$ to express ${T}^{p}$ in the original units of T. The estimated regression equation is given by

${T}^{\prime }=\stackrel{¯}{T}+Corr\left(T,{T}^{p}\right)\left({s}_{T}/{s}_{{T}^{p}}\right)\left({T}^{p}-\stackrel{¯}{{T}^{p}}\right)$ .

Figure 6. Simulation estimates of time in system ${T}_{C}=\omega T+\left(1-\omega \right){T}^{\prime }$ when $\omega =1$ .

Table 4. Computer simulation results 1/μ = 2.4 minutes and various 1/λ, λ/μ and ω.

The final result is the weighted average ${T}_{C}=\omega T+\left(1-\omega \right){T}^{\prime }$ , where ω and $\left(1-\omega \right)$ are combining weights. For each combination of μ and λ there is an ω for the corresponding λ/μ ratio that results in negligible bias (estimated TC in bold numbers).
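The error-correction step above can be sketched as a function of a sequence of positive running estimates T. The data below are hypothetical; in the paper, T comes from the simulation and ω is chosen from the λ/μ ratio:

```python
import math

def antithetic_combine(T, p=-0.0001, omega=1.0):
    """T' = fitted values of the regression of T on T^p (closed-form simple
    regression), then T_C = omega*T + (1 - omega)*T', elementwise.
    Requires len(T) >= 2 and non-constant T."""
    n = len(T)
    Tp = [t ** p for t in T]
    mT, mTp = sum(T) / n, sum(Tp) / n
    sT = math.sqrt(sum((t - mT) ** 2 for t in T) / (n - 1))
    sTp = math.sqrt(sum((v - mTp) ** 2 for v in Tp) / (n - 1))
    corr = (sum((t - mT) * (v - mTp) for t, v in zip(T, Tp))
            / ((n - 1) * sT * sTp))
    Tprime = [mT + corr * (sT / sTp) * (v - mTp) for v in Tp]
    return [omega * t + (1 - omega) * tp for t, tp in zip(T, Tprime)]

# Hypothetical running estimates of time in system:
T = [11.2, 11.8, 11.5, 11.9, 11.63, 12.1, 11.7, 11.95]
Tc = antithetic_combine(T, omega=1.25)
print(Tc)
```

A property worth noting: because T' consists of regression fitted values, its mean equals the mean of T, so combining re-weights the fluctuations without shifting the overall average of the inputs.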

Ferrenberg, Landau, & Wong [2] tried more advanced random number generators to no avail. So, the LCG simulation experiments here were repeated with the more advanced Mersenne Twister algorithm for random numbers. The results were similarly biased.

6. Optimizing the Combining Weights

A graph of the ω for various λ/μ ratios that result in negligible bias (estimated TC in bold numbers) is given in Figure 7. This graph of ω is a smooth, monotonically increasing function of λ/μ. Optimization of the weight is done by consulting Figure 7 once the λ/μ ratio for the system design is known. A graph of simulation estimates of time in system when 1/λ = 2.474, 1/μ = 2.4 minutes and ω = 1.25, for which ${T}_{C}=\omega T+\left(1-\omega \right){T}^{\prime }$ converges to 81.48 minutes, is given in Figure 8. The graph is created once ω is obtained from Figure 7. The bias is now 81.48 − 80.24 = 1.24, or 1.5%. This is acceptable. We also observe that the shapes of the graphs in

Figure 7. Weight ω for various λ/μ ratios that result in negligible bias.

Figure 8. Simulation estimates of time in system ${T}_{C}=\omega T+\left(1-\omega \right){T}^{\prime }$ when $\omega =1.25$ .

Figure 6 and Figure 8 are essentially the same, except that one is biased and the other is not.
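Once a curve like Figure 7 is available, selecting ω reduces to a table lookup with interpolation. The sketch below uses placeholder (λ/μ, ω) pairs; only the point (0.97, 1.25) is quoted in the text, and the others are invented for illustration:

```python
def omega_for_ratio(ratio, table):
    """Piecewise-linear interpolation of omega against lambda/mu,
    clamped at the ends of the table."""
    pts = sorted(table)
    if ratio <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if ratio <= x1:
            return y0 + (y1 - y0) * (ratio - x0) / (x1 - x0)
    return pts[-1][1]

# Illustrative values, monotonically increasing as described for Figure 7;
# only the (0.97, 1.25) point is taken from the text.
table = [(0.80, 1.00), (0.90, 1.10), (0.97, 1.25)]
print(omega_for_ratio(0.97, table))   # approximately 1.25
```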

7. Conclusion

The experiments in this paper were Monte Carlo computer simulations. Random variates were created by inverse transformation. The response variable was observed to be highly biased. This appears to be an artifact of the simulation that cannot be improved by the choice of random numbers. Therefore, antithetic random variables were created from the response variable and its power transformation. The idea for this came from insights gained from the derivations of the correlations between known functions and their antithetic counterparts. These formed antithetic variables that were combined. The combined variable was demonstrated to contain negligible bias. The combined estimates were closer to their corresponding values that are known from theory to be true. Combining reduced bias in the estimated values. The simulations in this research were applied to a simple queue. Like Ferrenberg, Landau, & Wong [2] in their study of the two-dimensional Ising model physics problem, we conclude here that there is bias leading to the wrong answer for time in the queueing system. Therefore, we repeat their caution here. This research supports the notion that if the bias is due to pre-simulation traces of serial correlation, then, since subsets of correlation are part of overall randomness, bias cannot be avoided by better random numbers. Therefore, it is better to pursue ideas for cancelling bias post-simulation. This research is limited to the simple queueing system. Future research might be conducted for different system designs and antithetic variable combining strategies. The Ising model problem can be repeated using the method proposed in this research to see if the bias is eliminated there.

Nomenclature

Antithetic variables—Two variables that are negatively correlated.

Antithetic power transformation—Uses an infinitesimal negative exponent to produce a new variable that is negatively correlated with the original variable.

LCG—Linear congruential generator of pseudorandom numbers.

Mersenne Twister algorithm—A fast generator of high-quality pseudorandom numbers with a very long period.

Monte Carlo Simulation—A mathematical method for calculating the odds of multiple possible outcomes occurring in an uncertain process through repeated random sampling.

PDF—Probability Density Function.

Random variate—A value obtained by transforming a uniformly distributed [0, 1] random number so that it follows a specified probability distribution (e.g., an exponential random variate).

Random variable—A quantity whose possible values follow a probability distribution.

Appendix A: Correlation between ${r}_{i}$ and ${r}_{i}^{p}$

Consider ${X}_{i}~U\left(0,1\right)$ with pdf

$f\left({x}_{i}\right)=\left\{\begin{array}{l}1\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{on}\text{\hspace{0.17em}}\left(0,1\right)\\ 0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{otherwise}\end{array}$

$\mathbb{E}\left[{X}_{i}^{p}\right]={\int }_{ℝ}{x}^{p}f\left(x\right)\text{d}x={\int }_{0}^{1}{x}^{p}\text{d}x=\frac{1}{p+1},\text{\hspace{0.17em}}\text{ }p\ne -1$ (A.1)

$\mathbb{E}\left[{X}_{i}^{p+1}\right]=\frac{1}{p+2},\text{\hspace{0.17em}}\text{ }p\ne -2$

$\mathbb{E}\left[{X}_{i}^{2p}\right]=\frac{1}{2p+1},\text{\hspace{0.17em}}p\ne -1/2$

${\sigma }_{X}=\sqrt{\mathbb{E}\left[{X}_{i}^{2}\right]-{\left(\mathbb{E}\left[{X}_{i}\right]\right)}^{2}}=\sqrt{\frac{1}{3}-{\left(\frac{1}{2}\right)}^{2}}=\frac{1}{\sqrt{12}}$

$\begin{array}{c}{\sigma }_{{X}^{p}}=\sqrt{\mathbb{E}\left[{X}_{i}^{2p}\right]-{\left(\mathbb{E}\left[{X}_{i}^{p}\right]\right)}^{2}}\\ =\sqrt{\frac{1}{2p+1}-\frac{1}{{\left(p+1\right)}^{2}}}=\sqrt{\frac{{p}^{2}+2p+1-2p-1}{\left(2p+1\right){\left(p+1\right)}^{2}}}\\ =\sqrt{\frac{{p}^{2}}{\left(2p+1\right){\left(p+1\right)}^{2}}}=\frac{|p|}{|p+1|}\cdot \frac{1}{\sqrt{2p+1}},\text{\hspace{0.17em}}p\ne -\text{1},\text{\hspace{0.17em}}p\ne -1/2\end{array}$ (A.2)

Next, consider ${r}_{i}~U\left(a,b\right)$ . Then ${r}_{i}=\left(b-a\right){X}_{i}+a$

$\mathbb{E}\left[{r}_{i}\right]=\left(b-a\right)\mathbb{E}\left[{X}_{i}\right]+a=\frac{b-a}{2}+a=\frac{a+b}{2}$

${\sigma }_{r}=\left(b-a\right){\sigma }_{X}=\frac{b-a}{\sqrt{12}}$

${\left({r}_{i}-a\right)}^{p}={\left(b-a\right)}^{p}{X}_{i}^{p}$

Substituting for $\mathbb{E}\left[{X}_{i}^{p}\right]$ from (A.1)

$\mathbb{E}\left[{\left({r}_{i}-a\right)}^{p}\right]=\frac{{\left(b-a\right)}^{p}}{p+1}$

Assume a = 0, b>0, then $\mathbb{E}\left[{r}_{i}^{p}\right]=\frac{{b}^{p}}{p+1}$ , $\mathbb{E}\left[{r}_{i}^{2p}\right]=\frac{{b}^{2p}}{2p+1}$ and ${\sigma }_{r}=\frac{b}{\sqrt{12}}$

$\begin{array}{c}{\sigma }_{{r}^{p}}=\sqrt{\mathbb{E}\left[{r}_{i}^{2p}\right]-{\left(\mathbb{E}\left[{r}_{i}^{p}\right]\right)}^{2}}=\sqrt{\frac{{b}^{2p}}{2p+1}-\frac{{b}^{2p}}{{\left(p+1\right)}^{2}}}\\ ={b}^{p}\sqrt{\frac{{p}^{2}}{\left(2p+1\right){\left(p+1\right)}^{2}}}={b}^{p}\frac{|p|}{|p+1|}\cdot \frac{1}{\sqrt{2p+1}}\end{array}$

$\begin{array}{c}{\rho }_{r{r}^{p}}=\frac{\mathbb{E}\left[{r}_{i}^{p+1}\right]-\mathbb{E}\left[{r}_{i}^{p}\right]\mathbb{E}\left[{r}_{i}\right]}{{\sigma }_{r}{\sigma }_{{r}^{p}}}\\ =\frac{\frac{{b}^{p+1}}{p+2}-\frac{{b}^{p}}{p+1}\cdot \frac{b}{2}}{\frac{b}{\sqrt{12}}\cdot {b}^{p}\frac{|p|}{|p+1|}\cdot \frac{1}{\sqrt{2p+1}}}=\frac{\frac{1}{p+2}-\frac{1}{p+1}\cdot \frac{1}{2}}{\frac{1}{\sqrt{12}}\cdot \frac{|p|}{|p+1|}\cdot \frac{1}{\sqrt{2p+1}}}\\ =\frac{\frac{2p+2-p-2}{2\left(p+2\right)\left(p+1\right)}}{\frac{|p|}{\sqrt{12}\cdot |p+1|\cdot \sqrt{2p+1}}}=\frac{\sqrt{12}\cdot p\cdot |p+1|\cdot \sqrt{2p+1}}{2|p|\left(p+2\right)\left(p+1\right)}\end{array}$

$\begin{array}{c}\underset{p\to {0}^{-}}{\mathrm{lim}}{\rho }_{r{r}^{p}}=\underset{p\to {0}^{-}}{\mathrm{lim}}\frac{\sqrt{12}\cdot p\cdot |p+1|\cdot \sqrt{2p+1}}{2|p|\left(p+2\right)\left(p+1\right)}\\ =\underset{p\to {0}^{-}}{\mathrm{lim}}\frac{p}{|p|}\cdot \underset{p\to {0}^{-}}{\mathrm{lim}}\frac{\sqrt{2p+1}}{2\left(p+2\right)}\cdot \underset{p\to {0}^{-}}{\mathrm{lim}}\frac{|p+1|}{p+1}\sqrt{12}\\ =-1\cdot \frac{1}{4}\cdot 1\cdot \sqrt{12}\end{array}$

$\underset{p\to {0}^{-}}{\mathrm{lim}}{\rho }_{r{r}^{p}}=-\frac{\sqrt{3}}{2}$ .

This correlation is invariant with respect to the lower limit of the interval a, b.
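The moment formulas derived above for the (0, 1) case can be spot-checked by simulation (a sketch; the seed, p, and sample size are illustrative):

```python
import math
import random

random.seed(11)
p = -0.1
n = 200_000
r = [random.random() for _ in range(n)]
rp = [v ** p for v in r]

mean_rp = sum(rp) / n
sd_rp = math.sqrt(sum((v - mean_rp) ** 2 for v in rp) / (n - 1))

theo_mean = 1.0 / (p + 1)                                 # E[r^p], from (A.1)
theo_sd = abs(p) / (abs(p + 1) * math.sqrt(2 * p + 1))    # sigma_{r^p}, (A.2)
print(mean_rp, theo_mean)   # both near 1.1111
print(sd_rp, theo_sd)       # both near 0.1242
```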

Appendix B: Correlation between ${r}_{i}^{+p}$ and ${r}_{i}^{-p}$

Consider ${r}_{i}~U\left(0,1\right)$ . Then from Appendix A,

$\mathbb{E}\left[{r}_{i}^{+p}\right]=\frac{1}{p+1},\text{\hspace{0.17em}}\text{\hspace{0.17em}}p\ne -1$

$\mathbb{E}\left[{r}_{i}^{-p}\right]=\frac{1}{-p+1},\text{\hspace{0.17em}}\text{\hspace{0.17em}}p\ne \text{1}$

${\sigma }_{{r}^{p}}^{2}=\frac{{p}^{2}}{\left(2p+1\right){\left(p+1\right)}^{2}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}p\ne -1,p\ne -1/2$

Similarly,

${\sigma }_{{r}^{-p}}^{2}=\frac{{p}^{2}}{\left(-2p+1\right){\left(-p+1\right)}^{2}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}p\ne 1,p\ne 1/2$

$\begin{array}{l}\mathbb{E}\left[{r}_{i}^{-p}{r}_{i}^{+p}\right]-\mathbb{E}\left[{r}_{i}^{-p}\right]\mathbb{E}\left[{r}_{i}^{+p}\right]=\mathbb{E}\left[1\right]-\mathbb{E}\left[{r}_{i}^{-p}\right]\mathbb{E}\left[{r}_{i}^{+p}\right]\\ =1-\frac{1}{\left(-p+1\right)}\frac{1}{\left(p+1\right)}=1-\frac{1}{\left(1-p\right)\left(1+p\right)}\\ =\frac{\left(1-p\right)\left(1+p\right)-1}{\left(1-p\right)\left(1+p\right)}=\frac{1-{p}^{2}-1}{\left(1-p\right)\left(p+1\right)}=\frac{-{p}^{2}}{\left(1-p\right)\left(1+p\right)}\end{array}$

$\begin{array}{c}{\rho }_{{r}^{+p}{r}^{-p}}=\frac{\mathbb{E}\left[{r}_{i}^{-p}{r}_{i}^{+p}\right]-\mathbb{E}\left[{r}_{i}^{-p}\right]\mathbb{E}\left[{r}_{i}^{+p}\right]}{{\sigma }_{{r}^{p}}{\sigma }_{{r}^{-p}}}\\ =\frac{\frac{-{p}^{2}}{\left(1-p\right)\left(1+p\right)}}{\frac{|p|}{|p+1|\sqrt{2p+1}}\cdot \frac{|p|}{|-p+1|\sqrt{-2p+1}}}\\ =\frac{-{p}^{2}}{\left(1-p\right)\left(1+p\right)}\cdot \frac{|p+1|\sqrt{2p+1}}{|p|}\cdot \frac{|-p+1|\sqrt{-2p+1}}{|p|}\\ =-\sqrt{\left(2p+1\right)\left(-2p+1\right)},\text{\hspace{0.17em}}\text{\hspace{0.17em}}|p|<1/2\end{array}$

Therefore,

${\mathrm{lim}}_{p\to {0}^{-}}{\rho }_{{r}^{+p}{r}^{-p}}={\mathrm{lim}}_{p\to {0}^{-}}-\sqrt{\left(2p+1\right)\left(-2p+1\right)}=-1$

Likewise, ${\mathrm{lim}}_{p\to {0}^{+}}{\rho }_{{r}^{+p}{r}^{-p}}=-1$
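This limit can be checked numerically. The following sketch estimates the sample correlation between ${r}^{p}$ and ${r}^{-p}$ for a few values of $p$ and compares it with the exact correlation computed directly from the moments $\mathbb{E}\left[{r}^{q}\right]=1/\left(q+1\right)$, $q>-1$; the sample size, seed, and tested values of $p$ are arbitrary illustrative choices.

```python
import math
import random

def rho_exact(p):
    """Correlation of r**p and r**(-p) built directly from the exact
    moments of U(0,1): E[r**q] = 1/(q+1) for q > -1.
    Valid for -1/2 < p < 1/2, p != 0."""
    cov = 1.0 - 1.0 / ((1 - p) * (1 + p))            # E[r**p * r**(-p)] = E[1] = 1
    var_plus = 1.0 / (2 * p + 1) - 1.0 / (p + 1) ** 2
    var_minus = 1.0 / (1 - 2 * p) - 1.0 / (1 - p) ** 2
    return cov / math.sqrt(var_plus * var_minus)

random.seed(2023)
n = 200_000
r = [random.random() for _ in range(n)]

for p in (0.2, 0.1, 0.05):
    u = [ri ** p for ri in r]
    v = [ri ** (-p) for ri in r]
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n
    su = math.sqrt(sum((a - mu) ** 2 for a in u) / n)
    sv = math.sqrt(sum((b - mv) ** 2 for b in v) / n)
    print(p, cov / (su * sv), rho_exact(p))
```

As $p$ shrinks toward zero, both the estimated and the exact correlation approach $-1$, in agreement with the limit.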

Appendix C: Correlation between ${X}_{i}$ and ${{X}^{\prime }}_{i}$

Given

$\left\{\begin{array}{l}{X}_{i}=-\frac{1}{\lambda }\mathrm{ln}\left(1-{r}_{i}\right)\\ {{X}^{\prime }}_{i}=\frac{{r}_{i}^{p}-\stackrel{¯}{{r}^{p}}}{\left({s}_{{r}^{p}}\right)\lambda }+\frac{1}{\lambda }\end{array}$ , ${r}_{i}~U\left(0,1\right)$

The expected value of the original variable ${X}_{i}$ is

$\mathbb{E}\left[{X}_{i}\right]={\int }_{0}^{1}-\frac{1}{\lambda }\mathrm{ln}\left(1-{r}_{i}\right)\text{d}{r}_{i}$

Since ${\int }_{0}^{1}\mathrm{ln}\left(1-{r}_{i}\right)\text{d}{r}_{i}={\left[-\left(1-{r}_{i}\right)\mathrm{ln}\left(1-{r}_{i}\right)+\left(1-{r}_{i}\right)\right]}_{0}^{1}=-1$ , (C.1)

$\mathbb{E}\left[{X}_{i}\right]=\frac{1}{\lambda }$ . (C.2)

Also,

$\mathbb{E}\left[{X}_{i}^{2}\right]={\int }_{0}^{1}{\left(-\frac{1}{\lambda }\mathrm{ln}\left(1-{r}_{i}\right)\right)}^{2}\text{d}{r}_{i}$

and since ${\int }_{0}^{1}{\mathrm{ln}}^{2}\left(1-{r}_{i}\right)\text{d}{r}_{i}=-{\left[\left(1-{r}_{i}\right){\mathrm{ln}}^{2}\left(1-{r}_{i}\right)-2\left(1-{r}_{i}\right)\mathrm{ln}\left(1-{r}_{i}\right)+2\left(1-{r}_{i}\right)\right]}_{0}^{1}=2$ ,

$\mathbb{E}\left[{X}_{i}^{2}\right]=\frac{2}{{\lambda }^{2}}$ .

Therefore, ${\sigma }_{X}^{2}=\mathbb{E}\left[{X}_{i}^{2}\right]-\mathbb{E}{\left[{X}_{i}\right]}^{2}=\frac{2}{{\lambda }^{2}}-\frac{1}{{\lambda }^{2}}$ , so that

${\sigma }_{X}^{2}=\frac{1}{{\lambda }^{2}}$ . (C.3)

The expected value of the antithetic variable ${{X}^{\prime }}_{i}$ is

$\mathbb{E}\left[{{X}^{\prime }}_{i}\right]=\mathbb{E}\left[\frac{{r}_{i}^{p}-\stackrel{¯}{{r}^{p}}}{\left({s}_{{r}^{p}}\right)\lambda }+\frac{1}{\lambda }\right]$

and since $\mathbb{E}\left[{r}_{i}^{p}-\stackrel{¯}{{r}^{p}}\right]=0$ ,

$\mathbb{E}\left[{{X}^{\prime }}_{i}\right]=\frac{1}{\lambda }$ (C.4)

with variance

${\sigma }_{{X}^{\prime }}^{2}=Var\left(\frac{{r}_{i}^{p}-\stackrel{¯}{{r}^{p}}}{\left({s}_{{r}^{p}}\right)\lambda }+\frac{1}{\lambda }\right)=\frac{1}{\left({s}_{{r}^{p}}^{2}\right){\lambda }^{2}}Var\left({r}_{i}^{p}-\stackrel{¯}{{r}^{p}}\right)$ .

For an $n$-step computer simulation, $Var\left(\stackrel{¯}{{r}^{p}}\right)=Var\left({r}_{i}^{p}\right)/n\to 0$ as $n\to \infty$ .

Therefore,

${\sigma }_{{X}^{\prime }}^{2}=\frac{1}{\left({s}_{{r}^{p}}^{2}\right){\lambda }^{2}}Var\left({r}^{p}\right)=\frac{1}{\left({s}_{{r}^{p}}^{2}\right){\lambda }^{2}}{s}_{{r}^{p}}^{2}$

${\sigma }_{{X}^{\prime }}^{2}=\frac{1}{{\lambda }^{2}}$ . (C.5)

Remark. $\mathbb{E}\left[{X}_{i}\right]={\sigma }_{X}=\mathbb{E}\left[{{X}^{\prime }}_{i}\right]={\sigma }_{{X}^{\prime }}=\frac{1}{\lambda }$ is consistent with ${X}_{i}$ and ${{X}^{\prime }}_{i}$ being exponentially distributed.
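The moment results (C.2)-(C.5) can be illustrated by direct simulation. The sketch below builds ${X}_{i}$ and ${{X}^{\prime }}_{i}$ as defined above for an assumed rate $\lambda =2$ and a small negative power $p=-0.01$ (both arbitrary illustrative choices); the sample means and variances should all be near $1/\lambda =0.5$ and $1/{\lambda }^{2}=0.25$.

```python
import math
import random

lam = 2.0      # assumed rate parameter (illustrative choice)
p = -0.01      # small negative power, approaching the p -> 0- limit
n = 100_000

random.seed(7)
r = [random.random() for _ in range(n)]

# Original exponential variates by the inverse transformation technique
X = [-math.log(1 - ri) / lam for ri in r]

# Antithetic variates: power transform, standardized and recentred at 1/lam
rp = [ri ** p for ri in r]
m_rp = sum(rp) / n
s_rp = math.sqrt(sum((v - m_rp) ** 2 for v in rp) / n)
Xa = [(v - m_rp) / (s_rp * lam) + 1 / lam for v in rp]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(mean(X), var(X))    # near 1/lam and 1/lam**2, up to sampling error
print(mean(Xa), var(Xa))  # equal to 1/lam and 1/lam**2 by construction
```

Note that the mean and variance of the antithetic variates match $1/\lambda$ and $1/{\lambda }^{2}$ exactly because the power transform is standardized by its own sample statistics, whereas the original variates match only up to Monte Carlo sampling error.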

To obtain the expected value $\mathbb{E}\left[{X}_{i}{{X}^{\prime }}_{i}\right]$ , consider,

${X}_{i}{{X}^{\prime }}_{i}=-\frac{1}{\lambda }\mathrm{ln}\left(1-{r}_{i}\right)\left(\frac{{r}_{i}^{p}-\stackrel{¯}{{r}^{p}}}{\left({s}_{{r}^{p}}\right)\lambda }+\frac{1}{\lambda }\right)$

${X}_{i}{{X}^{\prime }}_{i}=-\frac{1}{{\lambda }^{2}}\mathrm{ln}\left(1-{r}_{i}\right)-\frac{1}{\lambda }\mathrm{ln}\left(1-{r}_{i}\right)\cdot \frac{{r}_{i}^{p}-\stackrel{¯}{{r}^{p}}}{\left({s}_{{r}^{p}}\right)\lambda }$ (C.6)

From (A.1) and (A.2)

$\mathbb{E}\left[{X}_{i}^{p}\right]=\frac{1}{p+1}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\sigma }_{{X}^{p}}=\frac{|p|}{|p+1|}\cdot \frac{1}{\sqrt{2p+1}}$

Therefore, replacing ${X}_{i}$ with ${r}_{i}$ , $\mathbb{E}\left({r}_{i}^{p}\right)$ with $\stackrel{¯}{{r}^{p}}$ and ${\sigma }_{{r}^{p}}$ with ${s}_{{r}^{p}}$ ,

$\frac{{r}_{i}^{p}-\stackrel{¯}{{r}^{p}}}{\left({s}_{{r}^{p}}\right)\lambda }=\frac{{r}_{i}^{p}-\frac{1}{p+1}}{\lambda \frac{|p|}{|p+1|\sqrt{2p+1}}}=\frac{\left(p+1\right){r}^{p}-1}{|p|}\cdot \frac{|p+1|\sqrt{2p+1}}{\lambda \left(p+1\right)}=\varphi \left(p\right)\cdot \psi \left(p\right)$ (C.7)

where

$\varphi \left(p\right)=\frac{\left(p+1\right){r}^{p}-1}{|p|}$ and $\psi \left(p\right)=\frac{|p+1|\sqrt{2p+1}}{\lambda \left(p+1\right)}$ , (C.8)

from which we obtain

${\mathrm{lim}}_{p\to {0}^{-}}\psi \left(p\right)=\frac{1}{\lambda }$ . (C.9)

The Taylor expansion at $p=0$ of ${r}_{i}^{p}$ is

${r}_{i}^{p}=1+p\cdot \mathrm{ln}\left({r}_{i}\right)+O\left({p}^{2}\right)$ .

Substituting for ${r}_{i}^{p}$ in (C.8)

$\begin{array}{c}\varphi \left(p\right)=\frac{\left(p+1\right){r}_{i}^{p}-1}{|p|}=\frac{p{r}_{i}^{p}+{r}_{i}^{p}-1}{|p|}\\ =\frac{p\left(1+p\cdot \mathrm{ln}\left({r}_{i}\right)+O\left({p}^{2}\right)\right)+1+p\cdot \mathrm{ln}\left({r}_{i}\right)+O\left({p}^{2}\right)-1}{|p|}\\ =\frac{p+{p}^{2}\cdot \mathrm{ln}\left({r}_{i}\right)+pO\left({p}^{2}\right)+p\cdot \mathrm{ln}\left({r}_{i}\right)+O\left({p}^{2}\right)}{|p|}\\ =\frac{p}{|p|}\left(1+\mathrm{ln}\left({r}_{i}\right)\right)+\frac{{p}^{2}\cdot \mathrm{ln}\left({r}_{i}\right)+O\left({p}^{2}\right)}{|p|}\end{array}$

from which we obtain

${\mathrm{lim}}_{p\to {0}^{-}}\varphi \left(p\right)=-1-\mathrm{ln}\left({r}_{i}\right)$ . (C.10)

Substituting (C.9) and (C.10) in (C.7)

${\mathrm{lim}}_{p\to {0}^{-}}\frac{{r}_{i}^{p}-\stackrel{¯}{{r}^{p}}}{\left({s}_{{r}^{p}}\right)\lambda }=-\frac{1}{\lambda }\left(1+\mathrm{ln}\left({r}_{i}\right)\right)$ . (C.11)

Substituting (C.11) in (C.6)

${\mathrm{lim}}_{p\to {0}^{-}}{X}_{i}{{X}^{\prime }}_{i}=-\frac{1}{{\lambda }^{2}}\mathrm{ln}\left(1-{r}_{i}\right)+\frac{1}{\lambda }\mathrm{ln}\left(1-{r}_{i}\right)\frac{1}{\lambda }\left(1+\mathrm{ln}\left({r}_{i}\right)\right)=\frac{1}{{\lambda }^{2}}\mathrm{ln}\left({r}_{i}\right)\cdot \mathrm{ln}\left(1-{r}_{i}\right)$ .

By the Lebesgue dominated convergence theorem,

${\mathrm{lim}}_{p\to {0}^{-}}\mathbb{E}\left[{X}_{i}{{X}^{\prime }}_{i}\right]=\mathbb{E}\left[{\mathrm{lim}}_{p\to {0}^{-}}{X}_{i}{{X}^{\prime }}_{i}\right]$ .

Therefore,

${\mathrm{lim}}_{p\to {0}^{-}}\mathbb{E}\left[{X}_{i}{{X}^{\prime }}_{i}\right]=\frac{1}{{\lambda }^{2}}\mathbb{E}\left[\mathrm{ln}\left({r}_{i}\right)\cdot \mathrm{ln}\left(1-{r}_{i}\right)\right]$ ,

and since ${\int }_{0}^{1}\mathrm{ln}\left({r}_{i}\right)\mathrm{ln}\left(1-{r}_{i}\right)\text{d}{r}_{i}=2-{\pi }^{2}/6$ (see Appendix D),

${\mathrm{lim}}_{p\to {0}^{-}}\mathbb{E}\left[{X}_{i}{{X}^{\prime }}_{i}\right]=\frac{1}{{\lambda }^{2}}\left(2-{\pi }^{2}/6\right)$ . (C.12)

Finally, the correlation between ${X}_{i}$ and ${{X}^{\prime }}_{i}$ is

${\rho }_{X{X}^{\prime }}=Cov\left({X}_{i},{{X}^{\prime }}_{i}\right)/{\sigma }_{X}{\sigma }_{{X}^{\prime }}=\frac{\mathbb{E}\left[{X}_{i}{{X}^{\prime }}_{i}\right]-\mathbb{E}\left[{X}_{i}\right]\mathbb{E}\left[{{X}^{\prime }}_{i}\right]}{{\sigma }_{X}{\sigma }_{{X}^{\prime }}}$

Substituting from (C.2), (C.3), (C.4), (C.5) and (C.12),

${\rho }_{X{X}^{\prime }}=\frac{\frac{1}{{\lambda }^{2}}\left(2-{\pi }^{2}/6\right)-\frac{1}{\lambda }\cdot \frac{1}{\lambda }}{\frac{1}{\lambda }\cdot \frac{1}{\lambda }}$

${\rho }_{X{X}^{\prime }}=1-{\pi }^{2}/6$ .
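As a numerical sanity check on this limit, the sketch below estimates the correlation between ${X}_{i}$ and ${{X}^{\prime }}_{i}$ by simulation with a small negative power and compares it with $1-{\pi }^{2}/6\approx -0.645$; the rate $\lambda =1.5$, the power $p=-0.001$, the seed, and the sample size are arbitrary illustrative choices.

```python
import math
import random

lam = 1.5          # assumed rate parameter (illustrative choice)
p = -0.001         # small negative power, approaching the p -> 0- limit
n = 200_000

random.seed(42)
r = [random.random() for _ in range(n)]

# Original and antithetic variates as defined in Appendix C
X = [-math.log(1 - ri) / lam for ri in r]
rp = [ri ** p for ri in r]
m = sum(rp) / n
s = math.sqrt(sum((v - m) ** 2 for v in rp) / n)
Xa = [(v - m) / (s * lam) + 1 / lam for v in rp]

# Sample correlation between X and its antithetic counterpart
mx, mxa = sum(X) / n, sum(Xa) / n
cov = sum((a - mx) * (b - mxa) for a, b in zip(X, Xa)) / n
sx = math.sqrt(sum((a - mx) ** 2 for a in X) / n)
sxa = math.sqrt(sum((b - mxa) ** 2 for b in Xa) / n)
rho_hat = cov / (sx * sxa)

rho_limit = 1 - math.pi ** 2 / 6   # about -0.6449
print(rho_hat, rho_limit)
```

The estimate agrees with the limiting value to within Monte Carlo error, confirming that the power transformed variable is strongly negatively correlated with the response variable.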

Appendix D

$\begin{array}{l}{\int }_{0}^{1}\mathrm{ln}\left({r}_{i}\right)\mathrm{ln}\left(1-{r}_{i}\right)\text{d}{r}_{i}\\ =-{\int }_{0}^{1}\mathrm{ln}\left({r}_{i}\right){\sum }_{n=1}^{\infty }\frac{{r}_{i}^{n}}{n}\text{d}{r}_{i}\\ =-{\sum }_{n=1}^{\infty }\frac{1}{n}{\int }_{0}^{1}\mathrm{ln}\left({r}_{i}\right){r}_{i}^{n}\text{d}{r}_{i}\\ =-{\sum }_{n=1}^{\infty }\frac{1}{n}\left\{{\left[\mathrm{ln}\left({r}_{i}\right)\frac{{r}_{i}^{n+1}}{n+1}\right]}_{0}^{1}-{\int }_{0}^{1}\frac{1}{{r}_{i}}\cdot \frac{{r}_{i}^{n+1}}{n+1}\text{d}{r}_{i}\right\}\\ ={\sum }_{n=1}^{\infty }\frac{1}{n}\cdot \frac{1}{n+1}{\left[\frac{{r}_{i}^{n+1}}{n+1}\right]}_{0}^{1}\end{array}$

$\begin{array}{l}={\sum }_{n=1}^{\infty }\frac{1}{n{\left(n+1\right)}^{2}}\\ ={\sum }_{n=1}^{\infty }\left[\frac{1}{n}-\frac{1}{n+1}-\frac{1}{{\left(n+1\right)}^{2}}\right]\\ =1-\left[{\sum }_{n=1}^{\infty }\frac{1}{{n}^{2}}-1\right]\\ =2-\zeta \left(2\right)\\ =2-\frac{{\pi }^{2}}{6}\end{array}$

where ζ is the Riemann zeta function. See Chapman [29].
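The value $2-{\pi }^{2}/6\approx 0.3551$ can be confirmed numerically, both by partial sums of the series derived above and by direct quadrature of the integral; the truncation points below are arbitrary illustrative choices.

```python
import math

target = 2 - math.pi ** 2 / 6   # about 0.355066

# Partial sum of sum_{n>=1} 1/(n*(n+1)^2); the tail beyond N is O(1/N^2)
series = sum(1 / (n * (n + 1) ** 2) for n in range(1, 100_000))

# Midpoint-rule quadrature; the integrand ln(r)ln(1-r) extends
# continuously by 0 at both endpoints, so no special handling is needed
N = 200_000
quad = sum(math.log((k + 0.5) / N) * math.log(1 - (k + 0.5) / N)
           for k in range(N)) / N

print(target, series, quad)
```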

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

[1] Proykova, A. (2000) How to Improve a Random Number Generator. Computer Physics Communications, 124, 125-131. https://doi.org/10.1016/S0010-4655(99)00434-8

[2] Ferrenberg, A.M., Landau, D.P. and Wong, Y.J. (1992) Monte Carlo Simulations: Hidden Errors from “Good” Random Number Generators. Physical Review Letters, 69, 3382-3384. https://doi.org/10.1103/PhysRevLett.69.3382

[3] Lehmer, D.H. (1951) Mathematical Methods in Large-Scale Computing Units. The Annals of the Computation Laboratory of Harvard University, 26, 141-146.

[4] Marsaglia, G. (1968) Random Numbers Fall Mainly in the Planes. Proceedings of the National Academy of Sciences of the United States of America, 61, 25-28. https://doi.org/10.1073/pnas.61.1.25

[5] Park, S.K. and Miller, K.W. (1988) Random Number Generators: Good Ones Are Hard to Find. Communications of the ACM, 31, 1192-1201. https://doi.org/10.1145/63039.63042

[6] Marsaglia, G. (1961) Expressing a Random Variable in Terms of Uniform Random Variables. Annals of Mathematical Statistics, 32, 894-898. https://doi.org/10.1214/aoms/1177704983

[7] Marsaglia, G. (1964) Generating a Variable from the Tail of the Normal Distribution. Technometrics, 6, 101-102. https://doi.org/10.1080/00401706.1964.10490150

[8] Marsaglia, G. and Tsang, W.W. (1984) A Fast, Easily Implemented Method for Sampling from Decreasing or Symmetric Unimodal Density Functions. SIAM Journal on Scientific and Statistical Computing, 5, 349-359. https://doi.org/10.1137/0905026

[9] Marsaglia, G. and Tsay, L.H. (1985) Matrices and the Structure of Random Number Sequences. Linear Algebra and Its Applications, 67, 147-156. https://doi.org/10.1016/0024-3795(85)90192-2

[10] Marsaglia, G., MacLaren, M.D. and Bray, T.A. (1964) A Fast Procedure for Generating Normal Random Variables. Communications of the ACM, 7, 3-10. https://doi.org/10.1145/363872.363883

[11] Marsaglia, G. and MacLaren, M.D. (1964) A Fast Procedure for Generating Exponential Random Variables. Communications of the ACM, 7, 298-300. https://doi.org/10.1145/364099.364330

[12] Marsaglia, G., Ananthanarayan, K. and Paul, N.J. (1976) Improvements on Fast Methods for Generating Normal Random Variables. Information Processing Letters, 5, 27-30. https://doi.org/10.1016/0020-0190(76)90074-0

[13] Marsaglia, G. and Tsang, W.W. (1998) The Monte Python Method for Generating Random Variables. ACM Transactions on Mathematical Software, 24, 341-350. https://doi.org/10.1145/292395.292453

[14] Leva, J.L. (1992) A Normal Random Number Generator. ACM Transactions on Mathematical Software, 18, 454-455. https://doi.org/10.1145/138351.138367

[15] Kroese, D.P., Taimre, T. and Botev, Z.I. (2011) Handbook of Monte Carlo Methods. John Wiley & Sons, New York. https://doi.org/10.1002/9781118014967

[16] Hammersley, J.M. and Morton, K.W. (1956) A New Monte Carlo Technique: Antithetic Variates. Mathematical Proceedings of the Cambridge Philosophical Society, 52, 449-475. https://doi.org/10.1017/S0305004100031455

[17] Aljeddani, S. (2022) Bayesian Variable Selection for Mixture Process Variable Design Experiment. Open Journal of Modelling and Simulation, 10, 391-416. https://doi.org/10.4236/ojmsi.2022.104022

[18] Cheng, R.C.H. (1982) The Use of Antithetic Variates in Computer Simulations. Journal of the Operational Research Society, 33, 229-237. https://doi.org/10.1057/jors.1982.48

[19] Kleijnen, J.P.C. (1975) Antithetic Variates, Common Random Numbers and Optimal Computer Time Allocation in Simulation. Management Science, 21, 1176-1185. https://doi.org/10.1287/mnsc.21.10.1176

[20] Ridley, A.D. (1999) Optimal Antithetic Weights for Lognormal Time Series Forecasting. Computers and Operations Research, 26, 189-209. https://doi.org/10.1016/S0305-0548(98)00058-6

[21] Ridley, A.D. (1995) Combining Global Antithetic Forecasts. International Transactions in Operational Research, 2, 387-398. https://doi.org/10.1111/j.1475-3995.1995.tb00030.x

[22] Ridley, A.D. (1997) Optimal Weights for Combining Antithetic Forecasts. Computers & Industrial Engineering, 32, 371-381. https://doi.org/10.1016/S0360-8352(96)00296-3

[23] Ridley, A.D. (2016) Advances in Antithetic Time Series: Separating Fact from Artifact. Operations Research and Decisions, 26, 57-68.

[24] Ridley, A.D., Ngnepieba, P. and Duke, D. (2013) Parameter Optimization for Combining Lognormal Antithetic Time Series. European Journal of Mathematical Sciences, 2, 235-245.

[25] Ridley, A.D. and Ngnepieba, P. (2014) Antithetic Time Series Analysis and the Companyx Data. Journal of the Royal Statistical Society Series A: Statistics in Society, 177, 83-94. https://doi.org/10.1111/j.1467-985X.2012.12001.x

[26] Ridley, A.D. and Ngnepieba, P. (2015) General Theory of Antithetic Time Series. Journal of Applied Mathematics and Physics, 3, 1726-1741. https://doi.org/10.4236/jamp.2015.312197

[27] MATLAB (2008) Application Program Interface Reference. Version 8, The MathWorks, Inc., Natick, MA.

[28] Harrell, C., Ghosh, B. and Bowden, R. (2004) Simulation Using ProModel. McGraw Hill, New York.

[29] Chapman, R. (1999) Evaluating ζ(2). https://empslocal.ex.ac.uk/people/staff/rjchapma/etc/zeta2.pdf