Scientific Research, An Academic Publisher

Inferences on the Difference of Two Proportions: A Bayesian Approach


ABSTRACT

Let $\pi ={\pi}_{1}-{\pi}_{2}$ be the difference of two independent proportions related to two populations. We study the test ${H}_{0}:\pi \ge 0$ against different alternatives, in the Bayesian context. The various Bayesian approaches use standard beta distributions, and are simple to derive and compute. But the more general test ${H}_{0}:\pi \ge \eta $ , with $\eta >0$ , requires more advanced mathematical tools to carry out the computations. These tools, which include the density of the difference of two general beta variables, are presented in the article, with numerical examples as illustrations to facilitate comprehension of the results.


1. Introduction

For two independent proportions ${\pi}_{1}$ and ${\pi}_{2}$ , the difference ${\pi}_{1}-{\pi}_{2}$ is frequently encountered in the frequentist statistical literature, where tests and confidence intervals for it are well-accepted notions in theory and in practice, although the case most frequently studied is the equality, or inequality, of these proportions. For the Bayesian approach, Pham-Gia and Turkkan ( [1] and [2] ) have considered the cases of independent, and dependent, proportions for inference, and also the context of sample size determination [3] .

But testing ${\pi}_{1}={\pi}_{2}$ is only a special case of testing ${H}_{0}:{\pi}_{1}-{\pi}_{2}\le \eta $ , with $\eta $ being a positive constant value, a case which is much less frequently dealt with. In Section 2 we recall the unconditional approaches to testing ${H}_{0}$ , based on the maximum likelihood estimators of the two proportions and normal approximations. A new exact approach not using the normal approximation has been developed by our group and will be presented elsewhere. Fisher’s exact test is also recalled here, for comparison purposes. The Bayesian approach to testing the equality of two proportions and the computation of credible intervals are given in Section 3. The Bayesian approach using the general beta distributions is given in Section 4. All related problems are completely solved, thanks to some closed form formulas that we have established in earlier papers.

2. Testing the Equality of Two Proportions

2.1. Test Using Normal Approximation

As stated before, taking $\eta =0$ gives a test of equality between two proportions. Several well-known methods are presented in the literature. For example, the conditional test, usually called Fisher’s exact test, is based on the hypergeometric distribution and is used when the sample size is small. Pearson’s Chi-square test with Yates’ correction is usually used for intermediate sample sizes, while Pearson’s Chi-square test is used for large samples. Their appropriateness is discussed in D’Agostino et al. [4] . Normal approximation methods are based on formulas using estimated values of the means and the variances of the two populations. For example, we have

${T}_{1}=\frac{{X}_{1}/{n}_{1}-{X}_{2}/{n}_{2}}{{\left[\left({X}_{1}/{n}_{1}\right)\left(1-{X}_{1}/{n}_{1}\right)/{n}_{1}+\left({X}_{2}/{n}_{2}\right)\left(1-{X}_{2}/{n}_{2}\right)/{n}_{2}\right]}^{1/2}}$ , and the pooled version ${T}_{2}=\frac{{X}_{1}/{n}_{1}-{X}_{2}/{n}_{2}}{{\left[\stackrel{^}{p}\left(1-\stackrel{^}{p}\right)\left(1/{n}_{1}+1/{n}_{2}\right)\right]}^{1/2}}$ , where $\stackrel{^}{p}=\left({X}_{1}+{X}_{2}\right)/\left({n}_{1}+{n}_{2}\right)$ is the pooled proportion, both being

approximately $N\left(0,1\right)$ under ${H}_{0}:{\pi}_{1}\le {\pi}_{2}$ . Cressie [5] gives conditions under which ${T}_{2}$ is better than ${T}_{1}$ , in terms of power. Previously, Eberhardt and Fligner [6] studied the same problem for a bilateral test.

Numerical Example 1

To compare its proportions of customers in two separate geographic areas of the country, a company picks a random sample of 25 shoppers in area A, of whom 17 are found to be its customers. A similar random sample of 20 shoppers in area B gives 8 customers. We wish to test ${H}_{0}:{\pi}_{1}\le {\pi}_{2}$ against ${H}_{1}:{\pi}_{1}>{\pi}_{2}$ .

We have here the observed value of ${T}_{1}=1.9459$ and of ${T}_{2}=1.8783$ which lead, in both cases, to the rejection of ${H}_{0}$ at significance level 5% (the critical value is 1.64) for ${H}_{1}:{\pi}_{1}>{\pi}_{2}$ .
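These two statistics are simple enough to compute directly. The following sketch (our own illustration in Python; the function name is ours, not the authors') reproduces the observed values:

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Unpooled (T1) and pooled (T2) z-statistics for comparing two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    # T1: each sample contributes its own variance estimate
    se1 = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    # T2: variance built from the pooled proportion (X1 + X2) / (n1 + n2)
    p = (x1 + x2) / (n1 + n2)
    se2 = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se1, (p1 - p2) / se2

t1, t2 = two_prop_z(17, 25, 8, 20)
print(round(t1, 4), round(t2, 4))  # 1.9459 1.8783
```

Both values exceed the 5% critical value 1.64, matching the rejection of ${H}_{0}$ above.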

2.2. Fisher’s Exact Test

Under ${H}_{0}$ the number of successes coming from population 1 has the $\text{Hyp}\left({n}_{1}+{n}_{2},t={x}_{1}+{x}_{2},{n}_{1},x\right)$ distribution. The argument is that, in the combined sample of size ${n}_{1}+{n}_{2}$ , with ${x}_{1}$ successes from population 1 out of the total number of successes $t={x}_{1}+{x}_{2}$ , the number $x$ of successes coming from population 1 is a hypergeometric variable.

To compute the significance of the observation we have to compute several tables corresponding to more extreme results than the observed table. It is known that the conditional test is less powerful than the unconditional one.

Numerical Example 2

We use the same data as in Numerical Example 1 to test ${H}_{0}:{\pi}_{A}={\pi}_{B}$ vs. ${H}_{1}:{\pi}_{A}>{\pi}_{B}$ , i.e. that the proportion of customers in area A is significantly higher than the one in area B. The data are given in Table 1.

The p-value takes into account the observed data $\left({x}_{B}=8\right)$ , and also the more extreme cases, ${x}_{B}=0,1,2,\cdots ,7$ . The p-value of the test is hence

$p\text{-value}={\displaystyle \underset{{x}_{B}=0}{\overset{8}{\sum}}\frac{\left(\begin{array}{c}25\\ 25-{x}_{B}\end{array}\right)\left(\begin{array}{c}20\\ {x}_{B}\end{array}\right)}{\left(\begin{array}{c}45\\ 25\end{array}\right)}}=0.0542$ .

Although technically not significant at the 5% level, this result shows that the proportion of customers in area B can practically be considered as lower than the one in area A, in agreement with the frequentist test.
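The p-value above can be reproduced with a few lines of standard library Python (our own sketch; `fisher_one_sided` is a hypothetical helper, not code from the article):

```python
from math import comb

def fisher_one_sided(xA, nA, xB, nB):
    """One-sided Fisher p-value: P(X_B <= observed xB), margins held fixed."""
    t = xA + xB            # total number of successes (here 25)
    N = nA + nB            # combined sample size (here 45)
    return sum(comb(nA, t - k) * comb(nB, k) for k in range(xB + 1)) / comb(N, t)

p = fisher_one_sided(17, 25, 8, 20)
print(round(p, 4))  # the article reports 0.0542
```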

REMARK: The problem is often associated with a 2 × 2 table, where there are three possibilities: constant column sums and row sums, one set constant and the other variable, and both variable. Other measures can then be introduced (e.g. Santner and Snell [7] ). A Bayesian approach has been carried out by several authors, e.g. Howard [8] and also Pham-Gia and Turkkan [2] , who computed the credible intervals for several of these measures.

3. The Bayesian Approach

In the estimation of the difference of two proportions the Bayesian approach certainly plays an important role. Agresti and Coull [9] provide some interesting remarks on various approaches.

Again, let $\pi ={\pi}_{1}-{\pi}_{2}$ . The Bayesian approach encounters serious computational difficulties if we do not have a closed form expression for the density of the difference of two independently beta distributed random variables. Such an expression was obtained by the first author some time ago and is recalled below.

3.1. Bayesian Test on the Equality of Two Proportions

Let us recall first the following theorem:

Theorem 1: Let ${\pi}_{i}~\text{beta}\left({\alpha}_{i},{\beta}_{i}\right)$ , $i=1,2$ , be two independent beta distributed random variables with parameters $\left({\alpha}_{1},{\beta}_{1}\right)$ and $\left({\alpha}_{2},{\beta}_{2}\right)$ , respectively. Then the difference $\pi ={\pi}_{1}-{\pi}_{2}$ has density defined on $\left(-1,1\right)$ as follows:

$\begin{array}{l}{p}_{\pi}\left(x\right)=\{\begin{array}{l}B\left({\alpha}_{2},{\beta}_{1}\right){x}^{{\beta}_{1}+{\beta}_{2}-1}{\left(1-x\right)}^{{\alpha}_{2}+{\beta}_{1}-1}\\ \text{}\cdot {F}_{1}\left({\beta}_{1},{\alpha}_{1}+{\alpha}_{2}+{\beta}_{1}+{\beta}_{2}-2,1-{\alpha}_{1};{\beta}_{1}+{\alpha}_{2};1-x,1-{x}^{2}\right)/A,\text{\hspace{0.17em}}0<x<1\\ B\left({\alpha}_{1}+{\alpha}_{2}-1,{\beta}_{1}+{\beta}_{2}-1\right)/A,\text{\hspace{0.17em}}x=0,\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}{\alpha}_{1}+{\alpha}_{2}>1,\text{\hspace{0.17em}}{\beta}_{1}+{\beta}_{2}>1\\ B\left({\alpha}_{1},{\beta}_{2}\right){\left(-x\right)}^{{\beta}_{1}+{\beta}_{2}-1}{\left(1+x\right)}^{{\alpha}_{1}+{\beta}_{2}-1}\\ \text{}\cdot {F}_{1}\left({\beta}_{2},{\alpha}_{1}+{\alpha}_{2}+{\beta}_{1}+{\beta}_{2}-2,1-{\alpha}_{2};{\alpha}_{1}+{\beta}_{2};1+x,1-{x}^{2}\right)/A,\text{\hspace{0.17em}}-1<x<0\end{array}\\ A=B\left({\alpha}_{1},{\beta}_{1}\right)B\left({\alpha}_{2},{\beta}_{2}\right)\end{array}$ (1)

${F}_{1}(.)$ is Appell’s first hypergeometric function, which is defined as

${F}_{1}\left(a,{b}_{1},{b}_{2};c;{x}_{1},{x}_{2}\right)={\displaystyle \underset{i=0}{\overset{\infty}{\sum}}{\displaystyle \underset{j=0}{\overset{\infty}{\sum}}\frac{{a}^{\left[i+j\right]}}{{c}^{\left[i+j\right]}}}}\frac{{b}_{1}^{\left[i\right]}{b}_{2}^{\left[j\right]}{x}_{1}^{i}{x}_{2}^{j}}{i!j!}$ (2)

where ${a}^{\left[b\right]}=a\left(a+1\right)\cdots \left(a+b-1\right)$ . This infinite series is convergent for $\left|{x}_{1}\right|<1$ and $\left|{x}_{2}\right|<1$ , where, as shown by Euler, it can also be expressed as a convergent integral:

$\frac{\Gamma \left(c\right)}{\Gamma \left(a\right)\Gamma \left(c-a\right)}{\displaystyle \underset{0}{\overset{1}{\int}}{u}^{a-1}}{\left(1-u\right)}^{c-a-1}{\left(1-u{x}_{1}\right)}^{-{b}_{1}}{\left(1-u{x}_{2}\right)}^{-{b}_{2}}\text{d}u$ (3)

which converges for $c-a>0$ , $a>0$ . In fact, Pham-Gia and Turkkan [1] established the expression of the density of the difference using (3) directly, and not the series. Hence, the infinite series (2) can be extended outside the two circles of convergence, by analytic continuation, where it is also denoted by ${F}_{1}(.)$ .
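As a numerical illustration (ours, not the authors'), the truncated double series (2) and the Euler integral (3) can be compared at a point inside the region of convergence:

```python
import math

def appell_f1_series(a, b1, b2, c, x1, x2, terms=40):
    """Double series (2), truncated; valid for |x1| < 1 and |x2| < 1."""
    def rf(q, n):  # rising factorial q^[n] = q (q+1) ... (q+n-1)
        r = 1.0
        for i in range(n):
            r *= q + i
        return r
    s = 0.0
    for i in range(terms):
        for j in range(terms):
            s += (rf(a, i + j) / rf(c, i + j)) * rf(b1, i) * rf(b2, j) \
                 * x1**i * x2**j / (math.factorial(i) * math.factorial(j))
    return s

def appell_f1_euler(a, b1, b2, c, x1, x2, n=20001):
    """Euler integral (3) by Simpson's rule; requires c > a > 0."""
    coef = math.gamma(c) / (math.gamma(a) * math.gamma(c - a))
    h = 1.0 / (n - 1)
    s = 0.0
    for i in range(1, n - 1):  # the integrand vanishes at both endpoints here
        u = i * h
        w = 4 if i % 2 else 2
        s += w * u**(a - 1) * (1 - u)**(c - a - 1) \
               * (1 - u * x1)**(-b1) * (1 - u * x2)**(-b2)
    return coef * s * h / 3.0

v1 = appell_f1_series(2.0, 1.5, 0.5, 4.0, 0.3, 0.2)
v2 = appell_f1_euler(2.0, 1.5, 0.5, 4.0, 0.3, 0.2)
print(v1, v2)  # the two evaluations agree
```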

Here, we denote the above density (1) by $\pi ~\psi \left({\alpha}_{1},{\beta}_{1},{\alpha}_{2},{\beta}_{2}\right)$ .

Proof: See Pham-Gia and Turkkan [1] .

The prior distribution of $\pi $ is hence $\psi \left({\alpha}_{1},{\beta}_{1},{\alpha}_{2},{\beta}_{2}\right)$ , obtained from the two beta priors. Various approaches in Bayesian testing are given below.

Bayesian Testing Using a Significance Level

While frequentist statistics usually does not test ${H}_{0}:\pi \le \eta $ vs. ${H}_{1}:\pi >\eta $ for $\eta >0$ , limiting itself to the case $\eta =0$ , Bayesian statistics can easily do it.

a) One-sided test:

Proposition 1: To perform the above test at significance level $\alpha $ , using the two independent samples ${\left\{{X}_{1,i}\right\}}_{i=1}^{{n}_{1}}$ and ${\left\{{X}_{2,i}\right\}}_{i=1}^{{n}_{2}}$ , we compute ${p}_{{\pi}_{1}-{\pi}_{2}}\left({\pi}_{1}-{\pi}_{2}|{\alpha}_{1}^{\ast},{\beta}_{1}^{\ast},{\alpha}_{2}^{\ast},{\beta}_{2}^{\ast}\right)$ , where ${\alpha}_{i}^{\ast}={\alpha}_{i}+{x}_{i}$ and ${\beta}_{i}^{\ast}={\beta}_{i}+{n}_{i}-{x}_{i}$ , $i=1,2$ . This expression of the posterior density of $\pi $ , obtained by the conjugacy of binomial sampling with the beta prior, allows us to compute $P\left(\pi >\eta \right)$ and compare it with the significance level $\alpha $ .

For example, as in the frequentist example of Section 2.1, we consider ${n}_{1}=25$ , ${x}_{1}=17$ , ${n}_{2}=20$ , ${x}_{2}=8$ and use two non-informative beta priors, that is, $\text{Beta}\left(0.5,0.5\right)$ .

We note first that ${\stackrel{^}{\pi}}_{1}=17/25=0.68,\text{}{\stackrel{^}{\pi}}_{2}=8/20=0.40$ , giving $\stackrel{^}{\pi}=0.28$ .

We obtain the prior and posterior distributions of ${\pi}_{1}$ and ${\pi}_{2}$ (Figure 1). We wish to test:

${H}_{0}:\pi \le 0.35$ vs. ${H}_{1}:\pi >0.35$ (4)

We have ${\alpha}_{1}^{\ast}=17.5,\text{}{\beta}_{1}^{\ast}=8.5,\text{}{\alpha}_{2}^{\ast}=8.5,\text{}{\beta}_{2}^{\ast}=12.5$ : ${H}_{1}$ has posterior probability $\mathrm{Pr}\left(\pi >0.35\right)={\displaystyle \underset{0.35}{\overset{1}{\int}}\psi \left(x;17.5,8.5,8.5,12.5\right)}\text{d}x=0.2855$ , and we fail to reject ${H}_{0}$ at the 0.05 level. This means that the data, combined with our prior judgment, are not enough to make us accept that the difference of these proportions exceeds 0.35. Naturally, different informative, or non-informative, priors can be considered for ${\pi}_{1}$ and ${\pi}_{2}$ separately, and the test can be carried out in the same way.
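Such posterior probabilities are also easy to approximate by simulation. The sketch below (our own illustration, assuming the Beta(17.5, 8.5) and Beta(8.5, 12.5) posteriors obtained above) cross-checks the exact value 0.2855:

```python
import random

random.seed(1)

def mc_prob_diff_exceeds(a1, b1, a2, b2, eta, n=200_000):
    """Monte Carlo estimate of P(pi1 - pi2 > eta) for independent
    Beta(a1, b1) and Beta(a2, b2) random variables."""
    hits = sum(random.betavariate(a1, b1) - random.betavariate(a2, b2) > eta
               for _ in range(n))
    return hits / n

p = mc_prob_diff_exceeds(17.5, 8.5, 8.5, 12.5, 0.35)
print(round(p, 3))  # close to the exact value 0.2855
```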

b) Point-null hypothesis:

The point null hypothesis ${H}_{0}:\pi =\eta \text{vs}.\text{}{H}_{1}:\pi \ne \eta $ to be tested at the significance level $\alpha $ in Bayesian statistics has been a subject of study and discussion


Figure 1. (a) Prior $\text{Beta}\left(0.5,0.5\right)$ and posterior $\text{Beta}\left(17.5,8.5\right)$ of ${\pi}_{1}$ and (b) Prior $\text{Beta}\left(0.5,0.5\right)$ and posterior $\text{Beta}\left(8.5,12.5\right)$ of ${\pi}_{2}$ .

in the literature. Several difficulties still remain concerning this case, especially on the prior probability assigned to the value $\eta $ (see Berger [10] ). We use here Lindley’s compromise (Lee [11] ), which consists of computing the $\left(1-\alpha \right)100\%$ highest posterior density interval and accept or reject ${H}_{0}$ depending on whether $\eta $ belongs or not to that interval. Here, for the same example, if $\eta =0.35$ , using Pham-Gia and Turkkan’s algorithm [12] , the 95% hpd interval for $\pi $ is $\left(-0.0079;0.5381\right)$ , which leads us to technically accept ${H}_{0}$ (see Figure 2), although the lower bound of the hpd interval can be considered as zero and we can practically reject ${H}_{0}$ .
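A sample-based approximation gives a quick cross-check of this interval. The sketch below (ours; valid for a unimodal posterior such as this one) takes the shortest interval containing 95% of Monte Carlo draws of $\pi $ :

```python
import random

random.seed(2)

def mc_hpd(a1, b1, a2, b2, cred=0.95, n=200_000):
    """Approximate HPD interval for pi = pi1 - pi2: the shortest interval
    containing a fraction `cred` of the Monte Carlo draws."""
    draws = sorted(random.betavariate(a1, b1) - random.betavariate(a2, b2)
                   for _ in range(n))
    k = int(cred * n)
    # slide a window of k + 1 consecutive order statistics, keep the narrowest
    best = min(range(n - k), key=lambda i: draws[i + k] - draws[i])
    return draws[best], draws[best + k]

lo, hi = mc_hpd(17.5, 8.5, 8.5, 12.5)
print(round(lo, 3), round(hi, 3))  # near the exact interval (-0.0079, 0.5381)
```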

We can see that the above conclusions on $\pi $ are consistent with each other.

3.2. Bayesian Testing Using the Bayes Factor

Bayesian hypothesis testing can also be carried out using the Bayes factor B, which gives the relative weight of the null hypothesis w.r.t. the alternative one, when the data are taken into consideration. This factor is defined as the ratio of the posterior odds over the prior odds. With the above expression (1) of the density of the difference of two betas, we can now accurately compute the Bayes factor associated with the difference of two proportions. We consider two cases:

a) Simple hypothesis: ${H}_{0}:\pi =a$ vs. ${H}_{1}:\pi =b$ . Then $B=\frac{{p}_{\pi}\left(a|\text{data}\right)}{{p}_{\pi}\left(b|\text{data}\right)}$ , which

corresponds to the value of the posterior density of $\pi $ at $a$ , divided by the value of the posterior density of $\pi $ at $b$ . As an application, let us consider the following hypotheses (different from the previous numerical example): ${H}_{0}:\pi =0.35$ vs. ${H}_{1}:\pi =0.25$ , where we have uniform priors for both ${\pi}_{1}$ and ${\pi}_{2}$ , and where we consider the sampling results from Table 1. We obtain the posterior parameters ${\alpha}_{1}^{\ast}=18,\text{}{\beta}_{1}^{\ast}=9,\text{}{\alpha}_{2}^{\ast}=9,\text{}{\beta}_{2}^{\ast}=13$ . Using the density of the difference (1), we calculate the Bayes factor,

$B=\frac{\psi \left(0.35|{\alpha}_{1}^{\ast},{\beta}_{1}^{\ast},{\alpha}_{2}^{\ast},{\beta}_{2}^{\ast}\right)}{\psi \left(0.25|{\alpha}_{1}^{\ast},{\beta}_{1}^{\ast},{\alpha}_{2}^{\ast},{\beta}_{2}^{\ast}\right)}=0.8416$ . This value indicates that the data slightly

favor ${H}_{1}$ over ${H}_{0}$ , which is a logical conclusion since $\stackrel{^}{\pi}=0.28$ .
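The values of $\psi (.)$ can also be obtained without Appell's function, by numerically convolving the two posterior beta densities. The following sketch (our own check; helper names are ours) reproduces the Bayes factor:

```python
import math

def beta_pdf(x, a, b):
    """Standard beta density, evaluated through logarithms for stability."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    log_b = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_b)

def psi_density(x, a1, b1, a2, b2, n=8001):
    """Density of pi1 - pi2 at x by Simpson's rule on the convolution
    integral  p(x) = int f1(t) f2(t - x) dt,  t in (max(0, x), min(1, 1 + x))."""
    lo, hi = max(0.0, x), min(1.0, 1.0 + x)
    h = (hi - lo) / (n - 1)
    s = 0.0
    for i in range(n):
        t = lo + i * h
        w = 1 if i in (0, n - 1) else (4 if i % 2 else 2)
        s += w * beta_pdf(t, a1, b1) * beta_pdf(t - x, a2, b2)
    return s * h / 3.0

B = psi_density(0.35, 18, 9, 9, 13) / psi_density(0.25, 18, 9, 9, 13)
print(round(B, 4))  # the article reports 0.8416
```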

Figure 2. Prior $\psi \left(0.5,0.5,0.5,0.5\right)$ and posterior $\psi \left(17.5,8.5,8.5,12.5\right)$ distributions of $\pi $ . The red dashed lines correspond to the bounds of the posterior 95%-hpd interval.

Table 1. Data on customers in area A and B.

b) Composite hypothesis: As an application, let us consider the hypotheses (4), that is, ${H}_{0}:\pi \le 0.35$ vs. ${H}_{1}:\pi >0.35$ .

In general, ${H}_{0}:\pi \in {\Theta}_{0}$ vs. ${H}_{1}:\pi \in {\Theta}_{1}$ , where ${\Theta}_{0}\cup {\Theta}_{1}=\left(-1,1\right)$ , the whole range of $\pi $ . We have

${p}_{0}=\mathrm{Pr}\left(\pi \in {\Theta}_{0}|\text{posterior}\right)$ and ${p}_{1}=\mathrm{Pr}\left(\pi \in {\Theta}_{1}|\text{posterior}\right)$ (or ${p}_{1}=1-{p}_{0}$ ) as posterior probabilities. Consequently, we define the posterior odds on ${H}_{0}$ against ${H}_{1}$ as ${p}_{0}/{p}_{1}$ . Similarly, we have the prior odds on ${H}_{0}$ against ${H}_{1}$ ,

which we define here as ${z}_{0}/{z}_{1}$ . The Bayes factor is $B=\frac{{p}_{0}{z}_{1}}{{p}_{1}{z}_{0}}$ . Again, we use the

sampling results from Table 1, yielding the prior and posterior distributions presented in Figure 1, with a $\text{Beta}\left(0.5,0.5\right)$ prior for each proportion separately.

Now, with the posterior $\pi \sim \psi \left({\alpha}_{1}^{\ast},{\beta}_{1}^{\ast},{\alpha}_{2}^{\ast},{\beta}_{2}^{\ast}\right)$ given by (1), we can determine, for the hypotheses in (4), the required prior

and posterior probabilities. For example, ${p}_{0}={\displaystyle \underset{-1}{\overset{0.35}{\int}}\psi \left(t|{\alpha}_{1}^{\ast},{\beta}_{1}^{\ast},{\alpha}_{2}^{\ast},{\beta}_{2}^{\ast}\right)}\text{d}t$ gives

${p}_{0}=0.7145$ . In the same way, we obtain ${z}_{0}=0.745$ , using the prior $\psi \left(1/2,1/2,1/2,1/2\right)$ . Since ${p}_{1}=1-{p}_{0}$ and ${z}_{1}=1-{z}_{0}$ , we have ${p}_{1}=0.2855$ and ${z}_{1}=0.255$ . Finally, the Bayes factor is $B=0.8566$ , which is a mild argument in favor of ${H}_{1}$ .
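The same factor can be cross-checked by simulation (our sketch; the four probabilities are estimated by Monte Carlo instead of exact integration):

```python
import random

random.seed(3)

def prob_below(a1, b1, a2, b2, eta, n=200_000):
    """Monte Carlo estimate of P(pi1 - pi2 <= eta) for independent betas."""
    hits = sum(random.betavariate(a1, b1) - random.betavariate(a2, b2) <= eta
               for _ in range(n))
    return hits / n

eta = 0.35
p0 = prob_below(17.5, 8.5, 8.5, 12.5, eta)  # posterior probability of H0
z0 = prob_below(0.5, 0.5, 0.5, 0.5, eta)    # prior probability of H0
B = (p0 / (1 - p0)) / (z0 / (1 - z0))
print(round(p0, 3), round(z0, 3), round(B, 3))
```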

4. Prior and Posterior Densities of $\pi -\eta $

The testing above is quite straightforward, relying only on numerical values of the function $\psi (.)$ that can be computed numerically. But to make an in-depth study of the Bayesian approach to the difference $\pi -\eta ={\pi}_{1}-\left({\pi}_{2}+\eta \right)$ , we need the analytic expressions of the prior and posterior distributions of this variable, which can be obtained only from the general beta distribution. Naturally, the related mathematical formulas become more complicated. But Pham-Gia and Turkkan [13] have also established the expression of the density of ${X}_{1}+{X}_{2}$ , where both variables have general beta distributions.

4.1. The Difference of Two General Betas

The general beta (or GB) distribution, defined on a finite interval, say $\left(c,d\right)$ , has density:

${f}_{gb}\left(x;\alpha ,\beta ;c,d\right)={\left(x-c\right)}^{\alpha -1}{\left(d-x\right)}^{\beta -1}/\left[{\left(d-c\right)}^{\alpha +\beta -1}B\left(\alpha ,\beta \right)\right],\text{}\alpha ,\beta >0,\text{}c\le x\le d$ (5)

and is denoted by $X~GB\left(\alpha ,\beta ;c,d\right)$ . It reduces to the standard beta above when $c=0$ and $d=1$ . Conversely, a standard beta can be transformed into a general beta by adding a constant and/or multiplying by a constant.

Theorem 2: Let $X~GB\left(\alpha ,\beta ;a,b\right)$ , and let $\theta $ , $\lambda $ be any two scalars. Then

1) $X+\theta ~GB\left(\alpha ,\beta ;a+\theta ,b+\theta \right),$

2) $\lambda X~GB\left(\alpha ,\beta ;\lambda a,\lambda b\right)$ when $\lambda >0$ . Otherwise, $\lambda X~GB\left(\beta ,\alpha ;\lambda b,\lambda a\right)$ when $\lambda <0$ .

Proof:

1) We have

$\begin{array}{c}{f}_{X+\theta}\left(y\right)={f}_{X}\left(y-\theta \right)\\ ={\left(\left(y-\theta \right)-a\right)}^{\alpha -1}{\left(b-\left(y-\theta \right)\right)}^{\beta -1}/\left[{\left(b-a\right)}^{\alpha +\beta -1}B\left(\alpha ,\beta \right)\right],\\ a\le y-\theta \le b\\ ={\left(y-\left(a+\theta \right)\right)}^{\alpha -1}{\left(\left(b+\theta \right)-y\right)}^{\beta -1}/\left[{\left(\left(b+\theta \right)-\left(a+\theta \right)\right)}^{\alpha +\beta -1}B\left(\alpha ,\beta \right)\right],\\ a+\theta \le y\le b+\theta \end{array}$

2) For $\lambda >0$ ,

$\begin{array}{c}{f}_{\lambda X}\left(y\right)=\frac{1}{\lambda}{f}_{X}\left(y/\lambda \right)\\ =\frac{1}{\lambda}{\left(y/\lambda -a\right)}^{\alpha -1}{\left(b-y/\lambda \right)}^{\beta -1}/\left[{\left(b-a\right)}^{\alpha +\beta -1}B\left(\alpha ,\beta \right)\right],a\le y/\lambda \le b\\ =\left(y-\lambda a\right){}^{\alpha -1}{\left(\lambda b-y\right)}^{\beta -1}/\left[{\left(\lambda b-\lambda a\right)}^{\alpha +\beta -1}B\left(\alpha ,\beta \right)\right],\lambda a\le y\le \lambda b\end{array}$

When $\lambda <0$ ,

$\begin{array}{c}{f}_{\lambda X}\left(y\right)=-\frac{1}{\lambda}{f}_{X}\left(y/\lambda \right)\\ =-\frac{1}{\lambda}{\left(y/\lambda -a\right)}^{\alpha -1}{\left(b-y/\lambda \right)}^{\beta -1}/\left[{\left(b-a\right)}^{\alpha +\beta -1}B\left(\alpha ,\beta \right)\right],a\le y/\lambda \le b\\ ={\left(y-\lambda b\right)}^{\beta -1}{\left(\lambda a-y\right)}^{\alpha -1}/\left[{\left(\lambda a-\lambda b\right)}^{\alpha +\beta -1}B\left(\alpha ,\beta \right)\right],\lambda b\le y\le \lambda a\end{array}$

Q.E.D.
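Theorem 2 can be illustrated by simulation, since a GB variable is just a location-scale shift of a standard beta. The sketch below (our own illustration, with arbitrarily chosen parameter values) checks the supports and the means of $X+\theta $ and $\lambda X$ against the transformed parameters:

```python
import random

random.seed(4)

def gb_sample(alpha, beta, c, d):
    """Draw from GB(alpha, beta; c, d) by shifting and scaling a standard beta."""
    return c + (d - c) * random.betavariate(alpha, beta)

def gb_mean(alpha, beta, c, d):
    """E[GB(alpha, beta; c, d)] = c + (d - c) alpha / (alpha + beta)."""
    return c + (d - c) * alpha / (alpha + beta)

alpha, beta, a, b = 2.0, 5.0, 1.0, 3.0
theta, lam = 0.5, -2.0
xs = [gb_sample(alpha, beta, a, b) for _ in range(100_000)]

# Part 1): X + theta should behave like GB(alpha, beta; a + theta, b + theta)
shifted = [x + theta for x in xs]
print(min(shifted) > a + theta, max(shifted) < b + theta)

# Part 2): for lam < 0, lam * X should behave like GB(beta, alpha; lam*b, lam*a)
scaled = [lam * x for x in xs]
print(sum(scaled) / len(scaled), gb_mean(beta, alpha, lam * b, lam * a))
```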

Pham-Gia and Turkkan [13] gave the expression of the density of ${X}_{1}+{X}_{2}$ , where ${X}_{1}$ and ${X}_{2}$ are independent general beta variables. The density of ${X}_{1}-{X}_{2}$ , which is only mentioned there, is explicitly given below.

Proposition 2:

Let ${X}_{1}~GB\left(\alpha ,\beta ;c,d\right)$ and ${X}_{2}~GB\left(\gamma ,\delta ;e,f\right)$ . For the difference ${X}_{1}-{X}_{2}$ defined on $\left(c-f,d-e\right)$ , there are two different cases to consider, depending on the relative values of $c-e$ and $d-f$ , since ${X}_{1}$ and ${X}_{2}$ do not have symmetrical roles.

Case 1:

$c-f\le d-f\le c-e\le d-e$ (6)

Case 2:

$c-f\le c-e\le d-f\le d-e$ (7)

Theorem 3: Let ${X}_{1}$ and ${X}_{2}$ be two independent general betas with their supports satisfying (6). Then $Y={X}_{1}-{X}_{2}$ has its density defined as follows:

For $c-f\le y\le d-f,$

$\begin{array}{l}f\left(y\right)=\frac{{\left(y-\left(c-f\right)\right)}^{\alpha +\delta -1}{\left(d-f-y\right)}^{\beta -1}B\left(\delta ,\alpha \right)}{{\left(d-c\right)}^{\alpha +\beta -1}{\left(f-e\right)}^{\delta}B\left(\delta ,\gamma \right)B\left(\alpha ,\beta \right)}\\ \text{}{F}_{1}\left(\delta ,1-\beta ,1-\gamma ;\alpha +\delta ;\frac{\left(c-f\right)-y}{\left(d-f\right)-y},\frac{y-\left(c-f\right)}{f-e}\right)\end{array}$ (8)

For $d-f\le y\le c-e,$

$\begin{array}{l}f\left(y\right)=\frac{{\left(y-\left(d-f\right)\right)}^{\delta -1}{\left(d-e-y\right)}^{\gamma -1}}{{\left(f-e\right)}^{\delta +\gamma -1}B\left(\delta ,\gamma \right)}\\ \text{}{F}_{1}\left(\beta ,1-\delta ,1-\gamma ;\alpha +\beta ;\frac{c-d}{y-\left(d-f\right)},\frac{d-c}{d-e-y}\right)\end{array}$ (9)

and for $c-e\le y\le d-e,$

$\begin{array}{l}f\left(y\right)=\frac{{\left(\left(d-e\right)-y\right)}^{\beta +\gamma -1}{\left(y-\left(d-f\right)\right)}^{\delta -1}B\left(\beta ,\gamma \right)}{{\left(d-c\right)}^{\beta}{\left(f-e\right)}^{\delta +\gamma -1}B\left(\delta ,\gamma \right)B\left(\alpha ,\beta \right)}\\ \text{}{F}_{1}\left(\beta ,1-\alpha ,1-\delta ;\beta +\gamma ;\frac{\left(d-e\right)-y}{d-c},\frac{y-\left(d-e\right)}{y-\left(d-f\right)}\right)\end{array}$ (10)

where ${F}_{1}(.)$ is Appell’s first hypergeometric function already discussed.

Proof:

The argument first uses part 2) of Theorem 2 to obtain that $-{X}_{2}~GB\left(\delta ,\gamma ;-f,-e\right)$ . Then, it uses the exact expression of the density of the sum of two general betas (see Theorem 2 in the article of T. Pham-Gia & N. Turkkan [14] ).

Q.E.D.

We denote the above density, given by (8), (9) and (10), by ${\phi}_{\pi}\left({\alpha}_{1},{\beta}_{1},{\alpha}_{2},{\beta}_{2};c,d,e,f\right)$ .

Note: The corresponding case 2, when relation (7) is satisfied, is given in Appendix 1 (Theorem 3a).

To study the density of $\pi -\eta ={\pi}_{1}-\left({\pi}_{2}+\eta \right)$ , a particular case that will be used in our study here is the difference between ${X}_{1}~GB\left({\alpha}_{1},{\beta}_{1};0,1\right)$ and ${X}_{2}~GB\left({\alpha}_{2},{\beta}_{2};\eta ,\eta +1\right)$ , with $\eta $ being a positive constant, $0<\eta <1$ .

In this case both Theorem 2 and Theorem 3 apply since $c-e=d-f$ and the middle definition section of ${\phi}_{\pi}\left({\alpha}_{1},{\beta}_{1},{\alpha}_{2},{\beta}_{2};c,d,e,f\right)$ disappears.

Theorem 4: Let ${X}_{1}~GB\left({\alpha}_{1},{\beta}_{1};0,1\right)$ and ${X}_{2}~GB\left({\alpha}_{2},{\beta}_{2};\eta ,\eta +1\right)$ be two independent general beta distributed random variables. Then the density of $Y={X}_{1}-{X}_{2}$ , defined on $\left[-\left(\eta +1\right),1-\eta \right]$ , is:

1) for $-\eta -1\le y\le -\eta ,$

$\begin{array}{c}f\left(y\right)=\frac{{\left(y+\left(\eta +1\right)\right)}^{{\alpha}_{1}+{\beta}_{2}-1}{\left(-\eta -y\right)}^{{\beta}_{1}-1}B\left({\alpha}_{1},{\beta}_{2}\right)}{B\left({\alpha}_{1},{\beta}_{1}\right)B\left({\alpha}_{2},{\beta}_{2}\right)}\\ \text{}{F}_{1}\left({\beta}_{2},1-{\beta}_{1},1-{\alpha}_{2};{\alpha}_{1}+{\beta}_{2};\frac{\left(\eta +1\right)+y}{\eta +y},y+\left(\eta +1\right)\right)\end{array}$

2) for $-\eta \le y\le 1-\eta ,$

$\begin{array}{l}f\left(y\right)=\frac{{\left(\left(1-\eta \right)-y\right)}^{{\alpha}_{2}+{\beta}_{1}-1}{\left(y+\eta \right)}^{{\beta}_{2}-1}B\left({\alpha}_{2},{\beta}_{1}\right)}{B\left({\alpha}_{1},{\beta}_{1}\right)B\left({\alpha}_{2},{\beta}_{2}\right)}\\ \text{}{F}_{1}\left({\beta}_{1},1-{\alpha}_{1},1-{\beta}_{2};{\alpha}_{2}+{\beta}_{1};\left(1-\eta \right)-y,\frac{y-\left(1-\eta \right)}{y+\eta}\right)\end{array}$

and we denote this distribution by

$Y~{\xi}_{\eta}\left({\alpha}_{1},{\beta}_{1},{\alpha}_{2},{\beta}_{2};\eta \right)$ . (11)

Proof:

This is a special case of Theorem 3.

Q.E.D.

An equivalent form of Theorem 4 leads to a slightly different expression, which gives, however, the same numerical values for the density of $\pi -\eta $ (see Theorem 4a in Appendix 1).

4.2. Prior and Posterior Distributions of $\pi -\eta $

Let ${\pi}_{i},\text{}i=1,2$ be two independent beta distributed random variables, the first being a regular beta, ${\pi}_{1}~\text{beta}\left({\alpha}_{1},{\beta}_{1}\right)$ , and the second being a general beta, ${\pi}_{2}~GB\left({\alpha}_{2},{\beta}_{2};\eta ,1+\eta \right)$ .

Binomial sampling, with these two different beta priors, leads to the following

Proposition 3: The prior distribution of $\pi -\eta ={\pi}_{1}-\left({\pi}_{2}+\eta \right)$ is ${\xi}_{\eta}\left({\alpha}_{1},{\beta}_{1},{\alpha}_{2},{\beta}_{2};\eta \right)$ , given by (11), and its posterior distribution is ${\xi}_{\eta}\left({\alpha}_{1}^{\ast},{\beta}_{1}^{\ast},{\alpha}_{2}^{\ast},{\beta}_{2}^{\ast};\eta \right)$ with ${\alpha}_{i}^{\ast}={\alpha}_{i}+{x}_{i}$ and ${\beta}_{i}^{\ast}={\beta}_{i}+{n}_{i}-{x}_{i},\text{}i=1,2.$

Proof:

${\pi}_{1}-\left({\pi}_{2}+\eta \right)$ is the difference of two random variables with respective distributions $\text{beta}\left({\alpha}_{1},{\beta}_{1}\right)$ and $GB\left({\alpha}_{2},{\beta}_{2};\eta ,\eta +1\right)$ . The prior distribution of $\pi -\eta $ is hence ${\xi}_{\eta}\left({\alpha}_{1},{\beta}_{1},{\alpha}_{2},{\beta}_{2};\eta \right)$ , as given by (11).

Binomial sampling affects these two distributions in different ways. For the first, the posterior is $\text{beta}\left({\alpha}_{1}+{x}_{1},{\beta}_{1}+{n}_{1}-{x}_{1}\right)$ , while the posterior distribution of the second is $GB\left({\alpha}_{2}+{x}_{2},{\beta}_{2}+{n}_{2}-{x}_{2};\eta ,\eta +1\right)$ (see Proposition 3a in Appendix 2). Figure 3 shows the prior and the posterior of ${\pi}_{2}+0.35$ .

From Theorem 4, we obtain the expression of the posterior density ${\xi}_{.35}\left(17.5,8.5,8.5,12.5;0.35\right)$ of $\pi -\eta $ as follows:

$f\left(x\right)=\{\begin{array}{l}\frac{{\left(x+1.35\right)}^{29}{\left(-0.35-x\right)}^{7.5}B\left(17.5,12.5\right)}{B\left(17.5,8.5\right)B\left(8.5,12.5\right)}\\ \text{}{F}_{1}\left(12.5,-7.5,-7.5;30;\frac{1.35+x}{0.35+x},x+1.35\right),\\ \text{}-1.35\le x<-0.35\\ \frac{{\left(0.65-x\right)}^{16}{\left(x+0.35\right)}^{11.5}B\left(8.5,8.5\right)}{B\left(17.5,8.5\right)B\left(8.5,12.5\right)}\\ \text{}{F}_{1}\left(8.5,-16.5,-11.5;17;0.65-x,\frac{x-0.65}{x+0.35}\right),\\ \text{}-0.35\le x<0.65\end{array}$ (12)
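As a numerical check of (12) (our own sketch, not part of the article), ${F}_{1}$ can be evaluated through the Euler integral (3), and the result compared with a direct numerical convolution of the Beta(17.5, 8.5) posterior of ${\pi}_{1}$ with the GB(8.5, 12.5; 0.35, 1.35) posterior of ${\pi}_{2}+0.35$ :

```python
import math

def f1_euler(a, b1, b2, c, x1, x2, n=8001):
    """Appell F1 via the Euler integral (3); requires c > a > 0."""
    coef = math.gamma(c) / (math.gamma(a) * math.gamma(c - a))
    h = 1.0 / (n - 1)
    s = 0.0
    for i in range(1, n - 1):  # the integrand vanishes at both endpoints here
        u = i * h
        w = 4 if i % 2 else 2
        s += w * u**(a - 1) * (1 - u)**(c - a - 1) \
               * (1 - u * x1)**(-b1) * (1 - u * x2)**(-b2)
    return coef * s * h / 3.0

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def density_formula(x):
    """First branch of (12), valid for -1.35 <= x < -0.35."""
    coef = math.exp(29 * math.log(x + 1.35) + 7.5 * math.log(-0.35 - x)
                    + log_beta(17.5, 12.5) - log_beta(17.5, 8.5)
                    - log_beta(8.5, 12.5))
    return coef * f1_euler(12.5, -7.5, -7.5, 30.0,
                           (1.35 + x) / (0.35 + x), x + 1.35)

def density_convolution(x, n=8001):
    """Density of pi1 - (pi2 + 0.35) by Simpson's rule on the convolution."""
    lo, hi = max(0.0, x + 0.35), min(1.0, x + 1.35)
    h = (hi - lo) / (n - 1)
    s = 0.0
    for i in range(n):
        t = lo + i * h        # value of pi1
        v = t - x             # value of pi2 + 0.35
        if 0.0 < t < 1.0 and 0.35 < v < 1.35:
            w = 1 if i in (0, n - 1) else (4 if i % 2 else 2)
            f1 = math.exp(16.5 * math.log(t) + 7.5 * math.log(1 - t)
                          - log_beta(17.5, 8.5))
            f2 = math.exp(7.5 * math.log(v - 0.35) + 11.5 * math.log(1.35 - v)
                          - log_beta(8.5, 12.5))
            s += w * f1 * f2
    return s * h / 3.0

x = -0.5
df, dc = density_formula(x), density_convolution(x)
print(df, dc)  # the two evaluations agree
```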

Figure 4 shows the above density.


Figure 3. (a) Prior $GB\left(0.5,0.5;0.35,1.35\right)$ distribution of ${\pi}_{2}+0.35$ and (b) Posterior $GB\left(8.5,12.5;0.35,1.35\right)$ distribution of ${\pi}_{2}+0.35$ . The posterior of ${\pi}_{1}-\left({\pi}_{2}+\eta \right)$ is hence given by Theorem 4, as ${\xi}_{\eta}\left({\alpha}_{1}^{\ast},{\beta}_{1}^{\ast},{\alpha}_{2}^{\ast},{\beta}_{2}^{\ast};\eta \right)$ .

5. Conclusion

The Bayesian approach to testing the difference of two independent proportions leads to interesting results which agree with frequentist results when non-informative priors are considered. Undoubtedly, all preceding results can be

Figure 4. Posterior density ${\xi}_{.35}\left(17.5,8.5,8.5,12.5;0.35\right)$ of ${\pi}_{1}-\left({\pi}_{2}+0.35\right)$ .

generalized to other measures frequently used in a 2 × 2 table.

Acknowledgements

This research was partially supported by NSERC grant 9249 (Canada). The authors wish to thank the Université de Moncton Faculty of Graduate Studies and Research for the assistance provided while conducting this work.

Appendix 1

Below is the expression of the density of $Y={X}_{1}-{X}_{2}$ when (7) is satisfied, instead of (6). This expression, with the one given in Theorem 3, covers all cases.

Theorem 3a: Let ${X}_{1}$ and ${X}_{2}$ be two independent general betas with their supports satisfying (7). Then $Y={X}_{1}-{X}_{2}$ has its density defined as follows: for $c-f\le y\le c-e,$

$\begin{array}{l}f\left(y\right)=\frac{{\left(y-\left(c-f\right)\right)}^{\alpha +\delta -1}{\left(c-e-y\right)}^{\gamma -1}B\left(\alpha ,\delta \right)}{{\left(f-e\right)}^{\delta +\gamma -1}{\left(d-c\right)}^{\alpha}B\left(\alpha ,\beta \right)B\left(\delta ,\gamma \right)}\\ \text{}{F}_{1}\left(\alpha ,1-\gamma ,1-\beta ;\alpha +\delta ;\frac{\left(c-f\right)-y}{\left(c-e\right)-y},\frac{y-\left(c-f\right)}{d-c}\right)\end{array}$ (13)

For $c-e\le y\le d-f,$

$f\left(y\right)=\frac{{\left(y-\left(c-e\right)\right)}^{\alpha -1}{\left(d-e-y\right)}^{\beta -1}}{{\left(d-c\right)}^{\alpha +\beta -1}B\left(\alpha ,\beta \right)}{F}_{1}\left(\gamma ,1-\alpha ,1-\beta ;\delta +\gamma ;\frac{e-f}{y-\left(c-e\right)},\frac{f-e}{d-e-y}\right)$ (14)

For $d-f\le y\le d-e,$

$\begin{array}{l}f\left(y\right)=\frac{{\left(\left(d-e\right)-y\right)}^{\beta +\gamma -1}{\left(y-\left(c-e\right)\right)}^{\alpha -1}B\left(\beta ,\gamma \right)}{{\left(f-e\right)}^{\gamma}{\left(d-c\right)}^{\alpha +\beta -1}B\left(\delta ,\gamma \right)B\left(\alpha ,\beta \right)}\\ \text{}{F}_{1}\left(\gamma ,1-\delta ,1-\alpha ;\beta +\gamma ;\frac{\left(d-e\right)-y}{f-e},\frac{y-\left(d-e\right)}{y-\left(c-e\right)}\right)\end{array}$ (15)

Proof:

By rewriting $Y=\left(-{X}_{2}\right)-\left(-{X}_{1}\right)$ , we can apply the above Theorem 2 and Theorem 3.

Q.E.D

A parallel, and equivalent, result to Theorem 4 is given below:

Theorem 4a: The density of ${X}_{1}-{X}_{2}-\eta $ is:

For $-\eta -1\le y\le -\eta ,$

$\begin{array}{c}f\left(y\right)=\frac{{\left(y+\left(\eta +1\right)\right)}^{{\alpha}_{1}+{\beta}_{2}-1}{\left(-\eta -y\right)}^{{\alpha}_{2}-1}B\left({\alpha}_{1},{\beta}_{2}\right)}{B\left({\alpha}_{1},{\beta}_{1}\right)B\left({\alpha}_{2},{\beta}_{2}\right)}\\ \text{}{F}_{1}\left({\alpha}_{1},1-{\alpha}_{2},1-{\beta}_{1};{\alpha}_{1}+{\beta}_{2};\frac{\left(\eta +1\right)+y}{\eta +y},y+\left(\eta +1\right)\right)\end{array}$

For $-\eta \le y\le 1-\eta ,$

$\begin{array}{c}f\left(y\right)=\frac{{\left(\left(1-\eta \right)-y\right)}^{{\alpha}_{2}+{\beta}_{1}-1}{\left(y+\eta \right)}^{{\alpha}_{1}-1}B\left({\alpha}_{2},{\beta}_{1}\right)}{B\left({\alpha}_{1},{\beta}_{1}\right)B\left({\alpha}_{2},{\beta}_{2}\right)}\\ \text{}{F}_{1}\left({\alpha}_{2},1-{\beta}_{2},1-{\alpha}_{1};{\alpha}_{2}+{\beta}_{1};\left(1-\eta \right)-y,\frac{y-\left(1-\eta \right)}{y+\eta}\right)\end{array}$

and we denote $Y~{\xi}_{\eta}^{\ast}\left({\alpha}_{1},{\beta}_{1},{\alpha}_{2},{\beta}_{2};\eta \right)$ .

Proof:

Similar to the proof of Theorem 4.

Q.E.D

Appendix 2

Proposition 3a:

Suppose that ${X}_{2}~\text{Bin}\left({n}_{2},{\pi}_{2}\right)$ and that ${\pi}_{2}$ has the prior distribution $\text{beta}\left({\alpha}_{2},{\beta}_{2}\right)$ . Then the posterior distribution of ${\pi}_{2}+\eta $ is $GB\left({\alpha}_{2}+{x}_{2},{\beta}_{2}+{n}_{2}-{x}_{2};\eta ,\eta +1\right)$ .

Proof:

The prior distribution of ${\pi}_{2}+\eta $ is $GB\left({\alpha}_{2},{\beta}_{2};\eta ,\eta +1\right)$ (see Theorem 2) with the pdf

${f}_{{\pi}_{2}+\eta}\left(\theta \right)={\left[B\left({\alpha}_{2},{\beta}_{2}\right)\right]}^{-1}{\left(\theta -\eta \right)}^{{\alpha}_{2}-1}{\left(1+\eta -\theta \right)}^{{\beta}_{2}-1},\text{}\eta \le \theta \le \eta +1$ .

The likelihood function is

${f}_{{X}_{2}|{\pi}_{2}+\eta}\left({x}_{2}|\theta \right)={f}_{{X}_{2}|{\pi}_{2}}\left({x}_{2}|{\pi}_{2}\right)=\left(\begin{array}{c}{n}_{2}\\ {x}_{2}\end{array}\right){\pi}_{2}^{{x}_{2}}{\left(1-{\pi}_{2}\right)}^{{n}_{2}-{x}_{2}},\text{ }{x}_{2}=0,1,\cdots ,{n}_{2}$

Thus the marginal distribution of ${X}_{2}$ , the number of successes, obtained with ${\pi}_{2}=\theta -\eta $ , is:

$\begin{array}{c}K\left({x}_{2}|{\alpha}_{2},{\beta}_{2},{n}_{2}\right)=\frac{\left(\begin{array}{c}{n}_{2}\\ {x}_{2}\end{array}\right)}{B\left({\alpha}_{2},{\beta}_{2}\right)}{\displaystyle {\int}_{\eta}^{\eta +1}{\left(\theta -\eta \right)}^{{\alpha}_{2}-1}{\left(1+\eta -\theta \right)}^{{\beta}_{2}-1}{\pi}_{2}^{{x}_{2}}{\left(1-{\pi}_{2}\right)}^{{n}_{2}-{x}_{2}}\text{d}\theta}\\ =\frac{\left(\begin{array}{c}{n}_{2}\\ {x}_{2}\end{array}\right)}{B\left({\alpha}_{2},{\beta}_{2}\right)}{\displaystyle {\int}_{\eta}^{\eta +1}{\left(\theta -\eta \right)}^{{\alpha}_{2}-1}{\left(1+\eta -\theta \right)}^{{\beta}_{2}-1}{\left(\theta -\eta \right)}^{{x}_{2}}{\left(1+\eta -\theta \right)}^{{n}_{2}-{x}_{2}}\text{d}\theta}\\ =\frac{\left(\begin{array}{c}{n}_{2}\\ {x}_{2}\end{array}\right)}{B\left({\alpha}_{2},{\beta}_{2}\right)}{\displaystyle {\int}_{\eta}^{\eta +1}{\left(\theta -\eta \right)}^{{\alpha}_{2}+{x}_{2}-1}{\left(1+\eta -\theta \right)}^{{\beta}_{2}+{n}_{2}-{x}_{2}-1}\text{d}\theta}\\ =\frac{\left(\begin{array}{c}{n}_{2}\\ {x}_{2}\end{array}\right)}{B\left({\alpha}_{2},{\beta}_{2}\right)}B\left({\alpha}_{2}+{x}_{2},{\beta}_{2}+{n}_{2}-{x}_{2}\right)\end{array}$

Therefore, the posterior distribution of $\theta $ given ${X}_{2}={x}_{2}$ is

$\begin{array}{c}{f}_{{\pi}_{2}+\eta |{X}_{2}}\left(\theta |{x}_{2}\right)=\frac{{f}_{{\pi}_{2}+\eta}\left(\theta \right){f}_{{X}_{2}|{\pi}_{2}+\eta}\left({x}_{2}|\theta \right)}{K\left({x}_{2}|{\alpha}_{2},{\beta}_{2},{n}_{2}\right)}\\ =\frac{{\left[B\left({\alpha}_{2},{\beta}_{2}\right)\right]}^{-1}{\left(\theta -\eta \right)}^{{\alpha}_{2}-1}{\left(1+\eta -\theta \right)}^{{\beta}_{2}-1}\left(\begin{array}{c}{n}_{2}\\ {x}_{2}\end{array}\right){\pi}_{2}^{{x}_{2}}{\left(1-{\pi}_{2}\right)}^{{n}_{2}-{x}_{2}}}{\frac{\left(\begin{array}{c}{n}_{2}\\ {x}_{2}\end{array}\right)}{B\left({\alpha}_{2},{\beta}_{2}\right)}B\left({\alpha}_{2}+{x}_{2},{\beta}_{2}+{n}_{2}-{x}_{2}\right)},\\ \text{with }{\pi}_{2}=\theta -\eta ,\text{ }\eta \le \theta \le \eta +1\\ =\frac{{\left(\theta -\eta \right)}^{{\alpha}_{2}+{x}_{2}-1}{\left(1+\eta -\theta \right)}^{{\beta}_{2}+{n}_{2}-{x}_{2}-1}}{B\left({\alpha}_{2}+{x}_{2},{\beta}_{2}+{n}_{2}-{x}_{2}\right)},\text{ }\eta \le \theta \le \eta +1\end{array}$

This is the p.d.f. of $GB\left({\alpha}_{2}+{x}_{2},{\beta}_{2}+{n}_{2}-{x}_{2};\eta ,\eta +1\right)$ .

Q.E.D.
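The two computational facts used in this proof can be checked numerically; the sketch below uses illustrative parameter values not taken from the paper. First, the marginal $K\left({x}_{2}|{\alpha}_{2},{\beta}_{2},{n}_{2}\right)$ is a beta-binomial probability mass function and must sum to 1 over ${x}_{2}=0,1,\cdots ,{n}_{2}$ . Second, the posterior $GB\left({\alpha}_{2}+{x}_{2},{\beta}_{2}+{n}_{2}-{x}_{2};\eta ,\eta +1\right)$ is the usual beta posterior shifted by $\eta $ , so its mean is $\eta +\left({\alpha}_{2}+{x}_{2}\right)/\left({\alpha}_{2}+{\beta}_{2}+{n}_{2}\right)$ .

```python
# Numerical check of Proposition 3a (illustrative values, not from the paper).
from math import comb, exp, lgamma

def log_beta(a, b):
    """Logarithm of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

a2, b2 = 2.0, 3.0      # prior beta(alpha2, beta2)
n2, x2 = 10, 4         # observed: x2 successes in n2 trials
eta = 0.25             # shift

# Marginal of X2: K(x2) = C(n2, x2) * B(a2 + x2, b2 + n2 - x2) / B(a2, b2),
# a beta-binomial pmf; it must sum to 1 over x2 = 0..n2.
def K(x):
    return comb(n2, x) * exp(log_beta(a2 + x, b2 + n2 - x) - log_beta(a2, b2))

total = sum(K(x) for x in range(n2 + 1))
print(abs(total - 1.0) < 1e-12)  # True

# Posterior of pi2 + eta is GB(a2 + x2, b2 + n2 - x2; eta, eta + 1):
# the beta(a2 + x2, b2 + n2 - x2) posterior shifted right by eta.
post_mean = eta + (a2 + x2) / (a2 + b2 + n2)
print(round(post_mean, 3))  # 0.65
```

The shift by $\eta $ leaves the beta-binomial normalizing constant untouched, which is why the conjugate beta update carries over to the general beta $GB$ posterior unchanged apart from relocation of the support.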


Conflicts of Interest

The authors declare no conflicts of interest.

Cite this paper

*Open Journal of Statistics*, **7**, 1-15. doi: 10.4236/ojs.2017.71001

[1] Pham-Gia, T. and Turkkan, N. (1993) Bayesian Analysis of the Difference of Two Proportions. Communications in Statistics—Theory and Methods, 22, 1755-1771. https://doi.org/10.1080/03610929308831114

[2] Pham-Gia, T. and Turkkan, N. (2008) Bayesian Analysis of a 2 × 2 Contingency Table with Dependent Proportions and Exact Sample Sizes. Statistics, 42, 127-147. https://doi.org/10.1080/02331880701600380

[3] Pham-Gia, T. and Turkkan, N. (2003) Determination of the Exact Sample Sizes in the Bayesian Estimation of the Difference between Two Proportions. Journal of the Royal Statistical Society, Series D (The Statistician), 52, 131-150. https://doi.org/10.1111/1467-9884.00347

[4] D’Agostino, R., Chase, W. and Belanger, A. (1988) The Appropriateness of Some Common Procedures for Testing the Equality of Two Independent Binomial Populations. The American Statistician, 42, 198-202.

[5] Cressie, N. (1978) Testing the Equality of Two Binomial Proportions. Annals of the Institute of Statistical Mathematics, 30, 421-427. https://doi.org/10.1007/BF02480232

[6] Eberhardt, K.R. and Fligner, M.A. (1977) A Comparison of Two Tests for Equality of Two Proportions. The American Statistician, 31, 151-155. https://doi.org/10.1080/00031305.1977.10479225

[7] Santner, T.J. and Snell, M.K. (1980) Small-Sample Confidence Intervals for p1 − p2 and p1/p2 in 2 × 2 Contingency Tables. Journal of the American Statistical Association, 75, 386-394.

[8] Howard, J.V. (1998) The 2 × 2 Table: A Discussion from a Bayesian Viewpoint. Statistical Science, 13, 351-367. https://doi.org/10.1214/ss/1028905830

[9] Agresti, A. and Coull, B.A. (1998) Approximate Is Better than Exact for Interval Estimation of Binomial Proportions. The American Statistician, 52, 119-126.

[10] Berger, J. (1999) Bayes Factor. In: Kotz, S., Read, C.B. and Banks, D.L., Eds., Encyclopedia of Statistical Sciences, Update Volume 3, Wiley, New York, 20-29.

[11] Lee, P.M. (2004) Bayesian Statistics: An Introduction. 3rd Edition, Hodder Arnold, London.

[12] Pham-Gia, T. and Turkkan, N. (1993) Computation of the Highest Posterior Density Interval in Bayesian Analysis. Journal of Statistical Computation and Simulation, 44, 243-250. https://doi.org/10.1080/00949659308811461

[13] Pham-Gia, T. and Turkkan, N. (1998) Distribution of the Linear Combination of Two General Beta Variables and Applications. Communications in Statistics—Theory and Methods, 27, 1851-1869. https://doi.org/10.1080/03610929808832194

[14] Pham-Gia, T. and Turkkan, N. (1994) Reliability of a Standby System with Beta-Distributed Component Lives. IEEE Transactions on Reliability, 43, 71-75. https://doi.org/10.1109/24.285114

Copyright © 2019 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.