Quasi-Binomial Regression Model for the Analysis of Data with Extra-Binomial Variation

Mohamed M. Shoukri^{1}, Maha M. Aleid^{2}

^{1}Department of Epidemiology and Biostatistics, Schulich School of Medicine and Dentistry, University of Western Ontario, London Ontario, Canada.

^{2}Department of Biostatistics, Epidemiology and Scientific Computing, King Faisal Specialist Hospital and Research Center, Riyadh, KSA.

**DOI:** 10.4236/ojs.2022.121001

**Objectives:** Developing inference procedures for the quasi-binomial distribution and the associated regression model. **Methods:** Score testing and the method of maximum likelihood for the estimation of the regression parameters. **Data:** Several examples are included, based on published data. **Results:** A quasi-binomial model is used to model binary response data which exhibit extra-binomial variation. A partial score test of the binomial hypothesis against the quasi-binomial alternative is developed and illustrated on three data sets. The extended logit transformation of the binomial parameter is introduced and the large sample dispersion matrix of the estimated parameters is derived. The Nonlinear Mixed Procedure (NLMIXED) in SAS is shown to be well suited to the estimation of the nonlinear regression parameters.

Keywords

Quasi-Binomial Distribution, Extra Binomial Variations, Score Test, Quasi-Binomial Regression Model, COVID-19 Case Fatality Data

Share and Cite:

Shoukri, M. and Aleid, M. (2022) Quasi-Binomial Regression Model for the Analysis of Data with Extra-Binomial Variation. *Open Journal of Statistics*, **12**, 1-14. doi: 10.4236/ojs.2022.121001.

1. Introduction

In many biological and toxicological experiments, the variable of interest is in the form of counts resulting from binary responses. In such experiments the data may exhibit greater heterogeneity (variation) than the binomial model allows. It has long been presumed that an inherent characteristic of data from these types of studies is the tendency for individual experimental units within a group to respond more alike than individuals from other groups, which is commonly known as the “group effect”. When the experimental units are animals treated with varying doses of compounds, such a group effect is also known as a “litter effect”. The litters in each group contain varying numbers of live fetuses, some of which have a specific abnormality. To explain the extra variation caused by the “litter effect”, several generalized statistical models have been proposed in the literature. Altham [1] proposed that the analysis of such experiments be based on two-parameter generalizations of the binomial model which allow for the presence of dependent responses within groups, and gave two models. Kupper and Haseman [2] suggested a correlated binomial model which is identical to Altham’s additive generalization of the binomial model. Williams [3] proposed that the analysis of toxicological studies be based on the beta-binomial model, another generalization of the binomial model. However, [1] indicated that the beta-binomial model allows only positive association between the subjects of a group, whereas the correlated binomial and the multiplicative generalization of the binomial model allow negative as well as positive associations. A much wider class of models, the generalized linear mixed models (GLMM) [4], has been developed and is used extensively in many applications to deal with the overdispersion that exists in count and binary data.

In this paper we show that the quasi-binomial distribution of Consul [5], reviewed by Shenton in [6], can be used as an alternative model for the analysis of over-dispersed dichotomous data. The quasi-binomial (QBD) model has two parameters *p* and
$\varphi $. The parameter *p* will be called the binomial parameter and the other parameter
$\varphi $ will be called the dispersion parameter. When
$\varphi =0$, the quasi-binomial distribution (QBD) reduces to the binomial distribution. Since the binomial distribution hypothesis is the focus of our investigations, it is natural to derive a test statistic for testing the null hypothesis
$\varphi =0$.

The paper is structured as follows: in Section 2 we derive the $C\left(\alpha \right)$ binomial score test of significance [7] [8], which is asymptotically optimal against a QBD alternative, and apply the test to real data in Section 3. In Section 4 we develop a QBD regression model to account for possible extraneous sources of variation. In Section 5 the methods are applied to COVID-19 mortality data.

The flowchart in the Appendix outlines the steps of the model developments and the applications.

2. Quasi-Binomial Distribution and $C\left(\alpha \right)$ Binomial Score Test of Significance

A discrete random variable *Y* is said to have a QBD if and only if its probability function is given from [6] as:

${P}_{r}\left(Y=y\right)=p\left(y\right)=\left(\begin{array}{c}m\\ y\end{array}\right)p{\left(p+y\varphi \right)}^{y-1}{\left(1-p-y\varphi \right)}^{m-y},$ (1)

for
$y=0,1,2,3,\cdots ,m$ and zero otherwise and where
$0<p<1$,
$-p/m<\varphi <\left(1-p\right)/m$. It reduces to the binomial when
$\varphi =0$. The r.v. *Y* represents the number of successes in *m* trials such that the probability for the first success is *p* and that the probability of success in each of the other trials is
$p+y\varphi $. Thus the probability of success increases or decreases as
$\varphi $ is positive or negative and is directly proportional to the number of successes *y*. All the moments of the QBD are finite and the parameter
$\varphi $ has a very substantial effect on the model. The Variance of the QBD is larger or smaller than the variance of the binomial model depending upon
$\varphi >0$ or
$\varphi <0$. Consul [9] provided a detailed study of the characteristics of the QBD and gave numerous properties and moment based estimation of the model parameters. The mean
$\mu $ of the QBD model (1) is given by

$\mu =mp\left[1+{\displaystyle {\sum}_{j=1}^{m-1}{\varphi}^{j}{\left(m-1\right)}_{\left(j\right)}}\right]$ (2)
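For concreteness, the pmf (1) and the mean (2) can be verified numerically. The sketch below is ours, not from the paper; it checks that the QBD sums to one over its support, that its mean agrees with (2) when $(m-1)_{(j)}$ is read as the falling factorial, and that it collapses to the binomial at $\varphi =0$:

```python
from math import comb, prod

def qbd_pmf(y, m, p, phi):
    """P(Y = y) for the quasi-binomial distribution of Eq. (1)."""
    return comb(m, y) * p * (p + y * phi) ** (y - 1) * (1 - p - y * phi) ** (m - y)

def qbd_mean(m, p, phi):
    """Mean of the QBD from Eq. (2): m*p*[1 + sum_j phi^j (m-1)_(j)],
    with (m-1)_(j) the falling factorial (m-1)(m-2)...(m-j)."""
    s = sum(phi ** j * prod(m - 1 - t for t in range(j)) for j in range(1, m))
    return m * p * (1 + s)

m, p, phi = 6, 0.3, 0.05            # phi must lie in (-p/m, (1-p)/m)
probs = [qbd_pmf(y, m, p, phi) for y in range(m + 1)]
assert abs(sum(probs) - 1) < 1e-12                      # a proper distribution
assert abs(sum(y * pr for y, pr in enumerate(probs))
           - qbd_mean(m, p, phi)) < 1e-12               # mean matches Eq. (2)
assert all(abs(qbd_pmf(y, m, p, 0.0)
               - comb(m, y) * p ** y * (1 - p) ** (m - y)) < 1e-12
           for y in range(m + 1))                       # binomial at phi = 0
```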

We shall formulate a
$C\left(\alpha \right)$ test for testing the binomial model against the QBD alternative. This can be done by testing the null hypothesis
${H}_{0}:\varphi =0$ against its negation in the presence of the nuisance parameter *p*. Moran [8] showed that for such problems the
$C\left(\alpha \right)$ tests, suggested by Neyman [7], are asymptotically equivalent to tests using the maximum likelihood estimates.

Let
${Y}_{1},{Y}_{2},\cdots ,{Y}_{n}$ be *n* independent random variables where each r. v.
${Y}_{i}$ is distributed as a QBD with
$\left({m}_{i},p,\varphi \right)$. The likelihood function *L* is given by (3):

$L=\underset{i=1}{\overset{n}{{\displaystyle \prod}}}\left[\left(\begin{array}{c}{m}_{i}\\ {y}_{i}\end{array}\right)p{\left(p+{y}_{i}\varphi \right)}^{{y}_{i}-1}{\left(1-p-{y}_{i}\varphi \right)}^{{m}_{i}-{y}_{i}}\right]$ (3)

and, its logarithm (4) equals

$\mathcal{l}=\text{constant}+n\mathrm{ln}p+\underset{i=1}{\overset{n}{{\displaystyle \sum}}}\left({y}_{i}-1\right)\mathrm{ln}\left(p+{y}_{i}\varphi \right)+\underset{i=1}{\overset{n}{{\displaystyle \sum}}}\left({m}_{i}-{y}_{i}\right)\mathrm{ln}\left(1-p-{y}_{i}\varphi \right)$ (4)

To derive the $C\left(\alpha \right)$ test statistic for ${H}_{0}:\varphi =0$, the first and second partial derivatives of the log-likelihood function $\mathcal{l}$, evaluated at $\varphi =0$, are needed.

All summations are from
$i=1$ to *n* in the expressions unless stated otherwise. Differentiating the right-hand side of (4) with respect to the model parameters, and setting
$\varphi =0$ we get

$\begin{array}{l}{\frac{\partial \mathcal{l}}{\partial \varphi}|}_{\varphi =0}={T}_{1}\left(p\right)={p}^{-1}{\displaystyle \sum {y}_{i}}\left({y}_{i}-1\right)-{q}^{-1}{\displaystyle \sum {y}_{i}}\left({m}_{i}-{y}_{i}\right)\\ {\frac{\partial \mathcal{l}}{\partial p}|}_{\varphi =0}={T}_{2}\left(p\right)={p}^{-1}{\displaystyle \sum {y}_{i}}-{q}^{-1}{\displaystyle \sum}\left({m}_{i}-{y}_{i}\right)\end{array}\}$ (5)

where $q=1-p$.

Setting the second equation in (5) to zero and solving for *p* yields

$\stackrel{^}{p}={\displaystyle \sum {y}_{i}}/{\displaystyle \sum {m}_{i}}$ (6)

as the maximum likelihood estimator of *p* under
${H}_{0}:\varphi =0$.

Also, the second partial derivatives are given in (7), (8), (9)

$\frac{{\partial}^{2}\mathcal{l}}{\partial {\varphi}^{2}}=-{{\displaystyle \sum}}^{\text{}}\frac{{y}_{i}^{2}\left({y}_{i}-1\right)}{{\left(p+{y}_{i}\varphi \right)}^{2}}-{{\displaystyle \sum}}^{\text{}}\frac{{y}_{i}^{2}\left({m}_{i}-{y}_{i}\right)}{{\left(1-p-{y}_{i}\varphi \right)}^{2}},$ (7)

$\frac{{\partial}^{2}\mathcal{l}}{\partial \varphi \partial p}=-{{\displaystyle \sum}}^{\text{}}\frac{{y}_{i}\left({y}_{i}-1\right)}{{\left(p+{y}_{i}\varphi \right)}^{2}}-{{\displaystyle \sum}}^{\text{}}\frac{{y}_{i}\left({m}_{i}-{y}_{i}\right)}{{\left(1-p-{y}_{i}\varphi \right)}^{2}},$ (8)

$\frac{{\partial}^{2}\mathcal{l}}{\partial {p}^{2}}=-n{p}^{-2}-{{\displaystyle \sum}}^{\text{}}\frac{\left({y}_{i}-1\right)}{{\left(p+{y}_{i}\varphi \right)}^{2}}-{{\displaystyle \sum}}^{\text{}}\frac{\left({m}_{i}-{y}_{i}\right)}{{\left(1-p-{y}_{i}\varphi \right)}^{2}}$ (9)

Setting $\varphi =0$, the above three derivatives become, respectively:

${T}_{11}\left(p\right)=-{p}^{-2}{\displaystyle \sum {y}_{i}^{2}}\left({y}_{i}-1\right)-{q}^{-2}{\displaystyle \sum {y}_{i}^{2}}\left({m}_{i}-{y}_{i}\right)$ (10)

${T}_{12}\left(p\right)=-{p}^{-2}{\displaystyle \sum {y}_{i}}\left({y}_{i}-1\right)-{q}^{-2}{\displaystyle \sum {y}_{i}}\left({m}_{i}-{y}_{i}\right)$ (11)

and,

${T}_{22}\left(p\right)=-n{p}^{-2}-{p}^{-2}{{\displaystyle \sum}}^{\text{}}\left({y}_{i}-1\right)-{q}^{-2}{{\displaystyle \sum}}^{\text{}}\left({m}_{i}-{y}_{i}\right)$ (12)

Under the null hypothesis ${H}_{0}:\varphi =0$, the ${Y}_{i}$'s are independent binomial variates. Using the expected values of ${Y}_{i}$, ${Y}_{i}^{2}$ and ${Y}_{i}^{3}$ for binomial variates one can easily see that $E\left[{T}_{1}\left(p\right)\right]=0$.

Denoting $-E\left[{T}_{11}\left(p\right)\right]={A}_{11}\left(p\right),-E\left[{T}_{12}\left(p\right)\right]={A}_{12}\left(p\right)$ and $-E\left[{T}_{22}\left(p\right)\right]={A}_{22}\left(p\right)$ we can then show that

${A}_{11}\left(p\right)=\left(2-3p\right){q}^{-1}{\displaystyle \sum {m}_{i}}\left({m}_{i}-1\right)+p{q}^{-1}{\displaystyle \sum {m}_{i}^{2}}\left({m}_{i}-1\right),$ (13)

${A}_{12}\left(p\right)={q}^{-1}{\displaystyle \sum {m}_{i}}\left({m}_{i}-1\right),$ (14)

and

${A}_{22}\left(p\right)={\left(pq\right)}^{-1}{\displaystyle \sum {m}_{i}}.$ (15)

Equations (13), (14), and (15) are in fact the elements of Fisher’s information matrix when the null hypothesis ${H}_{0}:\varphi =0$ is true.
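These expectations can be confirmed by brute-force enumeration of the binomial distribution of each ${Y}_{i}$ under ${H}_{0}$. The short Python check below is our own illustration, with arbitrary group sizes:

```python
from math import comb

def binom_pmf(y, m, p):
    return comb(m, y) * p ** y * (1 - p) ** (m - y)

def E(f, m, p):
    """E[f(Y)] for Y ~ Binomial(m, p), by direct enumeration."""
    return sum(f(y) * binom_pmf(y, m, p) for y in range(m + 1))

p = 0.3
q = 1 - p
ms = [4, 6, 9]                       # arbitrary litter sizes m_i

# expectations of the negatives of the second derivatives (10)-(12) under H0
A11 = sum(E(lambda y, m=m: y**2 * (y - 1) / p**2 + y**2 * (m - y) / q**2, m, p) for m in ms)
A12 = sum(E(lambda y, m=m: y * (y - 1) / p**2 + y * (m - y) / q**2, m, p) for m in ms)
A22 = sum(E(lambda y, m=m: y / p**2 + (m - y) / q**2, m, p) for m in ms)

Sm1 = sum(m * (m - 1) for m in ms)
Sm2 = sum(m * m * (m - 1) for m in ms)
assert abs(A11 - ((2 - 3 * p) / q * Sm1 + p / q * Sm2)) < 1e-9   # Eq. (13)
assert abs(A12 - Sm1 / q) < 1e-9                                 # Eq. (14)
assert abs(A22 - sum(ms) / (p * q)) < 1e-9                       # Eq. (15)
```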

To test the hypothesis
${H}_{0}:\varphi =0$, one can use the statistic
${T}_{1}\left(p\right)$ according to Neyman’s methodology [7]. Since *p* is unknown, we can follow Moran’s suggestion [8] and use the statistic
${T}_{1}\left(\stackrel{\u02dc}{p}\right)$, where
$\stackrel{\u02dc}{p}$ is any root-n consistent estimator of *p*. The maximum likelihood estimator
$\stackrel{^}{p}$, given in (6), is the simplest such estimator. On substituting
$\stackrel{^}{p}$ in (5) and on simplifying, we get

${T}_{1}\left(\stackrel{^}{p}\right)={\left(\stackrel{^}{p}\stackrel{^}{q}\right)}^{-1}{{\displaystyle \sum}}^{\text{}}{\left({y}_{i}-{m}_{i}\stackrel{^}{p}\right)}^{2}+{\stackrel{^}{q}}^{-1}{\displaystyle \sum {m}_{i}}\left({y}_{i}-{m}_{i}\stackrel{^}{p}\right)-{\displaystyle \sum {m}_{i}}$ (16)

It may be noted that when ${m}_{1}={m}_{2}=\cdots ={m}_{n}=m$, the expression for ${T}_{1}\left(\stackrel{^}{p}\right)$ reduces to

${\left(\stackrel{^}{p}\stackrel{^}{q}\right)}^{-1}{{\displaystyle \sum}}^{\text{}}{\left({y}_{i}-m\stackrel{^}{p}\right)}^{2}-mn$

which is like Fisher’s variance test statistic. From Cox and Hinkley [10],

$Var\left[{T}_{1}\left(\stackrel{^}{p}\right)\right]={A}_{11}\left(p\right)-{A}_{12}^{2}\left(p\right)/{A}_{22}\left(p\right)$ (17)

The substitution of
$\stackrel{^}{p}$ for *p* in (17) gives the functional form of the test statistic, under
${H}_{0}:\varphi =0$, as

${M}^{2}={\left[{T}_{1}\left(\stackrel{^}{p}\right)\right]}^{2}/\stackrel{^}{Var}\left[{T}_{1}\left(\stackrel{^}{p}\right)\right].$ (18)

The statistic *M*^{2} (18) has an asymptotic (for
$n\to \infty $ ) chi-square distribution with one degree of freedom. Accordingly, the above statistic provides a
$C\left(\alpha \right)$ binomial score test which is asymptotically optimal against the quasi-binomial alternative.

3. Examples

We shall now consider an example in which the data are binomially distributed, so that the test statistic *M*^{2} does not reject the hypothesis of a binomial distribution; a second example, in which *M*^{2} indicates that the data are not binomially distributed, is the COVID-19 case fatality data analyzed in Section 5.

Example 1. Paul [11] discussed a teratological experiment in which pregnant Dutch rabbits were treated with varying doses of a compound. Each litter (group) consisted of a varying number of live fetuses. The number of fetuses in each litter with skeletal or visceral abnormalities was then observed. For illustration, we consider the group treated with the high dose, consisting of $n=17$ litters, which gave the following observations:

${m}_{i}$: 9 10 7 5 4 6 3 8 5 4 4 5 3 8 6 8 6

${y}_{i}$: 1 0 1 0 1 0 1 1 2 0 4 1 1 4 2 3 1

Since $\sum {m}_{i}=101$ and $\sum {y}_{i}=23$, $\stackrel{^}{p}=23/101=0.228$.

To test the null hypothesis *H*_{0}: the data are binomially distributed, *i.e.*
$\varphi =0$, against *H*_{1}: the data are quasi-binomially distributed, *i.e.*
$\varphi \ne 0$, we compute the quantities in (13) to (15) and apply them in (16) to (18).

${A}_{12}\left(\stackrel{^}{p}\right)=\frac{570}{0.772}=738.342,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{A}_{22}\left(\stackrel{^}{p}\right)=\frac{101}{\left(0.228\right)\left(0.772\right)}=573.811,$

${A}_{11}\left(\stackrel{^}{p}\right)=\frac{2-3\left(0.228\right)}{0.772}\left(570\right)+\frac{0.228}{0.772}\left(4206\right)=2213.84,$

and

$\stackrel{^}{Var}\left({T}_{1}\left(\stackrel{^}{p}\right)\right)=2213.84-\frac{{\left(738.342\right)}^{2}}{573.811}=\mathrm{1263.79.}$

Thus, from (18),

${M}^{2}=\frac{{\left(42.781\right)}^{2}}{\left(1263.79\right)}=1.448$

Since ${P}_{r}\left({M}^{2}\ge 1.448\right)={P}_{r}\left({\chi}_{1}^{2}\ge 1.448\right)=0.22$, the null hypothesis cannot be rejected. Thus, we conclude that the data are binomially distributed with $\stackrel{^}{p}=0.228$.
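The computations in this example follow directly from (6) and (13)-(18); below is a Python sketch of the full test (our own implementation; it carries the exact $\stackrel{^}{p}=23/101$ rather than the rounded 0.228, so the statistic comes out near 1.47 rather than 1.448, with the same conclusion):

```python
def score_test(ys, ms):
    """C(alpha) score statistic M^2 of Eq. (18) for H0: phi = 0."""
    p = sum(ys) / sum(ms)                                  # MLE under H0, Eq. (6)
    q = 1 - p
    T1 = (sum(y * (y - 1) for y in ys) / p
          - sum(y * (m - y) for y, m in zip(ys, ms)) / q)  # T1(p), Eq. (5)
    A11 = ((2 - 3 * p) / q * sum(m * (m - 1) for m in ms)
           + p / q * sum(m * m * (m - 1) for m in ms))     # Eq. (13)
    A12 = sum(m * (m - 1) for m in ms) / q                 # Eq. (14)
    A22 = sum(ms) / (p * q)                                # Eq. (15)
    var_T1 = A11 - A12 ** 2 / A22                          # Eq. (17)
    return T1 ** 2 / var_T1                                # Eq. (18)

# Example 1 data (Paul [11], high-dose group)
ms = [9, 10, 7, 5, 4, 6, 3, 8, 5, 4, 4, 5, 3, 8, 6, 8, 6]
ys = [1, 0, 1, 0, 1, 0, 1, 1, 2, 0, 4, 1, 1, 4, 2, 3, 1]
M2 = score_test(ys, ms)
# M2 is about 1.47, below the 5% chi-square(1) cutoff 3.84: do not reject H0
```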

4. Quasi-Binomial Regression Model

It is well known that the logistic-linear model is a basis for analyzing regression data or the data from designed experiments when the response variable is measured on the binary scale. The purpose of this section is to modify the QBD so that a finite number of concomitant variables may be included which may account for most of the sources of the extra-binomial variation.

Suppose that the *i*^{th} response
${Y}_{i}\left(1\le i\le n\right)$ has the QBD given by (1). Also, let
${x}_{i1},{x}_{i2},\cdots ,{x}_{ik}$ be the values of *k* explanatory variables associated with the response variable
${y}_{i}$, where the
$n\times k$ design matrix is of rank *k*. We now employ the customary logistic transformation on the binomial parameter *p* as indicated below:

${p}_{i}={\text{e}}^{{\theta}_{i}}{\left(1+{\text{e}}^{{\theta}_{i}}\right)}^{-1},$

where,

${\theta}_{i}=\mathcal{l}n\left[{p}_{i}{\left(1-{p}_{i}\right)}^{-1}\right]=\underset{j=1}{\overset{k}{{\displaystyle \sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{x}_{ij}{\beta}_{j}$ (19)

where ${\beta}_{1},{\beta}_{2},\cdots ,{\beta}_{k}$ in the right-hand side of (19) are the regression coefficients which are to be estimated along with the parameter $\varphi $.

The likelihood function will be given by

$L={\displaystyle {\prod}_{i=1}^{n}\left[\left(\begin{array}{c}{m}_{i}\\ {y}_{i}\end{array}\right)\left(\frac{{\text{e}}^{{\theta}_{i}}}{1+{\text{e}}^{{\theta}_{i}}}\right){\left(\frac{{\text{e}}^{{\theta}_{i}}}{1+{\text{e}}^{{\theta}_{i}}}+{y}_{i}\varphi \right)}^{{y}_{i}-1}{\left(\frac{1}{1+{\text{e}}^{{\theta}_{i}}}-{y}_{i}\varphi \right)}^{{m}_{i}-{y}_{i}}\right]}$ (20)

Taking the log of the likelihood function (20) we get the log-likelihood function in (21)

$\begin{array}{c}\mathcal{l}={\displaystyle \sum \mathcal{l}n}\left[{\text{e}}^{{\theta}_{i}}{\left(1+{\text{e}}^{{\theta}_{i}}\right)}^{-1}\right]+{{\displaystyle \sum}}^{\text{}}\left({y}_{i}-1\right)\mathcal{l}n\left[{\text{e}}^{{\theta}_{i}}{\left(1+{\text{e}}^{{\theta}_{i}}\right)}^{-1}+{y}_{i}\varphi \right]\\ \text{\hspace{0.17em}}\text{\hspace{0.05em}}+{{\displaystyle \sum}}^{\text{}}\left({m}_{i}-{y}_{i}\right)\mathcal{l}n\left[{\left(1+{\text{e}}^{{\theta}_{i}}\right)}^{-1}-{y}_{i}\varphi \right]+\text{constant},\end{array}$ (21)

where the summations are for
$i=1$ to *n* and
${\theta}_{i}$ is defined in (19).
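The log-likelihood (21) is straightforward to code, which is useful both for checking the derivations and for feeding a general-purpose optimizer. A minimal Python sketch of our own follows; the binomial-coefficient constant is included, so at $\varphi =0$ the function must agree exactly with the ordinary logistic-binomial log-likelihood, which gives a handy sanity check:

```python
from math import exp, log, comb

def qbd_loglik(beta, phi, X, ys, ms):
    """Log-likelihood (21) of the QBD regression model.

    X is an n x k list of covariate rows; the parameter range requires
    p_i + y_i*phi > 0 and 1 - p_i - y_i*phi > 0 for every observation.
    """
    ll = 0.0
    for x, y, m in zip(X, ys, ms):
        theta = sum(b * xj for b, xj in zip(beta, x))   # linear predictor, Eq. (19)
        p = exp(theta) / (1 + exp(theta))               # logistic transformation
        ll += (log(comb(m, y)) + log(p)
               + (y - 1) * log(p + y * phi)
               + (m - y) * log(1 - p - y * phi))
    return ll
```

Maximizing this function over $\left({\beta}_{1},\cdots ,{\beta}_{k},\varphi \right)$, subject to the admissibility constraints on $\varphi $, is equivalent to solving the ML equations (22) and (23).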

Differentiating $\mathcal{l}$, given in (21), partially with respect to ${\beta}_{r},r=1,2,\cdots ,k$, and $\varphi $, we have the following system of $\left(k+1\right)$ ML equations:

$\begin{array}{l}{\stackrel{\dot{}}{\mathcal{l}}}_{r}=\frac{\partial \mathcal{l}}{\partial {\beta}_{r}}={\displaystyle \sum}{x}_{ir}-{\displaystyle \sum}{\text{e}}^{{\theta}_{i}}{\left(1+{\text{e}}^{{\theta}_{i}}\right)}^{-1}{x}_{ir}\\ \text{ }+{\displaystyle \sum}\left({y}_{i}-1\right)\frac{{\text{e}}^{{\theta}_{i}}{\left(1+{\text{e}}^{{\theta}_{i}}\right)}^{-2}}{{\text{e}}^{{\theta}_{i}}{\left(1+{\text{e}}^{{\theta}_{i}}\right)}^{-1}+{y}_{i}\varphi}{x}_{ir}-{\displaystyle \sum}\left({m}_{i}-{y}_{i}\right)\frac{{\text{e}}^{{\theta}_{i}}{\left(1+{\text{e}}^{{\theta}_{i}}\right)}^{-2}}{{\left(1+{\text{e}}^{{\theta}_{i}}\right)}^{-1}-{y}_{i}\varphi}{x}_{ir}=0,\text{ }r=1,2,\cdots ,k\end{array}$ (22)

and

${\stackrel{\dot{}}{\mathcal{l}}}_{\varphi}=\frac{\partial \mathcal{l}}{\partial \varphi}={{\displaystyle \sum}}^{\text{}}\frac{{y}_{i}\left({y}_{i}-1\right)}{{\text{e}}^{{\theta}_{i}}{\left(1+{\text{e}}^{{\theta}_{i}}\right)}^{-1}+{y}_{i}\varphi}-{{\displaystyle \sum}}^{\text{}}\frac{{y}_{i}\left({m}_{i}-{y}_{i}\right)}{{\left(1+{\text{e}}^{{\theta}_{i}}\right)}^{-1}-{y}_{i}\varphi}=0.$ (23)

The second partial derivatives are given by (where ${q}_{i}=1-{p}_{i}$ )

${\stackrel{\dot{}}{\mathcal{l}}}_{\varphi \varphi}=\frac{{\partial}^{2}\mathcal{l}}{\partial {\varphi}^{2}}=-{{\displaystyle \sum}}^{\text{}}\frac{{y}_{i}^{2}\left({y}_{i}-1\right)}{{\left({p}_{i}+{y}_{i}\varphi \right)}^{2}}-{{\displaystyle \sum}}^{\text{}}\frac{{y}_{i}^{2}\left({m}_{i}-{y}_{i}\right)}{{\left(1-{p}_{i}-{y}_{i}\varphi \right)}^{2}}$

${\stackrel{\dot{}}{\mathcal{l}}}_{\varphi r}=\frac{{\partial}^{2}\mathcal{l}}{\partial \varphi \partial {\beta}_{r}}=-{{\displaystyle \sum}}^{\text{}}\frac{{y}_{i}\left({y}_{i}-1\right){p}_{i}{q}_{i}}{{\left({p}_{i}+{y}_{i}\varphi \right)}^{2}}{x}_{ir}-{{\displaystyle \sum}}^{\text{}}\frac{{y}_{i}\left({m}_{i}-{y}_{i}\right){p}_{i}{q}_{i}}{{\left(1-{p}_{i}-{y}_{i}\varphi \right)}^{2}}{x}_{ir}$

and

$\begin{array}{l}{\stackrel{\dot{}}{\mathcal{l}}}_{rs}=\frac{{\partial}^{2}\mathcal{l}}{\partial {\beta}_{r}\partial {\beta}_{s}}=-{\displaystyle \sum}{p}_{i}{q}_{i}{x}_{ir}{x}_{is}+{\displaystyle \sum}\frac{{y}_{i}-1}{{p}_{i}+{y}_{i}\varphi}{p}_{i}{q}_{i}\left(1-2{p}_{i}\right){x}_{ir}{x}_{is}-{\displaystyle \sum}\frac{{y}_{i}-1}{{\left({p}_{i}+{y}_{i}\varphi \right)}^{2}}{p}_{i}^{2}{q}_{i}^{2}{x}_{ir}{x}_{is}\\ \text{ }-{\displaystyle \sum}\frac{{m}_{i}-{y}_{i}}{{q}_{i}-{y}_{i}\varphi}{p}_{i}{q}_{i}\left(1-2{p}_{i}\right){x}_{ir}{x}_{is}-{\displaystyle \sum}\frac{\left({m}_{i}-{y}_{i}\right){p}_{i}^{2}{q}_{i}^{2}}{{\left(1-{p}_{i}-{y}_{i}\varphi \right)}^{2}}{x}_{ir}{x}_{is}\end{array}$

for $r,s=1,2,\cdots ,k$.

The expectations of the negatives of the above second partial derivatives would give the elements of the Fisher’s information matrix. For these we use some results from [9] on inverse moments of the QBD. Thus

$\begin{array}{c}{I}_{\varphi \varphi}={\displaystyle \sum E}\left[\frac{{Y}_{i}^{2}\left({Y}_{i}-1\right)}{{\left({p}_{i}+{Y}_{i}\varphi \right)}^{2}}\right]+{\displaystyle \sum E}\left[\frac{{Y}_{i}^{2}\left({m}_{i}-{Y}_{i}\right)}{{\left(1-{p}_{i}-{Y}_{i}\varphi \right)}^{2}}\right]\\ =\underset{i=1}{\overset{n}{{\displaystyle \sum}}}\frac{{m}_{i}\left({m}_{i}-1\right){p}_{i}\left[2{q}_{i}+\left({m}_{i}-1\right){p}_{i}\right]}{\left[{q}_{i}-\left({m}_{i}-1\right)\varphi \right]\left({p}_{i}+2\varphi \right)}\end{array}$ (24)

$\begin{array}{c}{I}_{\varphi r}={\displaystyle \sum E}\left[\frac{{Y}_{i}\left({Y}_{i}-1\right)}{{\left({p}_{i}+{Y}_{i}\varphi \right)}^{2}}\right]{p}_{i}{q}_{i}{x}_{ir}+{\displaystyle \sum E}\left[\frac{{Y}_{i}\left({m}_{i}-{Y}_{i}\right)}{{\left(1-{p}_{i}-{Y}_{i}\varphi \right)}^{2}}\right]{p}_{i}{q}_{i}{x}_{ir}\\ =\underset{i=1}{\overset{n}{{\displaystyle \sum}}}\frac{{m}_{i}\left({m}_{i}-1\right)\left(1-\left({m}_{i}-3\right)\varphi \right){p}_{i}^{2}{q}_{i}{x}_{ir}}{\left({p}_{i}+2\varphi \right)\left(1-{p}_{i}-{m}_{i}\varphi \right)+\varphi},\text{ }r=1,2,\cdots ,k\end{array}$ (25)

${I}_{rs}=\underset{i=1}{\overset{n}{{\displaystyle \sum}}}\left[1-\frac{\left({m}_{i}-1\right){p}_{i}}{{p}_{i}-2\varphi}+\frac{\left(1+\varphi -{m}_{i}\varphi \right){p}_{i}}{{q}_{i}-{m}_{i}\varphi +\varphi}\right]{m}_{i}{p}_{i}{q}_{i}^{2}{x}_{ir}{x}_{is},$ (26)

where $r,s=1,2,3,\cdots ,k$.

Equations (24), (25), (26) are the elements of Fisher’s information matrix. From [12], and based on the large sample theory of the likelihood estimation, we can establish the asymptotic normality of $\stackrel{^}{\Lambda}=\left(\stackrel{^}{\beta},\stackrel{^}{\varphi}\right)$ ; that is

$\sqrt{n}\left(\stackrel{^}{\Lambda}-\Lambda \right)\to {N}_{k+1}\left(0,\Sigma \right)$

in law. The large sample variance covariance matrix is given by

$\Sigma =n{\left[\begin{array}{cc}{I}_{rs}& {I}_{r\varphi}\\ {I}_{\varphi r}& {I}_{\varphi \varphi}\end{array}\right]}^{-1}.$

In testing hypotheses about parameters in a logit model, one generally uses large sample tests. The choice is between the likelihood ratio test and other consistent tests which are asymptotically equivalent to it under the null hypothesis [8]; among the latter, the Wald test has the practical advantage that, in contrast to the likelihood ratio test, it does not require fitting the model under both the null and alternative hypotheses. Now, to test the null hypothesis ${H}_{0}:\varphi =0$ versus ${H}_{1}:\varphi \ne 0$, the Wald statistic is given in (27):

$W=\frac{{\left(\stackrel{^}{\varphi}\right)}^{2}}{A{V}_{0}\left(\stackrel{^}{\varphi}\right)},$ (27)

In (27)
$A{V}_{0}\left(\stackrel{^}{\varphi}\right)$ is the asymptotic variance of
$\stackrel{^}{\varphi}$, evaluated under the null hypothesis *H*_{0}. Under *H*_{0}, the statistic *W* has the same asymptotic (for large samples)
${\chi}_{1}^{2}$ distribution as the likelihood ratio statistic. Equivalently,
${H}_{0}:\varphi =0$ is rejected whenever the value of

$\stackrel{^}{\varphi}/\sqrt{A{\stackrel{^}{V}}_{0}\left(\stackrel{^}{\varphi}\right)}>{Z}_{1-\alpha},$

where
${Z}_{1-\alpha}$ is the standard normal deviate for *α*-level of significance, and
$A{\stackrel{^}{V}}_{0}\left(\stackrel{^}{\varphi}\right)$ denotes the large sample variance of
$\stackrel{^}{\varphi}$, under *H*_{0}, and after all other parameters are replaced by their maximum likelihood estimates.

5. Applications of the QBD Regression

1) Clinical trial results

One group of 16 pregnant female rats was fed a control diet during pregnancy and lactation, and a second group of 16 pregnant female rats was given a diet treated with a chemical. Weil [13] published data on the number *m* of pups alive at 4 days and the number *y* of pups that died by the end of the 21-day lactation period for each litter. The fractions ${y}_{i}/{m}_{i}$ for the two groups are given below:

Control: 0/13, 0/12, 0/9, 0/9, 0/8, 0/8, 1/13, 1/12,

1/10, 1/10, 1/9, 2/13, 1/5, 2/7, 3/10, 3/10.

Treated: 0/12, 0/11, 0/10, 0/9, 1/11, 1/10, 1/10, 1/9,

1/9, 1/5, 2/9, 3/7, 5/10, 3/6, 7/10, 7/7.

We apply the quasi-binomial regression model to the above data with 16 replications in each group and take

${\theta}_{i}=\underset{j=1}{\overset{2}{{\displaystyle \sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{x}_{ij}{\beta}_{j},\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=1,2,\cdots ,32$

where ${x}_{i1}=1$ and ${x}_{i2}=0$ when the subject is in the control group and ${x}_{i2}=1$ when it is in the treatment group.

The maximum likelihood estimates of $\left({\beta}_{1},{\beta}_{2},\varphi \right)$ were obtained by simultaneously solving the system of equations ${\stackrel{\dot{}}{\mathcal{l}}}_{r}=0$ and ${\stackrel{\dot{}}{\mathcal{l}}}_{\varphi}=0$, given in (22) and (23), with the help of the NLMIXED procedure in SAS (version 9.4). The ML estimates are

${\stackrel{^}{\beta}}_{1}=-2.5135\left(0.3435\right),\text{\hspace{0.17em}}{\stackrel{^}{\beta}}_{2}=0.6595\left(0.4307\right)$

and

$\stackrel{^}{\varphi}=0.0517\left(0.0114\right)$

The numbers in parentheses are the large sample standard errors. Both ${\stackrel{^}{\beta}}_{1}$ and $\stackrel{^}{\varphi}$ are highly significant (p-value < 0.001).
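From (27), the reported estimates and standard errors already determine the Wald statistics; a quick numerical check (the values are copied from the output above):

```python
# ML estimates and standard errors reported for the Weil data fit
phi_hat, se_phi = 0.0517, 0.0114
b2_hat, se_b2 = 0.6595, 0.4307

W_phi = (phi_hat / se_phi) ** 2   # Wald statistic for H0: phi = 0, Eq. (27)
W_b2 = (b2_hat / se_b2) ** 2      # Wald statistic for H0: beta_2 = 0

# W_phi is about 20.6, far above the 5% chi-square(1) cutoff 3.84, so the
# binomial hypothesis is rejected; W_b2 is about 2.34, so the treatment
# effect is not significant at the 5% level.
```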

2) Example 2: Multiple regression (risk factors associated with COVID 19 case fatality)

The novel coronavirus disease (COVID-19) pandemic affected every country in our world and imposed tremendous strains on the world economies and the health care systems.

During 2019-2020 over 5000 research papers were published whose fundamental aim has been to understand the mechanism of spread of the virus and the main risk factors leading to associated mortality. Many of these reports on the COVID-19 pandemic suggested that the coronavirus was associated with more serious chronic diseases and mortality regardless of country and age. Other reports suggested that those with underlying comorbidities, including obesity, type 2 diabetes, heart, and kidney diseases, are at high risk of infection and death. Therefore, there is a need to understand how common comorbidities and other factors are associated with the risk of death due to COVID-19 infection. Our investigation aims at exploring this relationship. Specifically, our fundamental aim is to explore the relationship between the aggregate number of deaths and the total number of reported COVID-19 cases, together with country-level risk factors.

The WHO website [14] provided a detailed account of the number of COVID-19 cases by country, which we accessed on December 2, 2020. We included in the study the cumulative number of COVID-19 cases and the associated death counts by country as of that date. We excluded countries that had cumulative counts of less than 10,000 cases. We denote the number of cases per country by *m*, and the corresponding number of deaths by *y*. The database includes 112 countries, which we divided into regions according to the classification given in data source [15]. The most referenced risk factors are:

1) *X*_{1} = log(percentage of obese persons in a country, as reported in 2018) [17].

2) *X*_{2} = log(population density) [18] [19] [20].

3) *X*_{3} = log(number of people with colorectal cancer in a country, as reported in 2017) [21].

4) *X*_{4} = log(chronic kidney disease case fatality in a country, as reported in 2017) [15] [16].

Note that we used the log (factor) to stabilize the variance. The data are summarized in Table 1.

The histogram of y is given in Figure 1, showing the severe skewness in the distribution.

Figures 2-5 are the box plots of the risk factors. The plots show that the distributions are similar across regions, except for *X*_{3}.

Table 1. Summary statistics of the COVID-19 cases (*m*), deaths among cases, and the four covariates.

Figure 1. Histogram of the number of deaths.

Figure 2. Box plot of *X*_{1} by region.

Figure 3. Box plot of *X*_{2} by region.

Figure 4. Box plot of *X*_{3} by region.

Figure 5. Box plot of *X*_{4} by region.

We estimated the average case-fatality rate as:

$\stackrel{^}{p}={\displaystyle \sum {y}_{i}}/{\displaystyle \sum {m}_{i}}=0.023$.

Moreover, the other quantities are given as:

${A}_{11}\left(p\right)=8.49\times {10}^{19}$, ${A}_{12}\left(p\right)=3.51\times {10}^{14}$, ${A}_{22}\left(p\right)=2772\times {10}^{6}$,

${T}_{1}\left(\stackrel{^}{p}\right)=-1.1\times {10}^{12}$, $Var\left[{T}_{1}\left(\stackrel{^}{p}\right)\right]=4.05\times {10}^{19}$.

Hence ${M}^{2}={\left[{T}_{1}\left(\stackrel{^}{p}\right)\right]}^{2}/\stackrel{^}{Var}\left[{T}_{1}\left(\stackrel{^}{p}\right)\right]=29932.22$, and we therefore reject the binomial hypothesis. We used the SAS NLMIXED procedure to fit the QB regression model. The results are shown in Table 2.

We note that the fitting algorithm also produces the variance-covariance matrix of the estimated regression parameters (not shown here).

The Nonlinear Mixed Model procedure (NLMIXED) is an iterative algorithm and its convergence, which can be slow, depends heavily on the starting values.

Table 2. Results of the quasi-binomial regression for the COVID-19 case fatality data.

6. Discussion

For observed data sets which exhibit variation greater than what is expected under the hypothesized model, researchers often try to determine the source of this phenomenon, which is known as over-dispersion. There are three broad categories of such sources: 1) genuine over-dispersion or under-dispersion, which may be accounted for by generalizations of the hypothesized distribution; 2) apparent over-dispersion due to outliers, which may be detected by residual analysis or by some other diagnostic method; 3) poor choice of some of the explanatory variables. Therefore, it seems appropriate to apply a model which includes a dispersion parameter as well as a reasonable number of carefully chosen covariates. The fitting of the QBD regression model can be tricky, and one may adopt one of the algorithms described in [22] and [23].

Acknowledgements

The authors thank anonymous reviewers for their constructive comments.

Appendix: Flow Chart for the Manuscripts

Conflicts of Interest

The authors declare no conflicts of interest.

[1] Altham, P.M.E. (1978) Two Generalizations of the Binomial Distribution. Applied Statistics, 27, 162-167.

[2] Kupper, L.L. and Haseman, J.K. (1978) The Use of a Correlated Binomial Model for the Analysis of Certain Toxicological Experiments. Biometrics, 34, 67-76.

[3] Williams, D.A. (1975) The Analysis of Binary Responses from Toxicological Experiments Involving Reproduction and Teratogenicity. Biometrics, 31, 949-952.

[4] Breslow, N.E. and Clayton, D.G. (1993) Approximate Inference in Generalized Linear Mixed Models. Journal of the American Statistical Association, 88, 9-25. https://doi.org/10.1080/01621459.1993.10594284

[5] Consul, P.C. (1974) A Simple Urn Model Dependent upon Predetermined Strategy. Sankhyā: The Indian Journal of Statistics, Series B, 36, 391-399.

[6] Shenton, L.R. (2006) Quasi Binomial Distribution. Wiley StatsRef: Statistics Reference Online.

[7] Neyman, J. (1959) Optimal Asymptotic Tests of Composite Statistical Hypotheses. In: Grenander, U., Ed., Probability and Statistics, John Wiley & Sons, New York, 13-34.

[8] Moran, P.A. (1970) On Asymptotically Optimal Tests of Composite Hypotheses. Biometrika, 57, 47-55. https://doi.org/10.1093/biomet/57.1.47

[9] Consul, P.C. (1990) On Some Properties and Applications of Quasi-Binomial Distribution. Communications in Statistics—Theory and Methods, 19, 477-504. https://doi.org/10.1080/03610929008830214

[10] Cox, D.R. and Hinkley, D.V. (1974) Theoretical Statistics. Chapman and Hall, London.

[11] Paul, S.R. (1982) Analysis of Proportions of Affected Fetuses in Teratological Experiments. Biometrics, 38, 361-370.

[12] Rao, C.R. (1973) Linear Statistical Inference and Applications. John Wiley & Sons Inc., New York.

[13] Weil, C.S. (1970) Selection of the Valid Number of Sampling Units and a Consideration of Their Combination in Toxicological Studies Involving Reproduction, Teratogenesis or Carcinogenesis. Food and Cosmetic Toxicology, 8, 177-182. https://doi.org/10.1016/S0015-6264(70)80337-6

[14] WHO Coronavirus (COVID-19) Dashboard. https://covid19.who.int/table

[15] Chueh, T.-I., Zheng, C.-M., Hou, Y.-C. and Lu, K.-C. (2020) Novel Evidence of Acute Kidney Injury in COVID-19. Journal of Clinical Medicine, 9, 3547. https://doi.org/10.3390/jcm9113547

[16] GBD Chronic Kidney Disease Collaboration (2020) Global, Regional, and National Burden of Chronic Kidney Disease, 1990-2017: A Systematic Analysis for the Global Burden of Disease Study 2017. The Lancet, 395, 709-733. https://doi.org/10.1016/S0140-6736(20)30045-3

[17] Diabetes Prevalence (% of Population Ages 20 to 79)—Country Ranking. https://www.indexmundi.com/facts/indicators/SH.STA.DIAB.ZS/rankings

[18] https://openknowledge.worldbank.org/bitstream/handle/10986/32383/9781464814914.pdf

[19] Rashed, E.A., Kodera, S., Gomez-Tames, J. and Hirata, A. (2020) Correlation between COVID-19 Morbidity and Mortality Rates in Japan and Local Population Density, Temperature, and Absolute Humidity. International Journal of Environmental Research and Public Health, 17, Article No. 5477. https://doi.org/10.3390/ijerph17155477

[20] Population Density and Population Counts. https://data.worldbank.org/

[21] GBD 2017 Colorectal Cancer Collaborators (2019) The Global, Regional, and National Burden of Colorectal Cancer and Its Attributable Risk Factors in 195 Countries and Territories, 1990-2017: A Systematic Review for the Global Burden of Disease Study 2017. The Lancet Gastroenterology and Hepatology, 4, 913-933. https://doi.org/10.1016/S2468-1253(19)30345-0

[22] Boateng, E. and Abaye, D. (2019) A Review of the Logistic Regression Model with Emphasis on Medical Research. Journal of Data Analysis and Information Processing, 7, 190-207. https://doi.org/10.4236/jdaip.2019.74012

[23] Deng, J. and Lu, Q.J. (2018) Fuzzy Regression Model Based on Fuzzy Distance Measure. Journal of Data Analysis and Information Processing, 6, 126-140. https://doi.org/10.4236/jdaip.2018.63008


Copyright © 2022 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.