Generalized Method of Moments and Generalized Estimating Functions Based on Probability Generating Function for Count Models

Abstract

Generalized method of moments (GMM) estimation based on the probability generating function is considered. Estimation and model testing are unified using this approach, which also leads to distribution free chi-square tests. The estimation methods developed are related to estimation methods based on generalized estimating equations, but with the advantage of providing statistics for model testing. The proposed methods overcome numerical problems often encountered when the probability mass function has no closed form, which prevents the use of maximum likelihood (ML) procedures; in general, ML procedures also do not lead to distribution free model testing statistics.


1. Introduction

Count data are encountered in many fields of application, including actuarial science, and fitting discrete count models to such data is of interest. Classical methods such as maximum likelihood (ML) procedures often require the probability mass function of the model to have closed form, and furthermore the inference techniques do not lead to distribution free statistics when Pearson statistics are used. In fact, if a model does not fit the data, better models can be created using a compound procedure, a stopped sum procedure or a mixing procedure, and the new models might provide a better fit as they can take into account features of the modeling process which were omitted earlier.

For discussions of these procedures see the books by Johnson et al. [1] and Klugman et al. [2]. These better models often do not have closed-form probability mass functions, but their probability generating functions often remain simple and have closed-form expressions.

For example, if count data display long tailed behavior, so that the Poisson model with probability generating function $P_\theta(s) = e^{\theta(s-1)}$, $\theta > 0$ does not provide a good fit, the discrete positive stable (DPS) distribution can be used as an alternative to the Poisson distribution. The DPS distribution does not have a closed or simple form for its probability mass function, but its probability generating function is simple and given by

$P_\delta(s) = e^{-\theta(1-s)^{\alpha}}$, $\delta = (\theta, \alpha)'$, $\alpha \in (0, 1]$, $\theta > 0$;

see Christoph and Schreiber [3] for this distribution. In their paper, expression (6) gives the series representation of the probability mass function of the DPS distribution,

$p(x = k; \delta) = (-1)^k \sum_{j=0}^{\infty} \binom{j\alpha}{k} \frac{(-\theta)^j}{j!}$, $k = 0, 1, \dots$

and expression (8) gives the recursive formula to compute $p(x = k; \delta)$ from the previous terms $p(x = 0; \delta), \dots, p(x = k-1; \delta)$ with

$(k+1)\, p(x = k+1; \delta) = \theta \sum_{m=0}^{k} p(x = k-m; \delta)\, (m+1)\, (-1)^m \binom{\alpha}{m+1}$, $k = 0, 1, \dots$
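The recursion (8) is straightforward to code; the sketch below (Python, a minimal illustration assuming the PGF form displayed above, with scipy.special.binom supplying the generalized binomial coefficient) computes the first DPS probabilities, and setting α = 1 recovers the Poisson(θ) mass function as a check.

```python
import numpy as np
from scipy.special import binom  # generalized binomial coefficient

def dps_pmf(k_max, theta, alpha):
    """First k_max + 1 probabilities of the discrete positive stable
    distribution, via the recursion in expression (8) of [3]."""
    p = np.zeros(k_max + 1)
    p[0] = np.exp(-theta)            # p(x = 0) = P_delta(0) = e^{-theta}
    for k in range(k_max):
        s = sum(p[k - m] * (m + 1) * (-1.0) ** m * binom(alpha, m + 1)
                for m in range(k + 1))
        p[k + 1] = theta * s / (k + 1)
    return p

# alpha = 1 reduces the recursion to (k+1) p(k+1) = theta p(k), i.e. Poisson
print(dps_pmf(5, theta=2.0, alpha=1.0))
```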

The probability mass function is thus complicated, and for model validation there is a need for a statistic for model testing. These issues make maximum likelihood (ML) procedures difficult to implement.

GMM procedures based on the probability generating function appear to be a natural way to introduce alternatives to ML procedures, bypassing the explicit use of the probability mass function and focusing uniquely on the probability generating function. In this vein, the procedures proposed in this paper make use of GMM and generalized estimating equation theory, and they are less simulation intensive than the inference techniques given in the paper by Luong et al. [4].

We shall use general GMM methodology but adapt it to situations where the moment conditions are based on the probability generating function, so that estimation and model testing can be carried out in a unified way for discrete count models. The choice of moments of the developed GMM procedures makes use of estimating function theory, which allows the number of points of the probability generating function used to tend to infinity as the sample size $n \to \infty$. Furthermore, we also relate GMM estimation to the approach using generalized estimating equations (GEE) based on a set of elementary or basic unbiased estimating functions; unlike GEE procedures, GMM procedures also provide distribution free chi-square statistics for model testing, while the theory of estimating functions remains useful as it provides insight on the choice of sample moments for GMM estimation. In other words, the proposed methods blend classical GMM procedures and inference techniques based on estimating equations, which in general allows flexibility, efficiency and model testing yet remains relatively simple to implement and might be of interest to practitioners. Consequently, the new methods differ from GMM procedures proposed in the literature on the following points:

1) GMM procedures as proposed by Doray et al. [5] only make use of a finite number of points of the probability generating function; our methods aim at achieving higher efficiency yet remain simple to implement. This is done by linking to the theory of estimating functions: the number of points used from the probability generating function need not be fixed and can tend to infinity as $n \to \infty$.

2) The new GMM procedures remain simpler to implement than GMM procedures using a continuum of moment conditions as proposed by Carrasco and Florens [6], or than adapting the GMM procedures with a continuum of moment conditions for the characteristic function, as proposed by Carrasco and Kotchoni [7], to the probability generating function. Practitioners might find the sophisticated methods based on a continuum of moment conditions difficult to implement.

The paper is organized as follows. In Section 2, we review available results from general GMM theory; although these results are not new once the moment conditions are defined, they make the paper more self-contained, as they will be adapted subsequently with moment conditions extracted from the probability generating function when count models are considered. In Section 3, GMM estimation and related GEE estimation for count models are considered. The chi-square statistics are given in Section 3.2.2. In Section 3.2.3, we consider GMM procedures based on optimum orthogonal estimating functions. In Section 4, we illustrate the implementation of the GMM methodology; preliminary results show that the methods are simple to implement and have the potential to be very efficient. The new methods display flexibility, as the sample moments can be changed for better efficiency if needed, and this can be done within the framework of the inference methods developed.

2. Generalized Method of Moments (GMM) Methodology

The inference techniques based on probability generating functions developed in this paper make use of results of Generalized Method of Moments (GMM) theory which are well established once the moment conditions are specified, see Martin et al. [8] (pp 352-384); also see Hamilton [9]. In this section, we briefly review GMM methodology for estimation and moment restrictions testing, to make the paper easier to follow and to connect with the problem of how to select moment conditions based on probability generating functions when applying GMM methods to discrete distributions.

The estimating equations of GMM methods will also be linked to the theory of estimating equations and generalized estimating equations (GEE) as developed by Godambe and Thompson [10], Morton [11] and Liang and Zeger [12].

2.1. Generalized Estimating Equations (GEE) and GMM Estimation

For the data, we shall assume that we have n independent observations $y_1, \dots, y_n$; these observations need not be identically distributed, but each $y_i$ follows a distribution which depends on the same vector of parameters $\theta = (\theta_1, \dots, \theta_p)'$, $\theta \in \Omega$, with $\Omega$ compact and $\Omega \subseteq R^p$. The true vector of parameters is denoted by $\theta_0$.

For the time being, assume that we have identified n unbiased basic estimating functions, or elementary estimating functions, denoted by $h_i = h_i(y_i; \theta)$, $i = 1, \dots, n$, with the property

$E_\theta(h_i(y_i; \theta)) = 0$ for $i = 1, \dots, n$. (1)

The optimum estimating functions based on linear combinations of $\{h_i(y_i;\theta), i = 1, \dots, n\}$ for estimating $\theta_0$ are given by

$g^{(r)}(\theta) = \sum_{i=1}^{n} h_i(y_i;\theta)\, \frac{E_\theta\left(\frac{\partial h_i}{\partial \theta_r}\right)}{E_\theta\left(h_i^2\right)}$, $r = 1, \dots, p$ (2)

where $E_\theta(h_i^2)$ is the variance of $h_i(y_i;\theta)$.

The vector of estimators $\hat\theta_{op}$ based on the optimum estimating equations is given as the solution of the system of equations $g^{(r)}(\theta) = 0$, $r = 1, \dots, p$. This result is given by Godambe and Thompson [10] (page 4) and Morton [11] (pages 229-230).

In applications, we often restrict our attention to $h_i(y_i;\theta)$ with some common functional form, so that we also use the notation $h_i(y_i;\theta) = h(y_i;\theta)$, $i = 1, \dots, n$, and more precisely $h(y_i;\theta) = h(y_i; s_i, \theta)$, where $s_i$ is a constant.

With this notation, which is commonly used in the literature, notice that the random variables $h(y_i;\theta)$, $i = 1, \dots, n$ need not be identically distributed.

Also, since estimating equations are defined up to a constant which does not depend on $\theta$, the related estimating functions can be re-expressed equivalently as

$g^{(r)}(\theta) = \frac{1}{n} \sum_{i=1}^{n} h(y_i;\theta)\, \frac{E_\theta\left(\frac{\partial h(y_i;\theta)}{\partial \theta_r}\right)}{E_\theta\left((h(y_i;\theta))^2\right)}$, $r = 1, \dots, p$ (3)

and the vector of estimators based on the optimum equations is given as the solution of the system of equations $g^{(r)}(\theta) = 0$, $r = 1, \dots, p$, using expression (3). Using vector notation, the vector of optimum estimating functions based on expression (3) can be expressed as $g(\theta) = (g^{(1)}(\theta), \dots, g^{(p)}(\theta))'$, $g(\theta) = \frac{1}{n}\sum_{i=1}^{n} h(y_i;\theta)\, \frac{E_\theta\left(\frac{\partial h(y_i;\theta)}{\partial \theta}\right)}{E_\theta\left((h(y_i;\theta))^2\right)}$, and the vector of estimators $\hat\theta_{op}$ based on $g(\theta)$ is the solution of $g(\theta) = 0$; from this observation it is clear that the factor $\frac{1}{n}$ can be omitted when defining estimating functions or equations.

Now suppose that we have a vector $m(y_i;\theta) = (m_1(y_i;\theta), \dots, m_k(y_i;\theta))'$ with the property

$E_\theta(m(y_i;\theta)) = (E_\theta(m_1(y_i;\theta)), \dots, E_\theta(m_k(y_i;\theta)))' = 0$ for $i = 1, 2, \dots$;

the optimum estimating functions for estimating $\theta$ based on linear combinations of the elements of the set $\{m(y_i;\theta), i = 1, \dots, n\}$, also called generalized optimum estimating functions (see Morton [11] (pp 229-230) and expression (6) of Liang and Zeger [12] (page 15)), are given by

$\sum_{i=1}^{n} C_i(\theta)\, V_i^{-1}(\theta)\, m(y_i;\theta)$ (4)

and the estimators are given by the vector $\hat\theta_{op}$ obtained by solving

$\sum_{i=1}^{n} C_i(\theta)\, V_i^{-1}(\theta)\, m(y_i;\theta) = 0$ (5)

where $V_i(\theta)$ is the covariance matrix of $m(y_i;\theta)$ under $\theta$ with inverse $V_i^{-1}(\theta)$; $V_i(\theta)$ is also referred to as a working matrix in the estimating equation literature, and

$C_i(\theta) = \begin{pmatrix} E_\theta\left(\frac{\partial m_1(y_i,\theta)}{\partial\theta_1}\right) & \cdots & E_\theta\left(\frac{\partial m_k(y_i,\theta)}{\partial\theta_1}\right) \\ \vdots & & \vdots \\ E_\theta\left(\frac{\partial m_1(y_i,\theta)}{\partial\theta_p}\right) & \cdots & E_\theta\left(\frac{\partial m_k(y_i,\theta)}{\partial\theta_p}\right) \end{pmatrix}$,

which is a p by k matrix.

Clearly, expression (4) is more general than expression (3) and reduces to expression (3) when $m(y_i;\theta)$ is a scalar instead of a vector.

In their studies of estimating functions, Godambe and Thompson [10] emphasized efficiency of the estimating equations rather than efficiency of the vector of optimum estimators $\hat\theta_{op}$ obtained by solving the estimating equations.

For applications, we often need the asymptotic covariance matrix of $\hat\theta_{op}$. For this purpose, we use the setup for the study of generalized estimating equations (GEE) as considered by Liang and Zeger [12] (pp 15-16). Using a Taylor expansion and the results of their Theorem 2 (p 16), we can obtain the asymptotic covariance matrix of $\hat\theta_{op}$:

$\sqrt{n}\,(\hat\theta_{op} - \theta_0) \xrightarrow{L} N(0, \Sigma)$

with

$\Sigma = \lim_{n\to\infty} n \left(\sum_{i=1}^{n} C_i(\theta_0)\, V_i^{-1}(\theta_0)\, C_i'(\theta_0)\right)^{-1}$,

where convergence in probability is denoted by $\xrightarrow{p}$ and convergence in distribution by $\xrightarrow{L}$.

Therefore, the asymptotic covariance matrix of $\hat\theta_{op}$ is simply $\left(\sum_{i=1}^{n} C_i(\theta_0)\, V_i^{-1}(\theta_0)\, C_i'(\theta_0)\right)^{-1}$, which can be estimated. A Fisher scoring algorithm, as described by Liang and Zeger [12] (p 16, expression (6)), can be used to obtain the estimators $\hat\theta_{op}$ numerically. The algorithm gives the (j+1)-th iterate from the j-th iterate as

$\hat\theta_{op}^{(j+1)} = \hat\theta_{op}^{(j)} - \left(\sum_{i=1}^{n} C_i(\hat\theta_{op}^{(j)})\, V_i^{-1}(\hat\theta_{op}^{(j)})\, C_i'(\hat\theta_{op}^{(j)})\right)^{-1} \sum_{i=1}^{n} C_i(\hat\theta_{op}^{(j)})\, V_i^{-1}(\hat\theta_{op}^{(j)})\, m(y_i; \hat\theta_{op}^{(j)})$.
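A minimal sketch of this scoring iteration follows (Python; the callables C_fn, Vinv_fn and m_fn are hypothetical user-supplied functions returning $C_i(\theta)$, $V_i^{-1}(\theta)$ and $m(y_i;\theta)$ respectively); the minus sign matches the update displayed above.

```python
import numpy as np

def fisher_scoring(theta0, C_fn, Vinv_fn, m_fn, n, n_iter=25):
    """Scoring iteration for GEE: C_fn(i, theta) is the p x k matrix
    C_i(theta), Vinv_fn(i, theta) the k x k inverse working matrix and
    m_fn(i, theta) the k-vector m(y_i; theta)."""
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    for _ in range(n_iter):
        A = sum(C_fn(i, theta) @ Vinv_fn(i, theta) @ C_fn(i, theta).T
                for i in range(n))
        b = sum(C_fn(i, theta) @ Vinv_fn(i, theta) @ m_fn(i, theta)
                for i in range(n))
        theta = theta - np.linalg.solve(A, b)
    return theta
```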

Other numerical techniques to obtain $\hat\theta_{op}$ can be used. For example, we can write the system of equations given by expressions (4) and (5) as $g^{(r)}(\theta) = 0$, $r = 1, \dots, p$, and $\hat\theta_{op}$ can be obtained by minimizing $\sum_{r=1}^{p} (g^{(r)}(\theta))^2$ using standard minimization techniques.

Now we turn our attention to GMM estimation methodology, and we observe that the set of estimating equations using expression (2) can be recovered within a GMM estimation setup. GMM estimation is based on the use of k moment conditions specified by a vector function

$m(y_i;\theta) = (m_1(y_i;\theta), \dots, m_k(y_i;\theta))'$

with the property

$E_\theta(m(y_i;\theta)) = (E_\theta(m_1(y_i;\theta)), \dots, E_\theta(m_k(y_i;\theta)))' = 0$ for $i = 1, 2, \dots$ (6)

The sample moments, the counterparts of $(E_\theta(m_1(y_i;\theta)), \dots, E_\theta(m_k(y_i;\theta)))'$, are defined as $g^{(r)}(\theta) = \frac{1}{n}\sum_{i=1}^{n} m_r(y_i;\theta)$, $r = 1, \dots, k$, and the vector of sample moments is defined as

$g(\theta) = (g^{(1)}(\theta), \dots, g^{(k)}(\theta))' = \frac{1}{n}\sum_{i=1}^{n} m(y_i;\theta)$. (7)

Now we need a symmetric positive definite matrix, or a matrix which is symmetric and positive definite with probability one, denoted by $\hat S^{-1}$, to define a quadratic form using $g^{(r)}(\theta)$, $r = 1, \dots, k$; $\hat S^{-1}$ will be defined subsequently. This allows the objective function

$Q(\theta) = g'(\theta)\, \hat S^{-1} g(\theta)$

to be formed for GMM estimation, and the GMM estimators are given by the vector $\hat\theta$ which minimizes $Q(\theta)$.
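A generic sketch of this minimization (Python; m_fn is a hypothetical user-supplied function returning the k-vector $m(y_i;\theta)$ for observation i, and a derivative-free minimizer is used purely for illustration) might look as follows.

```python
import numpy as np
from scipy.optimize import minimize

def gmm_estimate(m_fn, y, theta0, S_inv):
    """Minimize Q(theta) = g(theta)' S^{-1} g(theta) with
    g(theta) = (1/n) sum_i m_fn(y[i], i, theta)."""
    def Q(theta):
        g = np.mean([m_fn(yi, i, theta) for i, yi in enumerate(y)], axis=0)
        return float(g @ S_inv @ g)
    return minimize(Q, theta0, method="Nelder-Mead").x
```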

We shall define the matrix S first; its estimate is $\hat S$, from which we can obtain the inverse $\hat S^{-1}$. In fact, S can be viewed as the limit as $n \to \infty$ of the covariance matrix of the vector $\frac{1}{\sqrt{n}}\sum_{i=1}^{n} m(y_i;\theta_0) = \sqrt{n}\, g(\theta_0)$, and this covariance matrix can be written as $\frac{1}{n}\sum_{i=1}^{n} E_{\theta_0}\left([m(y_i;\theta_0)][m(y_i;\theta_0)]'\right)$; then S and its estimate $\hat S$ can be defined respectively as

$S = \lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^{n} E_{\theta_0}\left([m(y_i;\theta_0)][m(y_i;\theta_0)]'\right)$

and, with a preliminary consistent estimate $\hat\theta^{(0)}$ for $\theta_0$, we can define

$\hat S = \frac{1}{n} \sum_{i=1}^{n} E_{\hat\theta^{(0)}}\left([m(y_i;\hat\theta^{(0)})][m(y_i;\hat\theta^{(0)})]'\right)$ or

$\hat S = \frac{1}{n} \sum_{i=1}^{n} [m(y_i;\hat\theta^{(0)})][m(y_i;\hat\theta^{(0)})]'$.

$\hat S$ is positive definite with probability one and clearly symmetric; its inverse $\hat S^{-1}$ exists with probability one. Although these two expressions for $\hat S$ are asymptotically equivalent, for numerical implementation of the methods in finite samples the matrix $\hat S = \frac{1}{n}\sum_{i=1}^{n} E_{\hat\theta^{(0)}}\left([m(y_i;\hat\theta^{(0)})][m(y_i;\hat\theta^{(0)})]'\right)$ has a better chance of being invertible.

Under suitable differentiability assumptions imposed on the vector function $g(\theta)$, the GMM estimator $\hat\theta$ is consistent and has an asymptotic multivariate normal distribution, i.e.,

$\hat\theta \xrightarrow{p} \theta_0$

and

$\sqrt{n}\,(\hat\theta - \theta_0) \xrightarrow{L} N(0, V)$.

The asymptotic covariance of $\hat\theta$ is simply $Acov(\hat\theta) = \frac{1}{n} V$, and V depends on $\theta_0$, so we also use the notation $V = V(\theta_0)$, with $V = (D(\theta_0)\, S^{-1} D'(\theta_0))^{-1}$ and

$D(\theta_0) = \lim_{n\to\infty} \begin{pmatrix} E_{\theta_0}\left(\frac{\partial g^{(1)}(\theta_0)}{\partial\theta_1}\right) & \cdots & E_{\theta_0}\left(\frac{\partial g^{(k)}(\theta_0)}{\partial\theta_1}\right) \\ \vdots & & \vdots \\ E_{\theta_0}\left(\frac{\partial g^{(1)}(\theta_0)}{\partial\theta_p}\right) & \cdots & E_{\theta_0}\left(\frac{\partial g^{(k)}(\theta_0)}{\partial\theta_p}\right) \end{pmatrix}$,

where $D(\theta_0)$ is a p by k matrix with transpose $D'(\theta_0)$. Since $V = V(\theta_0)$, an estimate of $V(\theta_0)$ is

$\hat V = (D(\hat\theta)\, \hat S^{-1} D'(\hat\theta))^{-1}$, with $D'(\hat\theta) = \frac{\partial g(\hat\theta)}{\partial\theta}$.

Using $\hat V$, the asymptotic covariance matrix of $\hat\theta$ can be estimated.

We also notice that we can recover the optimum estimating equations estimators within the GMM estimation setup by letting k = p, i.e., the number of sample moments equal to the number of parameters to be estimated, and

$g^{(r)}(\theta) = \frac{1}{n} \sum_{i=1}^{n} m_r(y_i;\theta)$, $r = 1, \dots, p$

with

$m_r(y_i;\theta) = h(y_i;\theta)\, \frac{E_\theta\left(\frac{\partial h(y_i;\theta)}{\partial\theta_r}\right)}{E_\theta\left((h(y_i;\theta))^2\right)}$.

Minimizing the corresponding GMM objective function yields the vector of GMM estimators, which is given by the following system of equations since $\hat S^{-1}$ is positive definite with probability one:

$g^{(r)}(\theta) = \frac{1}{n} \sum_{i=1}^{n} h(y_i;\theta)\, \frac{E_\theta\left(\frac{\partial h(y_i;\theta)}{\partial\theta_r}\right)}{E_\theta\left((h(y_i;\theta))^2\right)} = 0$, $r = 1, \dots, p$ (8)

which is the same system of equations as for the optimum estimating equations estimators discussed above. Using vector notation, the vector of optimum estimating functions is simply $g(\theta) = \frac{1}{n}\sum_{i=1}^{n} h(y_i;\theta)\, \frac{E_\theta\left(\frac{\partial h(y_i;\theta)}{\partial\theta}\right)}{E_\theta\left((h(y_i;\theta))^2\right)}$, and the related estimators are obtained by solving $g(\theta) = 0$.

The estimating equations of GMM procedures are based on the partial derivatives of $Q(\theta)$ and can be seen as equivalent to

$\sum_{i=1}^{n} D(\theta)\, \hat S^{-1} m(y_i;\theta) = 0$. (9)

Observe that this vector of estimating functions is also formed from linear combinations of elements of $\{m(y_i;\theta), i = 1, \dots, n\}$, similarly to the vector of optimum estimating functions, but it might not be optimum, since the matrices $D(\theta)$ and $\hat S^{-1}$ no longer depend on i. When the $m(y_i;\theta)$, $i = 1, \dots, n$ are not only independent but also identically distributed, the two methods are equivalent. We also notice that $\hat S$ used for GMM estimation plays a role similar to that of the working matrix $V_i(\theta)$ for GEE estimation, but it is often simpler to obtain $\hat S$ than $V_i(\theta)$; often, more derivations are needed to obtain $V_i(\theta)$.

Based on expression (7) and the observation just made concerning expression (8), we shall define $m(y_i;\theta)$ for GMM slightly differently than the $m(y_i;\theta)$ used for generalized estimating functions (GEE), by letting, for GMM estimation,

$m_r(y_i;\theta) = h(y_i;\theta)\, \frac{E_\theta\left(\frac{\partial h(y_i;\theta)}{\partial\theta_r}\right)}{E_\theta\left((h(y_i;\theta))^2\right)}$, so that $g^{(r)}(\theta) = \frac{1}{n}\sum_{i=1}^{n} m_r(y_i;\theta)$, $r = 1, \dots, p$,

for the first p components of the vector $m(y_i;\theta)$, depending on the model being studied.

We might also want to include other components $m_r(y_i;\theta)$ for $r > p$, depending on the model being studied, for the sake of efficiency; this leads us to define

$g(\theta) = \begin{pmatrix} g_1(\theta) \\ g_2(\theta) \end{pmatrix}$

with $g_1(\theta) = \frac{1}{n}\sum_{i=1}^{n} h(y_i;\theta)\, \frac{E_\theta\left(\frac{\partial h(y_i;\theta)}{\partial\theta}\right)}{E_\theta\left((h(y_i;\theta))^2\right)}$, which is the vector of optimum estimating functions based on elements of the set $\{h(y_i;\theta), i = 1, \dots, n\}$, and $g_2(\theta)$ with components depending on the $m_r(y_i;\theta)$ for $r > p$, to be defined based on the model under investigation; the GMM objective function is then

$Q(\theta) = g'(\theta)\, \hat S^{-1} g(\theta)$.

See Section 3 for more details on the choice of $g(\theta) = (g_1'(\theta), g_2'(\theta))'$ for GMM methods with models based on probability generating functions.

One advantage of the GMM approach over the generalized estimating equations (GEE) approach is that with GMM we have an objective function to be minimized, which leads to the construction of chi-square tests for moment restrictions, while there is no equivalent test statistic with the generalized estimating equations approach. Furthermore, we shall see in Section 3 that when GMM is applied to discrete distributions with moment conditions extracted from the probability generating function, testing the moment restrictions can be viewed as testing the goodness-of-fit of the count model being used. Consequently, estimation and model testing can be treated in a unified way using this approach.

As mentioned earlier, the GMM objective function evaluated at $\hat\theta$ can be used to construct a test statistic which follows an asymptotic chi-square distribution for testing the null hypothesis specifying the validity of the vector of moment conditions, i.e.,

$H_0: E_\theta(m(y_i;\theta)) = (E_\theta(m_1(y_i;\theta)), \dots, E_\theta(m_k(y_i;\theta)))' = 0$ for $i = 1, \dots$ (10)

but we need $k > p$, i.e., the number of sample moments must exceed the number of parameters to be estimated.

2.2. Testing the Validity of Moment Restrictions

We notice that $g(\theta_0) \xrightarrow{p} 0$, the vector of GMM estimators is consistent with $\hat\theta \xrightarrow{p} \theta_0$ and, in general, $g(\hat\theta) \xrightarrow{p} 0$; hence the following statistics can be constructed and will have asymptotic chi-square distributions. These statistics are also known as Hansen's statistics after Hansen's seminal work, see Hansen [13], and they can be used for testing the validity of moment restrictions.

For testing the simple hypothesis $H_0: E_{\theta_0}(m(y_i;\theta_0)) = (E_{\theta_0}(m_1(y_i;\theta_0)), \dots, E_{\theta_0}(m_k(y_i;\theta_0)))' = 0$ for $i = 1, 2, \dots$, with $\theta_0$ specified, Hansen's statistic is given by

$nQ(\theta_0) = n\, g'(\theta_0)\, \hat S^{-1} g(\theta_0)$

and its asymptotic distribution is chi-square with k degrees of freedom, i.e., $nQ(\theta_0) \xrightarrow{L} \chi^2_k$ under $H_0$.

For testing the composite hypothesis

$H_0: E_\theta(m(y_i;\theta)) = (E_\theta(m_1(y_i;\theta)), \dots, E_\theta(m_k(y_i;\theta)))' = 0$ for $i = 1, 2, \dots$, for some $\theta \in \Omega$,

we need to obtain $\hat\theta$ first by minimizing $Q(\theta)$; Hansen's statistic is then given by

$nQ(\hat\theta) = n\, g'(\hat\theta)\, \hat S^{-1} g(\hat\theta)$

and its asymptotic distribution is chi-square with $k - p$ degrees of freedom, i.e., $nQ(\hat\theta) \xrightarrow{L} \chi^2_{k-p}$ under $H_0$, assuming $k > p$.
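Both tests amount to comparing n times the quadratic form with a chi-square critical value; a small helper along these lines (Python, a sketch for the composite case) is given below.

```python
import numpy as np
from scipy.stats import chi2

def hansen_test(g_hat, S_inv, n, k, p):
    """Hansen's statistic n Q(theta_hat) = n g' S^{-1} g with g evaluated
    at the GMM estimate; asymptotic null distribution chi-square(k - p)."""
    stat = n * float(g_hat @ S_inv @ g_hat)
    return stat, chi2.sf(stat, df=k - p)

# e.g. with k = 4 moments and p = 1 parameter, reject H0 at the 0.05
# level when the statistic exceeds chi2.ppf(0.95, 3)
```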

These statistics will be used subsequently with moment conditions extracted from the model probability generating function in Section 3. We shall show in the next sections that these statistics are, in general, consistent test statistics for model testing with the discrete model specified by its probability generating function. These statistics are also distribution free. The distribution free property is not enjoyed by the goodness-of-fit test statistics for model testing based on the empirical probability generating function, defined as

$P_n(s) = \frac{1}{n} \sum_{i=1}^{n} s^{X_i}$ (11)

with $X_1, \dots, X_n$ independent and identically distributed random variables from a discrete model specified by the model probability generating function $P_\theta(s) = E_\theta(s^X)$; such tests are given by Rueda and O'Reilly [14] and Marcheselli et al. [15], and the null distributions of their statistics depend on the unknown parameters. In addition, the procedures proposed by Doray et al. [5] only make use of k fixed points $s_1, \dots, s_k$ to generate moment conditions, regardless of the sample size n.

The procedures proposed in this paper are different, as the number of points selected from the probability generating function goes to infinity as $n \to \infty$.

3. GEE and GMM Methods with Moment Conditions from Probability Generating Function

In this section, we give attention to count models; we assume that we have a random sample of n independent and identically distributed observations $X_1, \dots, X_n$ which follow the same distribution as X, where X follows a nonnegative integer discrete distribution with probability mass function $p(x;\theta)$ with no closed form, but with model probability generating function $P_\theta(s) = E_\theta(s^X)$ in closed form and relatively simple to handle; $P_\theta(s)$ is well defined on the domain $s \in [-1, 1]$.

It is well known that, in general, a probability mass function is uniquely characterized by its probability generating function. Subsequently, two versions of the GMM objective function will be introduced based on estimating function theory. The first version is based on using points $s \in [0, 1]$ to form moment conditions, as commonly used in the literature, and is given in Section 3.2.1 and Section 3.2.2; the second version is based on $s \in [-1, 1]$ and is given in Section 3.2.3.

Optimum estimating functions can be used to obtain estimators, but we emphasize here the GMM approach, since distribution free tests for moment restrictions with asymptotic chi-square distributions can also be obtained, and these can be interpreted as goodness-of-fit tests for the parametric family used. However, optimum estimating function theory is very useful for identifying sample moments that make GMM procedures efficient.

3.1. Generalized Estimating Functions (GEE)

First, we shall define the basic unbiased estimating functions $\{h(x_i;\theta)\}$, i.e., with the property $E_\theta(h(x_i;s_i,\theta)) = 0$; then we shall form the optimum estimating functions based on linear combinations of these elementary estimating functions. Since the basic elementary estimating functions are unbiased, the optimum estimating functions will be unbiased.

For each observation $X_i$, we shall associate the value

$s_i = \frac{i - 1/2}{n}$ for $i = 1, \dots, n$.

As $n \to \infty$, the set $\{s_i; i = 1, \dots, n\}$ becomes dense in $[0, 1]$; define the elementary estimating functions as

$h(x_i;\theta) = h(x_i; s_i, \theta) = s_i^{X_i} - P_\theta(s_i)$, $i = 1, \dots, n$,

and clearly $E_\theta(h(x_i;s_i,\theta)) = 0$.

Since $h(x_i;s_i,\theta)$ is independent of $h(x_j;s_j,\theta)$ for $i \ne j$, we have the property

$E_\theta(h(x_i;s_i,\theta)\, h(x_j;s_j,\theta)) = 0$ for $i \ne j$. (12)

The elements of the set $\{h(x_i;\theta), i = 1, \dots, n\}$ are said to be mutually orthogonal if they have the property given by expression (12), see Godambe and Thompson [10] (page 139). Therefore, using the optimality criteria of Godambe and Thompson [10] (page 139), the optimum estimating functions for estimating $\theta_0$ based on linear combinations of these orthogonal basic estimating functions are given by

$g^{(r)}(\theta) = \frac{1}{n} \sum_{i=1}^{n} h(x_i;s_i,\theta)\, \frac{E_\theta\left(\frac{\partial h(x_i;s_i,\theta)}{\partial\theta_r}\right)}{E_\theta\left((h(x_i;s_i,\theta))^2\right)}$, (13)

and clearly $E_\theta(g^{(r)}(\theta)) = 0$, $r = 1, \dots, p$: the optimum estimating functions are also unbiased.

We define the vector

$\beta_i(\theta) = \frac{E_\theta\left(\frac{\partial h(x_i;s_i,\theta)}{\partial\theta}\right)}{E_\theta\left((h(x_i;s_i,\theta))^2\right)}$.

Since $E_\theta\left(\frac{\partial h(x_i;s_i,\theta)}{\partial\theta_r}\right) = -\frac{\partial P_\theta(s_i)}{\partial\theta_r}$ and, letting $v_\theta(h(x_i;s_i,\theta))$ be the variance of $h(x_i;s_i,\theta)$,

$E_\theta(h(x_i;s_i,\theta)\, h(x_i;s_i,\theta)) = v_\theta(h(x_i;s_i,\theta))$

with

$v_\theta(h(x_i;s_i,\theta)) = E_\theta(s_i^{X_i} s_i^{X_i}) - (E_\theta(s_i^{X_i}))^2 = P_\theta(s_i^2) - (P_\theta(s_i))^2$,

this implies

$\beta_i(\theta) = \frac{E_\theta\left(\frac{\partial h(x_i;s_i,\theta)}{\partial\theta}\right)}{E_\theta\left((h(x_i;s_i,\theta))^2\right)} = \frac{-\frac{\partial P_\theta(s_i)}{\partial\theta}}{P_\theta(s_i^2) - (P_\theta(s_i))^2}$. (15)

Therefore, equivalently, the vector of optimum estimating functions is given by

$\frac{1}{n} \sum_{i=1}^{n} h(x_i;s_i,\theta)\, \beta_i(\theta)$. (16)
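When $P_\theta(s)$ and its partial derivative in $\theta$ are available in closed form, $\beta_i(\theta)$ and the optimum estimating function (16) are direct to evaluate; the sketch below (Python, scalar $\theta$, with the Poisson PGF used purely as an illustration) assumes x is an integer array of counts.

```python
import numpy as np

def optimum_ef(x, theta, P, dP_dtheta):
    """Optimum estimating function (16): (1/n) sum_i h_i * beta_i with
    h_i = s_i^{X_i} - P(s_i) and beta_i as in expression (15)."""
    n = len(x)
    s = (np.arange(1, n + 1) - 0.5) / n            # s_i = (i - 1/2)/n
    h = s ** x - P(s, theta)
    beta = -dP_dtheta(s, theta) / (P(s ** 2, theta) - P(s, theta) ** 2)
    return np.mean(h * beta)

# Poisson illustration: P_theta(s) = exp(theta (s - 1)); the estimator
# solves optimum_ef(x, theta, P, dP) = 0, e.g. with scipy.optimize.brentq
P  = lambda s, th: np.exp(th * (s - 1.0))
dP = lambda s, th: (s - 1.0) * np.exp(th * (s - 1.0))
```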

For GEE estimation as given by expressions (4) and (5), we need to specify the vector $m(x_i;\theta)$. Let us partition $m(x_i;\theta)$ into two components, with

$m(x_i;\theta) = \begin{pmatrix} m_1(x_i;\theta) \\ m_2(x_i;\theta) \end{pmatrix}$, $m_1(x_i;\theta) = s_i^{X_i} - P_\theta(s_i)$, $i = 1, \dots, n$. (17)

We select two points $t_1$ and $t_2$, for example $t_1 = 0.50$ and $t_2 = 0.75$, and therefore we can form two sets of elementary basic unbiased estimating functions using these two points, given by

$\{t_1^{X_i} - P_\theta(t_1), i = 1, \dots, n\}$ and $\{t_2^{X_i} - P_\theta(t_2), i = 1, \dots, n\}$. (18)

These two sets of elementary unbiased estimating functions are selected because, as we shall see, when used to form moment conditions for the GMM objective function they allow the construction of consistent chi-square tests.

Furthermore, with the probability generating function we can derive the expectation of X, denoted by $\mu(\theta) = E_\theta(X)$, and another set of elementary unbiased estimating functions can be created, given by $\{X_i - \mu(\theta), i = 1, \dots, n\}$; since incorporating the sample mean into estimating equations in general might help to improve the efficiency of the estimators, this set of estimating functions is also considered and used for forming the vector of generalized estimating functions. Making use of these three sets of elementary unbiased estimating functions leads us to define

$m_2(x_i;\theta) = \begin{pmatrix} t_1^{X_i} - P_\theta(t_1) \\ t_2^{X_i} - P_\theta(t_2) \\ X_i - \mu(\theta) \end{pmatrix}$, $i = 1, \dots, n$,

provided that $\mu(\theta)$ exists for the model; note that $\mu(\theta)$ can be obtained from the derivative of the probability generating function, in fact $\mu(\theta) = P'_\theta(1)$, with $P'_\theta(t) = \frac{dP_\theta(t)}{dt}$.

If $\mu(\theta)$ does not exist, the last component of $m_2(x_i;\theta)$ is replaced by $X_i t_3^{X_i - 1} - P'_\theta(t_3)$ with $t_3$ close to 1, say $t_3 = 0.95$ for example; see Section 4 for an illustration and for finding the working matrix $V_i(\theta)$. For the estimators to have a multivariate asymptotic normal distribution, we also need the existence of the common variance of $X_i$, $i = 1, \dots, n$ under the model.

Having specified the vector

$m(x_i;\theta) = \begin{pmatrix} m_1(x_i;\theta) \\ m_2(x_i;\theta) \end{pmatrix}$,

GEE estimation can be performed using the results and procedures of Section 2.1: the vector of GEE estimators $\hat\theta_{op}$ is obtained by solving the system of equations given by expression (5). Observe that, with the notation introduced, $m_1(x_i;\theta)$ denotes a function which also depends on $s_i$, i.e., $m_1(x_i;\theta) = m_1(x_i;s_i,\theta)$, and clearly $m_1(x_i;\theta)$, $i = 1, \dots, n$ are not identically distributed, whereas $m_2(x_i;\theta)$, $i = 1, \dots, n$ are identically distributed vectors of random variables. Therefore, GEE estimators are no longer asymptotically equivalent to GMM estimators using the same vectors $m(x_i;\theta)$; with the notation being used, GEE estimators and GMM estimators are asymptotically equivalent only if $m(x_i;\theta)$, $i = 1, \dots, n$ have a common multivariate distribution.

3.2. GMM Methodology

Before defining the sample moment vector $m(y_i;\theta)$ for GMM methods, let us for the time being turn our attention to how to obtain a preliminary consistent estimate $\hat\theta^{(0)}$ in general. Such a preliminary estimate $\hat\theta^{(0)}$ is needed for the numerical algorithms implementing GMM procedures and to define the matrix $\hat S^{-1}$ which enters the GMM objective function. The nonlinear least-squares (NLS) estimator can be used as the preliminary consistent estimate $\hat\theta^{(0)}$, with $\hat\theta^{(0)}$ the vector which minimizes

$\frac{1}{n} \sum_{i=1}^{n} (h(x_i;s_i,\theta))^2$.

Note that the estimating functions of the nonlinear least-squares method are

$\frac{1}{n} \sum_{i=1}^{n} h(x_i;s_i,\theta)\, \frac{\partial h(x_i;s_i,\theta)}{\partial\theta_r}$, $r = 1, \dots, p$;

they have some resemblance to the optimum ones, as they are also based on linear combinations of $h(x_j;s_j,\theta)$, $j = 1, \dots, n$, but they are not optimum.
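A sketch of this preliminary NLS step for a scalar parameter (Python; the search bounds are illustrative assumptions, not part of the method) is given below.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nls_estimate(x, P):
    """Preliminary NLS estimate: minimize (1/n) sum_i (s_i^{X_i} - P(s_i))^2
    over a scalar theta; x is an integer array of counts."""
    n = len(x)
    s = (np.arange(1, n + 1) - 0.5) / n
    obj = lambda th: np.mean((s ** x - P(s, th)) ** 2)
    return minimize_scalar(obj, bounds=(1e-6, 50.0), method="bounded").x
```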

3.2.1. GMM Objective Function

Now we turn our attention to defining the vector

$m(x_i;\theta) = \begin{pmatrix} m_1(x_i;\theta) \\ m_2(x_i;\theta) \end{pmatrix}$.

We have seen that GMM estimators are no longer equivalent to GEE estimators if we define $m(x_i;\theta)$ as for GEE methods, so some modifications appear to be necessary. To ensure that GMM estimators have efficiencies comparable to those obtained using optimum estimating functions based on $\{h(x_i;\theta), i = 1, \dots, n\}$, we shall let

$m_1(x_i;\theta) = h(x_i;s_i,\theta)\, \beta_i(\theta)$, $i = 1, \dots, n$,

with the corresponding sample moment $g_1(\theta) = \frac{1}{n}\sum_{i=1}^{n} h(x_i;s_i,\theta)\beta_i(\theta)$; $g_1(\theta)$ is the vector of optimum estimating functions based on

$\{h(x_i;s_i,\theta), i = 1, \dots, n\}$

and, keeping $m_2(x_i;\theta)$, $i = 1, \dots, n$ as for GEE estimation, the corresponding sample moment vector for GMM estimation is

$g(\theta) = \begin{pmatrix} g_1(\theta) \\ g_2(\theta) \end{pmatrix}$ (19)

with $g_1(\theta)$ as just defined and

$g_2(\theta) = \frac{1}{n} \begin{pmatrix} \sum_{i=1}^{n} (t_1^{X_i} - P_\theta(t_1)) \\ \sum_{i=1}^{n} (t_2^{X_i} - P_\theta(t_2)) \\ \sum_{i=1}^{n} (X_i - \mu(\theta)) \end{pmatrix}$

if $\mu(\theta)$ exists; otherwise let $g_2(\theta) = \frac{1}{n} \begin{pmatrix} \sum_{i=1}^{n} (t_1^{X_i} - P_\theta(t_1)) \\ \sum_{i=1}^{n} (t_2^{X_i} - P_\theta(t_2)) \\ \sum_{i=1}^{n} (X_i t_3^{X_i-1} - P'_\theta(t_3)) \end{pmatrix}$, with $t_3$ chosen close to 1 but $t_3 < 1$. The GMM objective function can then be constructed as

$Q(\theta) = g'(\theta)\, \hat S^{-1} g(\theta)$.

3.2.2. Model Testing Using GMM Objective Function

Now we turn our attention to the problem of testing a model specified by its probability generating function. Let $X_1, \dots, X_n$ be a random sample drawn from a nonnegative integer discrete distribution with probability generating function $P_0(t)$, and suppose we want to test the simple null hypothesis which specifies $P_0(t) = P_{\theta_0}(t)$, with $\theta_0$ specified, i.e.,

$H_0: P_0(t) = P_{\theta_0}(t)$;

clearly, if $H_0: P_0(t) = P_{\theta_0}(t)$ is true we have $E_{\theta_0}(g(\theta_0)) = 0$.

The following chi-square statistic can then be used:

$nQ(\theta_0) \xrightarrow{L} \chi^2_r$ with $r = k$ under $H_0$.

For practical applications, the chi-square tests are in general consistent against the common departures of interest: as we shall see, if $P_0(t) \ne P_{\theta_0}(t)$, the test allows us to reject $H_0: P_0(t) = P_{\theta_0}(t)$ in general as $n \to \infty$. Indeed, we have this property because, if $P_0(t) \ne P_{\theta_0}(t)$, the chi-square statistic converges to infinity.

For this property to fail, we must have

$P_0(t) \ne P_{\theta_0}(t)$ but $g(\theta_0) \xrightarrow{p} 0$.

If $g(\theta_0) \xrightarrow{p} 0$, then two of its components, given by $\frac{1}{n}\sum_{i=1}^{n} (t_1^{X_i} - P_{\theta_0}(t_1))$ and $\frac{1}{n}\sum_{i=1}^{n} (t_2^{X_i} - P_{\theta_0}(t_2))$, must simultaneously converge to 0 in probability, i.e.,

$\frac{1}{n}\sum_{i=1}^{n} (t_1^{X_i} - P_{\theta_0}(t_1)) \xrightarrow{p} 0$ and $\frac{1}{n}\sum_{i=1}^{n} (t_2^{X_i} - P_{\theta_0}(t_2)) \xrightarrow{p} 0$. (20)

We shall show that, for the $P_0(t)$ generally encountered in applications, this cannot happen.

Suppose that

$\frac{1}{n}\sum_{i=1}^{n} (t_1^{X_i} - P_{\theta_0}(t_1)) \xrightarrow{p} 0$; this implies $P_0(t_1) = P_{\theta_0}(t_1)$,

and similarly

$\frac{1}{n}\sum_{i=1}^{n} (t_2^{X_i} - P_{\theta_0}(t_2)) \xrightarrow{p} 0$ implies $P_0(t_2) = P_{\theta_0}(t_2)$.

Observe that, in general, for a probability generating function $P(s)$ used in applications, $P(s)$ is convex for $0 < s < 1$ and $P(t) = 1$ when $t = 1$, i.e., $P(1) = 1$, see Resnick [16] (pp 22-23).

Furthermore, for $P_0(t) \ne P_{\theta_0}(t)$ encountered in applications, we also have in general $P_0(t) \ne P_{\theta_0}(t)$ for some $t \in (0, 1)$. This also means that, in general, there is at most one point a with $0 < a < 1$ where $P_0(t)$ crosses $P_{\theta_0}(t)$, since $P_0(t)$ and $P_{\theta_0}(t)$ are both strictly convex functions and $P_0(1) = P_{\theta_0}(1) = 1$. Therefore, we cannot have simultaneous convergence as given by expression (20), and the chi-square test is consistent in general, as it can detect common departures from $H_0: P_0(t) = P_{\theta_0}(t)$ as $n \to \infty$.

For testing the composite hypothesis $H_0: P_0(t) \in \{P_\theta(t)\}$, we need to estimate $\theta_0$ by $\hat\theta$ by minimizing $Q(\theta) = g'(\theta)\hat S^{-1} g(\theta)$ first, and subsequently use $\hat\theta$ to compute the chi-square statistic $nQ(\hat\theta)$, with $nQ(\hat\theta) \xrightarrow{L} \chi^2_r$, $r = 3$.

These chi-square statistics are distribution free, as there is no unknown parameter in the chi-square distributions of the statistics used. These goodness-of-fit tests are simpler to implement than those based on matching the sample probability generating function with its model counterpart using a continuum of moment conditions, as given by Theorem 10 of Carrasco and Florens [6] (pp 812-813). Note that maximum likelihood estimators, if used concomitantly with the common classical Pearson statistics, often lead to statistics with complicated distributions which are no longer distribution free, see Chernoff and Lehmann [17] and Luong and Thompson [18]; these classical Pearson test statistics are not consistent in general.

3.2.3. Further Extensions: The Use of Orthogonal Estimating Functions

Notice that, beside the set of basic estimating functions

$\{h(x_i;s_i,\theta) = s_i^{X_i} - P_\theta(s_i), i = 1, \dots, n\}$

as defined earlier, we also have another set of basic estimating functions given by $\{l(x_i;s_i,\theta), i = 1, \dots, n\}$ with $l(x_i;s_i,\theta) = (-s_i)^{X_i} - P_\theta(-s_i)$.

Consequently, if in addition to the first set of estimating functions we also want to incorporate the second set for building $g(\theta)$, then we can use optimum orthogonal estimating functions: instead of the first p components of the vector $g(\theta)$ being given by the vector

$\frac{1}{n}\sum_{i=1}^{n} h(x_i;s_i,\theta)\, \beta_i(\theta)$,

which is the vector of optimum estimating functions based on the set of basic estimating functions $\{h(x_i;s_i,\theta), i = 1, \dots, n\}$, we shall use a more general vector of optimum estimating functions which can incorporate a larger set of basic estimating functions, as described below.

Observe that $\{l(x_i;s_i,\theta), i = 1, \dots, n\}$ with $l(x_i;s_i,\theta) = (-s_i)^{X_i} - P_\theta(-s_i)$ clearly forms a set of mutually orthogonal basic estimating functions, but combining the two sets of basic estimating functions to form the set

$\{h(x_i;s_i,\theta), l(x_i;s_i,\theta), i = 1, \dots, n\}$,

the basic estimating functions of the combined set are not mutually orthogonal, because $E_\theta(h(x_i;s_i,\theta)\, l(x_i;s_i,\theta))$ is not equal to 0. Using the Gram-Schmidt orthogonalizing procedure, we can replace $l(x_i;s_i,\theta)$ by

$l_0(x_i;s_i,\theta) = l(x_i;s_i,\theta) - \alpha_i(\theta)\, h(x_i;s_i,\theta)$, $i = 1, \dots, n$,

$\alpha_i(\theta) = \frac{E_\theta(h(x_i;s_i,\theta)\, l(x_i;s_i,\theta))}{E_\theta(h(x_i;s_i,\theta)\, h(x_i;s_i,\theta))}$,

which can also be represented as

$\alpha_i(\theta) = \frac{P_\theta(-s_i^2) - P_\theta(s_i)\, P_\theta(-s_i)}{P_\theta(s_i^2) - (P_\theta(s_i))^2}$ (21)

since

$E_\theta(h(x_i;s_i,\theta)\, l(x_i;s_i,\theta)) = E_\theta(s_i^{X_i}(-s_i)^{X_i}) - E_\theta(s_i^{X_i})\, E_\theta((-s_i)^{X_i}) = P_\theta(-s_i^2) - P_\theta(s_i)\, P_\theta(-s_i)$

and $E_\theta(h(x_i;s_i,\theta)\, h(x_i;s_i,\theta))$ is simply the variance $v_\theta(h(x_i;s_i,\theta))$ of $h(x_i;s_i,\theta)$, the basic estimating functions being unbiased, with

$v_\theta(h(x_i;s_i,\theta)) = P_\theta(s_i^2) - (P_\theta(s_i))^2$.

Now, it is easy to see that the set

$\{h(x_i;s_i,\theta), l_0(x_i;s_i,\theta), i = 1, \dots, n\}$

is a set of mutually orthogonal basic or elementary estimating functions, see Definition 2.2 and Theorem 2.1 as given by Godambe and Thompson [10] (pp 139-140).

Li and Turtle [19] (p 177) also use a similar orthogonalization procedure and Theorem 2.1 to create optimum estimating functions for ARCH models.

The first p components of the vector of sample moment functions $g(\theta)$ are simply the optimum estimating functions based on linear combinations of the basic estimating functions of the set $\{h(x_i;s_i,\theta), l_0(x_i;s_i,\theta), i = 1, \dots, n\}$; again using Theorem 2.1 of Godambe and Thompson [10], the vector of optimum estimating functions is given by

$\frac{1}{n} \sum_{i=1}^{n} \left( h(x_i;s_i,\theta)\, \beta_i(\theta) + l_0(x_i;s_i,\theta)\, \gamma_i(\theta) \right)$ (22)

with

$\gamma_i(\theta) = \frac{E_\theta\left(\frac{\partial l_0(x_i;s_i,\theta)}{\partial\theta}\right)}{E_\theta\left((l_0(x_i;s_i,\theta))^2\right)}$

and $\beta_i(\theta)$ as defined by expression (15).

Now we shall display the expression for $\gamma_i(\theta)$. First note that

$E_\theta\left(\frac{\partial l_0(x_i;s_i,\theta)}{\partial\theta}\right) = -\frac{\partial P_\theta(-s_i)}{\partial\theta} + \alpha_i(\theta)\frac{\partial P_\theta(s_i)}{\partial\theta}$

and $E_\theta\left((l_0(x_i;s_i,\theta))^2\right) = v_\theta(l_0(x_i;s_i,\theta))$, so that

$v_\theta(l_0(x_i;s_i,\theta)) = v_\theta(l(x_i;s_i,\theta)) + \alpha_i^2(\theta)\, v_\theta(h(x_i;s_i,\theta)) - 2\alpha_i(\theta)\, cov_\theta(h(x_i;s_i,\theta), l(x_i;s_i,\theta))$

with the variance

$v_\theta(l(x_i;s_i,\theta)) = E_\theta((-s_i)^{X_i}(-s_i)^{X_i}) - E_\theta((-s_i)^{X_i})\, E_\theta((-s_i)^{X_i}) = P_\theta(s_i^2) - (P_\theta(-s_i))^2$,

$\alpha_i(\theta)$ as given by expression (21), and the covariance

$cov_\theta(h(x_i;s_i,\theta), l(x_i;s_i,\theta)) = P_\theta(-s_i^2) - P_\theta(s_i)\, P_\theta(-s_i)$.

The expression for $\gamma_i(\theta)$ can then be displayed fully:

$\gamma_i(\theta) = \frac{-\frac{\partial P_\theta(-s_i)}{\partial\theta} + \alpha_i(\theta)\frac{\partial P_\theta(s_i)}{\partial\theta}}{P_\theta(s_i^2) - (P_\theta(-s_i))^2 + \alpha_i^2(\theta)\left(P_\theta(s_i^2) - (P_\theta(s_i))^2\right) - 2\alpha_i(\theta)\left(P_\theta(-s_i^2) - P_\theta(s_i)\, P_\theta(-s_i)\right)}$.

With this vector of optimum estimating functions, the sample moment function $g(\theta)$ for forming the corresponding GMM objective function can be defined as follows. Let

$g_1(\theta) = \frac{1}{n} \sum_{i=1}^{n} \left( h(x_i;s_i,\theta)\, \beta_i(\theta) + l_0(x_i;s_i,\theta)\, \gamma_i(\theta) \right)$

and keep $g_2(\theta)$ as the component vector of $g(\theta)$ specified by expression (19), so that $g(\theta) = (g_1'(\theta), g_2'(\theta))'$. This choice of $g(\theta)$, with optimum orthogonal estimating functions constructed using two sets of basic estimating functions for $g_1(\theta)$, is to be preferred for improving estimation efficiency for some models if $g(\theta)$ as defined by expression (19) in Section 3.2.1 does not give satisfactory efficiency for GMM estimation. Model testing procedures using this GMM objective function are identical to the procedures for the GMM objective function used earlier.
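The Gram-Schmidt step itself is inexpensive; a sketch (Python, with P a user-supplied PGF and x an integer array so that $(-s_i)^{X_i}$ is well defined) follows.

```python
import numpy as np

def orthogonalized_pair(x, s, theta, P):
    """Returns h and l0 = l - alpha * h of Section 3.2.3, with alpha_i(theta)
    computed from expression (21); P(s, theta) is the model PGF."""
    h = s ** x - P(s, theta)                       # first basic function
    l = (-s) ** x - P(-s, theta)                   # second basic function
    alpha = ((P(-s ** 2, theta) - P(s, theta) * P(-s, theta))
             / (P(s ** 2, theta) - P(s, theta) ** 2))
    return h, l - alpha * h
```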

We might also want to enlarge the vector $g_2(\theta)$ by adding more components, but more components also tend to create numerical difficulties, because the matrix $\hat S$ will be nearly singular and the numerical inversion of such a matrix is often problematic.

Finally, we note that although the GMM methods developed are primarily for discrete distributions, they can also accommodate nonnegative continuous distributions defined using Laplace transforms, as discussed in Luong [20], since Laplace transforms are related to probability generating functions.

4. An Example and Numerical Illustrations

We shall use an example to illustrate the procedures. Let us consider a random sample of observations $X_1, \dots, X_n$ drawn from the Poisson distribution with probability generating function $P_\theta(s) = e^{\theta(s-1)}$, $\theta > 0$. For this model, $\theta$ is a scalar. We would like to use GMM methods here because, although the maximum likelihood estimator for $\theta$ is available and given by $\hat\theta_{ML} = \bar X$, using $\hat\theta_{ML}$ does not lead to tractable distribution free goodness-of-fit test statistics with Pearson type statistics, as mentioned earlier.

For this model, the coefficient

$\beta_i(\theta) = \frac{E_\theta\left(\frac{\partial h(x_i;s_i,\theta)}{\partial\theta}\right)}{E_\theta\left((h(x_i;s_i,\theta))^2\right)} = \frac{(1-s_i)\, e^{\theta(s_i-1)}}{e^{\theta(s_i^2-1)} - e^{2\theta(s_i-1)}}$,

$h(x_i;s_i,\theta) = s_i^{X_i} - P_\theta(s_i)$, $i = 1, \dots, n$.

We consider the case with the sample moment vector given by

$g(\theta) = \begin{pmatrix} g_1(\theta) \\ g_2(\theta) \end{pmatrix}$, $g_1(\theta) = \frac{1}{n} \sum_{i=1}^{n} \left( s_i^{X_i} - e^{\theta(s_i-1)} \right) \beta_i(\theta)$, $s_i = \frac{i - 1/2}{n}$,

$g_2(\theta) = \begin{pmatrix} \frac{1}{n}\sum_{i=1}^{n} (t_1^{X_i} - P_\theta(t_1)) \\ \frac{1}{n}\sum_{i=1}^{n} (t_2^{X_i} - P_\theta(t_2)) \\ \frac{1}{n}\sum_{i=1}^{n} (X_i - \mu(\theta)) \end{pmatrix}$, $\mu(\theta) = \theta$, $t_1 = 0.5$, $t_2 = 0.75$.

The vector

$m(x_i;s_i,\theta) = (m_1(x_i;s_i,\theta), \dots, m_4(x_i;s_i,\theta))'$

has four components, given respectively by

$m_1(x_i;s_i,\theta) = (s_i^{X_i} - e^{\theta(s_i-1)})\, \beta_i(\theta)$,

$m_2(x_i;s_i,\theta) = t_1^{X_i} - P_\theta(t_1)$,

$m_3(x_i;s_i,\theta) = t_2^{X_i} - P_\theta(t_2)$,

$m_4(x_i;s_i,\theta) = X_i - \mu(\theta)$.

We can use $\hat\theta^{(0)} = \hat\theta_{ML}$, as $\hat\theta_{ML}$ is simple to obtain here and can serve as a preliminary consistent estimate. Now we can let

$\hat S = \frac{1}{n} \sum_{i=1}^{n} [m(x_i;s_i,\hat\theta^{(0)})][m(x_i;s_i,\hat\theta^{(0)})]'$ (23)

or

$\hat S = \frac{1}{n} \sum_{i=1}^{n} E_{\hat\theta^{(0)}}\left([m(x_i;s_i,\hat\theta^{(0)})][m(x_i;s_i,\hat\theta^{(0)})]'\right)$. (24)

The elements of $\hat S$ as given by expression (24) can be computed using only the probability generating function of the model, since we have

$E_\theta(X t^X) = t\, P'_\theta(t)$ and $E_\theta(t_1^X t_2^X) = P_\theta(t_1 t_2)$,

$E_\theta(X t^X s^X) = ts\, P'_\theta(ts)$,

and the variance of X is

$v_\theta(X) = P''_\theta(1) + P'_\theta(1) - (P'_\theta(1))^2$, $P''_\theta(t) = \frac{d^2 P_\theta(t)}{dt^2}$.

For the Poisson model,

$v_\theta(X) = \theta$.

$\hat S$ as given by expression (24) tends to be invertible with fewer numerical difficulties.
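These identities translate directly into code; the sketch below (Python, Poisson illustration) computes the PGF-based covariance entries that fill in $\hat S$ of expression (24).

```python
import numpy as np

# Poisson PGF and its derivative in s: P'(s) = theta * P(s)
P  = lambda s, th: np.exp(th * (s - 1.0))
dP = lambda s, th: th * np.exp(th * (s - 1.0))

def cov_pgf(t1, t2, th):
    """E[(t1^X - P(t1))(t2^X - P(t2))] = P(t1 t2) - P(t1) P(t2)."""
    return P(t1 * t2, th) - P(t1, th) * P(t2, th)

def cov_X_pgf(t, th):
    """Cov(X, t^X) = E(X t^X) - mu P(t) = t P'(t) - mu P(t), mu = theta."""
    return t * dP(t, th) - th * P(t, th)

# together with v(X) = theta these give the entries of S-hat in (24)
```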

The GMM objective function is given by

$Q(\theta) = g'(\theta)\, \hat S^{-1} g(\theta)$;

minimizing it yields the corresponding GMM estimator $\hat\theta$. In order to obtain an estimated asymptotic variance for $\hat\theta$, we can define

$\hat D = \left( \frac{1}{n}\sum_{i=1}^{n} E_\theta\left(\frac{\partial m_1(x_i;s_i,\theta)}{\partial\theta}\right), \dots, \frac{1}{n}\sum_{i=1}^{n} E_\theta\left(\frac{\partial m_4(x_i;s_i,\theta)}{\partial\theta}\right) \right)$

evaluated at $\theta = \hat\theta$.

The asymptotic variance of $\hat\theta$ can be estimated as $\frac{1}{n} (\hat D\, \hat S^{-1} \hat D')^{-1}$, and the chi-square statistic for testing the composite hypothesis $H_0: P_0(s) \in \{P_\theta(s)\}$ is given by $nQ(\hat\theta)$, with $nQ(\hat\theta) \xrightarrow{L} \chi^2_3$.
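A compact end-to-end sketch of this example (Python; simulated data, the empirical $\hat S$ of expression (23), a bounded scalar minimizer; the seed, sample size and search bounds are illustrative choices) is given below.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

P = lambda s, th: np.exp(th * (s - 1.0))        # Poisson PGF

def moment_matrix(x, theta, t1=0.5, t2=0.75):
    """n x 4 matrix whose i-th row is (m_1, ..., m_4)(x_i; s_i, theta)."""
    n = len(x)
    s = (np.arange(1, n + 1) - 0.5) / n
    beta = -(s - 1.0) * P(s, theta) / (P(s * s, theta) - P(s, theta) ** 2)
    m1 = (s ** x - P(s, theta)) * beta
    m2 = t1 ** x - P(t1, theta)
    m3 = t2 ** x - P(t2, theta)
    m4 = x - theta                               # mu(theta) = theta
    return np.column_stack([m1, m2, m3, m4])

rng = np.random.default_rng(0)
x = rng.poisson(2.0, size=100)                   # sample with theta_0 = 2
n = len(x)

theta0 = x.mean()                                # ML as preliminary estimate
M0 = moment_matrix(x, theta0)
S_inv = np.linalg.inv(M0.T @ M0 / n)             # expression (23)

def nQ(th):                                      # n Q(theta)
    g = moment_matrix(x, th).mean(axis=0)
    return n * float(g @ S_inv @ g)

theta_hat = minimize_scalar(nQ, bounds=(0.01, 20.0), method="bounded").x
stat = nQ(theta_hat)                             # Hansen's statistic, df = 3
print(theta_hat, stat, chi2.sf(stat, df=3))
```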

For testing the feasibility of the GMM methods with this example, limited simulation studies were conducted. The GMM methods can be implemented without numerical difficulties for $\theta \le 10$.

For values of $\theta$ with $10 < \theta \le 100$, if expression (23) is used for $\hat S$, the matrix $\hat S$ tends to be nearly singular, and its elements need to be computed with higher accuracy in order to be able to invert $\hat S$. We found that software like Maple or Mathematica is better able to compute with high accuracy than R.

Often, by using a spectral decomposition of $\hat S$, we can obtain $\hat S^{-1}$ numerically, although directly asking for the inverse in R might just return the message that the matrix is nearly singular, without returning the inverse. Using the spectral representation of $\hat S$,

$\hat S = P \Lambda P'$

with P an orthonormal matrix, $P'P = I$, $P' = P^{-1}$, and $\Lambda$ a diagonal matrix whose diagonal elements are the eigenvalues of $\hat S$; these eigenvalues need to be computed with high accuracy and must be numerically positive. By keeping more digits when computing the eigenvalues of $\hat S$, $\Lambda^{-1}$ can in general be obtained, and

$\hat S^{-1} = P \Lambda^{-1} P'$.
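A sketch of this spectral inversion (Python; the tolerance is an illustrative choice) follows.

```python
import numpy as np

def spectral_inverse(S, tol=1e-12):
    """Invert S through S = P Lambda P'; eigh returns the eigenvalues of a
    symmetric matrix in ascending order, so a smallest eigenvalue below tol
    signals that S is nearly singular."""
    eigval, eigvec = np.linalg.eigh(S)
    if eigval.min() <= tol:
        raise np.linalg.LinAlgError("S is numerically singular")
    return eigvec @ np.diag(1.0 / eigval) @ eigvec.T
```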

If expression (24) is used instead of expression (23) for $\hat S$, with software which computes numbers to higher accuracy, then we encounter fewer numerical problems in inverting $\hat S$. For models for which $\hat S^{-1}$ is difficult to obtain, an empirical likelihood (EL) approach based on the same sample moments can be used and has the same efficiency as the GMM methods, but the numerical computations for implementing EL methods are also more involved; see Luong [20] on the use of a penalty function for obtaining EL estimators.

We simulate M = 100 samples of size n = 100 from the Poisson distribution with $\theta = 1, 2, 3, 4, 5, 10, 100$ and obtain respectively the GMM estimate, the NLS estimate and the ML estimate. The NLS estimate is the nonlinear least-squares estimate mentioned at the beginning of Section 3.2.

For comparing the relative efficiencies of these methods, we estimate the ratios $\frac{\mathrm{MSE(GMM)}}{\mathrm{MSE(ML)}}$ and $\frac{\mathrm{MSE(NLS)}}{\mathrm{MSE(ML)}}$, where MSE(GMM), MSE(NLS) and MSE(ML) are respectively the estimates of the mean square errors of the GMM, NLS and ML estimators using the simulated samples. The efficiency of the GMM estimator is practically identical to the efficiency of the ML estimator, but the efficiency of the NLS estimator is much lower and deteriorates as $\theta$ increases in comparison with the ML estimator. The results are displayed in Table A1.

In order to test whether the chi-square test has power to detect departures from the model used here, we use the negative binomial distribution with mean equal to $\theta$ and variance equal to $\theta + \frac{\theta^2}{\alpha}$ as the departure from the Poisson model, with $\alpha = 1, 2, 3, 4, 5, 10, 100$, and simulate M = 100 samples of size n = 100; the model fitted is Poisson with mean $\theta$. We can estimate the power of the tests at these alternatives, and the results are displayed in Table A2. The level used for the chi-square tests is 0.05, with the critical point being the 0.95th percentile of a chi-square distribution with 3 degrees of freedom, $\chi^2_{0.95}(3) = 7.814$. The results obtained are encouraging and show that the chi-square tests have considerable power to detect departures. As $\alpha$ becomes large, the estimated power decreases as expected, since as $\alpha \to \infty$ the negative binomial distribution tends to the Poisson distribution. Larger scale simulation studies with more parametric families are needed to confirm the efficiencies of the proposed methods.

5. Conclusion

At this point, we can conclude that the methods appear to be relatively simple to implement and have the potential to be efficient for some count models; they have the advantage of using only the probability generating function instead of the probability mass function, allowing inferences to be made for a much larger class of parametric families without relying on extensive use of simulations. The proposed GMM methodology also combines traditional GMM methodology with generalized estimating function methodology, both well-known alternatives to ML methodology. The lack of statistics for model testing in the generalized estimating function methodology is overcome by the proposed procedures.

Acknowledgements

The helpful and constructive comments of a referee, which led to an improvement of the presentation of the paper, and the support from the editorial staff of Open Journal of Statistics in processing the paper are all gratefully acknowledged.

Appendix

Table A1. Estimated relative efficiency comparisons between the GMM, NLS and ML estimators.

M = 100 simulated samples are used, each with sample size n = 100.

Table A2. Estimated power of the chi-square tests using the Poisson model with parameter θ.

M = 100 simulated samples of size n = 100 each are drawn from a negative binomial distribution with mean = 5 and variance $= 5 + \frac{5^2}{\alpha}$.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Johnson, N.L., Kotz, S. and Kemp, A.W. (1992) Univariate Discrete Distributions. Second Edition, Wiley, New York.
[2] Klugman, S.A., Panjer, H.H. and Willmot, G.E. (2019) Loss Models: From Data to Decisions. Fifth Edition, Wiley, New York.
[3] Christoph, G. and Schreiber, K. (1998) Discrete Stable Random Variables. Statistics and Probability Letters, 37, 243-247.
https://doi.org/10.1016/S0167-7152(97)00123-5
[4] Luong, A., Bilodeau, C. and Blier-Wong, C. (2018) Simulated Minimum Hellinger Distance Inference Methods for Count Data. Open Journal of Statistics, 8, 187-219.
https://doi.org/10.4236/ojs.2018.81012
[5] Doray, L.G., Jiang, S.M. and Luong, A. (2009) Some Simple Method of Estimation for the Parameters of the Discrete Stable Distribution with the Probability Generating Function. Communications in Statistics—Simulation and Computation, 38, 2004-2017.
https://doi.org/10.1080/03610910903202089
[6] Carrasco, M. and Florens, J.-P. (2000) Generalization of GMM to a Continuum of Moment Conditions. Econometric Theory, 16, 797-834.
https://doi.org/10.1017/S0266466600166010
[7] Carrasco, M. and Kotchoni, R. (2017) Efficient Estimation Using Characteristic Function. Econometric Theory, 33, 479-526.
https://doi.org/10.1017/S0266466616000025
[8] Martin, V., Hurn, S. and Harris, D. (2013) Econometric Modelling with Time Series: Specification, Estimation and Testing. Cambridge University Press, Cambridge.
[9] Hamilton, J.D. (1994) Time Series Analysis. Princeton University Press, Princeton.
[10] Godambe, V.P. and Thompson, M.E. (1989) An Extension of Quasi-Likelihood Estimation. Journal of Statistical Planning and Inference, 22, 137-152.
https://doi.org/10.1016/0378-3758(89)90106-7
[11] Morton, R. (1981) Efficiency of Estimating Equations and the Use of Pivots. Biometrika, 68, 227-233.
https://doi.org/10.1093/biomet/68.1.227
[12] Liang, K.Y. and Zeger, S.L. (1986) Longitudinal Data Analysis Using Generalized Linear Models. Biometrika, 73, 13-22.
https://doi.org/10.1093/biomet/73.1.13
[13] Hansen, L. (1982) Large Sample Properties of Generalized Method of Moment. Econometrica, 50, 1029-1054.
https://doi.org/10.2307/1912775
[14] Rueda, R. and O’Reilly, F. (1999) Tests of Fit for Discrete Distributions Based on the Probability Generating Function. Communication in Statistics—Simulation and Computation, 28, 259-274.
https://doi.org/10.1080/03610919908813547
[15] Marcheselli, M., Baccini, A. and Barabes, L. (2008) Parameter Estimation for the Discrete Stable Family. Communications in Statistics, 37, 815-830.
https://doi.org/10.1080/03610920701570298
[16] Resnick, S. (1992) Adventures in Stochastic Processes. Birkhauser, Boston.
[17] Chernoff, H. and Lehmann, E.L. (1954) The Use of Maximum Likelihood Estimates in Chi-Square Tests for Goodness of Fit. Annals of Mathematical Statistics, 25, 579-586.
https://doi.org/10.1214/aoms/1177728726
[18] Luong, A. and Thompson, M.E. (1987) Minimum Distance Methods Based on Quadratic Distance for Transforms. Canadian Journal of Statistics, 15, 239-251.
https://doi.org/10.2307/3314914
[19] Li, D.X. and Turtle, H.J. (2000) Semi-Parametric ARCH Models: An Estimating Function Approach. Journal of Business and Economic Statistics, 18, 174-186.
https://doi.org/10.1080/07350015.2000.10524860
[20] Luong, A. (2017) Maximum Entropy Empirical Likelihood Methods Based on Laplace Transforms for Nonnegative Continuous Distributions with Actuarial Applications. Open Journal of Statistics, 7, 459-482.
https://doi.org/10.4236/ojs.2017.73033
