Simulated Minimum Quadratic Distance Methods Using Grouped Data for Some Bivariate Continuous Models
1. Introduction
In actuarial science or biostatistics, we often encounter bivariate data which are already grouped into cells forming a contingency table, and we would like to make inferences about a continuous bivariate model for the underlying complete data; see Partrat [1] (p. 225) and Gibbons and Chakraborti [2] (pp. 511-512) for examples.
If the bivariate distributions have closed form expressions, then in general there is no difficulty in fitting them using maximum likelihood or minimum chi-square methods based on grouped data, for example. However, many useful distributions might only be computable numerically as they are expressible only through an integral representation, and if numerical quadrature methods fail, it appears natural to develop simulated methods of inference for these distributions. We would like methods which offer a unified approach to estimation and model testing, and which are able to handle the situation where the lack of closed form expressions for the model survival functions might create numerical difficulties. We shall see subsequently that new distributions created using the bivariate survival power mixture (BSPM) operator introduced by Marshall and Olkin [3] (pp. 834-836), or by trivariate reduction techniques, often lead to distributions with no closed form expressions for their survival functions, but it is easy to draw samples from such distributions. Since we focus on nonnegative bivariate distributions in actuarial science, it is natural to use survival functions instead of distribution functions alone. The BSPM operator will be introduced and we shall see a few examples illustrating the numerical difficulties we might encounter when fitting these distributions.
1.1. Bivariate Survival Power Mixture Operator
Marshall and Olkin [3] in their seminal paper introduced the following operator to create a new bivariate survival function $\bar F(x,y)$ from two univariate survival functions $\bar F_1(x)$ and $\bar F_2(y)$ and a mixing distribution $\Pi(\theta)$ for a nonnegative mixing random variable $\theta$. We shall call their operator the bivariate survival power mixture (BSPM) operator, and we shall see how this operator works to create new bivariate survival functions. The new survival function created can be expressed as an integral given by
$$\bar F(x,y)=\int_0^\infty \bar F_1(x)^{\theta}\,\bar F_2(y)^{\theta}\,d\Pi(\theta).$$
Since there is an integral representation, the new survival function might still be computable numerically depending on the expressions for $\bar F_1$, $\bar F_2$ and $\Pi$. An algorithm to simulate a sample from $\bar F(x,y)$ has also been given by Marshall and Olkin [3] (p. 840).
Later on, in section (1.3), we shall examine another way to create new survival functions. Unlike new distributions created using the BSPM operator, new distributions created by means of trivariate reduction techniques often do not even have an integral representation, even though the functions used are simple, for example linear functions. We shall discuss trivariate reduction techniques further in section (1.3), and first consider a few examples of new distributions created using the BSPM operator.
1.2. Some Examples of New Bivariate Distributions Created
Example 1
We let
$$\bar F_1(x)=e^{-(x/\lambda_1)^{\tau_1}},\ x\ge 0,$$
which is the survival function of a Weibull distribution, and similarly let
$$\bar F_2(y)=e^{-(y/\lambda_2)^{\tau_2}},\ y\ge 0.$$
For the mixing random variable θ, let θ follow a Pareto type II distribution, also called the Lomax distribution, with density function given by
$$\pi(\theta)=\frac{\alpha\delta^{\alpha}}{(\theta+\delta)^{\alpha+1}}$$
with the domain given by $\theta>0$, and the parameters α and δ are positive. Note that θ has no closed form Laplace transform (LT).
The new bivariate distribution created using the BSPM operator is
$$\bar F(x,y)=\int_0^\infty e^{-\theta\left[(x/\lambda_1)^{\tau_1}+(y/\lambda_2)^{\tau_2}\right]}\,\frac{\alpha\delta^{\alpha}}{(\theta+\delta)^{\alpha+1}}\,d\theta.$$
For most of the univariate distributions used in the paper, see Appendix A of Klugman et al. [4] (pp. 459-482).
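To make the example concrete, the following R sketch simulates a sample from this BSPM distribution; it is a minimal illustration, not the authors' code, and the function and parameter names (rbspm, lambda1, tau1, lambda2, tau2, alpha, delta) are illustrative choices.

rbspm <- function(n, lambda1, tau1, lambda2, tau2, alpha, delta) {
  ## mixing variable theta ~ Lomax(alpha, delta), drawn by inversion of its survival function
  theta <- delta * (runif(n)^(-1 / alpha) - 1)
  ## given theta, P(X > x | theta) = exp(-theta * (x/lambda1)^tau1): a Weibull drawn by inversion
  x <- lambda1 * (-log(runif(n)) / theta)^(1 / tau1)
  y <- lambda2 * (-log(runif(n)) / theta)^(1 / tau2)
  cbind(x, y)
}
## usage: a simulated sample of size 500
## xy <- rbspm(500, lambda1 = 1, tau1 = 2, lambda2 = 1, tau2 = 1.5, alpha = 3, delta = 2)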
Observe that if we specify a Gamma distribution for θ instead of a Lomax distribution as discussed earlier, with density function of θ given by
$$\pi(\theta)=\frac{\theta^{\alpha-1}e^{-\theta/\delta}}{\Gamma(\alpha)\,\delta^{\alpha}},\ \theta>0,$$
and its Laplace transform given by
$$\mathcal L_\Pi(s)=E\left(e^{-s\theta}\right)=(1+\delta s)^{-\alpha},$$
then the newly created bivariate survival function can be expressed as
$$\bar F(x,y)=\mathcal L_\Pi\big(H_1(x)+H_2(y)\big),$$
and since $\mathcal L_\Pi$ has a closed form expression, $\bar F(x,y)$ has the closed form expression
$$\bar F(x,y)=\left[1+\delta\big(H_1(x)+H_2(y)\big)\right]^{-\alpha},$$
where $H_1(x)$ and $H_2(y)$ are respectively the cumulative hazard functions of $\bar F_1$ and $\bar F_2$, with $H_1(x)=(x/\lambda_1)^{\tau_1}$ and $H_2(y)=(y/\lambda_2)^{\tau_2}$.
By the usual conditioning argument, conditioning on θ, we can often obtain the first two moments of the vector $(X,Y)'$, and even higher positive integer moments, without having a closed form for $\bar F(x,y)$; see the conditioning argument for the univariate case given by Klugman et al. [4] (pp. 62-65). If complete data are available, then the parameters of some bivariate distributions created using the BSPM operator can be estimated using the method of moments (MM).
In this paper we emphasize grouped data. In general, with grouped data MM estimators cannot be obtained; furthermore, with four or five parameters in the bivariate model, high order moments must be used and as a result the MM estimators are not robust in general. We consider the situation where the data have been grouped into a contingency table, so that we must analyse the data in this form, or where the complete data are available but must be grouped to perform chi-square tests for model testing. If complete data are available then we have choices on how to group the data; in this situation we hope to propose a grouping rule so that inference methods based on it will have high efficiencies, see the discussions in section (5) of Klugman and Parsa [5] (pp. 146-147) on the difficulties of grouping data to perform goodness of fit tests. We use the notion of complete data to describe the situation where we have bivariate observations
$(X_l,Y_l)'$, $l=1,\dots,n$, which are independent and identically distributed (iid) from a bivariate distribution specified by a bivariate survival function $\bar F(x,y;\boldsymbol\beta)$; this includes the situation where the original observations have been left truncated by $d_1$ and $d_2$, where the values $d_1$ and $d_2$ are known; $d_1$ and $d_2$ are the amounts of the deductibles in actuarial science, for example. We can view the observations as coming from the conditional distribution given that the components exceed the truncation points, and we only need $\bar F(x,y;\boldsymbol\beta)$ for fitting, with the truncation points being specified as well; see Klugman and Parsa [5] (p. 142) for these models in actuarial science. Furthermore, in our set up we emphasize the survival function $\bar F(x,y;\boldsymbol\beta)$, but clearly the bivariate distribution function $F(x,y;\boldsymbol\beta)$ can be obtained from $\bar F(x,y;\boldsymbol\beta)$ using the relation
$$F(x,y;\boldsymbol\beta)=\bar F(x,y;\boldsymbol\beta)+F_X(x;\boldsymbol\beta)+F_Y(y;\boldsymbol\beta)-1,$$
where $F_X$ and $F_Y$ are the two marginal distributions of $F(x,y;\boldsymbol\beta)$. For nonnegative parametric families where the bivariate distribution functions are commonly used for specifying the families, it is not difficult to convert them to bivariate survival functions and consequently, MQD methods are still applicable; only some minor modifications are needed.
Example 2
In this example, we let $\bar F_1(x)$ and $\bar F_2(y)$ be Burr survival functions, see Hogg et al. [6] (p. 201) for the Burr distribution, with
$$\bar F_1(x)=\left[\frac{1}{1+(x/\lambda_1)^{\tau_1}}\right]^{\alpha_1},\ x\ge 0,$$
where the parameters $\lambda_1,\tau_1,\alpha_1$ are positive, and similarly let
$$\bar F_2(y)=\left[\frac{1}{1+(y/\lambda_2)^{\tau_2}}\right]^{\alpha_2},\ y\ge 0.$$
For the distribution of θ, we specify a Weibull distribution with density function given by
$$\pi(\theta)=\frac{\tau}{\lambda}\left(\frac{\theta}{\lambda}\right)^{\tau-1}e^{-(\theta/\lambda)^{\tau}},\ \theta>0.$$
The new distribution created using the BSPM operator will have bivariate survival function given by
$$\bar F(x,y)=\int_0^\infty\left[\bar F_1(x)\,\bar F_2(y)\right]^{\theta}\pi(\theta)\,d\theta.$$
The bivariate survival function has no closed form expression. For bivariate distributions without a closed form expression for their survival functions and representable only by an integral on an unbounded domain, whether $\bar F(x,y;\boldsymbol\beta)$ can be evaluated numerically depends on the integrand and the numerical quadrature method used. In the same vein, we can mention the class of bivariate distributions for contingency tables studied by Mardia [7] , Mardia [8] and Plackett [9] , as these distributions have numerically tractable bivariate distribution functions but no closed form expressions. In this paper, we emphasize statistical aspects and do not go into detail on the question of dependence between the two components of the new bivariate survival function. Marshall and Olkin [3] , Marshall and Olkin [10] have discussed some of the issues of dependence and infinite divisibility for distributions created using mixture procedures.
For $\bar F(x,y;\boldsymbol\beta)$ which is not numerically tractable, we propose simulated minimum quadratic distance (SMQD) methods: provided that we can draw simulated samples from $\bar F(x,y;\boldsymbol\beta)$, estimation of $\boldsymbol\beta$ is still possible, and minimum quadratic distance methods offer a unified approach for $\bar F(x,y;\boldsymbol\beta)$ with or without a closed form expression and for grouped data with or without a choice of cells. MQD methods version D will be suitable for a numerically tractable $\bar F(x,y;\boldsymbol\beta)$, and a corresponding simulated version, version S, will be suitable if $\bar F(x,y;\boldsymbol\beta)$ is not numerically tractable. For version S, $\bar F(x,y;\boldsymbol\beta)$ will be replaced by a sample bivariate survival function $\bar F^s_U(x,y;\boldsymbol\beta)$ based on a simulated sample of size $U$ drawn from $\bar F(x,y;\boldsymbol\beta)$, with $\bar F^s_U(x,y;\boldsymbol\beta)$ converging in probability to $\bar F(x,y;\boldsymbol\beta)$ for each fixed point $(x,y)$, i.e.,
$$\bar F^s_U(x,y;\boldsymbol\beta)\xrightarrow{p}\bar F(x,y;\boldsymbol\beta)\ \text{as}\ U\to\infty;$$
$\bar F^s_U$ is defined similarly to the empirical survival function, see expression (1).
Data in the form of contingency tables are often encountered in actuarial science and biostatistics, where bivariate observations are grouped into a two dimensional array or matrix and only the numbers of observations, or the proportions of the original sample, which belong to the elements of such a matrix are recorded. The original data set is lost. Obviously, when the original complete data set is available, we can always group it into a contingency table, but once grouped it is impossible to convert grouped data back to complete data. We focus on the situations where the complete data are observations $(X_l,Y_l)'$, $l=1,\dots,n$, which are independent and identically distributed as $(X,Y)'$, which follows a bivariate absolutely continuous distribution with domain given by the nonnegative quadrant; subsequently the observations are grouped and we develop statistical inference techniques using the grouped data.
1.3. New Bivariate Distributions Created by Trivariate Reductions
A version of the bivariate gamma distribution introduced by Mathai and Moschopoulos [11] (pp. 137-138) can be obtained with the use of two linear functions $g_1(V_0,V_1)$ and $g_2(V_0,V_2)$; these functions are known and their arguments $V_0,V_1,V_2$ are independent univariate random variables. It is simple to simulate a pair of observations $(X,Y)'$ as we have the following equalities in distribution for the pair of observations,
$$X\stackrel{d}{=}g_1(V_0,V_1),\qquad Y\stackrel{d}{=}g_2(V_0,V_2),$$
and the functional forms of $g_1$ and $g_2$ are given.
Let $V_0,V_1,V_2$ be gamma random variables with their respective gamma density functions.
The bivariate density function of $(X,Y)'$ has no closed form expression and is very complicated. It has five parameters, which can be represented by a vector $\boldsymbol\beta$, see section (5) of Mathai and Moschopoulos [11] (pp. 145-148). Mathai and Moschopoulos [11] also give method of moments (MM) estimators in section (7) of their paper. We shall consider their estimators and compare them with the simulated quadratic distance estimators in section (4). The bivariate gamma distribution introduced by Furman [12] can also be obtained similarly using another pair of linear functions $g_1$ and $g_2$. For the use of nonlinear functions $g_1$ and $g_2$ to create bivariate distributions, see Chapter 15 of Hutchinson and Lai [13] (pp. 218-224).
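As an illustration of how easily a pair of observations can be simulated by trivariate reduction, the following R sketch uses the simple additive choice $g_1(V_0,V_1)=V_0+V_1$, $g_2(V_0,V_2)=V_0+V_2$ with independent gamma variables; this additive form and the parameter names are assumptions made for illustration and are not the exact five parameter specification of Mathai and Moschopoulos.

rbigamma <- function(n, shape0, shape1, shape2, scale = 1) {
  v0 <- rgamma(n, shape = shape0, scale = scale)  # common component shared by X and Y
  v1 <- rgamma(n, shape = shape1, scale = scale)
  v2 <- rgamma(n, shape = shape2, scale = scale)
  cbind(x = v0 + v1, y = v0 + v2)                 # dependent pair (X, Y)
}
## usage: xy <- rbigamma(500, shape0 = 0.5, shape1 = 1.5, shape2 = 2)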
1.4. Contingency Tables
Contingency table data can be viewed as a special form of two dimensional grouped data. We will give some more details about this form of grouped data. Assume that we have a sample $(X_l,Y_l)'$, $l=1,\dots,n$, which are independent and identically distributed as $(X,Y)'$, which follows a nonnegative continuous bivariate distribution with model survival function given by $\bar F(x,y;\boldsymbol\beta)$.
The vector of parameters is $\boldsymbol\beta=(\beta_1,\dots,\beta_m)'$; the true vector of parameters is denoted by $\boldsymbol\beta_0$. We do not observe the original sample, but the observations are grouped and put into a contingency table and only the numbers which fall into each cell of the contingency table are recorded or, equivalently, the sample proportions which fall into these cells are recorded. The grouping into two dimensional cells generalizes the grouping of univariate data into disjoint intervals of the nonnegative real line in one dimension. Contingency table data are often encountered in actuarial science and biostatistics, see Partrat [1] (p. 225), Gibbons and Chakraborti [2] (pp. 511-512). We shall give a brief description below.
Let the nonnegative X axis be partitioned into disjoint intervals $[a_{i-1},a_i)$, $i=1,\dots,r$, with $a_0=0$ and $a_r=\infty$, and similarly let the Y axis be partitioned into disjoint intervals $[b_{j-1},b_j)$, $j=1,\dots,s$, with $b_0=0$ and $b_s=\infty$.
The nonnegative quadrant can then be partitioned into nonoverlapping cells of the form
$$A_{ij}=[a_{i-1},a_i)\times[b_{j-1},b_j),\ i=1,\dots,r,\ j=1,\dots,s.$$
The contingency table is formed, which can be viewed as a matrix with elements given by the numbers of observations falling into the cells $A_{ij}$.
We can define the empirical bivariate survival function as
$$S_n(x,y)=\frac{1}{n}\sum_{l=1}^{n}I\left(X_l> x,\,Y_l> y\right)\quad (1)$$
and we have $S_n(x,y)\xrightarrow{p}\bar F(x,y;\boldsymbol\beta_0)$.
The sample proportion or empirical probability for one observation which falls into cell $A_{ij}$ can be obtained using
$$p^{(n)}_{ij}=S_n(a_{i-1},b_{j-1})-S_n(a_i,b_{j-1})-S_n(a_{i-1},b_j)+S_n(a_i,b_j)\quad (2)$$
and the corresponding model probability is
$$p_{ij}(\boldsymbol\beta)=\bar F(a_{i-1},b_{j-1};\boldsymbol\beta)-\bar F(a_i,b_{j-1};\boldsymbol\beta)-\bar F(a_{i-1},b_j;\boldsymbol\beta)+\bar F(a_i,b_j;\boldsymbol\beta).\quad (3)$$
Note that each cell indicator is a linear combination of survival indicators,
$$I\big((X,Y)\in A_{ij}\big)=I(X> a_{i-1},Y> b_{j-1})-I(X> a_i,Y> b_{j-1})-I(X> a_{i-1},Y> b_j)+I(X> a_i,Y> b_j),\quad (4)$$
and similarly, each survival indicator is a sum of cell indicators,
$$I(X> a_i,Y> b_j)=\sum_{k> i}\sum_{l> j}I\big((X,Y)\in A_{kl}\big),\quad (5)$$
so redundant, linearly dependent elements can be discarded without affecting the efficiency of inference methods; this is precisely the approach quadratic distance methods use, discarding redundant elements and creating a basis with only linearly independent elements, while the basis still spans the same linear space. Consequently, we gain in numerical efficiency while retaining the same statistical efficiency.
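When complete data are available, expressions (1) and (2) can be computed directly; the following R sketch is a minimal illustration, with illustrative function names, of the empirical bivariate survival function and of the cell proportions obtained from it.

surv_emp <- function(xy, x, y) mean(xy[, 1] > x & xy[, 2] > y)   # S_n(x, y), cf. (1)

cell_props <- function(xy, a, b) {
  ## xy: n x 2 data matrix; a, b: finite cut points of the two partitions
  a <- c(0, a, Inf); b <- c(0, b, Inf)          # add a_0 = b_0 = 0 and the open upper end
  p <- matrix(0, length(a) - 1, length(b) - 1)
  for (i in 2:length(a)) for (j in 2:length(b)) {
    p[i - 1, j - 1] <- surv_emp(xy, a[i - 1], b[j - 1]) - surv_emp(xy, a[i], b[j - 1]) -
                       surv_emp(xy, a[i - 1], b[j]) + surv_emp(xy, a[i], b[j])   # cf. (2)
  }
  p                                             # the proportions sum to 1 over all cells
}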
1.5. Efficient Modified Minimum Chi-Square Methods
Using the contingency table data, the modified minimum chi-square estimators, which are as efficient as the likelihood estimators based on the grouped data, are obtained by minimizing the objective function given by
$$\sum_{i=1}^{r}\sum_{j=1}^{s}\frac{\left(p^{(n)}_{ij}-p_{ij}(\boldsymbol\beta)\right)^{2}}{p^{(n)}_{ij}},$$
and since $p^{(n)}_{ij}\xrightarrow{p}p_{ij}(\boldsymbol\beta_0)$, the minimum chi-square estimators given by the vector which minimizes the expression above have the same asymptotic efficiency as the vector which minimizes
$$\sum_{i=1}^{r}\sum_{j=1}^{s}\frac{\left(p^{(n)}_{ij}-p_{ij}(\boldsymbol\beta)\right)^{2}}{p_{ij}(\boldsymbol\beta)}$$
and, under differentiability assumptions, given by the roots of the system of equations
$$\sum_{i=1}^{r}\sum_{j=1}^{s}\frac{p^{(n)}_{ij}-p_{ij}(\boldsymbol\beta)}{p_{ij}(\boldsymbol\beta)}\,\frac{\partial p_{ij}(\boldsymbol\beta)}{\partial\boldsymbol\beta}=\mathbf{0}.$$
Note that the quasi-score functions generated belong to the linear space spanned by the cell indicator functions
$$\left\{I\big((X,Y)\in A_{ij}\big),\ i=1,\dots,r,\ j=1,\dots,s\right\}.\quad (6)$$
But since we have the property given by expressions (4)-(5), the same linear space is spanned by the survival indicator functions
$$\left\{I\left(X> a_i,\,Y> b_j\right)\right\}\ \text{over the grid points with nondegenerate indicators}.\quad (7)$$
Therefore, an equivalent but possibly numerically more efficient method, keeping in mind that we need to evaluate $\bar F(x,y;\boldsymbol\beta)$ numerically or by simulation, is to minimize a quadratic form using the elements of the basis given by expression (7) with an optimum matrix which is no longer diagonal as in the minimum chi-square objective function, but which turns out to be quite simple and can be estimated empirically, as the survival function values $\bar F(a_i,b_j;\boldsymbol\beta)$ are relatively simple and well defined at the grid points. Furthermore, for performing minimum chi-square methods in practice, cells containing five or fewer observations need to be regrouped into larger cells, which reduces the efficiency of the minimum chi-square methods in practice. We do not need as many regrouping operations with the proposed quadratic distance methods. For the equivalent efficiency of the newly proposed methods, see the projection argument in Luong [14] (pp. 463-468) for quadratic distance methods.
Also, using expression (7) is equivalent to using overlapping cells of the form
$$(a_i,\infty)\times(b_j,\infty).\quad (8)$$
The objective function of the proposed quadratic form will be given below. It is a natural extension of the objective function used in the univariate case. Define a vector whose components are the elements of the basis, so that we only need one subscript, by collapsing the array given by expression (7) into a vector: the first row of the array forms the first batch of elements of the vector, the second row the second batch, and so on; i.e., letting the resulting points be denoted by $(x_k,y_k)$, $k=1,\dots,M$, define
$$\boldsymbol{z}_n=\big(S_n(x_1,y_1),\dots,S_n(x_M,y_M)\big)'\quad (9)$$
and its model counterpart
$$\boldsymbol{z}(\boldsymbol\beta)=\big(\bar F(x_1,y_1;\boldsymbol\beta),\dots,\bar F(x_M,y_M;\boldsymbol\beta)\big)'.\quad (10)$$
The number of components of $\boldsymbol{z}_n$ is M, with the assumption $M\ge m$, the number of parameters.
Efficient quadratic distance methods can be constructed using the inverse of the covariance matrix of $\boldsymbol{z}_n$ as the weight matrix for the quadratic form, but such an optimum matrix is only defined up to a constant, so the inverse of the covariance matrix of the following vector of indicator functions
$$\boldsymbol{h}(X,Y)=\big(I(X> x_1,Y> y_1),\dots,I(X> x_M,Y> y_M)\big)'\quad (11)$$
can also be considered optimum. It can also be replaced by a consistent estimate, and since the elements of the basis can be identified with the corresponding elements of the vector $\boldsymbol{h}(X,Y)$, with the observations being independent and identically distributed, choosing a basis is equivalent to choosing the vector of points $(x_k,y_k)$, $k=1,\dots,M$.
It is not difficult to see that the infinite basis of the form $\left\{I(X> x,Y> y),\ (x,y)\in[0,\infty)^2\right\}$ is complete, and the projected score functions will give estimators which have the same efficiency as the maximum likelihood estimators since the score functions belong to the space spanned by the infinite basis, see Carrasco and Florens [15] for a similar property for the univariate case; also see Luong [14] (pp. 461-468) for the notion of MQD estimators as quasilikelihood estimators based on the projected score functions on a finite basis. The MQD methods which make use of such a basis for the bivariate case will be introduced below and they are similar to the univariate case. Note that if the data have been grouped, we have no choice of points $(x_k,y_k)$, $k=1,\dots,M$, as they are already predetermined by the way data are grouped into cells.
Besides the predetermined grouped scenario, we shall also examine the question: if we have complete data and we would like to choose a finite basis with M elements, and since these elements are identified with points, the same question can be phrased as how we should choose M points or, equivalently, how we should group the data into cells.
The question of how to choose cells already appears difficult for minimum chi-square methods with univariate data, see Greenwood and Nikulin [16] (pp. 194-208). We shall propose a solution based on quasi-Monte Carlo (QMC) methods via Halton sequences and two empirical quantiles from the marginal distributions to create an artificial sample with values on the nonnegative quadrant. The selected points used to construct quadratic distances are based on these artificial sample points. Since these points are random, this is similar to the use of random cells for minimum chi-square methods, and naturally we would like to introduce the notion of an adaptive basis used to achieve high efficiency; it will also unify quadratic distance and minimum chi-square methods. An adaptive basis is data dependent and therefore carries information about the true vector of parameters $\boldsymbol\beta_0$ despite $\boldsymbol\beta_0$ being unknown; consequently the projected score functions on such a basis will lead, in general, to inference with better efficiency than without using the information obtainable from the data concerning $\boldsymbol\beta_0$.
We shall discuss how to construct such an adaptive basis when complete data are available in section (3), where we would like to shed some light on the question of how to form a finite basis so that inference methods using the basis will have high efficiencies on a restricted parameter space, which appears to be reasonable for the applications. Such an adaptive basis, formed from a complete basis, is very practical for applications. Using an infinite basis, as in the case of generalized method of moments (GMM) with univariate observations based on a continuum of moment conditions as introduced by Carrasco and Florens [15] , appears to be complicated for practitioners. Consequently, only finite bases are considered in this paper.
Following the remarks given by Luong [14] (p. 472), we need to work with a restricted parameter space for a basis with a finite number of elements so that statistical inferences using MQD methods or GMM methods can have high efficiencies. An adaptive basis only has a finite number of elements, so numerically it is not complicated to use such a basis to construct MQD or SMQD methods. The elements of the basis will be adapted to $\boldsymbol\beta_0$, and the efficiencies of the SMQD methods using such a basis come from the fact that the elements of the basis are chosen accordingly and are adjustable depending on the value of the vector of true parameters $\boldsymbol\beta_0$, despite $\boldsymbol\beta_0$ being unknown.
In general, it is not obvious how to construct an adaptive basis from a complete basis; for example, it is a difficult task to construct an adaptive basis from the complete basis of polynomials so that the MQD methods using such a basis might have good efficiencies. However, it is natural to construct an adaptive basis with only a finite number of elements extracted from a complete basis as given by expression (11). This will be further developed in section (3) of the paper. The notion of an adaptive basis constructed from a complete basis and used to project the score functions appears to be relatively new, despite being used implicitly in the minimum chi-square methods without explicit reference to this notion, see Moore and Spruill [17] and Pollard [18] (pp. 317-318).
In the literature, attention seems to be given to complete bases. Adaptive bases will be further developed and discussed in section (3) and used to develop MQD methods with a deterministic version (version D) and a simulated version (version S), the SMQD methods.
For the deterministic version (version D), with the points considered to be fixed, asymptotic properties of the methods are similar to the univariate case as given by Luong and Thompson [19] , Duchesne et al. [20] ; also see related results on generalized method of moments (GMM) in Newey and McFadden [21] (p. 2148). For the simulated version, we replace $\bar F(x,y;\boldsymbol\beta)$ by a sample survival function $\bar F^s_U(x,y;\boldsymbol\beta)$ based on a simulated sample of size U drawn from $\bar F(x,y;\boldsymbol\beta)$. We can make use of the results given by Theorem (3.1) and Theorem (3.3) of Pakes and Pollard [22] (pp. 1038-1043) to establish asymptotic properties for the MQD and SMQD methods.
It might be worth mentioning that in practice, without an infinite and complete basis and using only a finite basis which is a subset of an infinite basis, high efficiency for the procedures can only be attained on some restricted parameter space in general, unless the score functions belong to the span of the finite basis. One viable strategy is to identify a restricted parameter space for the type of applications being considered; often it suffices to use a restricted parameter space and then try to identify elements to form a finite basis so that the procedures will retain high efficiency overall for $\boldsymbol\beta$ belonging to the restricted parameter space, see Luong [14] (pp. 463-468). If such a strategy is not feasible, then we might want to turn to an adaptive basis with finitely many elements constructed from an infinite complete basis, where the elements are adapted to the data. Implicitly, the idea behind this is to let the data points point to a restricted parameter space and adjust the elements of the basis accordingly. Minimum chi-square (MCS) methods using nonoverlapping random cells make use of an adaptive basis where the elements are linearly dependent, but for MQD methods and the simulated version, the SMQD methods, we make use of overlapping random cells in the form of random points on the nonnegative quadrant; unlike minimum chi-square methods, where it is difficult to have a rule to choose random cells, we shall have a rule to select these points.
For the MQD or SMQD methods, the matrix $\Sigma$, the covariance matrix of the vector $\boldsymbol{h}(X,Y)$ under $\boldsymbol\beta_0$, plays an important role as we can obtain estimators with good efficiencies using $\Sigma^{-1}$ as weight matrix. Despite $\Sigma$ being unknown, its elements are not complicated, and moreover it can be replaced by a consistent estimate constructed empirically without affecting the asymptotic efficiency of the procedures. It is also needed for constructing chi-square test statistics. We shall give more details about this matrix and construct an empirical, data dependent estimate $\hat\Sigma_n$ of $\Sigma$.
Let $\Sigma=(\sigma_{kl})$ be the covariance matrix of the vector $\boldsymbol{h}(X,Y)$ under $\boldsymbol\beta_0$; its elements are given by
$$\sigma_{kl}=\bar F\big(\max(x_k,x_l),\max(y_k,y_l);\boldsymbol\beta_0\big)-\bar F(x_k,y_k;\boldsymbol\beta_0)\,\bar F(x_l,y_l;\boldsymbol\beta_0),\ k,l=1,\dots,M.\quad (12)$$
Clearly, these elements can be estimated empirically with the bivariate empirical survival function using the grouped data provided by the contingency table; we then have the corresponding estimates given by
$$\hat\sigma_{kl}=S_n\big(\max(x_k,x_l),\max(y_k,y_l)\big)-S_n(x_k,y_k)\,S_n(x_l,y_l).\quad (13)$$
Therefore, we can define the matrix $\hat\Sigma_n=(\hat\sigma_{kl})$; its inverse is denoted by $\hat\Sigma_n^{-1}$ and similarly let $\Sigma^{-1}$ be the inverse of $\Sigma$. Clearly, $\hat\Sigma_n^{-1}\xrightarrow{p}\Sigma^{-1}$ as $n\to\infty$. Now, we can define the objective functions to be minimized for the implementations of MQD and SMQD methods.
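The following R sketch shows one way to compute the empirical estimate of expression (13) when complete data are available (with grouped data, the empirical survival values would be read off the contingency table instead); the function name sigma_hat and its arguments are illustrative.

sigma_hat <- function(xy, pts) {
  ## xy: n x 2 data matrix; pts: M x 2 matrix of selected points (x_k, y_k)
  M <- nrow(pts)
  s <- function(x, y) mean(xy[, 1] > x & xy[, 2] > y)    # empirical survival function, cf. (1)
  S <- matrix(0, M, M)
  for (k in 1:M) for (l in 1:M) {
    S[k, l] <- s(max(pts[k, 1], pts[l, 1]), max(pts[k, 2], pts[l, 2])) -
               s(pts[k, 1], pts[k, 2]) * s(pts[l, 1], pts[l, 2])       # cf. (13)
  }
  S
}
## the weight matrix of the quadratic distance is then solve(sigma_hat(xy, pts))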
For version D, let
$$\boldsymbol{d}_n(\boldsymbol\beta)=\boldsymbol{z}_n-\boldsymbol{z}(\boldsymbol\beta)\quad (14)$$
and let $\boldsymbol{z}^s_U(\boldsymbol\beta)=\big(\bar F^s_U(x_1,y_1;\boldsymbol\beta),\dots,\bar F^s_U(x_M,y_M;\boldsymbol\beta)\big)'$ be an estimate of $\boldsymbol{z}(\boldsymbol\beta)$ using a simulated sample of size $U$, so that we can define, for version S,
$$\boldsymbol{d}^s_n(\boldsymbol\beta)=\boldsymbol{z}_n-\boldsymbol{z}^s_U(\boldsymbol\beta).\quad (15)$$
We can define the length of the random function $\boldsymbol{d}_n(\boldsymbol\beta)$ as $\left\|\boldsymbol{d}_n(\boldsymbol\beta)\right\|$, with the squared norm defined as
$$Q_n(\boldsymbol\beta)=\left\|\boldsymbol{d}_n(\boldsymbol\beta)\right\|^2=\boldsymbol{d}_n(\boldsymbol\beta)'\,\hat\Sigma_n^{-1}\,\boldsymbol{d}_n(\boldsymbol\beta)\quad (16)$$
or equivalently
$$\boldsymbol{d}_n(\boldsymbol\beta)'\,\Sigma^{-1}\,\boldsymbol{d}_n(\boldsymbol\beta),\quad (17)$$
as they will give asymptotically equivalent estimators and goodness-of-fit test statistics; for finding estimators numerically, we need to minimize expression (16), and for asymptotic properties it might be slightly simpler to work with expression (17) as less notation is involved. Similarly, for version S let
$$Q^s_n(\boldsymbol\beta)=\boldsymbol{d}^s_n(\boldsymbol\beta)'\,\hat\Sigma_n^{-1}\,\boldsymbol{d}^s_n(\boldsymbol\beta)\quad (18)$$
or equivalently
$$\boldsymbol{d}^s_n(\boldsymbol\beta)'\,\Sigma^{-1}\,\boldsymbol{d}^s_n(\boldsymbol\beta).\quad (19)$$
Note that the weight matrix $\hat\Sigma_n^{-1}$ is the same for both versions, and the norm is a weighted Euclidean norm which obeys the triangle inequality, so that the results of the Theorems given by Pakes and Pollard, despite being stated with the Euclidean norm, remain valid if the Euclidean norm is replaced by a weighted Euclidean norm. The MQD estimators are obtained as the vector $\hat{\boldsymbol\beta}$ which minimizes the objective function given by expression (16), or equivalently expression (17), for version D, and the SMQD estimators as the vector $\hat{\boldsymbol\beta}^s$ which minimizes expression (18), or equivalently expression (19), for version S.
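As a compact illustration of how version S can be implemented when complete data are available, the following R sketch assembles the pieces defined above; it is a minimal sketch and not the authors' code: simulate_model is an assumed user-supplied simulator of the model, sigma_hat refers to the empirical covariance estimate sketched after expression (13), and optim's Nelder-Mead direct search is used for the minimization.

smqd_fit <- function(xy, pts, simulate_model, beta_start, U, seed = 1) {
  s   <- function(dat, x, y) mean(dat[, 1] > x & dat[, 2] > y)
  z_n <- apply(pts, 1, function(p) s(xy, p[1], p[2]))       # empirical vector z_n, cf. (9)
  W   <- solve(sigma_hat(xy, pts))                          # estimated optimum weight, cf. (13)
  obj <- function(beta) {
    set.seed(seed)                                          # same seed across values of beta
    sim <- simulate_model(U, beta)                          # simulated sample of size U
    z_u <- apply(pts, 1, function(p) s(sim, p[1], p[2]))    # simulated counterpart of z(beta)
    d   <- z_n - z_u
    drop(t(d) %*% W %*% d)                                  # objective (18)
  }
  optim(beta_start, obj, method = "Nelder-Mead")            # direct search simplex minimization
}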
The paper is organized as follows.
In Section 2, MQD and SMQD methods will be developed using predetermined grouped data. Asymptotic properties of the estimators are studied and the asymptotic distributions of the model testing statistics are derived. The methods can be extended to the situation where complete data are available but will be grouped, by defining a rule for choosing points on the nonnegative quadrant to group the data. An artificial sample constructed using two sample quantiles and QMC numbers is proposed in Section 3 to select points, and the methods developed with preselected points or cells in section (2) are shown to remain applicable; the methods can be seen as equivalent to minimum chi-square methods with random cells but with a rule to define these cells, and using random cells is equivalent to using an adaptive basis. Both QD estimation and minimum chi-square estimation can be unified through the approach of quasilikelihood estimation using an adaptive basis, and implementing MQD or SMQD methods requires less computing time than the related minimum chi-square versions, because the adaptive basis used by MQD and SMQD methods only has linearly independent elements and fewer numerical evaluations or simulations are needed for computing probabilities assigned to points rather than cells. Section 4 illustrates the implementation of SMQD methods by comparing the method of moments (MM) estimators with the SMQD estimators. The SMQD estimators appear to be much more efficient and robust than the MM estimators in a limited study for a bivariate gamma model, with the range of parameters chosen in the study as often encountered in actuarial science.
2. SMQD Methods Using Grouped Data
2.1. Estimation
Consistency for both versions of the quadratic distance estimators using predetermined grouped data can be treated in a unified way using the following Theorem 1, which is essentially Theorem 3.1 of Pakes and Pollard [22] (p. 1038); the proof has been given by these authors. In fact, their Theorems 3.1 and 3.3 are also useful for section (3), where we have complete data and have choices on how to regroup the data into cells or, equivalently, how to form the artificial sample points on the nonnegative quadrant.
Theorem 1 (Consistency)
Under the following conditions, the vector of estimators $\hat{\boldsymbol\beta}_n$ converges in probability to $\boldsymbol\beta_0$, where $\left\|\cdot\right\|$ denotes the weighted norm used in the quadratic distances and $\boldsymbol d_n(\boldsymbol\beta)$ is replaced by $\boldsymbol d^s_n(\boldsymbol\beta)$ for version S:
1) $\hat{\boldsymbol\beta}_n\in\Omega$, the parameter space Ω is compact;
2) $\left\|\boldsymbol d_n(\hat{\boldsymbol\beta}_n)\right\|\le o_p(1)+\inf_{\boldsymbol\beta\in\Omega}\left\|\boldsymbol d_n(\boldsymbol\beta)\right\|$;
3) $\sup_{\left\|\boldsymbol\beta-\boldsymbol\beta_0\right\|>\delta}\dfrac{1}{\left\|\boldsymbol d_n(\boldsymbol\beta)\right\|}=O_p(1)$ for each $\delta>0$.
Pakes and Pollard state their condition corresponding to 2) in a slightly different form, but in the proof only the form given above is used, so we state condition 2) as above.
An expression is $o_p(1)$ if it converges to 0 in probability, $O_p(1)$ if it is bounded in probability, and $o_p(a_n)$ if it converges to 0 in probability faster than $a_n$. For version D and version S, the infimum of the objective function occurs at the vector of MQD or SMQD estimators respectively, so conditions 1) and 2) are satisfied for both versions, and compactness of the parameter space Ω is assumed. Also, for both versions, the limit of the objective function vanishes only at $\boldsymbol\beta=\boldsymbol\beta_0$ in general if the number of components of $\boldsymbol z_n$ is at least the number of parameters of the model, i.e., $M\ge m$. For $\boldsymbol\beta\ne\boldsymbol\beta_0$ we have $\boldsymbol d_n(\boldsymbol\beta)\xrightarrow{p}\boldsymbol z(\boldsymbol\beta_0)-\boldsymbol z(\boldsymbol\beta)\ne\mathbf 0$, since survival functions evaluated at the points are the components of $\boldsymbol z_n$ and $\boldsymbol z(\boldsymbol\beta)$ and these functions are bounded.
This implies that there exist real numbers u and v with $0<u\le v<\infty$ such that $P\left(u\le\left\|\boldsymbol d_n(\boldsymbol\beta)\right\|\le v\right)\to 1$ as $n\to\infty$, so that condition 3) is satisfied as well.
Therefore, for both versions, whether deterministic or simulated, the minimum quadratic distance estimators, MQD and SMQD, are consistent using Theorem 1, i.e., the vector of MQD estimators and the vector of SMQD estimators converge in probability to $\boldsymbol\beta_0$. Theorem 3.1 of Pakes and Pollard [22] (pp. 1038-1039) is an elegant theorem; its proof is also concise, using the norm concept of functional analysis, and it allows many results to be unified. Now we turn our attention to the question of asymptotic normality for the quadratic distance estimators; a unified approach is possible using their Theorem 3.3, see Pakes and Pollard [22] (pp. 1040-1043), which we shall restate as Theorem 2 and Corollary 1, given subsequently after the following discussion of the ideas behind their Theorem, which allow asymptotic normality results to be obtained for estimators defined as extrema of a smooth or nonsmooth objective function.
For both versions we can express the objective function as $Q_n(\boldsymbol\beta)=G_n(\boldsymbol\beta)'\hat\Sigma_n^{-1}G_n(\boldsymbol\beta)$, where $G_n(\boldsymbol\beta)$ is as given by expression (14) for version D and by expression (15) for version S; since $G_n(\boldsymbol\beta)$ is not differentiable for version S, the traditional Taylor expansion argument cannot be used to establish asymptotic normality of the estimators obtained by minimizing $Q_n(\boldsymbol\beta)$.
For both versions, $E\big(G_n(\boldsymbol\beta)\big)=G(\boldsymbol\beta)$ with
$$G(\boldsymbol\beta)=\boldsymbol z(\boldsymbol\beta_0)-\boldsymbol z(\boldsymbol\beta).\quad (20)$$
Explicitly,
$$G(\boldsymbol\beta)=\big(\bar F(x_1,y_1;\boldsymbol\beta_0)-\bar F(x_1,y_1;\boldsymbol\beta),\dots,\bar F(x_M,y_M;\boldsymbol\beta_0)-\bar F(x_M,y_M;\boldsymbol\beta)\big)'.\quad (21)$$
The points $(x_k,y_k)$, $k=1,\dots,M$, are predetermined by the contingency table we are given, and we have no choice but to analyze the grouped data as they are presented.
Note that $G(\boldsymbol\beta)$ is non-random, and if we assume that $G(\boldsymbol\beta)$ is differentiable with derivative matrix $\Gamma=\partial G(\boldsymbol\beta_0)/\partial\boldsymbol\beta'$, then we can define the random function $L_n(\boldsymbol\beta)$ to approximate $G_n(\boldsymbol\beta)$ for both versions in a unified way, with
$$L_n(\boldsymbol\beta)=G_n(\boldsymbol\beta_0)+\Gamma\left(\boldsymbol\beta-\boldsymbol\beta_0\right).\quad (22)$$
The matrix $\Gamma$ can be displayed explicitly as
$$\Gamma=-\left(\frac{\partial\bar F(x_k,y_k;\boldsymbol\beta)}{\partial\beta_j}\right)_{k=1,\dots,M;\,j=1,\dots,m}\Bigg|_{\boldsymbol\beta=\boldsymbol\beta_0}.\quad (23)$$
Note that $L_n(\boldsymbol\beta)$ is differentiable for both versions. Since $L_n(\boldsymbol\beta)'\hat\Sigma_n^{-1}L_n(\boldsymbol\beta)$ is a quadratic function of $\boldsymbol\beta$, the vector $\tilde{\boldsymbol\beta}_n$ which minimizes it can be obtained explicitly, and since $\Gamma'\Sigma^{-1}\Gamma$ is assumed to be a positive definite matrix, we have
$$\tilde{\boldsymbol\beta}_n=\boldsymbol\beta_0-\left(\Gamma'\hat\Sigma_n^{-1}\Gamma\right)^{-1}\Gamma'\hat\Sigma_n^{-1}G_n(\boldsymbol\beta_0).$$
Let $\hat{\boldsymbol\beta}_n$ and $\tilde{\boldsymbol\beta}_n$ be the vectors which minimize $Q_n(\boldsymbol\beta)$ and its linearized counterpart respectively. If the approximation is of the right order, then $\hat{\boldsymbol\beta}_n$ and $\tilde{\boldsymbol\beta}_n$ are asymptotically equivalent. This set up allows a unified approach for establishing asymptotic normality for both versions: for version D, it suffices to let $G_n(\boldsymbol\beta)=\boldsymbol d_n(\boldsymbol\beta)$, and for version S, let $G_n(\boldsymbol\beta)=\boldsymbol d^s_n(\boldsymbol\beta)$.
Clearly the set up fits into the scope of their Theorem (3.3), which we shall rearrange into Theorem 2 and Corollary 1 before applying the results to version D and version S of the MQD methods and verifying that the regularity conditions can be satisfied. We shall state Theorem 2 and Corollary 1, which are essentially their Theorem (3.3), and the proofs have been given by Pakes and Pollard [22] . Note that condition 4) is slightly more stringent but simpler than condition iii) of their Theorem.
Also, for version S, the simulated samples are assumed to have size $U=\tau n$ with $\tau>0$ fixed, and the same seed is used across different values of $\boldsymbol\beta$ to draw the samples of size U. We make these assumptions as they are standard for simulated methods of inference, see section 9.6 on the method of simulated moments (MSM) given by Davidson and MacKinnon [23] (pp. 383-394). For the numerical optimization needed to find the minimum of the objective function, we rely on direct search simplex methods, and the R package already has prewritten functions to implement direct search methods.
Theorem 2.
Let $\hat{\boldsymbol\beta}_n$ be a vector of consistent estimators for $\boldsymbol\beta_0$, the unique vector which satisfies $G(\boldsymbol\beta_0)=\mathbf 0$.
Under the following conditions:
1) The parameter space Ω is compact.
2) $\left\|G_n(\hat{\boldsymbol\beta}_n)\right\|\le o_p\left(n^{-1/2}\right)+\inf_{\boldsymbol\beta\in\Omega}\left\|G_n(\boldsymbol\beta)\right\|$.
3) $G(\boldsymbol\beta)$ is differentiable at $\boldsymbol\beta_0$ with a derivative matrix $\Gamma$ of full rank.
4) $\sup_{\left\|\boldsymbol\beta-\boldsymbol\beta_0\right\|\le\delta_n}\sqrt n\left\|G_n(\boldsymbol\beta)-G(\boldsymbol\beta)-G_n(\boldsymbol\beta_0)\right\|=o_p(1)$ for every sequence $\{\delta_n\}$ of positive numbers which converges to zero.
5) $\sqrt n\,G_n(\boldsymbol\beta_0)=O_p(1)$.
6) $\boldsymbol\beta_0$ is an interior point of the parameter space Ω.
Then, we have the following representation, which will give the asymptotic distribution of $\sqrt n\left(\hat{\boldsymbol\beta}_n-\boldsymbol\beta_0\right)$ in Corollary 1, i.e.,
$$\sqrt n\left(\hat{\boldsymbol\beta}_n-\boldsymbol\beta_0\right)=-\left(\Gamma'\Sigma^{-1}\Gamma\right)^{-1}\Gamma'\Sigma^{-1}\sqrt n\,G_n(\boldsymbol\beta_0)+o_p(1),\quad (24)$$
or equivalently, using equality in distribution,
$$\sqrt n\left(\hat{\boldsymbol\beta}_n-\boldsymbol\beta_0\right)\stackrel{d}{=}-\left(\Gamma'\Sigma^{-1}\Gamma\right)^{-1}\Gamma'\Sigma^{-1}\sqrt n\,G_n(\boldsymbol\beta_0)+o_p(1),\quad (25)$$
or equivalently,
$$\sqrt n\left(\hat{\boldsymbol\beta}_n-\boldsymbol\beta_0\right)\stackrel{d}{=}\sqrt n\left(\tilde{\boldsymbol\beta}_n-\boldsymbol\beta_0\right)+o_p(1).\quad (26)$$
The proofs of these results follow from the arguments used to prove Theorem 3.3 given by Pakes and Pollard [22] (pp. 1040-1043). For expression (24) or expression (25) to hold, in general only condition 5) of Theorem 2 is needed and there is no need to assume that $\sqrt n\,G_n(\boldsymbol\beta_0)$ has an asymptotic distribution. From the results of Theorem 2, it is easy to see that we can obtain the main result of the following Corollary 1, which gives the asymptotic covariance matrix of the quadratic distance estimators for both versions.
Corollary 1.
Let $V$ denote the asymptotic covariance matrix of $\sqrt n\,G_n(\boldsymbol\beta_0)$; if $\sqrt n\,G_n(\boldsymbol\beta_0)\xrightarrow{d}N(\mathbf 0,V)$, then $\sqrt n\left(\hat{\boldsymbol\beta}_n-\boldsymbol\beta_0\right)\xrightarrow{d}N(\mathbf 0,\Delta)$ with
$$\Delta=\left(\Gamma'\Sigma^{-1}\Gamma\right)^{-1}\Gamma'\Sigma^{-1}V\Sigma^{-1}\Gamma\left(\Gamma'\Sigma^{-1}\Gamma\right)^{-1}.\quad (27)$$
The matrices $\Gamma$ and V depend on $\boldsymbol\beta_0$; we also adopt the notations $\Gamma=\Gamma(\boldsymbol\beta_0)$ and $V=V(\boldsymbol\beta_0)$.
We observe that condition 4) of Theorem 2, when applied to SMQD methods, in general involves technicalities. Condition 4) holds for version D, so we only need to verify it for version S. Note that verifying condition 4) amounts to verifying that the supremum over shrinking neighbourhoods of $\boldsymbol\beta_0$ of $\sqrt n\left\|G_n(\boldsymbol\beta)-G(\boldsymbol\beta)-G_n(\boldsymbol\beta_0)\right\|$ is $o_p(1)$, a regularity condition ensuring that the linear approximation is of the right order; it implies condition iii) given in their Theorem 3.3 and might be the most difficult condition to check. The rest of the conditions of Theorem 2 are satisfied in general.
Let
$$W_n(\boldsymbol\beta)=\sqrt n\left\|\boldsymbol d_n(\boldsymbol\beta)-G(\boldsymbol\beta)-\boldsymbol d_n(\boldsymbol\beta_0)\right\|\quad (28)$$
and, for version S, using the simulated counterpart,
$$W^s_n(\boldsymbol\beta)=\sqrt n\left\|\boldsymbol d^s_n(\boldsymbol\beta)-G(\boldsymbol\beta)-\boldsymbol d^s_n(\boldsymbol\beta_0)\right\|.\quad (29)$$
Consequently, condition 4) for version S can also be expressed as $\sup_{\left\|\boldsymbol\beta-\boldsymbol\beta_0\right\|\le\delta_n}W^s_n(\boldsymbol\beta)=o_p(1)$.
Since the elements of $\boldsymbol d^s_n(\boldsymbol\beta)$ are bounded in probability, it is not difficult to see that the sequence of functions $W^s_n(\boldsymbol\beta)$ is bounded in probability and continuous in probability, with $W^s_n(\boldsymbol\beta_0)=0$, using the assumption that the same seed is used across different values of $\boldsymbol\beta$ and assuming that $\bar F^s_U(x,y;\boldsymbol\beta)$ is differentiable with respect to $\boldsymbol\beta$. Therefore, the results given by Luong et al. [24] (p. 218) can be used to justify that the sequence of functions $W^s_n(\boldsymbol\beta)$ attains its maximum on the compact set $\left\{\boldsymbol\beta:\left\|\boldsymbol\beta-\boldsymbol\beta_0\right\|\le\delta_n\right\}$ in probability and hence has the property $\sup_{\left\|\boldsymbol\beta-\boldsymbol\beta_0\right\|\le\delta_n}W^s_n(\boldsymbol\beta)\xrightarrow{p}0$ as $n\to\infty$ and $\delta_n\to 0$.
We can see that for version D, as $n\to\infty$,
$$\sqrt n\,\boldsymbol d_n(\boldsymbol\beta_0)\xrightarrow{d}N\left(\mathbf 0,\Sigma\right).\quad (30)$$
For version S, note that $U=\tau n$; we can see that
$$\sqrt n\,\boldsymbol d^s_n(\boldsymbol\beta_0)\xrightarrow{d}N\left(\mathbf 0,\left(1+\frac{1}{\tau}\right)\Sigma\right),\quad (31)$$
as the simulated sample size is $U=\tau n$ and the simulated samples are independent of the original sample given by the data. Implicitly, we assume that the same seed is used across different values of $\boldsymbol\beta$ to obtain the simulated samples.
Using the results of Corollary 1, we have asymptotic normality for the MQD estimators for version D, which is given by
$$\sqrt n\left(\hat{\boldsymbol\beta}-\boldsymbol\beta_0\right)\xrightarrow{d}N\left(\mathbf 0,\left(\Gamma'\Sigma^{-1}\Gamma\right)^{-1}\right),\quad (32)$$
where $\Gamma$ is as given by expression (23) and can be estimated easily.
For version S, the SMQD estimators also follow an asymptotic normal distribution, with
$$\sqrt n\left(\hat{\boldsymbol\beta}^s-\boldsymbol\beta_0\right)\xrightarrow{d}N\left(\mathbf 0,\left(1+\frac{1}{\tau}\right)\left(\Gamma'\Sigma^{-1}\Gamma\right)^{-1}\right);\quad (33)$$
an estimate of the asymptotic covariance matrix can be obtained using the technique given by Pakes and Pollard [22] (p. 1043).
2.2. Model Testing
2.2.1. Simple Hypothesis
In this section, the quadratic distances will be used to construct goodness of fit test statistics for the simple hypothesis
H0: the data come from a specified distribution with survival function $\bar F(x,y;\boldsymbol\beta_0)$, where $\boldsymbol\beta_0$ is specified. The chi-square test statistics, their asymptotic distributions and their degrees of freedom are given below, with
$$n\,Q_n(\boldsymbol\beta_0)\xrightarrow{d}\chi^2_{M}\ \text{for version D, and}\quad (34)$$
$$\frac{n}{1+1/\tau}\,Q^s_n(\boldsymbol\beta_0)\xrightarrow{d}\chi^2_{M}\ \text{for version S.}\quad (35)$$
The version S is of interest since it allows testing goodness of fit for continuous distributions without closed form bivariate survival functions as we only need to be able to simulate from these distributions. We shall justify the asymptotic chi-square distributions given by expression (34) and expression (35) below.
Note that under H0, $\hat\Sigma_n^{-1}\xrightarrow{p}\Sigma^{-1}$, and for version D,
$$\sqrt n\,\boldsymbol d_n(\boldsymbol\beta_0)\xrightarrow{d}N\left(\mathbf 0,\Sigma\right).$$
For version S,
$$\sqrt n\,\boldsymbol d^s_n(\boldsymbol\beta_0)\xrightarrow{d}N\left(\mathbf 0,\left(1+\frac{1}{\tau}\right)\Sigma\right).$$
We then have the asymptotic chi-square distributions as given, using standard results for the distributions of quadratic forms of asymptotically normal vectors; see Luong and Thompson [19] (p. 247) for example.
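As an illustration of how expression (34) can be computed, the following R sketch evaluates the version D statistic at a given set of points when complete data are available; it is a minimal sketch reusing sigma_hat() from the earlier sketch, and surv_model is an assumed user-supplied function returning the specified model survival function $\bar F(x,y;\boldsymbol\beta_0)$.

mqd_gof <- function(xy, pts, surv_model) {
  ## xy: n x 2 data matrix; pts: M x 2 matrix of points (illustrative names)
  n <- nrow(xy)
  s <- function(x, y) mean(xy[, 1] > x & xy[, 2] > y)       # empirical survival function, cf. (1)
  d <- apply(pts, 1, function(p) s(p[1], p[2]) - surv_model(p[1], p[2]))
  W <- solve(sigma_hat(xy, pts))                            # estimated optimum weight, cf. (13)
  stat <- n * drop(t(d) %*% W %*% d)                        # n Q_n(beta_0), cf. (34)
  c(statistic = stat, p.value = pchisq(stat, df = nrow(pts), lower.tail = FALSE))
}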
2.2.2. Composite Hypothesis
The quadratic distances can also be used for the construction of test statistics for the composite hypothesis; the objective function is as defined by expression (16) for version D and by expression (18) for version S. The null hypothesis can be stated as
H0: the data come from the parametric model $\left\{\bar F(x,y;\boldsymbol\beta)\right\}$. The chi-square test statistics are given by
$$n\,Q_n\big(\hat{\boldsymbol\beta}\big)\xrightarrow{d}\chi^2_{M-m}\quad (36)$$
for version D, and for version S,
$$\frac{n}{1+1/\tau}\,Q^s_n\big(\hat{\boldsymbol\beta}^s\big)\xrightarrow{d}\chi^2_{M-m},\quad (37)$$
where $\hat{\boldsymbol\beta}$ and $\hat{\boldsymbol\beta}^s$ are the vectors of MQD and SMQD estimators which minimize the version D and version S objective functions respectively, and assuming $M>m$. To justify these asymptotic chi-square distributions, note that for version D,
$$n\,Q_n\big(\hat{\boldsymbol\beta}\big)=n\,\boldsymbol d_n\big(\hat{\boldsymbol\beta}\big)'\hat\Sigma_n^{-1}\boldsymbol d_n\big(\hat{\boldsymbol\beta}\big).$$
It suffices to consider the asymptotic distribution of $n\,\boldsymbol d_n\big(\hat{\boldsymbol\beta}\big)'\Sigma^{-1}\boldsymbol d_n\big(\hat{\boldsymbol\beta}\big)$, as we have the following equality in distribution, using the linear approximation $L_n(\boldsymbol\beta)$ given by expression (22) and expressions (24)-(26),
$$\sqrt n\,\boldsymbol d_n\big(\hat{\boldsymbol\beta}\big)\stackrel{d}{=}\sqrt n\left[\boldsymbol d_n(\boldsymbol\beta_0)+\Gamma\big(\hat{\boldsymbol\beta}-\boldsymbol\beta_0\big)\right]+o_p(1),$$
which can be reexpressed, using expressions (25)-(26), as
$$\sqrt n\,\boldsymbol d_n\big(\hat{\boldsymbol\beta}\big)\stackrel{d}{=}A\,\sqrt n\,\boldsymbol d_n(\boldsymbol\beta_0)+o_p(1)$$
with
$$A=I-\Gamma\left(\Gamma'\Sigma^{-1}\Gamma\right)^{-1}\Gamma'\Sigma^{-1}.$$
We have $\sqrt n\,\boldsymbol d_n(\boldsymbol\beta_0)\xrightarrow{d}N(\mathbf 0,\Sigma)$, and note that the matrix $B=\Sigma^{1/2}A'\Sigma^{-1}A\,\Sigma^{1/2}$ is symmetric and idempotent, and the trace of the matrix $B$ is $M-m$; the rank of the matrix $B$ is also equal to its trace. The argument used is very similar to the one used for Pearson's statistics, see Luong and Thompson [19] (pp. 248-249).
Similarly, for version S,
$$\sqrt n\,\boldsymbol d^s_n\big(\hat{\boldsymbol\beta}^s\big)\stackrel{d}{=}A\,\sqrt n\,\boldsymbol d^s_n(\boldsymbol\beta_0)+o_p(1)$$
and $\sqrt n\,\boldsymbol d^s_n(\boldsymbol\beta_0)\xrightarrow{d}N\left(\mathbf 0,\left(1+\frac{1}{\tau}\right)\Sigma\right)$. This justifies the asymptotic chi-square distributions as given by expression (36) and expression (37).
3. Estimation and Model Testing Using Complete Data
3.1. Preliminaries: Statistical Functional and Its Influence Function
In section (3.1) and section (3.2), we shall define a rule for selecting the points $(x_k,y_k)$, $k=1,\dots,M$, when complete data are available. Equivalently, we would like to define the cells used to group the data, and we shall see that random cells will be used, as the points constructed using quasi-Monte Carlo (QMC) numbers on the unit square multiplied by two chosen sample quantiles from the two marginal distributions will be used. For minimum chi-square methods, it appears to be difficult to have a rule to choose cells to group the data, see the discussions by Greenwood and Nikulin [16] (pp. 194-208). We need a few tools to develop such a rule. We shall define sample quantiles; statistics can then be viewed as functionals of the sample distribution, and their influence functions are also needed as they allow us to find their asymptotic variances.
We shall define the pth sample quantile of a distribution, as we shall need two sample quantiles from the marginal distributions together with QMC numbers to construct an approximation of an integral. Our quadratic distance based on selected points can be viewed as an approximation of a continuous version given by an integral.
From a bivariate distribution we have two marginal distributions $F$ and $G$. The univariate sample pth quantile of the distribution $F$, assumed to be continuous, is based on the sample distribution function $F_n$ and is defined to be $F_n^{-1}(p)=\inf\left\{x:F_n(x)\ge p\right\}$; its model counterpart is given by $F^{-1}(p)$. We also use the notations $a_n=F_n^{-1}(p)$ and $a=F^{-1}(p)$. We define similarly the qth sample quantile for the distribution $G$ as $G_n^{-1}(q)$ and its model counterpart $G^{-1}(q)$, with $b_n=G_n^{-1}(q)$ and $b=G^{-1}(q)$.
The sample quantile functions $F_n^{-1}(p)$ or $G_n^{-1}(q)$ can be viewed as statistical functionals of the form $T(H_n)$ with $H_n=F_n$ or $H_n=G_n$. The influence function of $T$ is a valuable tool to study the asymptotic properties of the statistical functional and will be introduced below. Let H be the true distribution and $H_n$ the usual empirical distribution which estimates H; also let $\delta_x$ be the degenerate distribution at x, i.e., $\delta_x(u)=1$ if $u\ge x$ and $\delta_x(u)=0$ otherwise. The influence function of T, viewed as a function of x and denoted $IC(x;T,H)$, is defined as a functional directional derivative at H in the direction of $\delta_x$, obtained by letting $H_\epsilon=(1-\epsilon)H+\epsilon\,\delta_x$, i.e.,
$$IC(x;T,H)=\lim_{\epsilon\to 0}\frac{T\big((1-\epsilon)H+\epsilon\,\delta_x\big)-T(H)}{\epsilon},$$
and it defines a linear functional.
Alternatively, it is easy to see that
$$IC(x;T,H)=\frac{\partial}{\partial\epsilon}T\big((1-\epsilon)H+\epsilon\,\delta_x\big)\Big|_{\epsilon=0},$$
and this gives a convenient way to compute the influence function. It can be shown that the influence function of the pth sample quantile is given by
$$IC(x;T,H)=\frac{p-I\big(x\le H^{-1}(p)\big)}{h\big(H^{-1}(p)\big)},$$
with h being the density function of the distribution H, which is assumed to be absolutely continuous, see Huber [25] (p. 56), Hogg et al. [6] (p. 593). A statistical functional with a bounded influence function is considered to be robust (B-robust), and consequently the pth sample quantile is robust. The sample quantiles are robust statistics.
Furthermore, as the first order term of $T(H_n)-T(H)$ is a linear functional, the asymptotic variance of $\sqrt n\left(T(H_n)-T(H)\right)$ is simply $E_H\left[IC^2(X;T,H)\right]$, the variance of the expression inside the sum below, since in general we have the following representation when $IC(x;T,H)$ is bounded as a function of x,
$$T(H_n)-T(H)=\frac{1}{n}\sum_{l=1}^{n}IC(X_l;T,H)+o_p\left(n^{-1/2}\right),$$
see Hogg et al. [6] (p. 593). Consequently, in general we have, for bounded influence functionals and by means of central limit theorems (CLT), the following convergence in distribution,
$$\sqrt n\left(T(H_n)-T(H)\right)\xrightarrow{d}N\left(0,E_H\left[IC^2(X;T,H)\right]\right).$$
The influence function representation of a functional which depends only on one function, such as the sample quantile, is the equivalent of a Taylor expansion of a univariate function, and the influence function representation of a functional which depends on many functions is the equivalent of a Taylor expansion of a multivariate function with domain in a Euclidean space and range on the real line. We will encounter an example of functionals which depend on three functions, namely the bivariate empirical survival function and the two marginal sample distribution functions, in section (3.2).
Subsequently, we shall introduce the Halton sequences with bases $b_1=2$ and $b_2=3$; the first M terms are denoted by $(u_l,v_l)$, $l=1,\dots,M$, and we also use $\left\{(u_l,v_l),\ l=1,\dots,M\right\}$ to denote the set of points. The sequence of points, which belong to the unit square $[0,1]^2$, can be obtained as follows.
For the base $b_1=2$, we divide the interval $(0,1)$ into halves, then into fourths, and so on, to obtain the sequence
$$\tfrac12,\ \tfrac14,\ \tfrac34,\ \tfrac18,\ \tfrac58,\ \tfrac38,\ \tfrac78,\dots$$
For the base $b_2=3$, we divide the interval $(0,1)$ into thirds, then into ninths, and so on, to obtain the sequence
$$\tfrac13,\ \tfrac23,\ \tfrac19,\ \tfrac49,\ \tfrac79,\ \tfrac29,\ \tfrac59,\ \tfrac89,\dots$$
Pairing them up, we obtain the two dimensional Halton sequence
$$\left(\tfrac12,\tfrac13\right),\ \left(\tfrac14,\tfrac23\right),\ \left(\tfrac34,\tfrac19\right),\ \left(\tfrac18,\tfrac49\right),\dots$$
Matlab and R have functions to generate the sequences; see Glasserman [26] (pp. 293-297) for the related pseudo codes and also the seminal paper by Halton [27] . For the general principles of QMC methods, see Glasserman [26] (pp. 281-292). The Halton sequences, together with two chosen sample quantiles from the two marginal distributions, will allow us to choose points to match the bivariate empirical survival function with its model counterpart, as we shall obtain an artificial sample with values on the nonnegative quadrant with the use of two empirical quantiles from the marginal distributions. These points can be viewed as sample points from an artificial sample, and since they depend on sample quantiles, which are robust statistics, the artificial sample can be viewed as free of outliers and the methods which make use of them will be robust.
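The following R sketch generates the first M terms of the two dimensional Halton sequence directly via the radical inverse; it is a minimal illustration (a library routine such as randtoolbox::halton could be used instead), and the function names are illustrative.

radical_inverse <- function(k, base) {
  ## digit reversal of the integer k in the given base, mapped into (0, 1)
  r <- 0; f <- 1 / base
  while (k > 0) {
    r <- r + f * (k %% base)
    k <- k %/% base
    f <- f / base
  }
  r
}
halton2d <- function(M) {
  cbind(u = sapply(1:M, radical_inverse, base = 2),
        v = sapply(1:M, radical_inverse, base = 3))
}
## halton2d(4) gives the pairs (1/2, 1/3), (1/4, 2/3), (3/4, 1/9), (1/8, 4/9)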
Note that the Halton sequences are deterministic, but if we are used to integration by simulation we might want to think of the M terms as representing a quasi random sample of size M from a bivariate uniform distribution, which can be useful for integrating a function of the form
$$\int_0^1\!\!\int_0^1 g(u,v)\,du\,dv.$$
Using the M terms of the Halton sequence, it can be approximated as
$$\frac{1}{M}\sum_{l=1}^{M}g(u_l,v_l),$$
which is similar to the sample mean from a random sample of size M.
Our observations are given by $(X_l,Y_l)'$, $l=1,\dots,n$, iid with common bivariate distribution function $F(x,y;\boldsymbol\beta_0)$ and survival function $\bar F(x,y;\boldsymbol\beta_0)$. Let the two marginal distributions of $(X,Y)'$ be denoted by $F$ and $G$, and also let $F_n$ and $G_n$ be the corresponding marginal sample distribution functions.
Define the bivariate empirical distribution function, which is similar to the bivariate empirical survival function $S_n(x,y)$, as
$$F_n(x,y)=\frac{1}{n}\sum_{l=1}^{n}I\left(X_l\le x,\,Y_l\le y\right).$$
We might want to think that it admits a bivariate empirical density estimate, so that the following Cramér-von Mises type distance expressions are equivalent:
$$\int_0^\infty\!\!\int_0^\infty\left[S_n(x,y)-\bar F(x,y;\boldsymbol\beta)\right]^2dF_n(x,y)\ \text{is equivalent to}\ \frac{1}{n}\sum_{l=1}^{n}\left[S_n(X_l,Y_l)-\bar F(X_l,Y_l;\boldsymbol\beta)\right]^2.$$
For univariate Cramér-von Mises methods, see Luong and Blier-Wong [28] .
In the next section we shall give details on how to form a type of quasi sample or artificial sample of size M using the Halton sequence of M terms and sample quantiles of the marginal distributions F and G, which allows us to define the sequence of points $(x_l,y_l)$, $l=1,\dots,M$, so that the above integrals can be approximated by the following finite sum of the type of an average of M terms,
$$\frac{1}{M}\sum_{l=1}^{M}\left[S_n(x_l,y_l)-\bar F(x_l,y_l;\boldsymbol\beta)\right]^2.\quad (38)$$
We can see that expression (38) is an unweighted quadratic distance using the identity matrix $I$ as weight matrix instead of $\hat\Sigma_n^{-1}$. The unweighted quadratic distance still produces consistent estimators, but possibly less efficient estimators than those using the quadratic distance with $\hat\Sigma_n^{-1}$ in large samples; in finite samples, the estimators obtained using the identity weight matrix might still have reasonable performance and yet be simple to obtain.
The set of points $\left\{(x_l,y_l),\ l=1,\dots,M\right\}$ is the set of points proposed to form optimum quadratic distances when complete data are available. We shall see that the set of points depends on two quantiles chosen from the two marginal distributions; these points are random, and consequently we might want to think that we end up working with random overlapping cells.
As for the minimum chi-square methods, if random cells stabilize into fixed cells, minimum chi-square methods in general have the same efficiency as when based on the stabilized fixed cells, see Pollard [18] (pp. 324-326) and Moore and Spruill [17] for the notion of random cells; quadratic distance methods share the same property, that is to say, the chosen points are random, but it will be shown that they do stabilize, and therefore these random points can be viewed as fixed points since they do not affect the efficiencies of the estimators or the asymptotic distributions of the goodness-of-fit test statistics which make use of them. These properties will be discussed and studied in more detail in the next section, along with the introduction of an artificial sample of size M given by points on the nonnegative quadrant, which gives us a guideline on how to choose points when complete data are available.
3.2. Halton Sequences and an Artificial Sample
From the M terms of the Halton sequences, we have the points $(u_l,v_l)$, $l=1,\dots,M$, on the unit square.
Let $a_n=F_n^{-1}(p)$ and $b_n=G_n^{-1}(q)$ be the two chosen sample quantiles; we can form the artificial sample with elements given by
$$(x_l,y_l)=\left(a_n u_l,\ b_n v_l\right),\ l=1,\dots,M.$$
We can view $\left\{(x_l,y_l),\ l=1,\dots,M\right\}$ as a form of quasi random sample on the nonnegative quadrant, and these are the points proposed to be used when complete data are available. In general, we might want to choose M smaller than n if n is large, and if n is small we try to ensure $M\ge m$, the number of parameters. Consequently, as $n\to\infty$, M remains bounded.
Since $a_n\xrightarrow{p}a=F^{-1}(p)$ and $b_n\xrightarrow{p}b=G^{-1}(q)$, we have $(x_l,y_l)\xrightarrow{p}(x_l^0,y_l^0)$ with $x_l^0=a\,u_l$ and $y_l^0=b\,v_l$ for $l=1,\dots,M$, and the points $(x_l^0,y_l^0)$ are non-random or fixed.
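A minimal sketch of this construction in R, using halton2d() from the earlier sketch; the single quantile level p used for both marginals and the function name artificial_points are illustrative choices (the study in section 4 uses the 0.99 quantiles).

artificial_points <- function(xy, M, p = 0.99) {
  h <- halton2d(M)                                  # M Halton points on the unit square
  a <- quantile(xy[, 1], p)                         # sample quantile of the X marginal
  b <- quantile(xy[, 2], p)                         # sample quantile of the Y marginal
  cbind(x = a * h[, "u"], y = b * h[, "v"])         # artificial sample on the nonnegative quadrant
}
## pts <- artificial_points(xy, M = 25); these points can then be used in the quadratic distances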
It turns out that the quadratic distances for both versions constructed with the random points $(x_l,y_l)$ are asymptotically equivalent to the quadratic distances using the fixed points $(x_l^0,y_l^0)$, so that the asymptotic theory developed with the points considered to be fixed continues to be valid; we shall show that this is indeed the case. Similar conclusions have been established for the minimum chi-square methods with the use of random cells, provided that these cells stabilize to fixed cells, see Theorem 2 given by Pollard [18] (pp. 324-326). We shall define a few notations to make the arguments easier to follow.
Define $\boldsymbol z_n=\big(S_n(x_1,y_1),\dots,S_n(x_M,y_M)\big)'$ using the random points, and similarly let $\boldsymbol z_n^0=\big(S_n(x_1^0,y_1^0),\dots,S_n(x_M^0,y_M^0)\big)'$ using the fixed points.
We work with the quadratic distances defined using the random points, which leads us to consider quadratic forms of the type defined by expression (16) for version D and expression (18) for version S. Now, to emphasize that $\boldsymbol z(\boldsymbol\beta)$ and $\hat\Sigma_n$ depend on the points used, we also use respectively the notations $\boldsymbol z(\boldsymbol\beta;x,y)$ and $\hat\Sigma_n(x,y)$ for the random points, and let $\boldsymbol z^0(\boldsymbol\beta)=\boldsymbol z(\boldsymbol\beta;x^0,y^0)$ and $\hat\Sigma^0_n=\hat\Sigma_n(x^0,y^0)$ for the fixed points.
It suffices to verify that results of Theorem 1, Theorem 2 and its corollary in section (2) continue to hold.
Now observe that for both versions D and S we have
$$\hat\Sigma_n(x,y)-\hat\Sigma_n(x^0,y^0)\xrightarrow{p}\mathbf 0\quad (39)$$
and
$$\boldsymbol z(\boldsymbol\beta;x,y)-\boldsymbol z(\boldsymbol\beta;x^0,y^0)\xrightarrow{p}\mathbf 0,\quad (40)$$
so that the limiting weight matrix remains the same for both versions, since $\bar F(x,y;\boldsymbol\beta)$ is continuous with respect to $(x,y)$ and $(x_l,y_l)\xrightarrow{p}(x_l^0,y_l^0)$, $l=1,\dots,M$. Clearly, the same holds for the model vector. It remains to establish the asymptotic equivalence of $\boldsymbol z_n$ and $\boldsymbol z_n^0$, and note that $\boldsymbol z_n$ is random through the points $(x_l,y_l)$ instead of the points being fixed as in section (2), as we are using an adaptive basis here. Using the results on influence function representations for functionals as discussed, it suffices to show that the vector $\boldsymbol z_n$ has the same influence function representation as the vector $\boldsymbol z_n^0$ to conclude that all the asymptotic results remain valid even though the points $(x_l,y_l)$ are random.
We shall derive the influence functions for the elements of the vector of functionals $\boldsymbol z_n$ and show that they are the same as those of the corresponding elements of the vector of functionals $\boldsymbol z_n^0$.
Let $\bar\delta_{(x,y)}$ be the degenerate bivariate survival function at the point $(x,y)$, i.e., $\bar\delta_{(x,y)}(u,v)=1$ if $u<x$ and $v<y$, and $\bar\delta_{(x,y)}(u,v)=0$ otherwise.
Let the degenerate distribution function at x be defined as $\delta_x(u)=1$ if $u\ge x$ and $\delta_x(u)=0$ if $u<x$. Similarly, let the degenerate distribution function at y be defined as $\delta_y(v)=1$ if $v\ge y$ and $\delta_y(v)=0$ if $v<y$. Now we can define the following contaminated bivariate survival and marginal distribution functions:
$$\bar F_\epsilon(u,v)=(1-\epsilon)\,\bar F(u,v;\boldsymbol\beta_0)+\epsilon\,\bar\delta_{(x,y)}(u,v),$$
which is a contaminated bivariate survival function, and similarly
$$F_\epsilon(u)=(1-\epsilon)F(u)+\epsilon\,\delta_x(u),\qquad G_\epsilon(v)=(1-\epsilon)G(v)+\epsilon\,\delta_y(v).$$
Now we consider the jth element of $\boldsymbol z_n$, namely the functional $T_j$ with sample value $T_j(S_n,F_n,G_n)=S_n\big(u_j F_n^{-1}(p),\,v_j G_n^{-1}(q)\big)$.
Clearly, $T_j$ depends on the three functions $S_n$, $F_n$ and $G_n$, but we can use the influence function representation as given by Reid [29] , which allows an asymptotic representation with three influence functions: one taken in the direction of the contaminated bivariate survival function, which is bounded with respect to $(x,y)$, and two taken in the directions of the contaminated marginal distribution functions.
It is interesting to note that the two contributions coming from the contaminated marginal distribution functions vanish, so they do not contribute to the influence function representation of $T_j$.
If we consider the jth term of $\boldsymbol z_n^0$, given by the functional $S_n(x_j^0,y_j^0)$, which does not depend on $F_n$ and $G_n$, we find that it has the same influence function representation.
Therefore, all the asymptotic results of section (2) remain valid, and all these influence functions are bounded, so that the inference methods making use of these functionals are robust in general. Furthermore, we can analyze the inference procedures based on quadratic distances as if we had the non-random points $(x_l^0,y_l^0)$, and if needed they can be replaced by $(x_l,y_l)$ without affecting the asymptotic results already established in section (2).
4. An Illustration and a Simulation Study
For illustration, we consider the bivariate gamma model discussed in section (1.3) and introduced by Mathai and Moschopoulos [11] (pp. 137-139), which is a model with five parameters represented by the vector $\boldsymbol\beta$. Mathai and Moschopoulos [11] (pp. 138-140) also give the model moments obtained from the bivariate Laplace transform of the model; replacing these moments with the corresponding empirical moments and solving the resulting system of equations gives the method of moments (MM) estimators.
The moments being considered are the first three cumulants of the marginal distribution of X, the first three cumulants of the marginal distribution of Y, and the covariance between X and Y.
Let the first three sample cumulants of the marginal distribution of X, the first three sample cumulants of the marginal distribution of Y and the sample covariance between X and Y be the corresponding empirical quantities; the MM estimators for $\boldsymbol\beta$, as given by Mathai and Moschopoulos [11] (p. 151), are obtained by equating these sample quantities to their model counterparts and solving for the five parameters.
As the MM estimators depend on statistics involving polynomials of degree 3, they are not robust and they are very sensitive to outliers.
As the bivariate density function of the model is complicated, we consider version S of the quadratic distance with M = 25, and we compare the performance of the MM estimators versus the SMQD estimators using an overall asymptotic relative efficiency (ARE) criterion based on the ratio of the summed mean square errors of the components of the two vectors of estimators.
The mean square errors (MSE) and the ARE are estimated using simulated samples. For references on simulation, we find Ross [30] and Johnson [31] useful. The mean square error of an estimator $\hat\beta_i$ for $\beta_{0i}$ is defined as
$$\mathrm{MSE}\big(\hat\beta_i\big)=E\big(\hat\beta_i-\beta_{0i}\big)^2.$$
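A minimal R sketch of how such mean square errors are estimated by simulation; the names generate and fit_fun, and the ratio form of the overall criterion shown in the comment, are assumptions made for illustration and not the authors' exact definitions.

mse_by_simulation <- function(N, n, beta0, generate, fit_fun) {
  ## N replications of samples of size n; fit_fun returns a parameter vector
  est <- replicate(N, fit_fun(generate(n, beta0)))   # one column per replication
  rowMeans((est - beta0)^2)                          # estimated MSE of each component
}
## one plausible overall criterion compares the summed MSEs of the two methods:
## ARE <- sum(mse_by_simulation(50, 500, beta0, generate, fit_smqd)) /
##        sum(mse_by_simulation(50, 500, beta0, generate, fit_mm))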
We fix M = 25; the two sample quantiles are the 0.99 quantiles, and we have no problem obtaining the inverse matrix used as the optimum matrix estimated from the data. If we fix M = 30, occasionally the matrix $\hat\Sigma_n$ is nearly singular and the R package gives us a warning. It takes around one to two minutes to complete one run on a laptop computer; as we do not have access to large computers, we fix N = 50 replications, each with a sample of size n = 500. We observe that, in general, each individual ARE is smaller for the SMQD estimators than for the MM estimators, and overall the SMQD estimators are more efficient and robust than the MM estimators, confirming the asymptotic theory; also note that the MM estimators require complete data. The unweighted simulated quadratic distance estimators using the identity weight matrix perform almost as well as the estimators obtained using an optimum weight matrix. The ranges of the five parameters are fixed in advance and are often encountered in actuarial science; we observe that, in general, when the samples are not contaminated, the SMQD estimators are at least five times better than their counterparts. The results are summarized in the upper part of Table 1.
For the robustness study we contaminate the distributions used above with 5% of observations coming from a bivariate gamma distribution with parameters outside the range being considered, used to generate outliers. We observe that the SMQD estimators are at least 1000 times more efficient than the MM estimators; we display one row to illustrate the order of efficiency gained by using SMQD methods, and it is given at the bottom of Table 1. The limited study seems to point to better efficiency and robustness for the SMQD estimators than for the MM estimators. The MM estimators are very vulnerable to outliers, as expected. Despite the study being limited, it tends to confirm the theoretical asymptotic results; more numerical and simulation work needs to be done to further study the performance of the proposed estimators, especially in finite samples.
Table 1. Overall asymptotic relative efficiency comparisons between SMQD estimators and MM estimators. Upper part: noncontaminated samples of size n = 500; tabulated values are estimates of the overall ARE (SMQD vs MM) based on simulated samples. Lower part: contaminated samples of size n = 500 with 5% contamination; tabulated values are estimates of the overall ARE (SMQD vs MM) based on simulated contaminated samples drawn from a contaminated model of the form p·bivariate-Gamma + q·bivariate-Gamma, p = 0.95, q = 0.05.
5. Conclusion
Minimum quadratic distance methods offer a unified and numerically efficient approach for estimation and model testing using grouped data, and they unify minimum chi-square methods with fixed or random cells based on the notion of projected score functions on finite bases and adaptive bases. The simulated version is especially suitable for handling bivariate distributions where numerical complications might arise. The rule on how to select points on the nonnegative quadrant for minimum quadratic distance methods is also more clearly defined, whereas it is not clear how to form random cells for minimum chi-square methods. The methods appear to be relatively simple to implement and yet more efficient and more robust than methods of moments in general, especially when the model has more than three parameters.
Acknowledgements
The helpful and constructive comments of a referee, which led to an improvement of the presentation of the paper, and the support from the editorial staff of Open Journal of Statistics in processing the paper are all gratefully acknowledged.