Scientific Research

An Academic Publisher

Simulated Minimum Quadratic Distance Methods Using Grouped Data for Some Bivariate Continuous Models


1. Introduction

In actuarial science or biostatistics, we often encounter bivariate data which are already grouped into cells forming a contingency table, and we would like to make inferences for a continuous bivariate model used to model the complete data; see Partrat [1] (p. 225) and Gibbons and Chakraborti [2] (pp. 511-512) for examples.

If the bivariate distributions have closed form expressions, there is in general no difficulty in fitting them using, for example, maximum likelihood or minimum chi-square methods based on grouped data. Many useful distributions, however, are only computable numerically, as they are expressible only via an integral representation, and when numerical quadrature methods fail it appears natural to develop simulated methods of inference for these distributions. We would like methods which offer a unified approach to estimation and model testing, and which can also handle the situation where the lack of closed form expressions for the model survival functions creates numerical difficulties. We shall see subsequently that new distributions created using the bivariate survival power mixture (BSPM) operator introduced by Marshall and Olkin [3] (pp. 834-836), or by trivariate reduction techniques, often have no closed form expressions for their survival functions, yet it is easy to draw samples from them. Since we focus on nonnegative bivariate distributions in actuarial science, it is natural to use survival functions instead of distribution functions alone. The BSPM operator will be introduced first, and a few examples will illustrate the numerical difficulties we might encounter when fitting these distributions.

1.1. Bivariate Survival Power Mixture Operator

Marshall and Olkin [3] in their seminal paper introduced the following operator to create a new bivariate survival function $S(x,y)$ from two univariate survival functions $\bar{F}_1(x), \bar{F}_2(y)$ and a mixing distribution $G(\theta)$ for a nonnegative mixing random variable $\theta \ge 0$. We call their operator the bivariate survival power mixture operator, with acronym BSPM, and we shall see how it creates new bivariate survival functions. The new survival function can be expressed as the integral

$S(x,y)=\int_0^\infty \left(\bar{F}_1(x)\right)^{\theta}\left(\bar{F}_2(y)\right)^{\theta}\,\mathrm{d}G(\theta)$ .

Since there is an integral representation, the new survival function might still be computable numerically, depending on the expressions for $\bar{F}_1(x)$, $\bar{F}_2(y)$ and $G(\theta)$. An algorithm to simulate a sample from $S(x,y)$ has also been given by Marshall and Olkin [3] (p. 840).

Later, in Section 1.3, we shall examine another way to create new survival functions. Unlike new distributions created using the BSPM operator, new distributions created by means of trivariate reduction techniques often do not even have an integral representation, even though the functions used are simple, for example linear functions. We shall discuss trivariate reduction techniques further in Section 1.3 and first consider a few examples of new distributions created using the BSPM operator.

1.2. Some Examples of New Bivariate Distributions Created

Example 1

We let $\bar{F}_1(x;\alpha_1,\lambda_1)=\mathrm{e}^{-\lambda_1 x^{\alpha_1}}$, $x>0$, $\lambda_1,\alpha_1>0$, which is the survival function of a Weibull distribution, and similarly let $\bar{F}_2(y;\alpha_2,\lambda_2)=\mathrm{e}^{-\lambda_2 y^{\alpha_2}}$, $y>0$, $\lambda_2,\alpha_2>0$. For the mixing random variable θ, let θ follow a Pareto type II distribution, also called the Lomax distribution, with density function given by

$f(\theta;\tau,\delta)=\frac{\delta\tau^{\delta}}{(\theta+\tau)^{\delta+1}}$ with domain $\theta>0$, where the parameters τ and δ are positive. Note that θ has no closed form Laplace transform (LT).

The new bivariate distribution created using the BSPM operator is

$S_\beta(x,y)=\int_0^\infty \mathrm{e}^{-\theta\lambda_1 x^{\alpha_1}}\mathrm{e}^{-\theta\lambda_2 y^{\alpha_2}}\frac{\delta\tau^{\delta}}{(\theta+\tau)^{\delta+1}}\,\mathrm{d}\theta,\quad \beta=(\alpha_1,\alpha_2,\lambda_1,\lambda_2,\delta,\tau)'$ .
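Sampling from this mixture is straightforward even though $S_\beta(x,y)$ has no closed form: draw θ from the Lomax distribution by inverse transform, then draw X and Y from their conditional Weibull distributions whose survival functions are the powered $\bar{F}_1^{\theta}$ and $\bar{F}_2^{\theta}$. A minimal Python sketch (the function names are ours, not from the paper):

```python
import math
import random

def sample_lomax(delta, tau, rng):
    """Inverse-transform draw from the Lomax(delta, tau) mixing density:
    solving (tau / (theta + tau))**delta = u for theta."""
    u = rng.random()
    return tau * (u ** (-1.0 / delta) - 1.0)

def sample_bspm_weibull_lomax(beta, rng):
    """One (X, Y) pair from Example 1: given theta, X and Y are independent
    with survival functions exp(-theta * lam * x**alpha)."""
    a1, a2, l1, l2, delta, tau = beta
    theta = sample_lomax(delta, tau, rng)
    x = (-math.log(rng.random()) / (theta * l1)) ** (1.0 / a1)
    y = (-math.log(rng.random()) / (theta * l2)) ** (1.0 / a2)
    return x, y
```

This is the structure of the Marshall-Olkin simulation algorithm cited above: the dependence between X and Y comes entirely from the shared θ.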

For most of the univariate distributions used in this paper, see Appendix A of Klugman et al. [4] (pp. 459-482).

Observe that if, instead of the Lomax distribution discussed earlier, we specify a Gamma distribution for θ with density function given by

$f(\theta;\delta)=\frac{1}{\Gamma(\delta)}\theta^{\delta-1}\mathrm{e}^{-\theta},\quad \theta>0$

and Laplace transform given by $\phi(s)=(1+s)^{-\delta}$, then the newly created bivariate survival function can be expressed as

$S_\beta(x,y)=\int_0^\infty \mathrm{e}^{-\theta\lambda_1 x^{\alpha_1}}\mathrm{e}^{-\theta\lambda_2 y^{\alpha_2}}\frac{1}{\Gamma(\delta)}\theta^{\delta-1}\mathrm{e}^{-\theta}\,\mathrm{d}\theta$

and since $\phi(s)$ has a closed form expression, $S_\beta(x,y)$ also has a closed form expression, given by

$S_\beta(x,y)=\phi\left(H_1(x;\alpha_1,\lambda_1)+H_2(y;\alpha_2,\lambda_2)\right)$

where $H_1(x;\alpha_1,\lambda_1)$ and $H_2(y;\alpha_2,\lambda_2)$ are respectively the cumulative hazard functions of $\bar{F}_1(x;\alpha_1,\lambda_1)$ and $\bar{F}_2(y;\alpha_2,\lambda_2)$, with

$H_1(x;\alpha_1,\lambda_1)=-\ln\bar{F}_1(x;\alpha_1,\lambda_1)=\lambda_1 x^{\alpha_1}$ ,

$H_2(y;\alpha_2,\lambda_2)=-\ln\bar{F}_2(y;\alpha_2,\lambda_2)=\lambda_2 y^{\alpha_2}$ .
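As a sanity check on the closed form, $S_\beta(x,y)=\phi(H_1+H_2)=(1+\lambda_1 x^{\alpha_1}+\lambda_2 y^{\alpha_2})^{-\delta}$ can be compared against a crude quadrature of the mixture integral; a sketch under the assumption $\delta \ge 1$ (so the integrand stays bounded near 0):

```python
import math

def survival_closed_form(x, y, a1, a2, l1, l2, delta):
    """Closed-form S_beta(x, y) = phi(H1 + H2) for gamma mixing."""
    s = l1 * x ** a1 + l2 * y ** a2
    return (1.0 + s) ** (-delta)

def survival_numeric(x, y, a1, a2, l1, l2, delta, n=100000, upper=40.0):
    """Midpoint-rule approximation of the mixture integral over theta.
    Crude but adequate here; assumes delta >= 1 and that the integrand
    is negligible past `upper`."""
    s = l1 * x ** a1 + l2 * y ** a2
    h = upper / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += math.exp(-t * s) * t ** (delta - 1.0) * math.exp(-t)
    return total * h / math.gamma(delta)
```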

By the usual conditioning argument (conditioning on θ), we can often obtain the first two moments of the vector $Z=(X,Y)'$, and even higher positive integer moments, without having a closed form for $S_\beta(x,y)$; see the conditioning argument for the univariate case given by Klugman et al. [4] (pp. 62-65). If complete data are available, then the parameters of some bivariate distributions created using the BSPM operator can be estimated using the method of moments (MM).

In this paper we emphasize grouped data. In general, with grouped data MM estimators cannot be obtained; furthermore, with four or five parameters in the bivariate model, high order moments must be used, and as a result the MM estimators are not robust in general. We consider the situation where the data have been grouped into a contingency table, so that we must analyse data in this form, or where the complete data are available but must be grouped to perform chi-square tests for model testing. If complete data are available, then we have choices in how to group the data; in this situation we hope to propose a grouping rule such that inference methods based on it will have high efficiencies; see the discussion in Section 5 of Klugman and Parsa [5] (pp. 146-147) on the difficulties of grouping data to perform goodness-of-fit tests. We use the notion of complete data to describe the situation where we have bivariate observations

$Z_i=(X_i,Y_i)'$, $i=1,\cdots,n$, which are independent and identically distributed (iid) from a bivariate distribution specified by a bivariate survival function $S_\beta(x,y)$; this includes the situation where the original observations have been left truncated by $d_1$ and $d_2$, where the values $d_1$ and $d_2$ are known; $d_1$ and $d_2$ are, for example, the amounts of deductibles in actuarial science. We can view

$S_\beta(x,y)=\frac{S^o_\beta(x,y)}{S^o_\beta(d_1,d_2)}$, with $S^o_\beta(x,y)$ specified as well, and we only need $S_\beta(x,y)$ for fitting; see Klugman and Parsa [5] (p. 142) for these models in actuarial science. Furthermore, in our set-up we emphasize the survival function $S_\beta(x,y)$, but clearly the bivariate distribution function $K_\beta(x,y)$ can be obtained from $S_\beta(x,y)$ using the relation

${S}_{\beta}\left(x,y\right)=1-{F}_{\beta}\left(x\right)-{G}_{\beta}\left(y\right)+{K}_{\beta}\left(x,y\right)$ ,

where $F_\beta(x)$ and $G_\beta(y)$ are the two marginal distributions of $K_\beta(x,y)$. For nonnegative parametric families commonly specified via bivariate distribution functions, it is not difficult to convert them to bivariate survival functions, and consequently MQD methods are still applicable; only minor modifications are needed.

Example 2

In this example, we let $\bar{F}_1$ and $\bar{F}_2$ be Burr survival functions, see Hogg et al. [6] (p. 201) for the Burr distribution, with

$\bar{F}_1(x;\delta_1,\tau_1,\gamma_1)=(1+\delta_1 x^{\tau_1})^{-\gamma_1},\quad x>0$

where the parameters $\delta_1,\tau_1,\gamma_1$ are positive, and similarly let

$\bar{F}_2(y;\delta_2,\tau_2,\gamma_2)=(1+\delta_2 y^{\tau_2})^{-\gamma_2}$ .

For the distribution of θ, we specify a Weibull distribution with density function given by

$f\left(\theta ;\lambda ,\alpha \right)=\alpha \lambda {\theta}^{\alpha -1}{\text{e}}^{-\lambda {\theta}^{\alpha}},\text{\hspace{0.17em}}\theta >0.$

The new distribution created using the BSPM operator will have bivariate survival function given by

${S}_{\beta}\left(x,y\right)={\displaystyle {\int}_{0}^{\infty}{\left(1+{\delta}_{1}{x}^{{\tau}_{1}}\right)}^{-\theta {\gamma}_{1}}{\left(1+{\delta}_{2}{y}^{{\tau}_{2}}\right)}^{-\theta {\gamma}_{2}}\cdot \alpha \lambda {\theta}^{\alpha -1}{\text{e}}^{-\lambda {\theta}^{\alpha}}\text{d}\theta}$ .
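This survival function is the expectation of $(1+\delta_1 x^{\tau_1})^{-\theta\gamma_1}(1+\delta_2 y^{\tau_2})^{-\theta\gamma_2}$ under the Weibull mixing distribution, so it can be approximated by plain Monte Carlo even without a closed form; a sketch with our own function names:

```python
import math
import random

def sample_weibull(alpha, lam, rng):
    """Inverse-transform draw from the Weibull mixing density."""
    return (-math.log(rng.random()) / lam) ** (1.0 / alpha)

def survival_mc(x, y, beta, rng, reps=20000):
    """Monte Carlo estimate of S_beta(x, y) for Example 2: average the
    integrand of the mixture over draws of theta."""
    d1, t1, g1, d2, t2, g2, alpha, lam = beta
    acc = 0.0
    for _ in range(reps):
        th = sample_weibull(alpha, lam, rng)
        acc += (1.0 + d1 * x ** t1) ** (-th * g1) * (1.0 + d2 * y ** t2) ** (-th * g2)
    return acc / reps
```

The same draws of θ, followed by conditional draws of the Burr components, also give a sampler from $S_\beta(x,y)$ for simulated inference.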

The bivariate survival function has no closed form expression. For bivariate distributions whose survival functions have no closed form and are representable only as an integral over an unbounded domain, whether $S_\beta(x,y)$ can be evaluated numerically depends on the integrand and on the numerical quadrature method used. In the same vein, we can mention the class of bivariate contingency-type distributions studied by Mardia [7], Mardia [8] and Plackett [9], as these distributions have numerically tractable bivariate distribution functions but no closed form expressions. In this paper we emphasize statistical aspects and do not go into details on the question of dependence between the two components of the new bivariate survival function. Marshall and Olkin [3], Marshall and Olkin [10] have discussed some of the issues of dependence and infinite divisibility for distributions created using mixture procedures.

For $S_\beta(x,y)$ which is not numerically tractable, we propose simulated minimum quadratic distance (SMQD) methods; provided that we can draw simulated samples from $S_\beta(x,y)$, estimation of $\beta$ is still possible, and minimum quadratic distance methods offer a unified approach for $S_\beta(x,y)$ with or without a closed form expression, and for grouped data with or without a choice of grouping. Version D of the MQD methods is suitable for numerically tractable $S_\beta(x,y)$, and a corresponding simulated version, version S, is suitable when $S_\beta(x,y)$ is not numerically tractable. For version S, $S_\beta(x,y)$ is replaced by a sample bivariate survival function $S^s_\beta(x,y)$ based on a simulated sample of size $U=\tau n$, $\tau\ge 10$, drawn from $S_\beta(x,y)$, with $S^s_\beta(x,y)$ converging in probability to $S_\beta(x,y)$ for each fixed point $z=(x,y)'$,

i.e., ${S}_{\beta}^{s}\left(x,y\right)\stackrel{p}{\to}{S}_{\beta}\left(x,y\right)$ ,

where $S^s_\beta(x,y)$ is defined similarly to $S_n(x,y)$; see expression (1).

Data in the form of contingency tables are often encountered in actuarial science and biostatistics, where bivariate observations are grouped into a two-dimensional array or matrix and only the numbers of observations, or the proportions of the original sample, belonging to the cells of this matrix are recorded. The original data set is lost. Obviously, when the original complete data set is available we can always group it into a contingency table, but once grouped it is impossible to convert grouped data back to complete data. We focus on the situation where the complete data are observations $Z_1,\cdots,Z_n$ which are independent and identically distributed as $Z=(X,Y)'$, which follows an absolutely continuous bivariate distribution with domain the nonnegative quadrant; the observations are subsequently grouped, and we develop statistical inference techniques using the grouped data.

1.3. New Bivariate Distributions Created by Trivariate Reductions

A version of the bivariate gamma distribution introduced by Mathai and Moschopoulos [11] (pp. 137-138) can be constructed with the use of two linear functions $\phi_1(V_0,V_1,V_2)$ and $\phi_2(V_0,V_1,V_2)$; these functions are known, and their arguments $V_0,V_1,V_2$ are independent univariate random variables. It is simple to simulate a pair of observations $(X,Y)'$, as we have the equalities in distribution $X \stackrel{d}{=} \phi_1(V_0,V_1,V_2)$, $Y \stackrel{d}{=} \phi_2(V_0,V_1,V_2)$, and the functional forms of $\phi_1(V_0,V_1,V_2)$ and $\phi_2(V_0,V_1,V_2)$ are given.

Let ${V}_{0},{V}_{1},{V}_{2}$ be gamma random variables with their respective density functions given by

$f(v_i;\alpha_i,\beta_i)=\frac{1}{\Gamma(\alpha_i)\beta_i^{\alpha_i}}v_i^{\alpha_i-1}\mathrm{e}^{-v_i/\beta_i},\quad v_i\ge 0,\ \alpha_i,\beta_i>0$

and

$X=\phi_1(V_0,V_1,V_2)=\frac{\beta_1}{\beta_0}V_0+V_1$ ,

$Y=\phi_2(V_0,V_1,V_2)=\frac{\beta_2}{\beta_0}V_0+V_2$ .
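Simulation from this construction needs only independent gamma variates; a Python sketch (we write the second component with $\beta_2/\beta_0$, following Mathai and Moschopoulos; the shared $V_0$ induces the dependence):

```python
import random

def sample_bivariate_gamma(a0, a1, a2, b0, b1, b2, rng):
    """One (X, Y) from the trivariate reduction:
    X = (b1/b0)*V0 + V1,  Y = (b2/b0)*V0 + V2,
    with V_i ~ Gamma(shape a_i, scale b_i) independent."""
    v0 = rng.gammavariate(a0, b0)
    v1 = rng.gammavariate(a1, b1)
    v2 = rng.gammavariate(a2, b2)
    return (b1 / b0) * v0 + v1, (b2 / b0) * v0 + v2
```

Conditioning on $V_0$ gives $E(X)=\beta_1(\alpha_0+\alpha_1)$, $E(Y)=\beta_2(\alpha_0+\alpha_2)$ and $\operatorname{cov}(X,Y)=\alpha_0\beta_1\beta_2$, which is how moment-based estimators for this family arise.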

The bivariate density function $f(x,y;\beta)$ has no closed form expression and is very complicated. It has five parameters, represented by the vector $\beta=(\alpha_0,\alpha_1,\alpha_2,\beta_1,\beta_2)'$; see Section 5 of Mathai and Moschopoulos [11] (pp. 145-148). Mathai and Moschopoulos [11] also give method of moments (MM) estimators in Section 7 of their paper. We shall consider their estimators and compare them with the simulated quadratic distance estimators in Section 4. The bivariate gamma distribution introduced by Furman [12] can also be obtained similarly using another pair of linear functions $\phi_1(\cdot)$ and $\phi_2(\cdot)$. For the use of nonlinear functions $\phi_1(\cdot)$ and $\phi_2(\cdot)$ to create bivariate distributions, see Chapter 15 of Hutchinson and Lai [13] (pp. 218-224).

1.4. Contingency Tables

Contingency table data can be viewed as a special form of two-dimensional grouped data; we give some more details about this form of grouped data below. Assume that we have a sample $Z_i=(X_i,Y_i)'$, $i=1,\cdots,n$, independent and identically distributed as $Z=(X,Y)'$, which follows a nonnegative continuous bivariate distribution with model survival function $S_\beta(x,y)$.

The vector of parameters is $\beta=(\beta_1,\cdots,\beta_m)'$; the true vector of parameters is denoted by $\beta_0$. We do not observe the original sample: the observations are grouped into a contingency table, and only the numbers falling into each cell of the table, or equivalently the sample proportions falling into these cells, are recorded. Grouping into two-dimensional cells generalizes the grouping of univariate data into disjoint intervals of the nonnegative real line. Contingency table data are often encountered in actuarial science and biostatistics; see Partrat [1] (p. 225), Gibbons and Chakraborti [2] (pp. 511-512). We give a brief description below.

Let the nonnegative X axis be partitioned into disjoint intervals ${\cup}_{i=1}^{I}[s_{i-1},s_i)$ with $s_0=0$, $s_I=\infty$, and similarly let the Y axis be partitioned into disjoint intervals ${\cup}_{j=1}^{J}[t_{j-1},t_j)$ with $t_0=0$, $t_J=\infty$.

The nonnegative quadrant can be partitioned into nonoverlapping cells of the form

${C}_{ij}=\left[{s}_{i-1},{s}_{i}\right)\times \left[{t}_{j-1},{t}_{j}\right),\text{\hspace{0.17em}}i=1,\cdots ,I,j=1,\cdots ,J$ .

The contingency table $T=\left({C}_{ij}\right)$ is formed which can be viewed as a matrix with elements given by

${C}_{ij},\text{\hspace{0.17em}}i=1,\cdots ,I,\text{\hspace{0.17em}}j=1,\cdots ,J$

We can define the empirical bivariate survival function as

${S}_{n}\left(x,y\right)=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}I\left[{X}_{i}>x,{Y}_{i}>y\right]}$ (1)

and we have ${S}_{n}\left(x,y\right)\stackrel{p}{\to}{S}_{{\beta}_{0}}\left(x,y\right)$ .

The sample proportion, or empirical probability, of an observation falling into cell $C_{ij}$ can be obtained from $S_n(x,y)$ as

$p_n(C_{ij})=S_n(s_{i-1},t_{j-1})-S_n(s_{i-1},t_j)-S_n(s_i,t_{j-1})+S_n(s_i,t_j)$ (2)

and the corresponding model probability is

$p_\beta(C_{ij})=S_\beta(s_{i-1},t_{j-1})-S_\beta(s_{i-1},t_j)-S_\beta(s_i,t_{j-1})+S_\beta(s_i,t_j)$ . (3)
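Expressions (1)-(3) translate directly into code; a sketch with hypothetical helper names, using the inclusion-exclusion identity for the cell probability:

```python
def S_n(data, x, y):
    """Empirical bivariate survival function, expression (1)."""
    return sum(1 for (xi, yi) in data if xi > x and yi > y) / len(data)

def p_n_cell(data, s_lo, s_hi, t_lo, t_hi):
    """Empirical cell probability via inclusion-exclusion, expression (2);
    for continuous data the boundary convention is immaterial."""
    return (S_n(data, s_lo, t_lo) - S_n(data, s_lo, t_hi)
            - S_n(data, s_hi, t_lo) + S_n(data, s_hi, t_hi))
```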

Note that

${S}_{\beta}\left({s}_{i},{t}_{J}\right)=0,{S}_{n}\left({s}_{i},{t}_{J}\right)=0,i=1,\cdots ,I$ (4)

and similarly,

${S}_{\beta}\left({s}_{I},{t}_{j}\right)=0,{S}_{n}\left({s}_{I},{t}_{j}\right)=0,j=1,\cdots ,J$ . (5)

so they can be discarded without affecting the efficiency of the inference methods. This is precisely the approach of quadratic distance methods: redundant elements are discarded and a basis is created with only linearly independent elements, while the basis spans the same linear space. Consequently, we gain in numerical efficiency while retaining the same statistical efficiency.

1.5. Efficient Modified Minimum Chi-Square Methods

Using the contingency table data, the modified minimum chi-square estimators, which are as efficient as the likelihood estimators based on the grouped data, are obtained by minimizing the objective function

$\sum_{i,j}\frac{\left(p_n(C_{ij})-p_\beta(C_{ij})\right)^2}{p_n(C_{ij})}$

and since $p_n(C_{ij})\stackrel{p}{\to}p_{\beta_0}(C_{ij})$, the minimum chi-square estimators given by the vector $\tilde{\beta}$ which minimizes the expression above have the same asymptotic efficiency as the vector $\beta^{\ast}$ which minimizes $\sum_{i,j}\frac{\left(p_n(C_{ij})-p_\beta(C_{ij})\right)^2}{p_{\beta_0}(C_{ij})}$ and which, under differentiability assumptions, is given by the roots of the system of equations

${\sum}_{i,j}\frac{\left({p}_{n}\left({C}_{ij}\right)-{p}_{\beta}\left({C}_{ij}\right)\right)}{{p}_{{\beta}_{0}}\left({C}_{ij}\right)}\frac{\partial {p}_{\beta}\left({C}_{ij}\right)}{\partial \beta}}=0$ .
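In code the modified minimum chi-square objective is a short sum over cells (a sketch; `p_n` and `p_beta` are flattened lists of empirical and model cell probabilities, and empty cells are skipped to avoid division by zero):

```python
def modified_chisq(p_n, p_beta):
    """Modified minimum chi-square objective: squared differences
    weighted by the empirical cell probabilities p_n(C_ij)."""
    return sum((pn - pb) ** 2 / pn for pn, pb in zip(p_n, p_beta) if pn > 0)
```

Minimizing this over β with a generic optimizer gives the modified minimum chi-square estimator; the quadratic distance methods discussed below replace the diagonal weighting by a full optimal matrix.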

Note that the quasi-score functions generated belong to the linear space spanned by

$\left\{{p}_{n}\left({C}_{ij}\right)-{p}_{\beta}\left({C}_{ij}\right),i=1,\cdots ,I,\text{\hspace{0.17em}}j=1,\cdots ,J\right\}$ . (6)

But since we have the property given by expressions (4-5), the same linear space is spanned by

$\left\{S_n(s_i,t_j)-S_\beta(s_i,t_j),\ i=1,\cdots,I-1,\ j=1,\cdots,J-1\right\}$ . (7)

Therefore, an equivalent but possibly numerically more efficient method (recall that we need to evaluate $p_\beta(C_{ij})$ numerically or by simulation) is to minimize a quadratic form in the elements of the basis given by expression (7), with an optimum matrix which is no longer diagonal as in the minimum chi-square objective function; it turns out to be quite simple and can be estimated empirically, as the $S_n(s_i,t_j)$ are relatively simple and well defined for $i=1,\cdots,I-1$, $j=1,\cdots,J-1$. Furthermore, when performing minimum chi-square methods in practice, cells with fewer than 5 observations need to be regrouped into larger cells, which reduces the efficiency of the minimum chi-square methods. We do not need as many regrouping operations with the proposed quadratic distance methods. For the equivalent efficiency of the newly proposed methods, see the projection argument in Luong [14] (pp. 463-468) for quadratic distance methods.

Also, using expression (7) is equivalent to using overlapping cells of the form

$O_{ij}=I\left[x>s_i,y>t_j\right],\ i=1,\cdots,I-1,\ j=1,\cdots,J-1$ . (8)

The objective function of the proposed quadratic form is given below; it is a natural extension of the objective function used in the univariate case. Define a vector whose components are the elements of the basis, so that only one subscript is needed, by collapsing the matrix given by expression (7) into a vector: the first row of the matrix forms the first batch of elements of the vector, the second row the second batch, and so on; i.e., let

${z}_{n}={\left({S}_{n}\left({s}_{1},{t}_{1}\right),\cdots ,{S}_{n}\left({s}_{M},{t}_{M}\right)\right)}^{\prime},\text{\hspace{0.17em}}M=\left(I-1\right)\left(J-1\right)$ . (9)

and its model counterpart is

${z}_{\beta}={\left({S}_{\beta}\left({s}_{1},{t}_{1}\right),\cdots ,{S}_{\beta}\left({s}_{M},{t}_{M}\right)\right)}^{\prime}$ . (10)

The number of components of ${z}_{n}$ is M with the assumption $M>m$ .

Efficient quadratic distance methods can be constructed using the inverse of the covariance matrix of $z_n$ as the weight matrix of the quadratic form, but such an optimum matrix is only defined up to a constant, so the inverse of the covariance matrix of the following vector

$h\left(x,y\right)={\left(I\left[x>{s}_{1},y>{t}_{1}\right]-{S}_{\beta}\left({s}_{1},{t}_{1}\right),\cdots ,I\left[x>{s}_{M},y>{t}_{M}\right]-{S}_{\beta}\left({s}_{M},{t}_{M}\right)\right)}^{\prime}$ . (11)

can also be considered optimum. It can be replaced by a consistent estimate, and since the elements of the basis can be identified with the corresponding elements of the vector $\frac{1}{n}\sum_{i=1}^{n}h(x_i,y_i)$, with the observations independent and identically distributed, choosing a basis is equivalent to choosing the vector $h(x,y)$.

It is not difficult to see that the infinite basis of the form

$\left\{I\left[x>{s}_{l},y>{t}_{l}\right]-{S}_{\beta}\left({s}_{l},{t}_{l}\right),l=1,2,\cdots \right\}$

is complete, and the projected score functions give estimators with the same efficiency as the maximum likelihood estimators, since the score functions belong to the space spanned by the infinite basis; see Carrasco and Florens [15] for a similar property in the univariate case, and Luong [14] (pp. 461-468) for the notion of MQD estimators as quasilikelihood estimators based on score functions projected on a finite basis. The MQD methods making use of such a basis in the bivariate case will be introduced below; they are similar to the univariate case. Note that if the data have been grouped, we have no choice of the points $(s_l,t_l)'$, $l=1,\cdots,M$: they are predetermined by the way the data are grouped into cells.

Besides the predetermined grouped scenario, we shall also examine the following question: if we have complete data and would like to choose a finite basis with M elements, and since these elements are identified with points, how should we choose the M points or, equivalently, how should we group the data into cells?

The question of how to choose cells already appears difficult for minimum chi-square methods with univariate data; see Greenwood and Nikulin [16] (pp. 194-208). We shall propose a solution based on quasi-Monte Carlo (QMC) methods, via a Halton sequence and two empirical quantiles from the marginal distributions, to create an artificial sample with values in the nonnegative quadrant. The points used to construct the quadratic distances are based on these artificial sample points. Since these points are random, this is similar to the use of random cells for minimum chi-square methods, and it is natural to introduce the notion of an adaptive basis used to achieve high efficiency; it will also unify quadratic distance and minimum chi-square methods. An adaptive basis is data dependent and therefore carries information about the true vector of parameters $\beta_0$ even though $\beta_0$ is unknown; consequently, the score functions projected on such a basis will, in general, lead to inference with better efficiency than without using the information about $\beta_0$ obtainable from the data.
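A Halton sequence in bases 2 and 3 gives low-discrepancy points in the unit square, which can then be rescaled to a rectangle whose corners are two empirical quantiles of each marginal; a minimal sketch of this QMC construction (our reading of the proposal; helper names are assumptions):

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse of the integer i in the given base."""
    f, result = 1.0, 0.0
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

def halton_points(n, q1_x, q2_x, q1_y, q2_y):
    """n quasi-random points on [q1_x, q2_x] x [q1_y, q2_y], using Halton
    bases 2 and 3; the rectangle corners would come from two empirical
    quantiles of each marginal distribution."""
    pts = []
    for i in range(1, n + 1):
        u, v = radical_inverse(i, 2), radical_inverse(i, 3)
        pts.append((q1_x + u * (q2_x - q1_x), q1_y + v * (q2_y - q1_y)))
    return pts
```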

We shall discuss how to construct such an adaptive basis when complete data are available in Section 3, where we would like to shed some light on the question of how to form a finite basis so that inference methods using it have high efficiencies over a restricted parameter space, which appears reasonable for applications. Such an adaptive basis, formed from a complete basis, is very practical for applications. Using an infinite basis, as in the case of the generalized method of moments (GMM) with univariate observations based on a continuum of moment conditions as introduced by Carrasco and Florens [15], appears complicated for practitioners. Consequently, only finite bases are considered in this paper.

Following the remarks given by Luong [14] (p. 472), we need to work with a restricted parameter space for a basis with a finite number of elements so that statistical inference using MQD or GMM methods can have high efficiencies. An adaptive basis has only a finite number of elements, so it is not numerically complicated to use such a basis to construct MQD or SMQD methods. The elements of the basis are adapted to $\beta_0$, and the efficiencies of the SMQD methods using such a basis come from the fact that the elements of the basis are chosen accordingly and adjusted depending on the value of the true parameter vector $\beta_0$, even though $\beta_0$ is unknown.

In general, it is not obvious how to construct an adaptive basis from a complete basis; for example, it is a difficult task to construct an adaptive basis from the complete basis of polynomials $\left\{x^j y^k-E_\beta(x^j y^k), j=0,1,\cdots, k=0,1,\cdots\right\}$ such that MQD methods using it have good efficiencies. However, it is natural to construct an adaptive basis with only a finite number of elements extracted from a complete basis as given by expression (11). This will be developed further in Section 3 of the paper. The notion of an adaptive basis constructed from a complete basis and used to project the score functions appears to be relatively new, although it has been used implicitly in minimum chi-square methods; see Moore and Spruill [17] and Pollard [18] (pp. 317-318).

In the literature, attention seems to be given to complete bases. Adaptive bases will be further developed and discussed in Section 3 and used to develop MQD methods with a deterministic version (version D) and a simulated version (version S), the SMQD methods.

For the deterministic version (version D), with $S_\beta(u,v)$ considered fixed, the asymptotic properties of the methods are similar to the univariate case as given by Luong and Thompson [19] and Duchesne et al. [20]; see also related results on the generalized method of moments (GMM) in Newey and McFadden [21] (p. 2148). For the simulated version, we replace $S_\beta(u,v)$ by a sample survival function $S^s_\beta(u,v)$ based on a simulated sample of size U drawn from $S_\beta(u,v)$. We can make use of Theorem 3.1 and Theorem 3.3 of Pakes and Pollard [22] (pp. 1038-1043) to establish asymptotic properties of the MQD and SMQD methods.

It is worth mentioning that, in practice, without an infinite and complete basis and using only a finite basis which is a subset of an infinite basis, high efficiency can in general only be attained on some restricted parameter space, unless the score functions belong to the span of the finite basis. One viable strategy is to identify a restricted parameter space for the type of applications being considered (often it suffices to use a restricted parameter space), then try to identify elements to form a finite basis so that the procedures retain high efficiency for $\beta_0$ belonging to the restricted parameter space; see Luong [14] (pp. 463-468). If such a strategy is not feasible, then we might turn to an adaptive basis with finitely many elements constructed from an infinite complete basis, with the elements adapted to the data. Implicitly, the idea is to let the data points indicate a restricted parameter space and adjust the elements of the basis accordingly. Minimum chi-square (MCS) methods using nonoverlapping random cells make use of an adaptive basis whose elements are linearly dependent; for MQD methods and the simulated version, SMQD methods, we make use of overlapping random cells in the form of random points in the nonnegative quadrant, and unlike minimum chi-square methods, where it is difficult to have a rule for choosing random cells, we shall have a rule for selecting these points.

For MQD or SMQD methods, the matrix ${\Omega}_{0}$, the covariance matrix of the vector $h\left(x,y\right)$ under ${\beta}_{0}$, plays an important role, as estimators with good efficiency can be obtained using ${\Omega}_{0}$. Although ${\Omega}_{0}$ is unknown, its elements are not complicated and, moreover, it can be replaced by a consistent estimate constructed empirically without affecting the asymptotic efficiency of the procedures. It is also needed for constructing chi-square test statistics. We shall give more details about this matrix and construct an empirical, data-dependent estimate $\hat{\Omega}_0$ for ${\Omega}_{0}$.

Let ${\Omega}_{0}$ be the covariance matrix of the vector $h\left(x,y\right)$ under ${\beta}_{0}$; its elements are given by

$\begin{array}{ll}{\Omega}_{0}\left(i,j\right)&=\mathrm{cov}\left(I\left[x>{s}_{i},y>{t}_{i}\right],I\left[x>{s}_{j},y>{t}_{j}\right]\right)\\ &=E\left(\left(I\left[x>{s}_{i},y>{t}_{i}\right]\right)\left(I\left[x>{s}_{j},y>{t}_{j}\right]\right)\right)-\left(E\left(I\left[x>{s}_{i},y>{t}_{i}\right]\right)\right)\left(E\left(I\left[x>{s}_{j},y>{t}_{j}\right]\right)\right)\\ &={S}_{{\beta}_{0}}\left(\mathrm{max}\left({s}_{i},{s}_{j}\right),\mathrm{max}\left({t}_{i},{t}_{j}\right)\right)-\left({S}_{{\beta}_{0}}\left({s}_{i},{t}_{i}\right)\right)\left({S}_{{\beta}_{0}}\left({s}_{j},{t}_{j}\right)\right),\quad i=1,\cdots ,M,\ j=1,\cdots ,M\end{array}$ (12)

Clearly, these elements can be estimated empirically with the bivariate empirical survival function using grouped data provided by the contingency table; we then have the corresponding estimates given by

$\hat{\Omega}_0\left(i,j\right)={S}_{n}\left(\mathrm{max}\left({s}_{i},{s}_{j}\right),\mathrm{max}\left({t}_{i},{t}_{j}\right)\right)-\left({S}_{n}\left({s}_{i},{t}_{i}\right)\right)\left({S}_{n}\left({s}_{j},{t}_{j}\right)\right),\ i=1,\cdots ,M,\ j=1,\cdots ,M$ . (13)
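As a concrete illustration, expression (13) can be computed directly once an empirical survival function is available; below is a minimal Python sketch (the data arrays, sample sizes and evaluation points are illustrative choices, not taken from the paper):

```python
import numpy as np

def empirical_survival(x, y, s, t):
    """S_n(s, t) = (1/n) #{i : x_i > s and y_i > t}."""
    return float(np.mean((x > s) & (y > t)))

def omega_hat(x, y, pts):
    """Matrix of expression (13); pts is a list of points (s_l, t_l)."""
    M = len(pts)
    Om = np.empty((M, M))
    for i, (si, ti) in enumerate(pts):
        for j, (sj, tj) in enumerate(pts):
            Om[i, j] = (empirical_survival(x, y, max(si, sj), max(ti, tj))
                        - empirical_survival(x, y, si, ti)
                        * empirical_survival(x, y, sj, tj))
    return Om

# Illustrative data: independent exponential margins (a hypothetical model)
rng = np.random.default_rng(1)
x, y = rng.exponential(1.0, 2000), rng.exponential(2.0, 2000)
pts = [(0.5, 0.5), (1.0, 1.0), (1.5, 2.0)]
Om = omega_hat(x, y, pts)
W_hat = np.linalg.inv(Om)   # estimate of the weight matrix W_0
```

The double loop makes the symmetry $\hat{\Omega}_0(i,j)=\hat{\Omega}_0(j,i)$ transparent; for large M one would vectorize this computation.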

Therefore, we can define the matrix $\hat{\Omega}_0$; its inverse is denoted by $\hat{W}_0$ and, similarly, let ${W}_{0}$ be the inverse of ${\Omega}_{0}$. Clearly, $\hat{W}_0\stackrel{p}{\to}{W}_{0}$ since ${S}_{n}\left(x,y\right)\stackrel{p}{\to}{S}_{{\beta}_{0}}\left(x,y\right)$. Now, we can define the objective functions to be minimized for the implementation of the MQD and SMQD methods.

For version D, let

${G}_{n}\left(\beta \right)={\left({z}_{n}-{z}_{\beta}\right)}^{\prime}$ (14)

and let ${S}_{\beta}^{s}\left(x,y\right)$ be an estimate of ${S}_{\beta}\left(x,y\right)$ using a simulated sample of size $U=\tau n$ so that we can define

${z}_{\beta}^{s}={\left({S}_{\beta}^{s}\left({s}_{1},{t}_{1}\right),\cdots ,{S}_{\beta}^{s}\left({s}_{M},{t}_{M}\right)\right)}^{\prime}$

and let

${G}_{n}\left(\beta \right)={\left({z}_{n}-{z}_{\beta}^{s}\right)}^{\prime}$ (15)

for version S.

We can define the length of the random function ${G}_{n}\left(\beta \right)$ as $\Vert {G}_{n}\left(\beta \right)\Vert $ with the norm $\Vert \text{\hspace{0.05em}}.\text{\hspace{0.05em}}\Vert $ defined as

${\Vert {G}_{n}\left(\beta \right)\Vert}^{2}={\left({z}_{n}-{z}_{\beta}\right)}^{\prime}\stackrel{^}{{W}_{0}}\left({z}_{n}-{z}_{\beta}\right)$ (16)

or equivalently

${\Vert {G}_{n}\left(\beta \right)\Vert}^{2}={\left({z}_{n}-{z}_{\beta}\right)}^{\prime}{W}_{0}\left({z}_{n}-{z}_{\beta}\right)$ (17)

as they will give asymptotically equivalent estimators and goodness-of-fit test statistics; for finding estimators numerically, we need to minimize expression (16), and for asymptotic properties it might be slightly simpler to work with expression (17) as less notation is involved. Similarly, for version S let

${Q}_{n}\left(\beta \right)={\Vert {G}_{n}\left(\beta \right)\Vert}^{2}={\left({z}_{n}-{z}_{\beta}^{s}\right)}^{\prime}\stackrel{^}{{W}_{0}}\left({z}_{n}-{z}_{\beta}^{s}\right)$ (18)

or equivalently

${Q}_{n}\left(\beta \right)={\Vert {G}_{n}\left(\beta \right)\Vert}^{2}={\left({z}_{n}-{z}_{\beta}^{s}\right)}^{\prime}{W}_{0}\left({z}_{n}-{z}_{\beta}^{s}\right)$ . (19)

Note that the weight matrix $\stackrel{^}{{W}_{0}}$ is the same for both versions and the norm $\Vert \cdot \Vert $ is a weighted Euclidean norm which obeys the triangle inequality, so the results of the Theorems given by Pakes and Pollard, although they are stated with the Euclidean norm, remain valid if the Euclidean norm is replaced by a weighted Euclidean norm. Minimum QD estimators are obtained as the vector $\stackrel{^}{\beta}$ which minimizes the objective function ${Q}_{n}\left(\beta \right)$, or equivalently $\Vert {G}_{n}\left(\beta \right)\Vert $ as defined by expression (16), for version D, and ${\stackrel{^}{\beta}}^{S}$ which minimizes ${Q}_{n}\left(\beta \right)$, or equivalently $\Vert {G}_{n}\left(\beta \right)\Vert $ as defined by expression (18), for version S.
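To make the definitions concrete, a minimal sketch of the version-S objective (18) might look as follows; here `model_sampler(beta, U, seed)` is a hypothetical routine drawing a simulated sample of size U from $S_{\beta}$, the model with independent exponential margins is purely illustrative, and the same seed is reused across values of $\beta$ as required for SMQD methods:

```python
import numpy as np

def survival_at_points(x, y, pts):
    """z vector: empirical survival values (S(s_1,t_1), ..., S(s_M,t_M))'."""
    return np.array([np.mean((x > s) & (y > t)) for s, t in pts])

def Q_n(beta, z_n, pts, W_hat, model_sampler, U, seed=123):
    """Version-S objective (18): (z_n - z_beta^s)' W_hat (z_n - z_beta^s);
    the same seed is reused for every beta."""
    xs, ys = model_sampler(beta, U, seed)
    z_s = survival_at_points(xs, ys, pts)
    d = z_n - z_s
    return float(d @ W_hat @ d)

# Hypothetical model: independent exponential margins with means beta_1, beta_2
def model_sampler(beta, U, seed):
    rng = np.random.default_rng(seed)
    return rng.exponential(beta[0], U), rng.exponential(beta[1], U)

rng = np.random.default_rng(7)
x_obs, y_obs = rng.exponential(1.0, 2000), rng.exponential(2.0, 2000)
pts = [(0.5, 0.5), (1.0, 1.0), (1.5, 2.0), (2.0, 0.5)]
z_n = survival_at_points(x_obs, y_obs, pts)
W_hat = np.eye(len(pts))        # identity weight, for illustration only
q_true = Q_n(np.array([1.0, 2.0]), z_n, pts, W_hat, model_sampler, U=4000)
q_far = Q_n(np.array([5.0, 5.0]), z_n, pts, W_hat, model_sampler, U=4000)
```

The objective is small near the data-generating parameter and large away from it, which is what the minimization exploits; in practice $\hat{W}_0$ from expression (13) replaces the identity weight.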

The paper is organized as follows.

In Section 2, MQD and SMQD methods are developed using predetermined grouped data. Asymptotic properties of the estimators are studied and the asymptotic distributions of the model testing statistics are derived. The methods can be extended to the situation where complete data are available but will be grouped, by defining a rule to choose points on the nonnegative quadrant for grouping the data. An artificial sample constructed using two sample quantiles and QMC numbers is proposed in Section 3 to select the points, and the methods developed with preselected points or cells in Section 2 are shown to remain applicable. The methods can be seen as equivalent to minimum chi-square methods with random cells but with a rule to define these cells; using random cells is equivalent to using an adaptive basis. Both QD estimation and minimum chi-square estimation can be unified under the approach of quasilikelihood estimation with an adaptive basis, and implementing MQD or SMQD methods requires less computing time than the related minimum chi-square versions, since the adaptive basis used by MQD and SMQD methods has only linearly independent elements and fewer numerical evaluations or simulations are needed for computing probabilities assigned to points than to cells. Section 4 illustrates the implementation of SMQD methods by comparing the method of moments (MM) estimators with the SMQD estimators. The SMQD estimators appear to be much more efficient and robust than the MM estimators in a limited study for a bivariate gamma model, with the range of parameters chosen in the study as often encountered in actuarial science.

2. SMQD Methods Using Grouped Data

2.1. Estimation

Consistency for both versions of the quadratic distance estimators using predetermined grouped data can be treated in a unified way using the following Theorem 1, which is essentially Theorem 3.1 of Pakes and Pollard [22] (p. 1038); the proof has been given by the authors. In fact, their Theorems 3.1 and 3.3 are also useful for Section 3, where we have complete data and have choices for regrouping the data into cells or, equivalently, forming the artificial sample points on the nonnegative quadrant.

Theorem 1 (Consistency)

Under the following conditions $\stackrel{\u02dc}{\beta}$ converges in probability to ${\beta}_{0}$ :

1) $\Vert {G}_{n}\left(\stackrel{\u02dc}{\beta}\right)\Vert \le {o}_{p}\left(1\right)+{\mathrm{inf}}_{\beta \in \Omega}\left(\Vert {G}_{n}\left(\beta \right)\Vert \right)$ , the parameter space Ω is compact,

2) $\Vert {G}_{n}\left({\beta}_{0}\right)\Vert ={o}_{p}\left(1\right)$ ,

3) ${\mathrm{sup}}_{\Vert \beta -{\beta}_{0}\Vert >\delta}\left(\frac{1}{\Vert {G}_{n}\left(\beta \right)\Vert}\right)={O}_{p}\left(1\right)$ for each $\delta >0$ .

Theorem 3.1 states condition 2) as ${G}_{n}\left({\beta}_{0}\right)={o}_{p}\left(1\right)$ but in the proof the authors only use $\Vert {G}_{n}\left({\beta}_{0}\right)\Vert ={o}_{p}\left(1\right)$ , so we state condition 2) as $\Vert {G}_{n}\left({\beta}_{0}\right)\Vert ={o}_{p}\left(1\right)$ .

An expression is ${o}_{p}\left(1\right)$ if it converges to 0 in probability, ${O}_{p}\left(1\right)$ if it is bounded in probability and ${o}_{p}\left({n}^{-\frac{1}{2}}\right)$ if it converges to 0 in probability faster than ${n}^{-\frac{1}{2}}\to 0$ . For version D and version S, ${\mathrm{inf}}_{\beta \in \Omega}\left(\Vert {G}_{n}\left(\beta \right)\Vert \right)$ occurs at the vector of MQD or SMQD estimators, so conditions 1) and 2) are satisfied for both versions, and compactness of the parameter space Ω is assumed. Also, for both versions, $\Vert {G}_{n}\left(\beta \right)\Vert \stackrel{p}{\to}0$ only at $\beta ={\beta}_{0}$ in general if the number of components of ${G}_{n}\left(\beta \right)$ is greater than the number of parameters of the model, i.e., $M>m$ . For $\beta \ne {\beta}_{0}$ we have $0<{Q}_{n}\left(\beta \right)\le B$ for some $B>0$ , since survival functions evaluated at points are components of ${G}_{n}\left(\beta \right)$ and these functions are bounded.

This implies that there exist real numbers u and v with $0<u<v<\infty $ such that

$P\left(u\le {\mathrm{sup}}_{\Vert \beta -{\beta}_{0}\Vert >\delta}\left(\frac{1}{\Vert {G}_{n}\left(\beta \right)\Vert}\right)\le v\right)\to 1$ as $n\to \infty $ .

Therefore, for both versions of ${Q}_{n}\left(\beta \right)$ , whether deterministic or simulated, the MQD and SMQD estimators are consistent by Theorem 1, i.e., the vector of MQD estimators and the vector of SMQD estimators converge in probability to ${\beta}_{0}$ . Theorem 3.1 of Pakes and Pollard [22] (pp. 1038-1039) is an elegant theorem; its proof is concise, using the norm concept of functional analysis, and it allows many results to be unified. Now we turn our attention to the question of asymptotic normality for the quadratic distance estimators. A unified approach is possible using their Theorem 3.3, see Pakes and Pollard [22] (pp. 1040-1043), which we shall restate as Theorem 2 and Corollary 1 subsequently, after the following discussion of the ideas behind their Theorem, which allow asymptotic normality results to be obtained for estimators defined as the extremum of a smooth or nonsmooth objective function.

For both versions we can express ${Q}_{n}\left(\beta \right)={\left(\Vert {G}_{n}\left(\beta \right)\Vert \right)}^{2}$ , with ${G}_{n}\left(\beta \right)$ as given by expression (14) for version D and by expression (15) for version S. Since ${G}_{n}\left(\beta \right)$ is not differentiable for version S, the traditional Taylor expansion argument cannot be used to establish asymptotic normality of estimators obtained by minimizing ${\left(\Vert {G}_{n}\left(\beta \right)\Vert \right)}^{2}$ .

For both versions, ${G}_{n}\left(\beta \right)\stackrel{p}{\to}G\left(\beta \right)$ with

$G\left(\beta \right)={\left({z}_{{\beta}_{0}}-{z}_{\beta}\right)}^{\prime}$ . (20)

Explicitly,

$G\left(\beta \right)={\left({S}_{{\beta}_{0}}\left({s}_{1},{t}_{1}\right)-{S}_{\beta}\left({s}_{1},{t}_{1}\right),\cdots ,{S}_{{\beta}_{0}}\left({s}_{M},{t}_{M}\right)-{S}_{\beta}\left({s}_{M},{t}_{M}\right)\right)}^{\prime}$ . (21)

The points ${\left({s}_{1},{t}_{1}\right)}^{\prime},\cdots ,{\left({s}_{M},{t}_{M}\right)}^{\prime}$ are predetermined by a contingency table we are given and we have no choice but to analyze the grouped data as they are presented.

Note that $G\left(\beta \right)$ is non-random and if we assume $G\left(\beta \right)$ is differentiable with derivative matrix $\Gamma \left(\beta \right)$ , then we can define the random function ${Q}_{n}^{a}\left(\beta \right)$ to approximate ${Q}_{n}\left(\beta \right)$ for both versions in a unified way with

${Q}_{n}^{a}\left(\beta \right)={\left(\Vert {L}_{n}\left(\beta \right)\Vert \right)}^{2}$ , ${L}_{n}\left(\beta \right)={G}_{n}\left({\beta}_{0}\right)+\Gamma \left({\beta}_{0}\right)\left(\beta -{\beta}_{0}\right)$ . (22)

The matrix $\Gamma \left(\beta \right)$ can be displayed explicitly as

$\Gamma \left(\beta \right)=-\left[\begin{array}{ccc}\frac{\partial {S}_{\beta}\left({s}_{1},{t}_{1}\right)}{\partial {\beta}_{1}}& \cdots & \frac{\partial {S}_{\beta}\left({s}_{1},{t}_{1}\right)}{\partial {\beta}_{m}}\\ \vdots & \ddots & \vdots \\ \frac{\partial {S}_{\beta}\left({s}_{M},{t}_{M}\right)}{\partial {\beta}_{1}}& \cdots & \frac{\partial {S}_{\beta}\left({s}_{M},{t}_{M}\right)}{\partial {\beta}_{m}}\end{array}\right]$ . (23)

Note that ${Q}_{n}^{a}\left(\beta \right)$ is differentiable for both versions. Since ${Q}_{n}^{a}\left(\beta \right)$ is a quadratic function of $\beta $ , the vector ${\beta}^{*}$ which minimizes ${Q}_{n}^{a}\left(\beta \right)$ can be obtained explicitly and

${\beta}^{*}-{\beta}_{0}=-{\left({\Gamma}^{\prime}\stackrel{^}{{W}_{0}}\Gamma \right)}^{-1}{\Gamma}^{\prime}\stackrel{^}{{W}_{0}}{G}_{n}\left({\beta}_{0}\right)$

and since $\stackrel{^}{{W}_{0}}\stackrel{p}{\to}{W}_{0}$ , with ${W}_{0}$ assumed to be a positive definite matrix, we have

$\sqrt{n}\left({\beta}^{*}-{\beta}_{0}\right)=-{\left({\Gamma}^{\prime}\stackrel{^}{{W}_{0}}\Gamma \right)}^{-1}{\Gamma}^{\prime}\stackrel{^}{{W}_{0}}\sqrt{n}{G}_{n}\left({\beta}_{0}\right)=-{\left({\Gamma}^{\prime}{W}_{0}\Gamma \right)}^{-1}{\Gamma}^{\prime}{W}_{0}\sqrt{n}{G}_{n}\left({\beta}_{0}\right)+{o}_{p}(1)$

Let $\stackrel{\u02dc}{\beta}$ and ${\beta}^{*}$ be the vectors which minimize ${Q}_{n}\left(\beta \right)$ and ${Q}_{n}^{a}\left(\beta \right)$ , respectively. If the approximation is of the right order, then $\stackrel{\u02dc}{\beta}$ and ${\beta}^{*}$ are asymptotically equivalent. This set-up allows a unified approach for establishing asymptotic normality for both versions. For version D, it suffices to let $\stackrel{\u02dc}{\beta}=\stackrel{^}{\beta}$ and for version S, let $\stackrel{\u02dc}{\beta}={\stackrel{^}{\beta}}^{S}$ .
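The closed-form minimizer of the quadratic approximation is plain linear algebra; the sketch below illustrates the step ${\beta}^{*}={\beta}_{0}-{\left({\Gamma}^{\prime}\stackrel{^}{{W}_{0}}\Gamma \right)}^{-1}{\Gamma}^{\prime}\stackrel{^}{{W}_{0}}{G}_{n}\left({\beta}_{0}\right)$ on synthetic arrays (`Gamma`, `W_hat`, `G_n0` are illustrative stand-ins for $\Gamma$, $\hat{W}_0$ and ${G}_{n}\left({\beta}_{0}\right)$):

```python
import numpy as np

def quadratic_minimizer_step(Gamma, W_hat, G_n0, beta0):
    """Closed-form minimizer of Q_n^a:
    beta* = beta0 - (Gamma' W Gamma)^{-1} Gamma' W G_n(beta0)."""
    A = Gamma.T @ W_hat @ Gamma      # m x m matrix Gamma' W Gamma
    b = Gamma.T @ W_hat @ G_n0       # m-vector Gamma' W G_n(beta0)
    return beta0 - np.linalg.solve(A, b)

# Sanity check: if G_n(beta0) = -Gamma @ delta, then beta* = beta0 + delta
Gamma = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
delta = np.array([0.3, -0.2])
beta0 = np.array([1.0, 2.0])
beta_star = quadratic_minimizer_step(Gamma, np.eye(4), -Gamma @ delta, beta0)
```

Using `np.linalg.solve` avoids forming the explicit inverse of ${\Gamma}^{\prime}\stackrel{^}{{W}_{0}}\Gamma$, which is numerically preferable.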

Clearly, the set-up fits into the scope of their Theorem 3.3. We shall rearrange the results of these two theorems before applying them to version D and version S of the MQD methods, and verify that the regularity conditions can be satisfied. We shall state Theorem 2 and Corollary 1, which are essentially their Theorem 3.3; the proofs have been given by Pakes and Pollard [22]. Note that condition 4) is slightly more stringent but simpler than condition iii) of their Theorem.

Also, for version S, the simulated samples are assumed to have size $U=\tau n$ and the same seed is used across different values of $\beta $ to draw samples of size U. We make these assumptions as they are standard for simulated methods of inference, see section 9.6 on the method of simulated moments (MSM) given by Davidson and MacKinnon [23] (pp. 383-394). For the numerical optimization to find the minimum of the objective function, we rely on direct search simplex methods, and the R package already has prewritten functions to implement direct search methods.
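While the text relies on the prewritten direct search functions in R, the same Nelder-Mead simplex search is available in Python through `scipy.optimize.minimize`; the toy sketch below minimizes a deterministic objective of the form (16) with identity weight for a hypothetical independent-exponential survival model (all inputs are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical survival model with independent exponential margins:
# S_beta(s, t) = exp(-s / beta_1 - t / beta_2)  (illustrative only)
def z_model(beta, pts):
    return np.array([np.exp(-s / beta[0] - t / beta[1]) for s, t in pts])

pts = [(0.5, 0.5), (1.0, 1.0), (1.5, 2.0), (2.0, 0.5)]
beta_true = np.array([1.0, 2.0])
z_n = z_model(beta_true, pts)     # stand-in for the data vector z_n

def Q(beta):
    """Objective of the form (16) with identity weight matrix."""
    if np.any(beta <= 0):
        return np.inf             # keep the simplex inside the parameter space
    d = z_n - z_model(beta, pts)
    return float(d @ d)

res = minimize(Q, x0=np.array([0.5, 0.5]), method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-14, "maxiter": 5000})
```

Because the simplex search uses only function values, it applies equally to the nonsmooth simulated objective (18), which is the reason direct search methods are adopted here.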

Theorem 2.

Let $\stackrel{\u02dc}{\beta}$ be a vector of consistent estimators for ${\beta}_{0}$ , the unique vector which satisfies $G\left({\beta}_{0}\right)=0$ .

Under the following conditions:

1) The parameter space Ω is compact, $\stackrel{\u02dc}{\beta}$ is an interior point of Ω.

2) $\Vert {G}_{n}\left(\stackrel{\u02dc}{\beta}\right)\Vert \le {o}_{p}\left({n}^{-\frac{1}{2}}\right)+{\mathrm{inf}}_{\beta \in \Omega}\Vert {G}_{n}\left(\beta \right)\Vert $

3) $G(.)$ is differentiable at ${\beta}_{0}$ with a derivative matrix $\Gamma =\Gamma \left({\beta}_{0}\right)$ of full rank

4) ${\mathrm{sup}}_{\Vert \beta -{\beta}_{0}\Vert \le {\delta}_{n}}\sqrt{n}\Vert {G}_{n}\left(\beta \right)-G\left(\beta \right)-{G}_{n}\left({\beta}_{0}\right)\Vert ={o}_{p}\left(1\right)$ for every sequence $\left\{{\delta}_{n}\right\}$ of positive numbers which converge to zero.

5) $\Vert {G}_{n}\left({\beta}_{0}\right)\Vert ={o}_{p}\left(1\right)$ .

6) ${\beta}_{0}$ is an interior point of the parameter space Ω.

Then, we have the following representation which will give the asymptotic distribution of $\stackrel{\u02dc}{\beta}$ in Corollary 1, i.e.,

$\sqrt{n}\left(\stackrel{\u02dc}{\beta}-{\beta}_{0}\right)=-{\left({\Gamma}^{\prime}\stackrel{^}{{W}_{0}}\Gamma \right)}^{-1}{\Gamma}^{\prime}\stackrel{^}{{W}_{0}}\sqrt{n}{G}_{n}\left({\beta}_{0}\right)+{o}_{p}\left(1\right)$ , (24)

or equivalently, using equality in distribution,

$\sqrt{n}\left(\stackrel{\u02dc}{\beta}-{\beta}_{0}\right){=}^{d}-{\left({\Gamma}^{\prime}{W}_{0}\Gamma \right)}^{-1}\sqrt{n}{\Gamma}^{\prime}{W}_{0}{G}_{n}\left({\beta}_{0}\right)$ (25)

or equivalently,

$\sqrt{n}\left(\stackrel{\u02dc}{\beta}-{\beta}_{0}\right){=}^{d}-{\left({\Gamma}^{\prime}\stackrel{^}{{W}_{0}}\Gamma \right)}^{-1}\sqrt{n}{\Gamma}^{\prime}\stackrel{^}{{W}_{0}}{G}_{n}\left({\beta}_{0}\right)$ (26)

The proofs of these results follow from the results used to prove Theorem 3.3 given by Pakes and Pollard [22] (pp. 1040-1043). For expression (24) or expression (25) to hold, in general only condition 5) of Theorem 2 is needed and there is no need to assume that ${G}_{n}\left({\beta}_{0}\right)$ has an asymptotic distribution. From the results of Theorem 2, it is easy to see that we can obtain the main result of the following Corollary 1, which gives the asymptotic covariance matrix of the quadratic distance estimators for both versions.

Corollary 1.

Let ${Y}_{n}=\sqrt{n}{\Gamma}^{\prime}{W}_{0}{G}_{n}\left({\beta}_{0}\right)$ , if ${Y}_{n}\stackrel{L}{\to}N\left(0,V\right)$ then $\sqrt{n}\left(\stackrel{\u02dc}{\beta}-{\beta}_{0}\right)\stackrel{L}{\to}N\left(0,T\right)$ with

$T={\left({\Gamma}^{\prime}{W}_{0}\Gamma \right)}^{-1}V{\left({\Gamma}^{\prime}{W}_{0}\Gamma \right)}^{-1}$ . (27)

The matrices T and V depend on ${\beta}_{0}$ ; we also adopt the notations $T=T\left({\beta}_{0}\right),V=V\left({\beta}_{0}\right)$ .

We observe that condition 4) of Theorem 2, when applied to SMQD methods, in general involves technicalities. Condition 4) holds for version D; we only need to verify it for version S. Note that verifying condition 4) is equivalent to verifying

${\mathrm{sup}}_{\Vert \beta -{\beta}_{0}\Vert \le {\delta}_{n}}n{\left(\Vert {G}_{n}\left(\beta \right)-G\left(\beta \right)-{G}_{n}\left({\beta}_{0}\right)\Vert \right)}^{2}={o}_{p}\left(1\right)$ ,

a regularity condition ensuring that the approximation is of the right order; it implies condition iii) of their Theorem 3.3, which might be the most difficult to check. The rest of the conditions of Theorem 2 are satisfied in general.

Let

${g}_{n}\left(\beta \right)=n{\left(\Vert {G}_{n}\left(\beta \right)-G\left(\beta \right)-{G}_{n}\left({\beta}_{0}\right)\Vert \right)}^{2}$ (28)

and for version S, using

${u}_{n}\left(\beta \right)={\left(\left({S}_{{\beta}_{0}}^{s}\left({s}_{1},{t}_{1}\right)-{S}_{{\beta}_{0}}\left({s}_{1},{t}_{1}\right)\right)-\left({S}_{\beta}^{s}\left({s}_{1},{t}_{1}\right)-{S}_{\beta}\left({s}_{1},{t}_{1}\right)\right),\cdots ,\left({S}_{{\beta}_{0}}^{s}\left({s}_{M},{t}_{M}\right)-{S}_{{\beta}_{0}}\left({s}_{M},{t}_{M}\right)\right)-\left({S}_{\beta}^{s}\left({s}_{M},{t}_{M}\right)-{S}_{\beta}\left({s}_{M},{t}_{M}\right)\right)\right)}^{\prime}$ (29)

Consequently, ${g}_{n}\left(\beta \right)$ can also be expressed as

${g}_{n}\left(\beta \right)=n{{u}^{\prime}}_{n}\left(\beta \right)\stackrel{^}{{W}_{0}}{u}_{n}\left(\beta \right)$ .

Since the elements of $\sqrt{n}{u}_{n}\left(\beta \right)$ are bounded in probability, it is not difficult to see that the sequence $\left\{{g}_{n}\left(\beta \right)\right\}$ is bounded in probability and continuous in probability, with ${g}_{n}\left(\beta \right)\stackrel{p}{\to}{g}_{n}\left({\beta}^{\prime}\right)$ as $\beta \to {\beta}^{\prime}$ , using the assumption that the same seed is used across different values of $\beta $ and assuming that ${S}_{\beta}\left(x,y\right)$ is differentiable with respect to $\beta $ ; note that ${g}_{n}\left({\beta}_{0}\right)=0$ . Therefore, results given by Luong et al. [24] (p. 218) can be used to justify that the sequence of functions $\left\{{g}_{n}\left(\beta \right)\right\}$ attains its maximum on the compact set

${C}_{n}=\left\{\beta |\left|\beta -{\beta}_{0}\right|\le {\delta}_{n}\right\}$ in probability and hence has the property ${\mathrm{sup}}_{\Vert \beta -{\beta}_{0}\Vert \le {\delta}_{n}}{g}_{n}\left(\beta \right)\stackrel{p}{\to}0$ as $n\to \infty $ and $\beta \to {\beta}_{0}$ .

We can see that for version D, $V={W}_{0}^{-1}$ and

$\sqrt{n}{G}_{n}\left({\beta}_{0}\right)\stackrel{L}{\to}N\left(0,{W}_{0}^{-1}\right)$ . (30)

For version S, note that

${\left({z}_{n}-{z}_{\beta}^{s}\right)}^{\prime}={\left({z}_{n}-{z}_{{\beta}_{0}}-\left({z}_{\beta}^{s}-{z}_{{\beta}_{0}}\right)\right)}^{\prime}={\left({z}_{n}-{z}_{{\beta}_{0}}\right)}^{\prime}-{\left({z}_{\beta}^{s}-{z}_{{\beta}_{0}}\right)}^{\prime}$ ,

we can see that

$\sqrt{n}{G}_{n}\left({\beta}_{0}\right)\stackrel{L}{\to}N\left(0,\left(1+\frac{1}{\tau}\right){W}_{0}^{-1}\right)$ (31)

as the simulated sample size is $U=\tau n$ and the simulated samples are independent of the original sample given by the data. Implicitly, we assume that the same seed is used across different values of $\beta $ to obtain the simulated samples.

Using results of Corollary 1, we have asymptotic normality for the MQD estimators for version D which is given by

$\sqrt{n}\left(\stackrel{^}{\beta}-{\beta}_{0}\right)\stackrel{L}{\to}N\left(0,{\left({\Gamma}^{\prime}{W}_{0}\Gamma \right)}^{-1}\right)$ , (32)

where $\Gamma $ is as given by expression (23) and can be estimated easily.

For version S, the SMQD estimators also follow an asymptotic normal distribution with

$\sqrt{n}\left({\stackrel{^}{\beta}}^{S}-{\beta}_{0}\right)\stackrel{L}{\to}N\left(0,\left(1+\frac{1}{\tau}\right){\left({\Gamma}^{\prime}{W}_{0}\Gamma \right)}^{-1}\right)$ , (33)

an estimate of $\Gamma $ can be obtained using the technique as given by Pakes and Pollard [22] (p. 1043).
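Once estimates of $\Gamma $ and ${W}_{0}$ are available, expression (33) yields standard errors by plain matrix algebra; a sketch with illustrative inputs (`Gamma_hat` and `W_hat` stand for estimates assumed to have been computed already):

```python
import numpy as np

def smqd_standard_errors(Gamma_hat, W_hat, n, tau):
    """Standard errors from the asymptotic covariance of expression (33):
    (1 + 1/tau) * (Gamma' W_0 Gamma)^{-1} / n."""
    A = Gamma_hat.T @ W_hat @ Gamma_hat
    cov = (1.0 + 1.0 / tau) * np.linalg.inv(A) / n
    return np.sqrt(np.diag(cov))

# Illustrative inputs (M = 3 points, m = 2 parameters)
Gamma_hat = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
se = smqd_standard_errors(Gamma_hat, np.eye(3), n=100, tau=1.0)
```

The factor $1+1/\tau$ quantifies the efficiency loss from simulating; with $\tau =1$ the variances double relative to version D, and they approach the version-D variances as $\tau \to \infty $.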

2.2. Model Testing

2.2.1. Simple Hypothesis

In this section, the quadratic distance ${Q}_{n}\left(\beta \right)$ will be used to construct goodness of fit test statistics for the simple hypothesis

H_{0}: the data come from a specified distribution ${F}_{{\beta}_{0}}$ , where ${\beta}_{0}$ is specified. The chi-square test statistics, their asymptotic distributions and their degrees of freedom r are given below with

$n{Q}_{n}\left({\beta}_{0}\right)\stackrel{L}{\to}{\chi}^{2}\left(r=M\right)$ for version D and (34)

$n\left(\frac{\tau}{\tau +1}\right){Q}_{n}\left({\beta}_{0}\right)\stackrel{L}{\to}{\chi}^{2}\left(r=M\right)$ for version S. (35)

Version S is of interest since it allows testing goodness of fit for continuous distributions without closed-form bivariate survival functions, as we only need to be able to simulate from these distributions. We shall justify the asymptotic chi-square distributions given by expression (34) and expression (35) below.

Note that

$n{Q}_{n}\left({\beta}_{0}\right)=\sqrt{n}{{G}^{\prime}}_{n}\left({\beta}_{0}\right)\stackrel{^}{{W}_{0}}\sqrt{n}{G}_{n}\left({\beta}_{0}\right)$ and for version D

$\sqrt{n}{G}_{n}\left({\beta}_{0}\right)\stackrel{L}{\to}N\left(0,{W}_{0}^{-1}\right)$ , ${W}_{0}^{-1}={\Omega}_{0}$ .

For version S,

$\sqrt{n}{G}_{n}\left({\beta}_{0}\right)\stackrel{L}{\to}N\left(0,\left(1+\frac{1}{\tau}\right){W}_{0}^{-1}\right)$ .

We have the asymptotic chi-square distributions as given, using standard results for the distribution of quadratic forms and $\stackrel{^}{{W}_{0}}\stackrel{p}{\to}{W}_{0}$ .
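As a numerical illustration, the simple-hypothesis tests (34) and (35) reduce to a one-line computation once ${Q}_{n}\left({\beta}_{0}\right)$ is available; the sketch below uses illustrative inputs and, to stay self-contained, implements the chi-square survival function only for even degrees of freedom via the standard Poisson-sum identity:

```python
import math

def chi2_sf_even(x, k):
    """P(chi2_k > x) for even df k, via the Poisson sum
    P(chi2_{2m} > x) = sum_{j=0}^{m-1} exp(-x/2) (x/2)^j / j!."""
    assert k % 2 == 0 and k > 0
    lam = x / 2.0
    term, total = math.exp(-lam), 0.0
    for j in range(k // 2):
        total += term
        term *= lam / (j + 1)
    return total

def mqd_simple_test(Qn_at_beta0, n, M, tau=None):
    """Simple-hypothesis test: version D (tau=None) uses (34),
    version S uses (35); degrees of freedom r = M."""
    if tau is None:
        stat = n * Qn_at_beta0                        # expression (34)
    else:
        stat = n * (tau / (tau + 1.0)) * Qn_at_beta0  # expression (35)
    return stat, chi2_sf_even(stat, M)

# Hypothetical values: Q_n(beta_0) = 0.05, n = 200, M = 10 points, tau = 4
stat, pval = mqd_simple_test(0.05, n=200, M=10, tau=4.0)
```

H_0 is rejected at level $\alpha$ when the p-value falls below $\alpha$; for odd degrees of freedom one would use a general implementation such as `scipy.stats.chi2.sf` instead.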

2.2.2. Composite Hypothesis

The quadratic distances ${Q}_{n}\left(\beta \right)$ can also be used for the construction of test statistics for the composite hypothesis; ${Q}_{n}\left(\beta \right)$ is as defined by expression (16) for version D and by expression (18) for version S. The null hypothesis can be stated as

H_{0}: the data come from a parametric model $\left\{{F}_{\beta}\right\}$ . The chi-square test statistics are given by

$n{Q}_{n}\left(\stackrel{^}{\beta}\right)\stackrel{L}{\to}{\chi}^{2}\left(r=M-m\right)$ , (36)

for version D and for version S,

$n\left(\frac{\tau}{\tau +1}\right){Q}_{n}\left({\stackrel{^}{\beta}}^{S}\right)\stackrel{L}{\to}{\chi}^{2}\left(r=M-m\right)$ (37)

where $\stackrel{^}{\beta}$ and ${\stackrel{^}{\beta}}^{S}$ are the vectors of MQD and SMQD estimators which minimize ${Q}_{n}\left(\beta \right)$ for version D and version S, respectively, assuming $M>m$ . To justify these asymptotic chi-square distributions, note that we have, for version D,

$n{Q}_{n}\left(\stackrel{^}{\beta}\right)=n{Q}_{n}^{a}\left(\stackrel{^}{\beta}\right)+{o}_{p}\left(1\right)$ . It suffices to consider the asymptotic distribution of $n{Q}_{n}^{a}\left(\stackrel{^}{\beta}\right)$ as we have the following equalities in distribution,

$n{Q}_{n}\left(\stackrel{^}{\beta}\right){=}^{d}n{Q}_{n}^{a}\left(\stackrel{^}{\beta}\right)=n{\Vert {L}_{n}\left(\stackrel{^}{\beta}\right)\Vert}^{2}=\sqrt{n}{{L}^{\prime}}_{n}\left(\stackrel{^}{\beta}\right)\stackrel{^}{{W}_{0}}\sqrt{n}{L}_{n}\left(\stackrel{^}{\beta}\right)$ , with ${L}_{n}\left(\beta \right)$ as given by expression (22). Also, using expressions (24)-(26), $\sqrt{n}{L}_{n}\left(\stackrel{^}{\beta}\right){=}^{d}\sqrt{n}{G}_{n}\left({\beta}_{0}\right)+\Gamma \sqrt{n}\left(\stackrel{^}{\beta}-{\beta}_{0}\right)$ , which can be re-expressed as $\sqrt{n}{L}_{n}\left(\stackrel{^}{\beta}\right){=}^{d}\sqrt{n}{G}_{n}\left({\beta}_{0}\right)-\Gamma {\left({\Gamma}^{\prime}{W}_{0}\Gamma \right)}^{-1}{\Gamma}^{\prime}{W}_{0}\sqrt{n}{G}_{n}\left({\beta}_{0}\right)$ , or equivalently, $\sqrt{n}{L}_{n}\left(\stackrel{^}{\beta}\right){=}^{d}\left(I-\Gamma {\left({\Gamma}^{\prime}{W}_{0}\Gamma \right)}^{-1}{\Gamma}^{\prime}{W}_{0}\right)\sqrt{n}{G}_{n}\left({\beta}_{0}\right)$ with

$\sqrt{n}{G}_{n}\left({\beta}_{0}\right)\stackrel{L}{\to}N\left(0,{W}_{0}^{-1}\right)$ .

We have $\sqrt{n}{L}_{n}\left(\stackrel{^}{\beta}\right)\stackrel{L}{\to}N\left(0,\Sigma \right)$ ,

$\Sigma =\left(I-\Gamma {\left({\Gamma}^{\prime}{W}_{0}\Gamma \right)}^{-1}{\Gamma}^{\prime}{W}_{0}\right){W}_{0}^{-1}\left(I-{W}_{0}\Gamma {\left({\Gamma}^{\prime}{W}_{0}\Gamma \right)}^{-1}{\Gamma}^{\prime}\right)$ and note that $\Sigma {W}_{0}=B$

and the trace of the matrix $B=I-\Gamma {\left({\Gamma}^{\prime}{W}_{0}\Gamma \right)}^{-1}{\Gamma}^{\prime}{W}_{0}$ is $trace\left(B\right)=M-m$ ; the rank of the matrix $B$ is also equal to its trace. The argument used is very similar to the one used for the Pearson’s statistics, see Luong and Thompson [19] (pp. 248-249).

Similarly, for version S, $n{Q}_{n}\left({\stackrel{^}{\beta}}^{S}\right){=}^{d}n{Q}_{n}^{a}\left({\stackrel{^}{\beta}}^{S}\right)=n{\Vert {L}_{n}\left({\stackrel{^}{\beta}}^{S}\right)\Vert}^{2}$ and $\sqrt{n}{L}_{n}\left({\stackrel{^}{\beta}}^{S}\right){=}^{d}\left(I-\Gamma {\left({\Gamma}^{\prime}{W}_{0}\Gamma \right)}^{-1}{\Gamma}^{\prime}{W}_{0}\right)\sqrt{n}{G}_{n}\left({\beta}_{0}\right)$ with $\sqrt{n}{G}_{n}\left({\beta}_{0}\right)\stackrel{L}{\to}N\left(0,\left(1+\frac{1}{\tau}\right){W}_{0}^{-1}\right)$ . This justifies the asymptotic chi-square distributions as given by expression (36) and expression (37).

3. Estimation and Model Testing Using Complete Data

3.1. Preliminaries: Statistical Functional and Its Influence Function

In Section 3.1 and Section 3.2, we shall define a rule for selecting the points $\left({s}_{l},{t}_{l}\right),l=1,\cdots ,M$ when complete data are available. Equivalently, we would like to define the cells used to group the data; random cells will be used, as the points $\left({s}_{l},{t}_{l}\right),l=1,\cdots ,M$ are constructed using quasi-Monte Carlo (QMC) numbers on the unit square multiplied by two chosen sample quantiles from the two marginal distributions. For minimum chi-square methods it appears to be difficult to have a rule for choosing the cells used to group the data, see discussions by Greenwood and Nikulin [16] (pp. 194-208). We need a few tools to develop such a rule. We shall define sample quantiles; statistics can then be viewed as functionals of the sample distribution, and their influence functions are also needed, as they allow us to find their asymptotic variances.

We shall define the pth sample quantile of a distribution as we shall need two sample quantiles from the marginal distributions together with QMC numbers to construct an approximation of an integral. Our quadratic distance based on selected points can be viewed as an approximation of a continuous version given by an integral.

From a bivariate distribution we have two marginal distributions $F\left(x\right)$ and $G\left(y\right)$ . The univariate sample pth quantile of the distribution $F\left(x\right)$ , assumed to be continuous, is based on the sample distribution function ${F}_{n}\left(x\right)=\frac{1}{n}{\sum}_{i=1}^{n}I\left[{x}_{i}\le x\right]$ and is defined to be ${\alpha}_{p}^{\left(n\right)}=\mathrm{inf}\left\{x|{F}_{n}\left(x\right)\ge p\right\}$ ; its model counterpart is given by ${\alpha}_{p}=\mathrm{inf}\left\{x|F\left(x\right)\ge p\right\}$ . We also use the notation ${\alpha}_{p}^{\left(n\right)}={F}_{n}^{-1}\left(p\right)$ and ${\alpha}_{p}={F}^{-1}\left(p\right)$ . We define similarly the qth sample quantile for the distribution $G\left(y\right)$ as ${\beta}_{q}^{\left(n\right)}={G}_{n}^{-1}\left(q\right)$ and its model counterpart ${\beta}_{q}={G}^{-1}\left(q\right)$ , with $0<p,q<1$ .
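Concretely, the sample pth quantile defined above is simply the order statistic of rank $\lceil np\rceil$; a minimal sketch:

```python
import math

def sample_quantile(data, p):
    """alpha_p^(n) = inf{x : F_n(x) >= p}, i.e. the order statistic
    of rank ceil(n * p), for 0 < p < 1."""
    xs = sorted(data)
    k = math.ceil(len(xs) * p)
    return xs[k - 1]
```

For example, with data (3, 1, 2, 5, 4) and p = 0.5 the definition gives 3, since $F_n(3)=0.6\ge 0.5$ while $F_n(2)=0.4<0.5$.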

The sample quantile functions ${\alpha}_{p}^{\left(n\right)}$ or ${\beta}_{q}^{\left(n\right)}$ can be viewed as statistical functionals of the form $T\left({H}_{n}\right)$ with ${H}_{n}={F}_{n}$ or ${H}_{n}={G}_{n}$ . The influence function of $T\left({H}_{n}\right)$ is a valuable tool for studying the asymptotic properties of the statistical functional and will be introduced below. Let H be the true distribution and let ${H}_{n}$ be the usual empirical distribution which estimates H; also let ${\delta}_{x}$ be the degenerate distribution at x, i.e., ${\delta}_{x}\left(u\right)=1$ if $u\ge x$ and ${\delta}_{x}\left(u\right)=0$ otherwise. The influence function of T viewed as a function of x, $I{C}_{T,H}\left(x\right)$ , is defined as a functional directional derivative at H in the direction of $\left({\delta}_{x}-H\right)$ , by letting

${H}_{\epsilon}=H+\epsilon \left({\delta}_{x}-H\right)$ , i.e., $I{C}_{T,H}\left(x\right)={\mathrm{lim}}_{\epsilon \to 0}\frac{T\left({H}_{\epsilon}\right)-T\left(H\right)}{\epsilon}={{T}^{\prime}}_{H}\left({\delta}_{x}-H\right)$ and

${{T}^{\prime}}_{H}$ is a linear functional.

Alternatively, it is easy to see that $I{C}_{T,H}\left(x\right)={\frac{\partial T\left({H}_{\epsilon}\right)}{\partial \epsilon}|}_{\epsilon =0}$ and this gives a convenient way to compute the influence function. It can be shown that the influence function of the pth sample quantile $T\left({H}_{n}\right)$ is given by

$I{C}_{T,H}\left(x\right)=\frac{p-1}{h\left({H}^{-1}\left(p\right)\right)},x<{H}^{-1}\left(p\right)$ and $I{C}_{T,H}\left(x\right)=\frac{p}{h\left({H}^{-1}\left(p\right)\right)},x>{H}^{-1}(p)$

with h being the density function of the distribution H, which is assumed to be absolutely continuous, see Huber [25] (p. 56), Hogg et al. [6] (p. 593). A statistical functional with bounded influence function is considered to be robust (B-robust); consequently, the sample quantiles are robust statistics.

Furthermore, as $I{C}_{T,H}\left(x\right)$ is based on a linear functional, the asymptotic variance of $T\left({H}_{n}\right)$ is simply $\frac{1}{n}V\left(I{C}_{T,H}\left(x\right)\right)$ with $V(.)$ being the variance of the expression inside the bracket, since in general we have $E\left(I{C}_{T,H}\left(x\right)\right)=0$ and the following representation when $I{C}_{T,H}\left(x\right)$ is bounded as a function of x,

$T\left({H}_{n}\right)=T\left(H\right)+{{T}^{\prime}}_{H}\left({H}_{n}-H\right)+{o}_{p}\left({n}^{-1/2}\right)$

and

${{T}^{\prime}}_{H}\left({H}_{n}-H\right)=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}{{T}^{\prime}}_{H}\left({\delta}_{{x}_{i}}-H\right)}$ ,

${{T}^{\prime}}_{H}\left({\delta}_{{x}_{i}}-H\right)=I{C}_{T,H}\left({x}_{i}\right)$ , see Hogg et al. [6] (p. 593). Consequently, for bounded influence functionals, by means of the central limit theorem (CLT) we have the following convergence in distribution,

$\sqrt{n}\left(T\left({H}_{n}\right)-T\left(H\right)\right)\stackrel{L}{\to}N\left(0,{\sigma}_{IC}^{2}\right)$ , ${\sigma}_{IC}^{2}=V\left(I{C}_{T,H}\left(x\right)\right)$ .
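As a quick numerical check (our own sketch, not part of the paper): for the unit-mean exponential distribution the median is $\mathrm{ln}2$ and $h\left({H}^{-1}\left(0.5\right)\right)=1/2$ , so the influence-function formulas above give ${\sigma}_{IC}^{2}=p\left(1-p\right)/h{\left({H}^{-1}\left(p\right)\right)}^{2}=1$ ; a small simulation recovers this asymptotic variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, p = 2000, 2000, 0.5
true_q = np.log(2.0)  # median of the Exp(1) distribution

z = np.empty(reps)
for r in range(reps):
    x = rng.exponential(1.0, size=n)
    k = int(np.ceil(n * p))          # inf{x : F_n(x) >= p}
    q_hat = np.sort(x)[k - 1]
    z[r] = np.sqrt(n) * (q_hat - true_q)

# Asymptotic variance p(1-p)/h(H^{-1}(p))^2 = 0.25/0.25 = 1
print(np.var(z))  # close to 1
```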

The influence function representation of a functional which depends only on one function such as ${H}_{n}$ is the equivalent of a Taylor expansion of a univariate function, and the influence function representation of a functional which depends on many functions is the equivalent of a Taylor expansion of a multivariate function with domain in a Euclidean space and range on the real line. We will encounter an example of functionals which depend on three functions ${S}_{n},{F}_{n},{G}_{n}$ in section (3.2).

Subsequently, we shall introduce the Halton sequences with bases ${b}_{1}=2$ and ${b}_{2}=3$ ; the first M terms are denoted by $\left({u}_{l},{v}_{l}\right)=\left({\phi}_{{b}_{1}}\left(l\right),{\phi}_{{b}_{2}}\left(l\right)\right),l=1,2,\cdots ,M$ , and we also use ${H}_{M}$ to denote the set of points $\left\{\left({u}_{l},{v}_{l}\right),l=1,2,\cdots ,M\right\}$ . The sequence of points belongs to the unit square $\left(0,1\right)\times \left(0,1\right)$ and can be obtained as follows.

For ${b}_{1}=2$ , we divide the interval $\left(0,1\right)$ into halves $\left({b}_{1}=2\right)$ , then into fourths $\left({b}_{1}^{2}={2}^{2}\right)$ , and so forth, to obtain the sequence $\frac{1}{2},\frac{1}{4},\frac{3}{4},\cdots $ .

For ${b}_{2}=3$ , we divide the interval $\left(0,1\right)$ into thirds $\left({b}_{2}=3\right)$ , then into ninths $\left({b}_{2}^{2}={3}^{2}\right)$ , and so forth, to obtain the sequence $\frac{1}{3},\frac{2}{3},\frac{1}{9},\cdots $ . Now pairing them up, we obtain the Halton sequence $\left(\frac{1}{2},\frac{1}{3}\right),\left(\frac{1}{4},\frac{2}{3}\right),\left(\frac{3}{4},\frac{1}{9}\right),\cdots $ . Matlab and R

have functions to generate the sequences; see Glasserman [26] (pp. 293-297) for the related pseudo-codes, and the seminal paper by Halton [27] . For the general principles of QMC methods, see Glasserman [26] (pp. 281-292). The Halton sequences, together with two chosen sample quantiles from the two marginal distributions, will allow us to choose points to match the bivariate empirical survival function with its model counterpart, as we shall have an artificial sample with values on the nonnegative quadrant with the use of two empirical quantiles from the marginal distributions. These points can be viewed as sample points from an artificial sample, and since they depend on sample quantiles, which are robust statistics, the artificial sample can be viewed as free of outliers, so the methods which make use of them will be robust.
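The radical-inverse construction just described can be coded in a few lines; the following Python sketch (ours, for illustration) reproduces the first terms quoted above.

```python
def radical_inverse(l, base):
    """lth term of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while l > 0:
        f /= base
        r += f * (l % base)
        l //= base
    return r

def halton(M, b1=2, b2=3):
    """First M terms (u_l, v_l) of the Halton sequence with bases b1, b2."""
    return [(radical_inverse(l, b1), radical_inverse(l, b2))
            for l in range(1, M + 1)]

print(halton(3))  # (1/2, 1/3), (1/4, 2/3), (3/4, 1/9) as floats
```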

Note that the Halton sequences are deterministic, but if we are used to integration by simulation we might want to think of the M terms as representing a quasi-random sample of size M from a bivariate uniform distribution, which can be useful for integrating a function of the form $A={\displaystyle {\int}_{0}^{1}{\displaystyle {\int}_{0}^{1}\psi \left(x,y\right)\text{d}x\text{d}y}}$ . Using the M terms of the Halton sequences, it can be approximated as $A\approx \frac{1}{M}{\displaystyle {\sum}_{l=1}^{M}\psi \left({u}_{l},{v}_{l}\right)}$ , which is similar to the sample mean from a random sample of size M.
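For instance (our own hypothetical example, not from the paper), taking $\psi \left(x,y\right)=xy$ , whose exact integral over the unit square is 1/4, the Halton average converges quickly:

```python
def radical_inverse(l, base):
    """lth term of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while l > 0:
        f /= base
        r += f * (l % base)
        l //= base
    return r

M = 1000
psi = lambda x, y: x * y  # exact integral over (0,1)^2 is 0.25
A = sum(psi(radical_inverse(l, 2), radical_inverse(l, 3))
        for l in range(1, M + 1)) / M
print(A)  # close to 0.25
```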

The observations are given by ${Z}_{i}={\left({X}_{i},{Y}_{i}\right)}^{\prime},i=1,\cdots ,n$ , iid with common bivariate distribution function ${K}_{\beta}\left(x,y\right)$ and survival function ${S}_{\beta}\left(x,y\right)$ . Let the two marginal distributions of ${K}_{\beta}\left(x,y\right)$ be denoted by ${F}_{\beta}\left(x\right)$ and ${G}_{\beta}\left(y\right)$ , and also let $F={F}_{{\beta}_{0}}$ and $G={G}_{{\beta}_{0}},K={K}_{{\beta}_{0}},S={S}_{{\beta}_{0}}$ .

Define the bivariate empirical distribution function, the analogue of the bivariate empirical survival function, as

${K}_{n}\left(x,y\right)=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}I\left[{x}_{i}\le x,{y}_{i}\le y\right]}$ .
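The empirical distribution and survival functions evaluated at a point are simple indicator averages; a minimal sketch (ours) in Python:

```python
import numpy as np

def K_n(x, y, xs, ys):
    """Bivariate empirical distribution function at (x, y)."""
    xs, ys = np.asarray(xs), np.asarray(ys)
    return np.mean((xs <= x) & (ys <= y))

def S_n(x, y, xs, ys):
    """Bivariate empirical survival function at (x, y)."""
    xs, ys = np.asarray(xs), np.asarray(ys)
    return np.mean((xs > x) & (ys > y))

# Three paired observations (1,1), (2,3), (3,2):
xs, ys = [1.0, 2.0, 3.0], [1.0, 3.0, 2.0]
print(K_n(2.0, 2.0, xs, ys))  # 1/3: only (1,1) has x<=2 and y<=2
print(S_n(1.5, 0.5, xs, ys))  # 2/3: (2,3) and (3,2) exceed both
```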

We might want to think that it admits a bivariate empirical density estimate ${k}_{n}\left(x,y\right)$ so that the following Cramér-von Mises distance expressions are equivalent,

${\displaystyle {\int}_{0}^{\infty}{\displaystyle {\int}_{0}^{\infty}{\left({S}_{n}\left(x,y\right)-{S}_{\beta}\left(x,y\right)\right)}^{2}\text{d}{K}_{n}\left(x,y\right)}}$

is equivalent to

${\displaystyle {\int}_{0}^{\infty}{\displaystyle {\int}_{0}^{\infty}{\left({S}_{n}\left(x,y\right)-{S}_{\beta}\left(x,y\right)\right)}^{2}{k}_{n}\left(x,y\right)\text{d}x\text{d}y}}$ .

For univariate Cramér-von Mises methods, see Luong and Blier-Wong [28] .

In the next section we shall give details on how to form a type of quasi-sample, or artificial sample, of size M from ${k}_{n}\left(x,y\right)$ using the Halton sequence of M terms and the pth sample quantiles of the marginal distributions F and G. This allows us to define the sequence $\left({s}_{l},{t}_{l}\right),l=1,\cdots ,M$ so that the above integrals can be approximated by the following finite sum of the type of an average of M terms,

$\frac{1}{M}{\displaystyle {\sum}_{l=1}^{M}{\left({S}_{n}\left({s}_{l},{t}_{l}\right)-{S}_{\beta}\left({s}_{l},{t}_{l}\right)\right)}^{2}}$ . (38)

We can see that expression (38) is an unweighted quadratic distance using the identity matrix $I$ as weight matrix instead of $\stackrel{^}{{W}_{0}}$ . The unweighted quadratic distance still produces consistent estimators, but possibly less efficient ones than the quadratic distance with $\stackrel{^}{{W}_{0}}$ for large samples; for finite samples the estimators obtained using $I$ might still have reasonable performance and yet be simple to obtain.
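To make expression (38) concrete, here is a sketch of the unweighted distance as a function of $\beta $ . It is our own illustration: the independent-exponential model ${S}_{\beta}\left(x,y\right)={\text{e}}^{-{\beta}_{1}x-{\beta}_{2}y}$ and the grid of selected points are hypothetical stand-ins, not the paper's model or point rule.

```python
import numpy as np

def S_n(x, y, xs, ys):
    """Bivariate empirical survival function at (x, y)."""
    return np.mean((xs > x) & (ys > y))

def unweighted_distance(beta, xs, ys, pts):
    """Expression (38): average squared gap between S_n and S_beta
    over the selected points (s_l, t_l)."""
    b1, b2 = beta
    S_beta = lambda x, y: np.exp(-b1 * x - b2 * y)  # hypothetical model
    return np.mean([(S_n(s, t, xs, ys) - S_beta(s, t)) ** 2
                    for s, t in pts])

rng = np.random.default_rng(1)
xs = rng.exponential(1.0, 5000)   # X ~ Exp(rate 1)
ys = rng.exponential(0.5, 5000)   # Y ~ Exp(rate 2), so true beta = (1, 2)
pts = [(0.2 * i, 0.1 * i) for i in range(1, 26)]  # M = 25 selected points
print(unweighted_distance((1.0, 2.0), xs, ys, pts))  # near zero at truth
print(unweighted_distance((3.0, 0.5), xs, ys, pts))  # much larger
```

The distance is minimized near the true parameter, which is what a minimum quadratic distance estimator exploits.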

The set of points $\left({s}_{l},{t}_{l}\right),l=1,\cdots ,M$ is the set of points proposed for forming optimum quadratic distances in case complete data is available. We shall see that these points depend on two quantiles chosen from the two marginal distributions and are therefore random; consequently, we might want to think that we end up working with random overlapping cells.

As for the minimum chi-square methods, if random cells stabilize into fixed cells, minimum chi-square methods in general have the same efficiency as methods based on the stabilized fixed cells, see Pollard [18] (pp. 324-326) and Moore and Spruill [17] for the notion of random cells. Quadratic distance methods share the same property: the chosen points are random, but it will be shown that they do stabilize, and therefore these random points can be viewed as fixed points since they do not affect the efficiencies of the estimators or the asymptotic distributions of goodness-of-fit test statistics which make use of them. These properties will be discussed and studied in more detail in the next section, along with the introduction of an artificial sample of size M given by the points $\left({s}_{l},{t}_{l}\right),l=1,\cdots ,M$ on the nonnegative quadrant, which gives us a guideline on how to choose points if complete data is available.

3.2. Halton Sequences and an Artificial Sample

From the M terms of the Halton sequences, we have $\left({u}_{l},{v}_{l}\right),l=1,\cdots ,M$ .

Let $\eta =\frac{1}{\mathrm{max}\left({u}_{l},l=1,\cdots ,M\right)}$ and $\varrho =\frac{1}{\mathrm{max}\left({v}_{l},l=1,\cdots ,M\right)}$ ; we can form the

artificial sample with elements $\left({s}_{l},{t}_{l}\right),l=1,\cdots ,M$ where ${s}_{l}=\eta {u}_{l}{F}_{n}^{-1}\left(p\right)$ , ${t}_{l}=\varrho {v}_{l}{G}_{n}^{-1}\left(p\right)$ and $0.90\le p\le 0.99$ . We can view $\left({s}_{l},{t}_{l}\right),l=1,\cdots ,M$ as a form of quasi-random sample on the nonnegative quadrant, and these are the points proposed for use in case complete data is available. In general, we might want to choose $20\le M\le 30$ if ${M}^{2}\le n$ , and if n is small we try to ensure $M\le \sqrt{n}$ . Consequently, as $n\to \infty $ , M remains bounded.
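Putting the pieces together, the artificial sample can be sketched as follows (our own Python illustration). By construction the largest ${s}_{l}$ equals ${F}_{n}^{-1}\left(p\right)$ , since $\eta \mathrm{max}\left({u}_{l}\right)=1$ , and similarly for the ${t}_{l}$ .

```python
import numpy as np

def radical_inverse(l, base):
    """lth term of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while l > 0:
        f /= base
        r += f * (l % base)
        l //= base
    return r

def artificial_sample(xs, ys, M=25, p=0.99):
    """Points (s_l, t_l) = (eta*u_l*F_n^{-1}(p), rho*v_l*G_n^{-1}(p))."""
    u = np.array([radical_inverse(l, 2) for l in range(1, M + 1)])
    v = np.array([radical_inverse(l, 3) for l in range(1, M + 1)])
    eta, rho = 1.0 / u.max(), 1.0 / v.max()
    n = len(xs)                       # paired data: len(xs) == len(ys)
    k = int(np.ceil(n * p))           # pth sample quantile via order statistics
    qx = np.sort(np.asarray(xs))[k - 1]
    qy = np.sort(np.asarray(ys))[k - 1]
    return eta * u * qx, rho * v * qy

rng = np.random.default_rng(2)
xs = rng.exponential(1.0, 500)
ys = rng.exponential(1.0, 500)
s, t = artificial_sample(xs, ys)
# the points cover (0, F_n^{-1}(p)] x (0, G_n^{-1}(p)]
print(s.max(), t.max())
```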

Since ${F}_{n}^{-1}\left(p\right)\stackrel{p}{\to}{F}^{-1}\left(p\right)$ and ${G}_{n}^{-1}\left(p\right)\stackrel{p}{\to}{G}^{-1}\left(p\right)$ , $\left({s}_{l},{t}_{l}\right)\stackrel{p}{\to}\left({s}_{l}^{0},{t}_{l}^{0}\right)$ with ${s}_{l}^{0}=\eta {u}_{l}{F}^{-1}\left(p\right)$ and ${t}_{l}^{0}=\varrho {v}_{l}{G}^{-1}\left(p\right)$ for $l=1,\cdots ,M$ , and the points $\left({s}_{l}^{0},{t}_{l}^{0}\right),l=1,\cdots ,M$ are non-random or fixed.

It turns out that quadratic distances for both versions constructed with the points $\left({s}_{l},{t}_{l}\right),l=1,\cdots ,M$ are asymptotically equivalent to quadratic distances using the points $\left({s}_{l}^{0},{t}_{l}^{0}\right),l=1,\cdots ,M$ , so that asymptotic theory developed with the points $\left({s}_{l},{t}_{l}\right),l=1,\cdots ,M$ considered to be fixed continues to be valid; we shall show that this is indeed the case. Similar conclusions have been established for minimum chi-square methods with the use of random cells, provided that these cells stabilize to fixed cells, see Theorem 2 given by Pollard [18] (pp. 324-326). We shall define a few notations to make the arguments easier to follow.

Define $\left\{\left(s,t\right)\right\}=\left\{\left({s}_{l},{t}_{l}\right),l=1,\cdots ,M\right\}$ and similarly let

$\left\{\left({s}^{0},{t}^{0}\right)\right\}=\left\{\left({s}_{l}^{0},{t}_{l}^{0}\right),l=1,\cdots ,M\right\}$ .

We work with the quadratic distance defined using $\left\{\left(s,t\right)\right\}$ , which leads us to consider quadratic distances of the form ${\Vert {G}_{n}\left(\beta \right)\Vert}^{2}$ as defined by expression (16) for version D and expression (18) for version S. Now, to emphasize that ${z}_{n}$ and ${z}_{\beta}$ depend on $\left\{\left(s,t\right)\right\}$ , we also use respectively the notations ${z}_{n}\left(\left\{\left(s,t\right)\right\}\right)$ and ${z}_{\beta}\left(\left\{\left(s,t\right)\right\}\right)$ , and let ${z}_{n}^{0}={z}_{n}\left(\left\{\left({s}^{0},{t}^{0}\right)\right\}\right)$ , ${z}_{\beta}^{0}={z}_{\beta}\left(\left\{\left({s}^{0},{t}^{0}\right)\right\}\right)$ .

It suffices to verify that results of Theorem 1, Theorem 2 and its corollary in section (2) continue to hold.

Now observe that for both versions D and S we have

$\left({z}_{n}-{z}_{\beta}\right)\stackrel{p}{\to}\left({z}_{{\beta}_{0}}^{0}-{z}_{\beta}^{0}\right)$ (39)

and

$\left({z}_{n}-{z}_{\beta}^{S}\right)\stackrel{p}{\to}\left({z}_{{\beta}_{0}}^{0}-{z}_{\beta}^{0}\right)$ (40)

so that $G\left(\beta \right)$ remains the same for both versions, since ${S}_{\beta}\left(x,y\right)$ is continuous with respect to $\left(x,y\right)$ , ${S}_{n}\left(x,y\right)\stackrel{p}{\to}{S}_{{\beta}_{0}}\left(x,y\right)$ and $\left\{\left(s,t\right)\right\}\stackrel{p}{\to}\left\{\left({s}^{0},{t}^{0}\right)\right\}$ . Clearly, $\stackrel{^}{{W}_{0}}\left(\left\{\left(s,t\right)\right\}\right)\stackrel{p}{\to}{W}_{0}\left(\left\{\left({s}^{0},{t}^{0}\right)\right\}\right)$ . It remains to establish $\sqrt{n}\left({z}_{n}-{z}_{\beta}\right)=\sqrt{n}\left({z}_{n}^{0}-{z}_{\beta}^{0}\right)+{o}_{p}\left(1\right)$ ; note that ${z}_{\beta}$ is random here instead

of being fixed as in section (2), as we are using an adaptive basis. Using the results on influence function representations for functionals discussed above, it suffices to show that the vector $\left({z}_{n}-{z}_{\beta}\right)$ has the same influence representation as the vector $\left({z}_{n}^{0}-{z}_{\beta}^{0}\right)$ to conclude that all the asymptotic results remain valid even when $\left\{\left(s,t\right)\right\}$ is random.

We shall derive the influence functions for elements of the vector of functionals $\left({z}_{n}-{z}_{\beta}\right)$ and show that it is the same for the corresponding elements of the vector of functionals $\left({z}_{n}^{0}-{z}_{\beta}^{0}\right)$ .

Let ${\delta}_{x,y}^{S}\left(u,v\right)$ be the degenerate bivariate survival function at the point $\left(x,y\right)$ , i.e., ${\delta}_{x,y}^{S}\left(u,v\right)=1$ if $u<x$ and $v<y$ and ${\delta}_{x,y}^{S}\left(u,v\right)=0$ ,otherwise.

Define the contaminated bivariate survival function

${S}_{\epsilon}\left(u,v\right)=S\left(u,v\right)+\epsilon \left({\delta}_{x,y}^{S}\left(u,v\right)-S\left(u,v\right)\right),0\le \epsilon \le 1$

and the contaminated marginal distribution

${F}_{{\epsilon}_{1}}\left(u\right)=F\left(u\right)+{\epsilon}_{1}\left({\delta}_{x}\left(u\right)-F\left(u\right)\right),0\le {\epsilon}_{1}\le 1.$

Similarly,

${G}_{{\epsilon}_{2}}\left(v\right)=G\left(v\right)+{\epsilon}_{2}\left({\delta}_{y}\left(v\right)-G\left(v\right)\right),0\le {\epsilon}_{2}\le 1.$

Now we consider $\left({z}_{jn}-{z}_{j{\beta}_{0}}\right)$ , the jth element of $\left({z}_{n}-{z}_{{\beta}_{0}}\right)$ ,

$\left({z}_{jn}-{z}_{j{\beta}_{0}}\right)={S}_{n}\left({s}_{j}\left({F}_{n}\right),{t}_{j}\left({G}_{n}\right)\right)-S\left({s}_{j}\left({F}_{n}\right),{t}_{j}\left({G}_{n}\right)\right)={T}_{j}\left({S}_{n},{F}_{n},{G}_{n}\right),j=1,\cdots ,M$ .

Clearly, ${T}_{j}\left({S}_{n},{F}_{n},{G}_{n}\right)$ depends on ${S}_{n},{F}_{n},{G}_{n}$ , but we can use the influence function representation as given by Reid [29] which allows the asymptotic representation with three influence functions given by

${\frac{\partial {T}_{j}\left({S}_{\epsilon},{F}_{{\epsilon}_{1}},{G}_{{\epsilon}_{2}}\right)}{\partial \epsilon}|}_{\epsilon ={\epsilon}_{1}={\epsilon}_{2}=0}=I\left[x>{s}_{j}^{0},y>{t}_{j}^{0}\right]-S\left({s}_{j}^{0},{t}_{j}^{0}\right)$ , which is bounded with respect to $\left(x,y\right)$ , ${\frac{\partial {T}_{j}\left({S}_{\epsilon},{F}_{{\epsilon}_{1}},{G}_{{\epsilon}_{2}}\right)}{\partial {\epsilon}_{1}}|}_{\epsilon ={\epsilon}_{1}={\epsilon}_{2}=0}$ and ${\frac{\partial {T}_{j}\left({S}_{\epsilon},{F}_{{\epsilon}_{1}},{G}_{{\epsilon}_{2}}\right)}{\partial {\epsilon}_{2}}|}_{\epsilon ={\epsilon}_{1}={\epsilon}_{2}=0}$ .

It is interesting to note that ${\frac{\partial {T}_{j}\left({S}_{\epsilon},{F}_{{\epsilon}_{1}},{G}_{{\epsilon}_{2}}\right)}{\partial {\epsilon}_{1}}|}_{\epsilon ={\epsilon}_{1}={\epsilon}_{2}=0}=0$ and ${\frac{\partial {T}_{j}\left({S}_{\epsilon},{F}_{{\epsilon}_{1}},{G}_{{\epsilon}_{2}}\right)}{\partial {\epsilon}_{2}}|}_{\epsilon ={\epsilon}_{1}={\epsilon}_{2}=0}=0$ ; so they do not contribute to the influence function representation of ${T}_{j}\left({S}_{n},{F}_{n},{G}_{n}\right)$ .

If we consider the jth term of $\left({z}_{n}^{0}-{z}_{{\beta}_{0}}^{0}\right)$ given by the functional ${G}_{j}\left({S}_{n}\right)={S}_{n}\left({s}_{j}^{0},{t}_{j}^{0}\right)-S\left({s}_{j}^{0},{t}_{j}^{0}\right)$ , which does not depend on ${F}_{n}$ and ${G}_{n}$ , we find that

${\frac{\partial {G}_{j}\left({S}_{\epsilon}\right)}{\partial \epsilon}|}_{\epsilon ={\epsilon}_{1}={\epsilon}_{2}=0}={\frac{\partial {T}_{j}\left({S}_{\epsilon},{F}_{{\epsilon}_{1}},{G}_{{\epsilon}_{2}}\right)}{\partial \epsilon}|}_{\epsilon ={\epsilon}_{1}={\epsilon}_{2}=0}.$

Therefore, all the asymptotic results of section (2) remain valid, and all these influence functions are bounded, so that inference methods making use of these functionals are robust in general. Furthermore, we can consider the inference procedures based on quadratic distances as if we had the non-random points $\left\{\left({s}^{0},{t}^{0}\right)\right\}$ and, if needed, they can be replaced by $\left\{\left(s,t\right)\right\}$ without affecting the asymptotic results already established in section (2).

4. An Illustration and a Simulation Study

For illustration, we consider the bivariate gamma model discussed in section (1.2), introduced by Mathai and Moschopoulos [11] (pp. 137-139), which has 5 parameters given by the vector $\beta ={\left({\alpha}_{0},{\alpha}_{1},{\alpha}_{2},{\beta}_{1},{\beta}_{2}\right)}^{\prime}$ . Mathai and Moschopoulos [11] (pp. 138-140) also give the following model moments obtained from the bivariate Laplace transform of the model; replacing them with the corresponding empirical moments and solving the resulting system of equations gives the method of moments (MM) estimators.

The moments being considered are given by

$E\left(X\right)=\left({\alpha}_{0}+{\alpha}_{1}\right){\beta}_{1},\text{\hspace{0.17em}}V\left(X\right)=\left({\alpha}_{0}+{\alpha}_{1}\right){\beta}_{1}^{2},\text{\hspace{0.17em}}E{\left(X-E\left(X\right)\right)}^{3}=2\left({\alpha}_{0}+{\alpha}_{1}\right){\beta}_{1}^{3}$ ,

they are the first three cumulants of the marginal distribution of X; the first three cumulants of the marginal distribution of Y are given by

$E\left(Y\right)=\left({\alpha}_{0}+{\alpha}_{2}\right){\beta}_{2},\text{\hspace{0.17em}}V\left(Y\right)=\left({\alpha}_{0}+{\alpha}_{2}\right){\beta}_{2}^{2},\text{\hspace{0.17em}}E{\left(Y-E\left(Y\right)\right)}^{3}=2\left({\alpha}_{0}+{\alpha}_{2}\right){\beta}_{2}^{3}$ ,

and the covariance between X and Y, $cov\left(X,Y\right)={\alpha}_{0}{\beta}_{1}{\beta}_{2}$ .

Let $\stackrel{\xaf}{X},{s}_{X}^{2},{m}_{3}^{\left(1\right)}$ be the first three sample cumulants of the marginal distribution of X and similarly let $\stackrel{\xaf}{Y},{s}_{Y}^{2},{m}_{3}^{\left(2\right)}$ be the first three sample cumulants of the marginal distribution of Y; finally, let ${s}_{XY}$ be the sample covariance between X and Y. The MM estimators for $\beta $ as given by Mathai and Moschopoulos [11] (p. 151) can be expressed as

$\stackrel{\u02dc}{{\beta}_{1}}=\frac{{m}_{3}^{\left(1\right)}}{2{s}_{X}^{2}},\text{\hspace{0.17em}}\stackrel{\u02dc}{{\beta}_{2}}=\frac{{m}_{3}^{\left(2\right)}}{2{s}_{Y}^{2}},\text{\hspace{0.17em}}\stackrel{\u02dc}{{\alpha}_{0}}=\frac{{s}_{XY}}{\stackrel{\u02dc}{{\beta}_{1}}\stackrel{\u02dc}{{\beta}_{2}}}$

and

$\stackrel{\u02dc}{{\alpha}_{1}}=\frac{{s}_{X}^{2}}{{\stackrel{\u02dc}{{\beta}_{1}}}^{2}}-\stackrel{\u02dc}{{\alpha}_{0}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\stackrel{\u02dc}{{\alpha}_{2}}=\frac{{s}_{Y}^{2}}{{\stackrel{\u02dc}{{\beta}_{2}}}^{2}}-\stackrel{\u02dc}{{\alpha}_{0}}$ .
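The MM estimators above are straightforward to compute from the sample cumulants. The sketch below is our own; to check the formulas it simulates from the trivariate-reduction construction $X={Z}_{0}+{Z}_{1}$ , $Y={Z}_{0}+{Z}_{2}$ with independent gamma components sharing a common scale, which matches the stated moments in the special case ${\beta}_{1}={\beta}_{2}$ .

```python
import numpy as np

def mm_estimates(x, y):
    """Method-of-moments estimators of (a0, a1, a2, b1, b2)
    from the first three sample cumulants and the sample covariance."""
    x, y = np.asarray(x), np.asarray(y)
    m3x = np.mean((x - x.mean()) ** 3)   # third sample cumulant of X
    m3y = np.mean((y - y.mean()) ** 3)
    sx2, sy2 = x.var(), y.var()
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    b1 = m3x / (2.0 * sx2)               # beta1 = m3 / (2 s^2)
    b2 = m3y / (2.0 * sy2)
    a0 = sxy / (b1 * b2)                 # alpha0 = s_XY / (b1 b2)
    a1 = sx2 / b1 ** 2 - a0
    a2 = sy2 / b2 ** 2 - a0
    return a0, a1, a2, b1, b2

rng = np.random.default_rng(3)
n, a0, a1, a2, b = 100_000, 2.0, 3.0, 4.0, 2.0
z0 = rng.gamma(a0, b, n)                 # shared gamma component
x = z0 + rng.gamma(a1, b, n)
y = z0 + rng.gamma(a2, b, n)
print(np.round(mm_estimates(x, y), 2))   # roughly (2, 3, 4, 2, 2)
```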

As the MM estimators depend on statistics involving polynomials of degree 3, they are not robust and are very sensitive to outliers.

As the bivariate density function for the model is complicated, we consider version S of the quadratic distance with M = 25, and we compare the performance of the MM estimators versus the SMQD estimators using the overall asymptotic relative efficiency criterion

$ARE=\frac{MSE\left(\stackrel{^}{{\alpha}_{0}^{S}}\right)+MSE\left(\stackrel{^}{{\alpha}_{1}^{S}}\right)+MSE\left(\stackrel{^}{{\alpha}_{2}^{S}}\right)+MSE\left(\stackrel{^}{{\beta}_{1}^{S}}\right)+MSE\left(\stackrel{^}{{\beta}_{2}^{S}}\right)}{MSE\left(\stackrel{\u02dc}{{\alpha}_{0}}\right)+MSE\left(\stackrel{\u02dc}{{\alpha}_{1}}\right)+MSE\left(\stackrel{\u02dc}{{\alpha}_{2}}\right)+MSE\left(\stackrel{\u02dc}{{\beta}_{1}}\right)+MSE\left(\stackrel{\u02dc}{{\beta}_{2}}\right)}$ .

The mean square errors (MSE) and ARE are estimated using simulated samples. For references on simulations, we find Ross [30] and Johnson [31] useful. The mean square error of an estimator $\stackrel{^}{\pi}$ for ${\pi}_{0}$ is defined as

$MSE\left(\stackrel{^}{\pi}\right)=E{\left(\stackrel{^}{\pi}-{\pi}_{0}\right)}^{2}$ .
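As a generic illustration of this criterion (our own example, unrelated to the bivariate gamma study), the ratio of simulated MSEs of the sample mean to the sample median for normal data recovers the known asymptotic relative efficiency $2/\pi \approx 0.64$ :

```python
import numpy as np

def mse(estimates, truth):
    """Monte Carlo estimate of E[(pi_hat - pi_0)^2]."""
    return np.mean((np.asarray(estimates) - truth) ** 2)

rng = np.random.default_rng(4)
reps, n, mu = 5000, 100, 0.0
means, medians = np.empty(reps), np.empty(reps)
for r in range(reps):
    x = rng.normal(mu, 1.0, n)
    means[r] = x.mean()
    medians[r] = np.median(x)

are = mse(means, mu) / mse(medians, mu)
print(are)  # close to 2/pi, about 0.64
```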

We fix M = 25 and take both sample quantiles to be the 0.99 quantiles; we encounter no problem obtaining the inverse of the matrix used as the optimum weight matrix estimated from data. If we fix M = 30, the matrix ${\stackrel{^}{\Omega}}_{0}$ is occasionally nearly singular and the R package gives a warning. It takes around one to two minutes to complete one run on a laptop computer; as we do not have access to large computers, we fix N = 50 replications, each with sample size n = 500. We observe that in general each individual MSE is smaller for the SMQD estimators than for the MM estimators, so overall the SMQD estimators are more efficient and robust than the MM estimators, confirming the asymptotic theory; also note that the MM estimators require complete data. The unweighted simulated quadratic distance estimators using $I$ perform almost as well as the estimators obtained using an optimum weight matrix. The parameter ranges are fixed as follows: we let ${\alpha}_{0}=2$ , ${\alpha}_{1}={\alpha}_{2}$ , ${\beta}_{1}={\beta}_{2}$ , with ${\alpha}_{1}=2,3,4,6,8,9,10$ and ${\beta}_{1}=2,3,4,6,8,9,10$ .

The chosen ranges are often encountered in actuarial science. We observe that in general, when the samples are not contaminated, the SMQD estimators are at least five times better than their counterparts; the results are summarized in the upper part of Table 1.

For the robustness study we contaminate the distributions used above with 5% of observations coming from a bivariate gamma with parameters outside the range being considered, used to generate outliers. The bivariate gamma distribution used to generate outliers has parameters given by the vector ${\beta}^{c}={\left({\alpha}_{0}^{c}=2,{\alpha}_{1}^{c}=2,{\alpha}_{2}^{c}=2,{\beta}_{1}^{c}=50,{\beta}_{2}^{c}=50\right)}^{\prime}$ . We observe that the SMQD estimators are at least 1000 times more efficient than the MM estimators; we display just one row, at the bottom of Table 1, to illustrate the order of efficiency gained by using SMQD methods. The limited study seems to point to better efficiency and robustness for the SMQD estimators than for the MM estimators; the MM estimators are very vulnerable to outliers, as expected. Although the study is limited, it tends to confirm the theoretical asymptotic results; more numerical and simulation work needs to be done to further study the performance of the proposed estimators, especially in finite samples.

5. Conclusion

Minimum quadratic distance methods offer a unified and numerically efficient

Table 1. Overall asymptotic relative efficiency comparisons between SMQD estimators and MM estimators using noncontaminated samples of size $n=500$ .

$ARE=\frac{MSE\left(\stackrel{^}{{\alpha}_{0}^{S}}\right)+MSE\left(\stackrel{^}{{\alpha}_{1}^{S}}\right)+MSE\left(\stackrel{^}{{\alpha}_{2}^{S}}\right)+MSE\left(\stackrel{^}{{\beta}_{1}^{S}}\right)+MSE\left(\stackrel{^}{{\beta}_{2}^{S}}\right)}{MSE\left(\stackrel{\u02dc}{{\alpha}_{0}}\right)+MSE\left(\stackrel{\u02dc}{{\alpha}_{1}}\right)+MSE\left(\stackrel{\u02dc}{{\alpha}_{2}}\right)+MSE\left(\stackrel{\u02dc}{{\beta}_{1}}\right)+MSE\left(\stackrel{\u02dc}{{\beta}_{2}}\right)}$

Legend: Tabulated values are estimates of overall ARE (SMQD vs MM) based on simulated samples. The lower part of the table reports overall asymptotic relative efficiency comparisons between SMQD estimators and MM estimators using contaminated samples of size $n=500$ with 5% contamination,

${\alpha}_{0}=2,{\beta}_{1}={\beta}_{2}=b,{\alpha}_{1}={\alpha}_{2}=a$ .

Legend: Tabulated values are estimates of overall ARE (SMQD vs MM) based on simulated contaminated samples. Samples are drawn from a contaminated model of the form p bivariate-Gamma $\left({\alpha}_{0}=2,{\beta}_{1}={\beta}_{2}=b,{\alpha}_{1}={\alpha}_{2}=a\right)$ + q bivariate-Gamma $\left({\alpha}_{0}=2,{\beta}_{1}={\beta}_{2}=50,{\alpha}_{1}={\alpha}_{2}=2\right)$ , p = 0.95, q = 0.05.

approach for estimation and model testing using grouped data, and they unify minimum chi-square methods with fixed or random cells based on the notion of projected score functions on finite and adaptive bases. The simulated version is especially suitable for handling bivariate distributions where numerical complications might arise. The rule on how to select points on the nonnegative quadrant for minimum quadratic distance methods is also more clearly defined, whereas it is not clear how to form random cells for minimum chi-square methods. The methods appear to be relatively simple to implement and yet more efficient and robust than methods of moments in general, especially when the model has more than three parameters.

Acknowledgements

The helpful and constructive comments of a referee, which led to an improvement of the presentation of the paper, and the support of the editorial staff of Open Journal of Statistics in processing the paper are all gratefully acknowledged.

Conflicts of Interest

The authors declare no conflicts of interest.

Cite this paper

*Open Journal of Statistics*,

**8**, 362-389. doi: 10.4236/ojs.2018.82024.

[1] Partrat, C. (1995) Compound Model for Two Dependent Kinds of Claims. Insurance: Mathematics and Economics, 15, 219-231. https://doi.org/10.1016/0167-6687(94)90796-X

[2] Gibbons, J.D. and Chakraborti, S. (2011) Nonparametric Statistical Inference. Fifth Edition, CRC Press, Boca Raton. https://doi.org/10.1007/978-3-642-04898-2_420

[3] Marshall, A.W. and Olkin, I. (1988) Families of Multivariate Distributions. Journal of the American Statistical Association, 83, 834-841. https://doi.org/10.1080/01621459.1988.10478671

[4] Klugman, S.A., Panjer, H.H. and Willmot, G.E. (2012) Loss Models: From Data to Decisions. Fourth Edition, Wiley, New York.

[5] Klugman, S.A. and Parsa, A. (1999) Fitting Bivariate Distributions with Copulas. Insurance: Mathematics and Economics, 24, 139-148. https://doi.org/10.1016/S0167-6687(98)00039-0

[6] Hogg, R.V., McKean, J.W. and Craig, A.T. (2013) Introduction to Mathematical Statistics. Pearson, New York.

[7] Mardia, K.V. (1967) Some Contributions to Contingency Type Bivariate Distributions. Biometrika, 56, 449-451.

[8] Mardia, K.V. (1970) Families of Bivariate Distributions. Griffin, London.

[9] Plackett, R.L. (1965) A Class of Bivariate Distributions. Journal of the American Statistical Association, 60, 516-522. https://doi.org/10.1080/01621459.1965.10480807

[10] Marshall, A.W. and Olkin, I. (1990) Multivariate Distributions Generated from Mixture of Convolution and Product Families. Lecture Notes and Monograph Series, Volume 16, Institute of Mathematical Statistics, 371-393. https://doi.org/10.1214/lnms/1215457574

[11] Mathai, A.M. and Moschopoulos, P.G. (1991) On a Multivariate Gamma. Journal of Multivariate Analysis, 39, 135-153. https://doi.org/10.1016/0047-259X(91)90010-Y

[12] Furman, E. (2008) On a Multivariate Gamma Distribution. Statistics and Probability Letters, 78, 2353-2360. https://doi.org/10.1016/j.spl.2008.02.012

[13] Hutchinson, T. and Lai, C. (1990) Continuous Bivariate Distributions Emphasizing Applications. Rumbsby, Adelaide.

[14] Luong, A. (2017) Maximum Entropy Empirical Likelihood Methods Based on Laplace Transforms for Nonnegative Continuous Distribution with Actuarial Applications. Open Journal of Statistics, 7, 459-482. https://doi.org/10.4236/ojs.2017.73033

[15] Carrasco, M. and Florens, J.-P. (2000) Generalization of GMM to a Continuum of Moment Conditions. Econometric Theory, 16, 797-834. https://doi.org/10.1017/S0266466600166010

[16] Greenwood, P. and Nikulin, M.S. (1996) A Guide to Chi-Square Testing. Wiley, New York.

[17] Moore, D.S. and Spruill, M.C. (1975) Unified Large Sample Theory of General Chi-Squared Statistics for Tests of Fit. Annals of Statistics, 3, 599-616. https://doi.org/10.1214/aos/1176343125

[18] Pollard, D. (1979) General Chi-Square Goodness-of-Fit Tests with Data Dependent Cells. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 50, 317-331. https://doi.org/10.1007/BF00534153

[19] Luong, A. and Thompson, M.E. (1987) Minimum Distance Methods Based on Quadratic Distances for Transforms. Canadian Journal of Statistics, 15, 239-251. https://doi.org/10.2307/3314914

[20] Duchesne, T., Rioux, J. and Luong, A. (1997) Minimum Cramér-von Mises Distance Methods for Complete and Grouped Data. Communications in Statistics, Theory and Methods, 26, 401-420. https://doi.org/10.1080/03610929708831923

[21] Newey, W.K. and McFadden, D. (1994) Large Sample Estimation and Hypothesis Testing. In: Engle, R.F. and McFadden, D., Eds., Handbook of Econometrics, Volume 4, Amsterdam.

[22] Pakes, A. and Pollard, D. (1989) Simulation and the Asymptotics of Optimization Estimators. Econometrica, 57, 1027-1057. https://doi.org/10.2307/1913622

[23] Davidson, R. and MacKinnon, J.G. (2004) Econometric Theory and Methods. Oxford University Press, Oxford.

[24] Luong, A., Bilodeau and Blier-Wong, C. (2018) Simulated Minimum Hellinger Distance Inference Methods for Count Data. Open Journal of Statistics, 8, 187-219. https://doi.org/10.4236/ojs.2018.81012

[25] Huber, P. (1981) Robust Statistics. Wiley, New York. https://doi.org/10.1002/0471725250

[26] Glasserman, P. (2003) Monte Carlo Methods in Financial Engineering. Springer, New York. https://doi.org/10.1007/978-0-387-21617-1

[27] Halton, J. (1960) On the Efficiency of Certain Quasi-Random Sequences of Points in Evaluating Multi-Dimensional Integrals. Numerische Mathematik, 2, 84-90. https://doi.org/10.1007/BF01386213

[28] Luong, A. and Blier-Wong, C. (2017) Simulated Minimum Cramér-von Mises Distance Estimation for Some Actuarial and Financial Models. Open Journal of Statistics, 7, 815-833. https://doi.org/10.4236/ojs.2017.75058

[29] Reid, N. (1981) Influence Functions for Censored Data. Annals of Statistics, 9, 78-92. https://doi.org/10.1214/aos/1176345334

[30] Ross, S.M. (2013) Simulation. Fifth Edition, Elsevier, New York.

[31] Johnson, M.E. (1986) Multivariate Statistical Simulation. Wiley, New York.

Copyright © 2020 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.