
Generalized Method of Moments and Generalized Estimating Functions Using Characteristic Function




*Open Journal of Statistics*, **10**, 581-599. doi: 10.4236/ojs.2020.103035.

1. Introduction and an Overview of GMM Procedures Based on Empirical Characteristic Function

1.1. Introduction

In many applied fields, data analysts often have to use distributions whose density functions have complicated forms. The densities are often expressed by means of series representations, yet the model characteristic functions are simpler and have closed-form expressions. In actuarial science, the compound Poisson distributions are classical examples, and in finance, the stable distributions fall into the same category. These are infinitely divisible, and many infinitely divisible distributions share the property of having much simpler characteristic functions than density functions. We shall examine this in more detail using the Generalized Normal Laplace (GNL) distribution, which is obtained by adding a normal component to a GAL random variable and hence can be viewed as created by a convolution operation. The GNL distribution was introduced by Reed [1], and we shall use it to motivate inference procedures based on characteristic functions instead of densities. Both the GNL and GAL distributions provide better fits to log-returns data in finance. The density of the GNL distribution is more complicated than the density of the GAL distribution; the book by Kotz et al. [2] gives a very comprehensive account of the GAL distribution. Obviously, if distributions with the properties just mentioned are used for modelling, we still want to be able to estimate the parameters and perform tests for validating the models used. Maximum likelihood (ML) procedures are often difficult to implement due to the lack of a closed form for the density functions of the models being used, and even when ML estimators are available, using them with Pearson chi-square statistics in general does not lead to distribution-free statistics, which further complicates ML procedures. Therefore, it is natural that we aim at a unified approach to estimation and testing.
Inferences developed in this paper will be unified using the GMM approach, but with the use of estimating function theory to select sample moments, using moment conditions extracted from the model characteristic function, or more precisely the square of its modulus, for constructing the GMM objective function. Subsequently, estimation and testing can be carried out. Before giving more details of the GMM procedures developed in this paper, how they differ from GMM procedures in the literature, and the advantages of the new procedures, we shall give more details about the GNL distribution, for which the use of the characteristic function appears more natural than the use of the model density. The GMM methods developed here are also less simulation intensive than the simulated methods which appear in the paper by Luong and Bilodeau [3] and faster to implement.

The GNL generalizes the GAL distribution. The density of the GAL can be obtained in closed form but depends on Bessel functions, see Kotz et al. [2] (page 189) and Luong [4]. Since the GAL distribution can also be obtained from the distribution of the difference of two gamma random variables, we shall first consider the characteristic function of the gamma distribution in Example 1; subsequently, in Example 2 and Example 3, we shall consider respectively the characteristic functions of the GAL and GNL distributions.

First, recall that the characteristic function $\varphi \left(s\right)$ of a random variable X is a complex function defined as

$\varphi \left(s\right)=E\left({\text{e}}^{isX}\right)$

and it can be expressed as

$\varphi \left(s\right)=Re\varphi \left(s\right)+iIm\varphi \left(s\right)$

with the real and imaginary parts of $\varphi \left(s\right)$ given respectively by $Re\varphi \left(s\right)$ and $Im\varphi \left(s\right)$. We can also use polar forms instead of algebraic forms to express complex numbers or functions.

The modulus of $\varphi \left(s\right)$ is defined as

$\left|\varphi \left(s\right)\right|={\left[{\left(Re\varphi \left(s\right)\right)}^{2}+{\left(Im\varphi \left(s\right)\right)}^{2}\right]}^{1/2}$

and the argument $\omega \left(s\right)$ of $\varphi \left(s\right)$ is defined as $\omega \left(s\right)=\mathrm{arctan}\frac{Im\varphi \left(s\right)}{Re\varphi \left(s\right)}$. This allows us to express $\varphi \left(s\right)=\left|\varphi \left(s\right)\right|{\text{e}}^{i\omega (s)}$ and, depending on the situation, the polar form of $\varphi \left(s\right)$ can be simpler to handle than its algebraic form, as illustrated in Example 1, which gives the characteristic function of the gamma random variable using both representations.

Example 1 (characteristic function of the gamma distribution)

It is well known that the characteristic function of the gamma distribution in algebraic form is given by

${\varphi}_{\gamma}\left(s\right)={\left(1-i\beta s\right)}^{-\rho},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\gamma ={\left(\rho ,\beta \right)}^{\prime}$

with $\beta $ being the scale parameter and $\rho $ being the shape parameter, $\beta >0$ and $\rho >0$. Now, before giving the polar form of ${\varphi}_{\gamma}\left(s\right)$, we first give the polar form of $z\left(s\right)=1-i\beta s$; using properties of the modulus and of the argument of a complex number, we then obtain the polar form of ${\varphi}_{\gamma}\left(s\right)$. The modulus of $z\left(s\right)$ is

$\left|z\left(s\right)\right|={\left[1+{\beta}^{2}{s}^{2}\right]}^{1/2}$

and the argument is $\mathrm{arg}\left(z\left(s\right)\right)=\mathrm{arctan}\left(-\beta s\right)$; since the function $\mathrm{arctan}\left(x\right)$ is odd, $\mathrm{arg}\left(z\left(s\right)\right)=-\mathrm{arctan}\left(\beta s\right)$. This gives the representation of $z\left(s\right)$ in polar form.

Using properties of the modulus, since ${\varphi}_{\gamma}\left(s\right)={\left(z\left(s\right)\right)}^{-\rho}$, we have $\left|{\varphi}_{\gamma}\left(s\right)\right|={\left|z\left(s\right)\right|}^{-\rho}$. With $z\left(s\right)={\left[1+{\beta}^{2}{s}^{2}\right]}^{1/2}{\text{e}}^{-i\mathrm{arctan}\left(\beta s\right)}$, it is easy to see that the characteristic function of the gamma distribution in polar form is given by ${\varphi}_{\gamma}\left(s\right)={\left[1+{\beta}^{2}{s}^{2}\right]}^{-\rho /2}{\text{e}}^{i\rho \mathrm{arctan}\left(\beta s\right)}$, $\gamma ={\left(\rho ,\beta \right)}^{\prime}$.
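As a quick numerical check (not part of the original derivation; the function names below are illustrative), the algebraic and polar forms of the gamma characteristic function can be compared pointwise:

```python
import cmath
import math

def phi_gamma_algebraic(s, rho, beta):
    # Algebraic form: (1 - i*beta*s)^(-rho)
    return (1 - 1j * beta * s) ** (-rho)

def phi_gamma_polar(s, rho, beta):
    # Polar form: [1 + beta^2 s^2]^(-rho/2) * exp(i * rho * arctan(beta*s))
    modulus = (1 + beta ** 2 * s ** 2) ** (-rho / 2)
    argument = rho * math.atan(beta * s)
    return modulus * cmath.exp(1j * argument)
```

Both forms agree to machine precision for any real $s$, confirming the polar representation above.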

Using the characteristic function of the gamma distribution in polar form, we can find the characteristic function of the GAL distribution which can be considered as the difference of two independent gamma random variables.

Example 2 (characteristic function of the GAL distribution)

Among the many representations in distribution of the GAL distribution, the one which makes use of two independent gamma random variables gives the following representation for the GAL random variable X, see Proposition 4.1.3 given by Kotz et al. [2] (p 183): $X{=}^{d}\theta +\frac{\sigma}{\kappa \sqrt{2}}{G}_{1}-\frac{\sigma \kappa}{\sqrt{2}}{G}_{2}$ with ${=}^{d}$ being

an equality in distribution; ${G}_{1}$ and ${G}_{2}$ are independent and identically distributed as G, which follows a gamma distribution with scale parameter equal to one and shape parameter $\rho $. The parameter $\theta $ is a location parameter with $-\infty <\theta <\infty $. The parameter $\kappa $ controls the skewness of the GAL distribution, $\kappa >0$, and if $\kappa =1$, the distribution is symmetric. The parameter $\sigma $ is a scale parameter with $\sigma >0$. Using the representation with gamma random variables, it is easy to see that by letting

$\frac{1}{\alpha}=\frac{\sigma}{\kappa \sqrt{2}}$ and $\frac{1}{\beta}=\frac{\sigma \kappa}{\sqrt{2}}$, we have $X{=}^{d}\theta +\frac{1}{\alpha}{G}_{1}-\frac{1}{\beta}{G}_{2}$, with $\kappa =1$ if and only if $\alpha =\beta $.

The characteristic function $\varphi \left(s\right)$ for the GAL distribution in polar form is given by

${\varphi}_{\gamma}\left(s\right)={\left(1+\frac{{s}^{2}}{{\alpha}^{2}}\right)}^{-\rho /2}{\left(1+\frac{{s}^{2}}{{\beta}^{2}}\right)}^{-\rho /2}{\text{e}}^{i\left(\theta s+\rho {\omega}_{1}\left(s\right)+\rho {\omega}_{2}\left(s\right)\right)},\quad \gamma ={\left(\theta ,\alpha ,\beta ,\rho \right)}^{\prime}$

with ${\omega}_{1}\left(s\right)=\mathrm{arctan}\left(\frac{s}{\alpha}\right)$ and ${\omega}_{2}\left(s\right)=\mathrm{arctan}\left(-\frac{s}{\beta}\right)$, using the characteristic function of the gamma distribution in polar form as given in Example 1. Instead of using $\theta $, replace it by $\rho \mu $; then

${\varphi}_{\gamma}\left(s\right)={\left(1+\frac{{s}^{2}}{{\alpha}^{2}}\right)}^{-\rho /2}{\left(1+\frac{{s}^{2}}{{\beta}^{2}}\right)}^{-\rho /2}{\text{e}}^{i\rho \left(\mu s+{\omega}_{1}\left(s\right)+{\omega}_{2}\left(s\right)\right)},\quad \gamma ={\left(\mu ,\alpha ,\beta ,\rho \right)}^{\prime}$

Using this parametrization, it is easier to connect with the GNL distribution through the representation of the GNL random variable as the convolution of a normal random variable with a GAL random variable. The GAL distribution is symmetric if $\alpha =\beta $ and its characteristic function can be further simplified, reducing to

${\varphi}_{\gamma}\left(s\right)={\left(1+\frac{{s}^{2}}{{\alpha}^{2}}\right)}^{-\rho}{\text{e}}^{i\rho \mu s}$ or ${\varphi}_{\gamma}\left(s\right)={\left(1+\frac{{s}^{2}}{{\alpha}^{2}}\right)}^{-\rho}{\text{e}}^{i\theta s}$.
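As a consistency check (illustrative function names, not from the paper), the polar form of the GAL characteristic function, with modulus ${\left(1+{s}^{2}/{\alpha}^{2}\right)}^{-\rho /2}{\left(1+{s}^{2}/{\beta}^{2}\right)}^{-\rho /2}$ and argument $\rho \left(\mu s+{\omega}_{1}\left(s\right)+{\omega}_{2}\left(s\right)\right)$, can be compared numerically with the product of the characteristic functions of the gamma components:

```python
import cmath
import math

def phi_gal_algebraic(s, mu, alpha, beta, rho):
    # From theta + G1/alpha - G2/beta with theta = rho*mu:
    # e^{i rho mu s} (1 - i s/alpha)^{-rho} (1 + i s/beta)^{-rho}
    return (cmath.exp(1j * rho * mu * s)
            * (1 - 1j * s / alpha) ** (-rho)
            * (1 + 1j * s / beta) ** (-rho))

def phi_gal_polar(s, mu, alpha, beta, rho):
    # Polar form with omega1(s) = arctan(s/alpha), omega2(s) = arctan(-s/beta)
    modulus = ((1 + s ** 2 / alpha ** 2) ** (-rho / 2)
               * (1 + s ** 2 / beta ** 2) ** (-rho / 2))
    omega1 = math.atan(s / alpha)
    omega2 = -math.atan(s / beta)
    return modulus * cmath.exp(1j * rho * (mu * s + omega1 + omega2))
```

The two evaluations agree to machine precision, which ties the polar form to the gamma-difference representation.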

Observe that it is often relatively simple to find the characteristic function of the convolution of two independent random variables using the characteristic functions of the components; also, the characteristic function of the GAL distribution does not depend on Bessel functions and is much simpler than its density, even though the GAL density has a closed-form expression. The GNL random variable X can be created by adding an independent normal random variable to a GAL random variable, which gives the following representation as introduced by Reed [1] (p 475),

$X{=}^{d}\sigma \sqrt{\rho}Z+\rho \mu +\frac{1}{\alpha}{G}_{1}-\frac{1}{\beta}{G}_{2}$

or equivalently,

$X{=}^{d}\sigma \sqrt{\rho}Z+\theta +\frac{1}{\alpha}{G}_{1}-\frac{1}{\beta}{G}_{2}$.

Z is a standard normal random variable, independent of ${G}_{1}$ and ${G}_{2}$, with ${G}_{1}$ and ${G}_{2}$ defined as in Example 2. Since the characteristic function of the standard normal random variable is ${\text{e}}^{-\frac{1}{2}{s}^{2}}$ and the characteristic function of the GAL distribution has already been obtained, the polar form for the GNL distribution can also be obtained; it is given in the following example.

Example 3 (characteristic function of the GNL distribution)

From the representation of the GNL random variable, it is easy to see that the characteristic function of the GNL distribution in algebraic form is

${\varphi}_{\gamma}\left(s\right)={\text{e}}^{\rho i\mu s-\rho {\sigma}^{2}\frac{{s}^{2}}{2}}{\left(\frac{1}{1-is/\alpha}\right)}^{\rho}{\left(\frac{1}{1+is/\beta}\right)}^{\rho}$

which is given by Reed [1] (p 474) and in polar form

${\varphi}_{\gamma}\left(s\right)={\text{e}}^{-\rho {\sigma}^{2}\frac{{s}^{2}}{2}}{\left(1+\frac{{s}^{2}}{{\alpha}^{2}}\right)}^{-\rho /2}{\left(1+\frac{{s}^{2}}{{\beta}^{2}}\right)}^{-\rho /2}{\text{e}}^{i\rho \left(\mu s+{\omega}_{1}\left(s\right)+{\omega}_{2}\left(s\right)\right)}$

with ${\omega}_{1}\left(s\right)$ and ${\omega}_{2}\left(s\right)$ as defined in Example 2.

Using the modulus of ${\varphi}_{\gamma}\left(s\right)$,

$\left|{\varphi}_{\gamma}\left(s\right)\right|={\text{e}}^{-\rho {\sigma}^{2}\frac{{s}^{2}}{2}}{\left(1+\frac{{s}^{2}}{{\alpha}^{2}}\right)}^{-\rho /2}{\left(1+\frac{{s}^{2}}{{\beta}^{2}}\right)}^{-\rho /2}$,

we also have

${\varphi}_{\gamma}\left(s\right)=\left|{\varphi}_{\gamma}\left(s\right)\right|{\text{e}}^{i\rho \left(\mu s+{\omega}_{1}\left(s\right)+{\omega}_{2}\left(s\right)\right)},\quad \gamma ={\left(\mu ,\alpha ,\beta ,{\sigma}^{2},\rho \right)}^{\prime}$

As for the GAL distribution, if $\alpha =\beta $, the GNL distribution is symmetric and its characteristic function simplifies to

${\varphi}_{\gamma}\left(s\right)={\text{e}}^{-\rho {\sigma}^{2}\frac{{s}^{2}}{2}}{\left(1+\frac{{s}^{2}}{{\alpha}^{2}}\right)}^{-\rho}{\text{e}}^{i\theta s}$ with $\theta =\rho \mu $.

Reed [1], using the characteristic function, also established expressions for the k-th cumulants ${c}_{k},k=1,2,\cdots $ with

${c}_{1}=\rho \left(\mu +\frac{1}{\alpha}-\frac{1}{\beta}\right),{c}_{2}=\rho \left({\sigma}^{2}+\frac{1}{{\alpha}^{2}}+\frac{1}{{\beta}^{2}}\right)$,

${c}_{r}=\rho \left(r-1\right)!\left(\frac{1}{{\alpha}^{r}}+{\left(-1\right)}^{r}\frac{1}{{\beta}^{r}}\right),\quad r\ge 3$.
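The first two cumulant formulas can be checked numerically (illustrative sketch, standard numerical differentiation) via ${c}_{k}={i}^{-k}{\text{d}}^{k}\mathrm{log}{\varphi}_{\gamma}\left(s\right)/\text{d}{s}^{k}$ evaluated at $s=0$, using the algebraic form of the GNL characteristic function:

```python
import cmath

def log_phi_gnl(s, mu, alpha, beta, rho, sigma2):
    # Log characteristic function of the GNL distribution (algebraic form, Example 3)
    return (1j * rho * mu * s - rho * sigma2 * s ** 2 / 2
            - rho * cmath.log(1 - 1j * s / alpha)
            - rho * cmath.log(1 + 1j * s / beta))

def first_two_cumulants(mu, alpha, beta, rho, sigma2, h=1e-4):
    # Central-difference approximations of the first two derivatives at s = 0
    f = lambda s: log_phi_gnl(s, mu, alpha, beta, rho, sigma2)
    c1 = ((f(h) - f(-h)) / (2 * h) / 1j).real
    c2 = ((f(h) - 2 * f(0) + f(-h)) / (h ** 2) / (1j ** 2)).real
    return c1, c2
```

The results match $\rho \left(\mu +1/\alpha -1/\beta \right)$ and $\rho \left({\sigma}^{2}+1/{\alpha}^{2}+1/{\beta}^{2}\right)$ up to discretization error.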

The GNL distribution provides a better fit to log-returns data than the GAL distribution, and both distributions provide much better fits to log-returns data than the normal distribution. In addition, all integer moments exist for these distributions, and they are also infinitely divisible like the normal distribution, which makes them good alternatives to the normal distribution. From the characteristic function of the GNL distribution, it is easy to see that the real and imaginary parts of the characteristic function are given respectively by

$Re{\varphi}_{\gamma}\left(s\right)=\left|{\varphi}_{\gamma}\left(s\right)\right|\mathrm{cos}\left(\rho \left(\mu s+{\omega}_{1}\left(s\right)+{\omega}_{2}\left(s\right)\right)\right)$

and

$Im{\varphi}_{\gamma}\left(s\right)=\left|{\varphi}_{\gamma}\left(s\right)\right|\mathrm{sin}\left(\rho \left(\mu s+{\omega}_{1}\left(s\right)+{\omega}_{2}\left(s\right)\right)\right)$.

1.2. Empirical Characteristic Function and GMM Procedures in the Literature

For inferences, we assume that we have a random sample of size n consisting of ${X}_{1},\cdots ,{X}_{n}$, independent and identically distributed continuous random variables distributed as X, with common characteristic function ${\varphi}_{\gamma}\left(s\right)$; $\gamma $ is a p by 1 vector of parameters of interest with $\gamma ={\left({\gamma}_{1},\cdots ,{\gamma}_{p}\right)}^{\prime}$, and ${\gamma}_{0}$ is the vector of true parameters with ${\gamma}_{0}\in \Omega $, the parameter space, which is assumed to be compact. The number of parameters in the model is p. In fact, most inference procedures based on the characteristic function proposed in the literature remain valid if X has a discontinuity point with mass at the origin, as in the case of compound distributions. If X is discrete, it is often preferable to work with the probability generating function rather than the characteristic function; for related procedures using the probability generating function, see Luong [5].

Commonly proposed GMM procedures in the literature are based on the empirical characteristic function, the counterpart of the theoretical one, defined as

${\varphi}_{n}\left(s\right)=Re{\varphi}_{n}\left(s\right)+iIm{\varphi}_{n}\left(s\right)$

with the real and imaginary parts given respectively by

$Re{\varphi}_{n}\left(s\right)=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}\mathrm{cos}\left(s{X}_{i}\right)}$ and $Im{\varphi}_{n}\left(s\right)=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}\mathrm{sin}\left(s{X}_{i}\right)}$.
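A minimal sketch of computing the real and imaginary parts of the empirical characteristic function (the function name is illustrative):

```python
import math

def ecf(sample, s):
    # Real and imaginary parts of the empirical characteristic function at s:
    # (1/n) sum cos(s*X_i) and (1/n) sum sin(s*X_i)
    n = len(sample)
    re = sum(math.cos(s * x) for x in sample) / n
    im = sum(math.sin(s * x) for x in sample) / n
    return re, im
```

At $s=0$ this returns $\left(1,0\right)$ for any sample, matching ${\varphi}_{n}\left(0\right)=1$.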

For example, the K-L procedures proposed by Feuerverger and McDunnough [6] (p 20-23) can be viewed as equivalent to GMM procedures based on 2k sample moments of the following form, with the first k sample moments making use of the chosen points ${s}_{1},\cdots ,{s}_{k}$, the real part of the empirical characteristic function and the real part of the model characteristic function:

${g}_{1}\left(\gamma \right)={\left(\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}\mathrm{cos}\left({s}_{1}{X}_{i}\right)}-Re{\varphi}_{\gamma}\left({s}_{1}\right),\cdots ,\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}\mathrm{cos}\left({s}_{k}{X}_{i}\right)}-Re{\varphi}_{\gamma}\left({s}_{k}\right)\right)}^{\prime}$

and the rest of the moments are similarly formed but based on the imaginary part of the empirical characteristic function and the imaginary part of the model characteristic function,

${g}_{2}\left(\gamma \right)={\left(\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}\mathrm{sin}\left({s}_{1}{X}_{i}\right)}-Im{\varphi}_{\gamma}\left({s}_{1}\right),\cdots ,\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}\mathrm{sin}\left({s}_{k}{X}_{i}\right)}-Im{\varphi}_{\gamma}\left({s}_{k}\right)\right)}^{\prime}$.

By letting

$g\left(\gamma \right)=\left(\begin{array}{c}{g}_{1}\left(\gamma \right)\\ {g}_{2}\left(\gamma \right)\end{array}\right)$

and define S to be the limiting covariance matrix of the vector $\sqrt{n}g\left(\gamma \right)$ under the true parameter ${\gamma}_{0}$ as $n\to \infty $. Let $\stackrel{^}{S}$ be a preliminary consistent estimate of $S$, from which we obtain a consistent estimate ${\stackrel{^}{S}}^{-1}$ of the inverse of S. The related GMM objective function $Q\left(\gamma \right)$ can then be formed, i.e., $Q\left(\gamma \right)={g}^{\prime}\left(\gamma \right){\stackrel{^}{S}}^{-1}g\left(\gamma \right)$, and minimizing $Q\left(\gamma \right)$ gives the vector of K-L estimators.
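The quadratic form $Q\left(\gamma \right)$ can be sketched as follows (a generic implementation, not tied to any particular choice of moments; the function name is illustrative):

```python
import numpy as np

def gmm_objective(g, S_hat):
    # Q(gamma) = g(gamma)' S_hat^{-1} g(gamma); solving the linear system
    # avoids forming the explicit inverse of S_hat
    g = np.asarray(g, dtype=float)
    S_hat = np.asarray(S_hat, dtype=float)
    return float(g @ np.linalg.solve(S_hat, g))
```

Minimizing this quadratic form over $\gamma $ with any numerical optimizer yields the GMM estimators.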

The following expectation properties are quite obvious, and the elements of the covariance matrix of $g\left(\gamma \right)$ can be found explicitly using the following identities, which are established using properties of trigonometric functions:

$E\left(Re{\varphi}_{n}\left(s\right)\right)=Re{\varphi}_{\gamma}\left(s\right)$, $E\left(Im{\varphi}_{n}\left(s\right)\right)=Im{\varphi}_{\gamma}\left(s\right)$,

$Cov\left(Re{\varphi}_{n}\left(s\right),Re{\varphi}_{n}\left(t\right)\right)=\left(Re{\varphi}_{\gamma}\left(s+t\right)+Re{\varphi}_{\gamma}\left(t-s\right)-2Re{\varphi}_{\gamma}\left(s\right)Re{\varphi}_{\gamma}\left(t\right)\right)/2n$,

$Cov\left(Im{\varphi}_{n}\left(s\right),Im{\varphi}_{n}\left(t\right)\right)=\left(Re{\varphi}_{\gamma}\left(t-s\right)-Re{\varphi}_{\gamma}\left(t+s\right)-2Im{\varphi}_{\gamma}\left(s\right)Im{\varphi}_{\gamma}\left(t\right)\right)/2n$,

$Cov\left(Re{\varphi}_{n}\left(t\right),Im{\varphi}_{n}\left(s\right)\right)=\left(Im{\varphi}_{\gamma}\left(t+s\right)-Im{\varphi}_{\gamma}\left(t-s\right)-2Im{\varphi}_{\gamma}\left(s\right)Re{\varphi}_{\gamma}\left(t\right)\right)/2n$ (1)

The above identities are the results of Proposition 3.1 given by Groparu-Cojocaru and Doray [7] (p 1992); see also Koutrouvelis [8] (p 919).
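The identities in (1) rest on the product-to-sum formulas for cosine and sine; a quick numerical check of those formulas (illustrative names, not from the paper):

```python
import math

def product_to_sum_gaps(s, t, x):
    # cos(sx)cos(tx) = [cos((t+s)x) + cos((t-s)x)] / 2
    gap_cc = (math.cos(s * x) * math.cos(t * x)
              - 0.5 * (math.cos((t + s) * x) + math.cos((t - s) * x)))
    # sin(sx)sin(tx) = [cos((t-s)x) - cos((t+s)x)] / 2
    gap_ss = (math.sin(s * x) * math.sin(t * x)
              - 0.5 * (math.cos((t - s) * x) - math.cos((t + s) * x)))
    return gap_cc, gap_ss
```

Taking expectations of these products under ${\varphi}_{\gamma}$ and subtracting the products of the means yields exactly the covariance expressions in (1).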

Observe that for the K-L procedures, or GMM procedures based on the above 2k sample moments, we need to fix the points ${s}_{1},\cdots ,{s}_{k}$ at which the real and imaginary parts of the model characteristic function ${\varphi}_{\gamma}\left(s\right)$ are used, and there is still a lack of general criteria on how to choose these points; see the discussion by Tran [9]. It is recommended that these points be equally spaced, i.e., ${s}_{j}=j\tau $, and the optimum choice for $\tau $ has the property that $\tau \to 0$ as $k\to \infty $.

Koutrouvelis [8] (p 920) has shown that, in general, the variances of $\mathrm{cos}\left(sX\right)$ and $\mathrm{sin}\left(sX\right)$ have the following properties:

$V\left(\mathrm{cos}\left(sX\right)\right)=nV\left(Re{\varphi}_{n}\left(s\right)\right)\to 0$ and $V\left(\mathrm{sin}\left(sX\right)\right)=nV\left(Im{\varphi}_{n}\left(s\right)\right)\to 0$ as $s\to 0$,

and argued that we should select points in the range $\left(0,\pi \right)$, with points near 0 being those to focus on when extracting information from the model characteristic function. Although the K-L procedures have good potential for generating efficient estimators, they are often numerically difficult to implement: the studies of Groparu-Cojocaru and Doray [7] (p 1996) have shown that in practice we need at least $k\ge 10$, which means at least 20 sample moments are needed for the procedures to attain good efficiency, and in these situations the matrices $S$ and $\stackrel{^}{S}$ are often nearly singular, so inverting such large matrices often creates difficulties. We shall see that the GMM procedures proposed in this paper, which use the theory of estimating functions to select sample moments, need fewer than 10 sample moments in general instead of at least 20. In addition, the number of points from the model characteristic function used to construct sample moments goes to infinity as the sample size $n\to \infty $.

The proposed GMM procedures, with the selection of sample moments based on estimating function theory, will be developed in the next section. From the original sample we also obtain a transformed sample of n observations which are still independent, and we work with both the original and the transformed sample to construct moment conditions.

Carrasco and Florens [10] have introduced GMM methods with a continuum of moment conditions, and Carrasco and Kotchoni [11] have used the empirical characteristic function to develop GMM procedures based on objective functions which match the empirical characteristic function with its model counterpart at points belonging to a continuum. Using a continuum of moment conditions is one solution to the arbitrariness of selecting the points at which the characteristic functions are evaluated, but such procedures might be difficult for practitioners to implement. Our procedures remain simple and closer to the classical GMM procedures with a finite number of moments, but we shall use estimating function theory to select sample moments; the points will be equally spaced in the interval $\left(0,\pi \right)$, and their number will go to infinity as $n\to \infty $.

In fact, we use the points

${s}_{i}=\frac{\pi}{n}\left(i-\frac{1}{2}\right),i=1,\cdots ,n$

and observe that the spacing $\frac{\pi}{n}\to 0$ as $n\to \infty $. Observe also that this spacing mimics the behavior of the optimum spacing, and numerically it bypasses the difficulty of having to find the value of the optimum spacing explicitly by minimizing the determinant of the asymptotic covariance matrix of the K-L estimators, as would be required if the K-L procedures were used.

For the proposed methods, we need the additional assumption that the first four integer moments of the model distribution exist but in practical situations, this assumption is often met.

The proposed procedures make use of sample moments which focus on extracting information from the square of the modulus of the characteristic function

${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$ using the points ${s}_{i}=\frac{\pi}{n}\left(i-\frac{1}{2}\right),i=1,\cdots ,n$ and clearly there will be as many points as the sample size.

For a model with a location parameter $\mu $, the modulus $\left|{\varphi}_{\gamma}\left(s\right)\right|$, and consequently ${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$, will not depend on the location parameter $\mu $, and we need two additional sample moments besides the sample moments which make use of ${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$ to handle this situation. The example given below helps clarify the problem that we might encounter when the modulus $\left|{\varphi}_{\gamma}\left(s\right)\right|$ or its square ${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$ is used for inferences.

For the normal distribution with the vector of parameters $\gamma ={\left(\mu ,{\sigma}^{2}\right)}^{\prime}$, the characteristic function is

${\varphi}_{\gamma}\left(s\right)={\text{e}}^{i\mu s-\frac{{\sigma}^{2}{s}^{2}}{2}}$

and its modulus is

$\left|{\varphi}_{\gamma}\left(s\right)\right|={\text{e}}^{-\frac{{\sigma}^{2}{s}^{2}}{2}}$

and the square of the modulus is

${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}={\text{e}}^{-{\sigma}^{2}{s}^{2}}$.

The location parameter $\mu $ is missing in $\left|{\varphi}_{\gamma}\left(s\right)\right|$ and consequently in ${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$. This illustrates that one parameter may be left out if inference procedures are based solely on $\left|{\varphi}_{\gamma}\left(s\right)\right|$ or ${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$. It also means that GMM procedures which make use of sample moments formed from $\left|{\varphi}_{\gamma}\left(s\right)\right|$ or ${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$ should include other sample moments to account for the parameters left out; since in general only the location parameter of the model is affected, using two additional sample moments to take care of the parameters left out, besides the sample moments which make use of ${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$, makes the GMM procedures viable. As mentioned, there is often at most one parameter of the model not included in ${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$, so the proposed procedures will make use of additional moments based on the mean and variance of the model distribution, besides the moments based on

${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$ using ${s}_{i}=\frac{\pi}{n}\left(i-\frac{1}{2}\right),i=1,\cdots ,n$
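The identifiability issue above is easy to verify numerically; a minimal sketch for the normal model (illustrative names):

```python
import cmath
import math

def phi_normal(s, mu, sigma2):
    # Characteristic function of N(mu, sigma2): exp(i*mu*s - sigma2*s^2/2)
    return cmath.exp(1j * mu * s - sigma2 * s ** 2 / 2)

def mod_sq(s, mu, sigma2):
    # Squared modulus |phi(s)|^2 = exp(-sigma2*s^2): the location mu drops out
    return abs(phi_normal(s, mu, sigma2)) ** 2
```

Evaluating `mod_sq` at any two values of `mu` gives identical results, so moments built from ${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$ alone carry no information about $\mu $.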

We hope to achieve good efficiency yet preserve simplicity by not using more than ten sample moments; this is achieved by using the theory of estimating functions to build sample moments which make use of

${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$ using ${s}_{i}=\frac{\pi}{n}\left(i-\frac{1}{2}\right),i=1,\cdots ,n$

Therefore, the procedures are relatively simple to implement, and everything can be done within the classical context of GMM procedures without having to rely on a continuum of moment conditions, which practitioners might find difficult to implement. The use of the theory of estimating functions appears to be new and is not part of the GMM procedures proposed in the literature, which focus on the use of the empirical characteristic function. The new procedures also make use of transformed observations besides the original observations.

The paper is organized as follows. Section 1 introduces the commonly used GMM procedures, which are based on the empirical characteristic function; the approach taken here does not use the empirical characteristic function and relies on estimating function theory to select sample moments based on the square of the modulus of the model characteristic function. The new GMM procedures are introduced in Section 2.1, with the selected sample moments chosen to provide efficiency for GMM estimation. In Section 2.2, the chi-square test for moment restrictions, which can be interpreted as a goodness-of-fit test, is presented. Section 3 gives illustrations of implementing the methods using the GNL and normal distributions; the methods appear to be relatively simple to implement yet very efficient based on our limited studies, and appear to be better alternatives to the method of moments (MOM) in general.

2. The Proposed GMM Procedures Based on Theory of Estimating Functions

2.1. Estimation

The theory of GMM procedures is well established in the literature, see Martin et al. [12], Hayashi [13], Hamilton [14], but it assumes that the sample moments are already selected. In this paper, we focus on how to select moments for models whose characteristic function is simple and has a closed form but whose density is complicated, and we do not make use of the classical empirical characteristic function as other GMM procedures proposed in the literature do. Here, we use the square of the modulus of the model characteristic function to build sample moments, and since the modulus might not include all the parameters of the model, such as when there is a location parameter, we shall also include two moments based on the mean and variance of the model distribution to complete the set of sample moments; for practical applications, no more than ten sample moments are needed for the proposed GMM procedures.

We shall define the sample moments focusing on ${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$. Let us consider the basic estimating functions

$h\left({x}_{i},{s}_{i};\gamma \right)=Re{\varphi}_{\gamma}\left({s}_{i}\right)\left(\mathrm{cos}\left({s}_{i}{X}_{i}\right)-Re{\varphi}_{\gamma}\left({s}_{i}\right)\right)+Im{\varphi}_{\gamma}\left({s}_{i}\right)\left(\mathrm{sin}\left({s}_{i}{X}_{i}\right)-Im{\varphi}_{\gamma}\left({s}_{i}\right)\right),\quad i=1,\cdots ,n$

Clearly, the basic estimating functions are unbiased, i.e.,

${E}_{\gamma}\left(h\left({x}_{i},{s}_{i};\gamma \right)\right)=0,i=1,\cdots ,n$

Using ${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}={\left(Re{\varphi}_{\gamma}\left(s\right)\right)}^{2}+{\left(Im{\varphi}_{\gamma}\left(s\right)\right)}^{2}$, we can also express

$h\left({x}_{i},{s}_{i};\gamma \right)=Re{\varphi}_{\gamma}\left({s}_{i}\right)\mathrm{cos}\left({s}_{i}{X}_{i}\right)+Im{\varphi}_{\gamma}\left({s}_{i}\right)\mathrm{sin}\left({s}_{i}{X}_{i}\right)-{\left|{\varphi}_{\gamma}\left({s}_{i}\right)\right|}^{2},\quad i=1,\cdots ,n$ (2)
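The algebraic equivalence of the definition of $h$ and expression (2) is immediate; a numerical check (illustrative names, with $Re{\varphi}_{\gamma}$ and $Im{\varphi}_{\gamma}$ passed as plain numbers):

```python
import math

def h_original(x, s, re_phi, im_phi):
    # Re(phi)(cos(sx) - Re(phi)) + Im(phi)(sin(sx) - Im(phi))
    return (re_phi * (math.cos(s * x) - re_phi)
            + im_phi * (math.sin(s * x) - im_phi))

def h_expression2(x, s, re_phi, im_phi):
    # Re(phi)cos(sx) + Im(phi)sin(sx) - |phi|^2, using |phi|^2 = Re^2 + Im^2
    return (re_phi * math.cos(s * x) + im_phi * math.sin(s * x)
            - (re_phi ** 2 + im_phi ** 2))
```

The two evaluations coincide for any arguments, as expansion of the products shows.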

Now we can construct the optimum estimating functions for estimating $\gamma $, or more precisely for the parameters which appear in ${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$, using the results of Godambe and Thompson [15] (p 139) or Morton [16] (p 229). The optimum estimating functions, which are linear combinations of elements of the set $\left\{h\left({x}_{i},{s}_{i};\gamma \right),i=1,\cdots ,n\right\}$, with $\gamma ={\left({\gamma}_{1},\cdots ,{\gamma}_{p}\right)}^{\prime}$, can be expressed as

$\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}h\left({x}_{i},{s}_{i};\gamma \right)\frac{{E}_{\gamma}\left(\frac{\partial h\left({x}_{i},{s}_{i};\gamma \right)}{\partial {\gamma}_{j}}\right)}{{v}_{\gamma}\left(h\left({x}_{i},{s}_{i};\gamma \right)\right)}},j=1,\cdots ,p$ (3)

where ${v}_{\gamma}\left(h\left({x}_{i},{s}_{i};\gamma \right)\right)$ denotes the variance of $h\left({x}_{i},{s}_{i};\gamma \right)$, which can be obtained explicitly; see expression (1).

We would like to make a few remarks here. First note that it is easy to show that

${E}_{\gamma}\left(\frac{\partial h\left({x}_{i},{s}_{i};\gamma \right)}{\partial {\gamma}_{j}}\right)=-\frac{1}{2}\frac{\partial {\left|{\varphi}_{\gamma}\left({s}_{i}\right)\right|}^{2}}{\partial {\gamma}_{j}}$

using ${E}_{\gamma}\left(\mathrm{cos}\left(sX\right)\right)=Re{\varphi}_{\gamma}\left(s\right)$ and ${E}_{\gamma}\left(\mathrm{sin}\left(sX\right)\right)=Im{\varphi}_{\gamma}\left(s\right)$. Clearly, if there is one parameter of the model, say ${\gamma}_{l}$, which does not appear in ${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$, then

${E}_{\gamma}\left(\frac{\partial h\left({x}_{i},{s}_{i};\gamma \right)}{\partial {\gamma}_{l}}\right)=0$, there is no optimum estimating function for this parameter, and if we want to estimate all the parameters, we need an extra estimating function. The vector of optimum estimating functions for the parameters included in ${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$, adopting the convention of discarding those with ${E}_{\gamma}\left(\frac{\partial h\left({x}_{i},{s}_{i};\gamma \right)}{\partial {\gamma}_{j}}\right)=0$, is given by

$\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}h\left({x}_{i},{s}_{i};\gamma \right)\frac{{E}_{\gamma}\left(\frac{\partial h\left({x}_{i},{s}_{i};\gamma \right)}{\partial {\gamma}_{1}}\right)}{{v}_{\gamma}\left(h\left({x}_{i},{s}_{i};\gamma \right)\right)}}$

where we partition the vector $\gamma $ into two components, $\gamma =\left(\begin{array}{c}{\gamma}_{1}\\ {\gamma}_{2}\end{array}\right)$, with all the parameters appearing in ${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$ forming the vector ${\gamma}_{1}$ and all the remaining parameters forming the vector ${\gamma}_{2}$. In general, if $\gamma \ne {\gamma}_{1}$, then ${\gamma}_{2}$ reduces to a scalar. Therefore, the vector of optimum estimating functions in general has either p or $p-1$ elements, and consequently, when these estimating functions are converted to sample moments, we shall have either p or $p-1$ sample moments.

We shall let ${g}_{1}\left(\gamma \right)$ be the vector of sample moments which makes use of the points ${s}_{i}$ through ${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$, defined as

${g}_{1}\left(\gamma \right)=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}h\left({x}_{i},{s}_{i};\gamma \right)\frac{{E}_{\gamma}\left(\frac{\partial h\left({x}_{i},{s}_{i};\gamma \right)}{\partial {\gamma}_{1}}\right)}{{v}_{\gamma}\left(h\left({x}_{i},{s}_{i};\gamma \right)\right)}}$,

where ${v}_{\gamma}\left(h\left({x}_{i},{s}_{i};\gamma \right)\right)$ can be obtained using the real and imaginary parts of the model characteristic function; from the definition of $h\left({x}_{i},{s}_{i};\gamma \right)$, it follows that

$\begin{array}{c}{v}_{\gamma}\left(h\left({x}_{i},{s}_{i};\gamma \right)\right)={\left(Re{\varphi}_{\gamma}\left({s}_{i}\right)\right)}^{2}var\left(\mathrm{cos}\left({s}_{i}{X}_{i}\right)\right)+{\left(Im{\varphi}_{\gamma}\left({s}_{i}\right)\right)}^{2}var\left(\mathrm{sin}\left({s}_{i}{X}_{i}\right)\right)\\ +2\left(Re{\varphi}_{\gamma}\left({s}_{i}\right)\right)\left(Im{\varphi}_{\gamma}\left({s}_{i}\right)\right)cov\left(\mathrm{cos}\left({s}_{i}{X}_{i}\right),\mathrm{sin}\left({s}_{i}{X}_{i}\right)\right)\end{array}$ (4)

Using the identities as given by expression (1), the variance of $\mathrm{cos}\left({s}_{i}{X}_{i}\right)$ is

$var\left(\mathrm{cos}\left({s}_{i}{X}_{i}\right)\right)=\left(Re{\varphi}_{\gamma}\left(2{s}_{i}\right)+1-2{\left(Re{\varphi}_{\gamma}\left({s}_{i}\right)\right)}^{2}\right)/2$ (5)

and the variance of $\mathrm{sin}\left({s}_{i}{X}_{i}\right)$ and the covariance $cov\left(\mathrm{cos}\left({s}_{i}{X}_{i}\right),\mathrm{sin}\left({s}_{i}{X}_{i}\right)\right)$ are given respectively by

$var\left(\mathrm{sin}\left({s}_{i}{X}_{i}\right)\right)=\frac{\left(1-Re{\varphi}_{\gamma}\left(2{s}_{i}\right)-2{\left(Im{\varphi}_{\gamma}\left({s}_{i}\right)\right)}^{2}\right)}{2}$, (6)

$cov\left(\mathrm{cos}\left({s}_{i}{X}_{i}\right),\mathrm{sin}\left({s}_{i}{X}_{i}\right)\right)=\left(Im{\varphi}_{\gamma}\left(2{s}_{i}\right)-2\left(Re{\varphi}_{\gamma}\left({s}_{i}\right)\right)\left(Im{\varphi}_{\gamma}\left({s}_{i}\right)\right)\right)/2$. (7)
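Expressions (5)-(7) follow from the double-angle identities behind expression (1), and they can be confirmed numerically for any model with a known characteristic function. The sketch below uses a normal model and the point $s=0.8$ purely for illustration:

```python
import numpy as np

# Monte Carlo check of (5)-(7): formula values vs. empirical moments.
mu, sigma, s = 0.3, 1.2, 0.8
phi = lambda t: np.exp(1j * mu * t - 0.5 * sigma**2 * t**2)  # normal cf

rng = np.random.default_rng(1)
x = rng.normal(mu, sigma, size=500_000)
c, sn = np.cos(s * x), np.sin(s * x)

var_cos = (phi(2*s).real + 1 - 2 * phi(s).real**2) / 2          # (5)
var_sin = (1 - phi(2*s).real - 2 * phi(s).imag**2) / 2          # (6)
cov_cs  = (phi(2*s).imag - 2 * phi(s).real * phi(s).imag) / 2   # (7)

print(var_cos, c.var())  # the two values should agree up to Monte Carlo error
```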

These variances and covariance terms can also be obtained using results given by Groparu-Cojocaru and Doray [7] (p 1993). Asymptotic properties of estimators obtained by solving estimating equations have been given by Yuan and Jennrich [17] but we emphasize GMM estimation in this paper.

Now define two additional sample moments ${g}_{2}\left(\gamma \right)$ and ${g}_{3}\left(\gamma \right)$:

${g}_{2}\left(\gamma \right)=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}\left({X}_{i}-{E}_{\gamma}\left(X\right)\right)}$,

If the model has a location parameter that is not included in ${\gamma}_{1}$, it will appear in ${E}_{\gamma}\left(X\right)$, the mean of the model distribution, which can be obtained by differentiating the model characteristic function. For GMM procedures we prefer to have the number of sample moments exceed the number of parameters in the model, so we also consider the following sample moment, which makes use of the variance of the model distribution ${V}_{\gamma}\left(X\right)$; it too can be obtained by differentiating the model characteristic function twice, i.e.,

${g}_{3}\left(\gamma \right)=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}\left({\left({X}_{i}-{E}_{\gamma}\left(X\right)\right)}^{2}-{V}_{\gamma}\left(X\right)\right)}$.

The vector sample moments for the developed GMM procedures is given by

$g\left(\gamma \right)=\left(\begin{array}{c}{g}_{1}\left(\gamma \right)\\ {g}_{2}\left(\gamma \right)\\ {g}_{3}\left(\gamma \right)\end{array}\right)$

and notice ${g}_{1}\left(\gamma \right)$ makes use of the transformed observations ${s}_{i}{X}_{i},i=1,\cdots ,n$ but ${g}_{2}\left(\gamma \right)$ and ${g}_{3}\left(\gamma \right)$ make use of the original sample observations ${X}_{i},i=1,\cdots ,n$.

The vector of the proposed GMM estimators $\stackrel{^}{\gamma}$ is obtained by minimizing the criterion function,

$Q\left(\gamma \right)={g}^{\prime}\left(\gamma \right){\stackrel{^}{S}}^{-1}g\left(\gamma \right)$

with ${\stackrel{^}{S}}^{-1}$ a matrix that is positive definite with probability one; it is a consistent estimate of ${S}^{-1}$ and will be defined below, after the definitions of $S$ and its inverse ${S}^{-1}$.

For finding the elements of the matrix $S$, we can first express the components of the vector of sample moments as

$g\left(\gamma \right)=\left(\begin{array}{c}{g}_{1}\left(\gamma \right)\\ {g}_{2}\left(\gamma \right)\\ {g}_{3}\left(\gamma \right)\end{array}\right)$

with

${g}_{1}\left(\gamma \right)=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}{m}_{1}\left({x}_{i};{s}_{i},\gamma \right)}$, ${m}_{1}\left({x}_{i};{s}_{i},\gamma \right)=h\left({x}_{i},{s}_{i};\gamma \right)\frac{{E}_{\gamma}\left(\frac{\partial h\left({x}_{i},{s}_{i};\gamma \right)}{\partial {\gamma}_{1}}\right)}{{v}_{\gamma}\left(h\left({x}_{i},{s}_{i};\gamma \right)\right)}$,

${g}_{2}\left(\gamma \right)=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}{m}_{2}\left({x}_{i};\gamma \right)}$, ${m}_{2}\left({x}_{i};\gamma \right)={x}_{i}-{E}_{\gamma}\left(X\right)$,

${g}_{3}\left(\gamma \right)=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}{m}_{3}\left({x}_{i};\gamma \right)}$, ${m}_{3}\left({x}_{i};\gamma \right)={\left({x}_{i}-{E}_{\gamma}\left(X\right)\right)}^{2}-{V}_{\gamma}\left(X\right)$,

and let

$m\left({x}_{i};{s}_{i},\gamma \right)={\left({m}_{1}^{\prime}\left({x}_{i};{s}_{i},\gamma \right),{m}_{2}\left({x}_{i};\gamma \right),{m}_{3}\left({x}_{i};\gamma \right)\right)}^{\prime}$.

The matrix $S$ is defined as

$S={E}_{\gamma}\left(m\left({x}_{i};{s}_{i},\gamma \right){m}^{\prime}\left({x}_{i};{s}_{i},\gamma \right)\right)$.

Now if we have a preliminary consistent estimate $\stackrel{˜}{\gamma}$ for the vector $\gamma $, then an estimate for $S$, which is $\stackrel{^}{S}$, can be defined with

$\stackrel{^}{S}=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}m\left({x}_{i};{s}_{i},\stackrel{˜}{\gamma}\right){m}^{\prime}\left({x}_{i};{s}_{i},\stackrel{˜}{\gamma}\right)}$

and its inverse, which is ${\stackrel{^}{S}}^{-1}$, is a consistent estimate of ${S}^{-1}$. Often a numerical algorithm is used to minimize $Q\left(\gamma \right)$; the vector of the GMM estimators is obtained after the convergence of a numerical iterative process. At each iteration we might want to readjust $\stackrel{^}{S}$, in a similar way as when performing an iteratively feasible nonlinear weighted least-squares procedure where the weights are re-estimated at each step of the iterations.
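The two-step procedure just described, a preliminary estimate followed by re-weighting with the estimated ${\stackrel{^}{S}}^{-1}$, can be sketched as follows for a toy normal model. The particular moment set, the single point $s=1$ and the optimizer are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np
from scipy.optimize import minimize

# Moments for a normal(mu, sigma2) model: mean, variance, and one cf-based moment.
def moments(x, gamma, s=1.0):
    mu, sig2 = gamma
    phi = np.exp(1j * mu * s - 0.5 * sig2 * s**2)
    h = phi.real * np.cos(s * x) + phi.imag * np.sin(s * x) - abs(phi)**2
    return np.column_stack([x - mu, (x - mu)**2 - sig2, h])

def Q(gamma, x, W):
    g = moments(x, gamma).mean(axis=0)
    return g @ W @ g

rng = np.random.default_rng(2)
x = rng.normal(0.5, 1.5, size=5000)

# Step 1: identity weight matrix gives a preliminary consistent estimate.
g1 = minimize(Q, x0=[0.0, 1.0], args=(x, np.eye(3)), method="Nelder-Mead").x
# Step 2: re-weight with the inverse of S_hat estimated at the step-1 value.
m = moments(x, g1)
W = np.linalg.inv(m.T @ m / len(x))
g2 = minimize(Q, x0=g1, args=(x, W), method="Nelder-Mead").x
print(g2)  # close to the true values (0.5, 2.25)
```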

The vector of the GMM estimators $\stackrel{^}{\gamma}$ is consistent in general; this follows from the general theory of GMM procedures, i.e., we have $\stackrel{^}{\gamma}\stackrel{p}{\to}{\gamma}_{0}$, with $\stackrel{p}{\to}$ denoting convergence in probability and ${\gamma}_{0}$ the vector of true parameters. In addition, under suitable differentiability imposed on the vector $g\left(\gamma \right)$, the vector of GMM estimators has an asymptotic multivariate normal distribution, i.e.,

$\sqrt{n}\left(\stackrel{^}{\gamma}-{\gamma}_{0}\right)\stackrel{L}{\to}N\left(0,{\left({D}^{\prime}{S}^{-1}D\right)}^{-1}\right)$,

where $\stackrel{L}{\to}$ denotes convergence in law with

$D={E}_{\gamma}\left(\frac{\partial g\left(\gamma \right)}{\partial {\gamma}^{\prime}}\right){|}_{\gamma ={\gamma}_{0}}$

which is an r by p matrix, r being the number of sample moments used or equivalently the number of elements of the vector $g\left(\gamma \right)$. $D$ can be estimated by

$\stackrel{^}{D}=\frac{\partial g\left(\gamma \right)}{\partial {\gamma}^{\prime}}{|}_{\gamma =\stackrel{^}{\gamma}}$

and $S$ can be estimated by $\stackrel{^}{S}$, and consequently, the asymptotic variance of $\stackrel{^}{\gamma}$ can be estimated.

2.2. Testing Moment Restrictions

One of the advantages of GMM procedures is that they can lead to a distribution-free chi-square test: the asymptotic null distribution of the statistic, which is based on the same objective function used to obtain the vector of GMM estimators, no longer depends on the parameters. However, when the test for moment restrictions is used as a goodness-of-fit test, it might not be consistent, as it might fail to detect departures from the specified model even as the sample size grows. The inclusion of some more relevant sample moments, keeping the number of sample moments r manageable so as to make the test more powerful without creating too many numerical difficulties in inverting the matrix $\stackrel{^}{S}$, in the same vein as for the chi-square test for count models using the probability generating function as given by Luong [5], is a topic for further studies.

For testing the moment restrictions ${E}_{\gamma}\left(m\left({x}_{i};{s}_{i},\gamma \right)\right)=0$, assume we have already minimized $Q\left(\gamma \right)$ to obtain $\stackrel{^}{\gamma}$. It follows from standard results of GMM theory that the statistic $nQ\left(\stackrel{^}{\gamma}\right)$ can be used and that it has an asymptotic chi-square distribution with $r-p$ degrees of freedom, assuming $r>p$, i.e.,

$nQ\left(\stackrel{^}{\gamma}\right)\stackrel{L}{\to}{\chi}^{2}\left(r-p\right)$. (8)
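The test in (8) is immediate to compute once $Q\left(\stackrel{^}{\gamma}\right)$ is available; a minimal sketch follows, where the values of $n$, $Q\left(\stackrel{^}{\gamma}\right)$, $r$ and $p$ are invented for illustration:

```python
from scipy.stats import chi2

# Moment-restriction (over-identification) test of (8):
# with r sample moments and p parameters, n * Q(gamma_hat) ~ chi-square(r - p).
def j_test(n, Q_hat, r, p):
    stat = n * Q_hat
    return stat, chi2.sf(stat, df=r - p)  # statistic and its p-value

stat, pval = j_test(n=1000, Q_hat=0.0021, r=5, p=4)
print(round(stat, 3), round(pval, 3))
```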

3. Numerical Illustrations and Simulations

For illustration of the newly developed methods, we shall examine the symmetric GNL distribution and compare the efficiencies of the GMM estimators with the efficiencies of the method of moments (MOM) estimators as given by Reed [1] (p. 477). The symmetric GNL distribution only has 4 parameters, as $\alpha =\beta $, and it is easy to see that its characteristic function reduces to

${\varphi}_{\gamma}\left(s\right)={\left(\frac{{\alpha}^{2}\mathrm{exp}\left(\text{i}\mu s-{\sigma}^{2}{s}^{2}/2\right)}{{\alpha}^{2}+{s}^{2}}\right)}^{\rho}$, $\gamma ={\left(\mu ,{\sigma}^{2},\alpha ,\rho \right)}^{\prime}$.

The location parameter, instead of being $\mu $, can be taken to be $\rho \mu $, and Reed’s MOM estimator for it can be obtained independently of the other parameters.

It is not difficult to see that

$Re{\varphi}_{\gamma}\left(s\right)={\left(\frac{{\alpha}^{2}}{{\alpha}^{2}+{s}^{2}}\right)}^{\rho}\mathrm{exp}\left(-\rho {\sigma}^{2}{s}^{2}/2\right)\mathrm{cos}\left(\rho \mu s\right)$ (9)

$Im{\varphi}_{\gamma}\left(s\right)={\left(\frac{{\alpha}^{2}}{{\alpha}^{2}+{s}^{2}}\right)}^{\rho}\mathrm{exp}\left(-\rho {\sigma}^{2}{s}^{2}/2\right)\mathrm{sin}\left(\rho \mu s\right)$ (10)

The vector of parameters of the symmetric GNL distribution is $\gamma ={\left(\mu ,{\sigma}^{2},\alpha ,\rho \right)}^{\prime}$. Since the model has a location parameter, it is expected that the modulus of the characteristic function only has three parameters; it is easy to see that indeed this is the case, with

${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}={\left(\frac{{\alpha}^{2}}{{\alpha}^{2}+{s}^{2}}\right)}^{2\rho}\mathrm{exp}\left(-\rho {\sigma}^{2}{s}^{2}\right)$

which gives the following derivatives

$\frac{\partial {\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}}{\partial {\sigma}^{2}}=-\rho {s}^{2}{\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$, $\frac{\partial {\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}}{\partial \alpha}=\frac{4\rho {s}^{2}}{\alpha \left({\alpha}^{2}+{s}^{2}\right)}{\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$,

$\frac{\partial {\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}}{\partial \rho}=\left(2\mathrm{ln}\left(\frac{{\alpha}^{2}}{{\alpha}^{2}+{s}^{2}}\right)-{\sigma}^{2}{s}^{2}\right){\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}$,

and the variance of the basic estimating function can be obtained using expression (1); the number of sample moments will be $r=5$, i.e. there will be 5 elements for the vector $g\left(\gamma \right)$.

The first two cumulants, which give the mean and the variance of the distribution, are ${\kappa}_{1}=\rho \mu $ and ${\kappa}_{2}=\rho \left({\sigma}^{2}+2/{\alpha}^{2}\right)$, and the method of moments (MOM) estimators can be obtained in closed form. Let ${k}_{i}$ be the i-th sample cumulant; the MOM estimators, which match the sample cumulants with their model counterparts, have been given by Reed [1] (p. 477).
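Sample cumulants of the orders needed here can be computed by plugging sample central moments into the standard moment-to-cumulant formulas. The sketch below is one plug-in version (Reed's exact k-statistics may use bias corrections), checked on a normal sample, for which all cumulants beyond the second vanish:

```python
import numpy as np

def sample_cumulants(x):
    """Cumulants k1..k6 via sample central moments (plug-in, no bias correction)."""
    mu = x.mean()
    m = [((x - mu) ** j).mean() for j in range(7)]  # m[j] = j-th central moment
    k1, k2, k3 = mu, m[2], m[3]
    k4 = m[4] - 3 * m[2] ** 2
    k5 = m[5] - 10 * m[3] * m[2]
    k6 = m[6] - 15 * m[4] * m[2] - 10 * m[3] ** 2 + 30 * m[2] ** 3
    return k1, k2, k3, k4, k5, k6

rng = np.random.default_rng(3)
x = rng.normal(0, 1, size=400_000)
k = sample_cumulants(x)
print([round(v, 2) for v in k])  # roughly (0, 1, 0, 0, 0, 0) for a normal sample
```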

Reed [1] (p. 477) also noted that the MOM estimators often give a negative value for positive parameters such as $\rho $ or ${\sigma}^{2}$. It might be due to the lack of efficiency of the MOM estimators for these parameters, and therefore it is natural to consider alternatives like the GMM estimators. For financial applications, with log returns data recorded as percentages being small in magnitude, the parameters tend to have small values in magnitude, with the exception of one parameter which is around 18. GMM procedures with a number of sample moments larger than 20 might make the inverse of the matrix $\stackrel{^}{S}$ difficult to obtain numerically.

By fixing the location parameter, we consider ranges of values for the other parameters. We simulate samples and compute the MOM estimators and the GMM estimators for each. The GMM estimator for the location parameter is identical to the MOM estimator, and it is only with this parameter that the MOM estimator performs well. For the other parameters, the GMM estimators perform much better than the MOM estimators.

Using the simulated samples, we can estimate the individual mean square error of each estimator for each parameter. The vector of GMM estimators is denoted by $\stackrel{^}{\gamma}={\left(\stackrel{^}{\mu},{\stackrel{^}{\sigma}}^{2},\stackrel{^}{\alpha},\stackrel{^}{\rho}\right)}^{\prime}$. For the GMM estimators, the mean square errors $MSE\left(\stackrel{^}{\mu}\right)$, $MSE\left({\stackrel{^}{\sigma}}^{2}\right)$, $MSE\left(\stackrel{^}{\alpha}\right)$ and $MSE\left(\stackrel{^}{\rho}\right)$ are estimated using the simulated samples; similarly, we also estimate the corresponding mean square errors for the MOM estimators. With these quantities estimated, we can estimate the overall asymptotic relative efficiency of the two methods as

$ARE=\frac{{\sum}_{j=1}^{4}MSE\left({\stackrel{^}{\gamma}}_{j}^{GMM}\right)}{{\sum}_{j=1}^{4}MSE\left({\stackrel{^}{\gamma}}_{j}^{MOM}\right)}$.
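With the per-parameter MSEs estimated, computing an overall ARE amounts to a ratio of summed MSEs (one common convention, assumed here); a minimal sketch with invented numbers, not the paper's simulation output:

```python
# Overall asymptotic relative efficiency from per-parameter estimated MSEs.
def overall_are(mse_a, mse_b):
    """Sum-of-MSE ratio of method A vs method B; < 1 means A is more efficient."""
    return sum(mse_a) / sum(mse_b)

mse_gmm = [0.01, 0.02, 0.03, 0.015]  # illustrative values only
mse_mom = [0.01, 0.20, 0.40, 0.150]
print(round(overall_are(mse_gmm, mse_mom), 3))  # 0.099
```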

GMM procedures perform better than MOM procedures for estimating ${\sigma}^{2}$, $\alpha $ and $\rho $, and consequently, overall, GMM procedures perform much better based on this limited simulation study. More numerical studies should be conducted, but it appears that the GMM procedures have the potential to outperform MOM procedures for many models, especially models with more than 3 parameters, as sample moments of order greater than 3 are often not stable in general. The results are displayed in Table A1 in the Appendix.

In the second limited study, we consider the normal distribution, which is also used for modelling log-returns data. The normal distribution is $N\left(\mu ,{\sigma}^{2}\right)$; the ML estimators for $\mu $ and ${\sigma}^{2}$ are given respectively by the sample mean and the sample variance, $\stackrel{^}{\mu}=\stackrel{¯}{X}$ and ${\stackrel{^}{\sigma}}^{2}=\frac{1}{n}{\sum}_{i=1}^{n}{\left({X}_{i}-\stackrel{¯}{X}\right)}^{2}$. The characteristic function for the normal distribution is

${\varphi}_{\gamma}\left(s\right)=\mathrm{exp}\left(\text{i}\mu s-{\sigma}^{2}{s}^{2}/2\right)$, $\gamma ={\left(\mu ,{\sigma}^{2}\right)}^{\prime}$,

$Re{\varphi}_{\gamma}\left(s\right)=\mathrm{exp}\left(-{\sigma}^{2}{s}^{2}/2\right)\mathrm{cos}\left(\mu s\right)$, $Im{\varphi}_{\gamma}\left(s\right)=\mathrm{exp}\left(-{\sigma}^{2}{s}^{2}/2\right)\mathrm{sin}\left(\mu s\right)$, (11)

and ${\left|{\varphi}_{\gamma}\left(s\right)\right|}^{2}=\mathrm{exp}\left(-{\sigma}^{2}{s}^{2}\right)$, with ${E}_{\gamma}\left(X\right)=\mu $ and ${V}_{\gamma}\left(X\right)={\sigma}^{2}$. (12)

We only have 3 sample moments for GMM estimation for the normal model, and we also use simulated samples to obtain GMM estimators and ML estimators for $\mu $ and ${\sigma}^{2}$ over ranges of the parameters often encountered for financial data, estimating the individual mean square errors of the GMM estimators and the ML estimators. We found that the GMM estimator for ${\sigma}^{2}$ is slightly less efficient than the ML estimator, but overall, GMM estimators are nearly as efficient as ML estimators based on the simulation results obtained, and we estimate the overall relative efficiency

$ARE=\frac{{\sum}_{j=1}^{2}MSE\left({\stackrel{^}{\gamma}}_{j}^{GMM}\right)}{{\sum}_{j=1}^{2}MSE\left({\stackrel{^}{\gamma}}_{j}^{ML}\right)}$,

and the estimated ARE is close to 1 for the parameters being considered; see Table A1 and Table A2.

4. Conclusion

Based on theoretical results and numerical results, it appears that:

1) The new GMM procedures are relatively simple to implement. The number of sample moments can be kept below ten, yet the methods appear to have good efficiencies and offer good alternatives to MOM procedures, which in general are not efficient for models with more than three parameters.

2) The proposed procedures are simpler to implement than GMM procedures based on a continuum of moment conditions and consequently might be of interest to practitioners who want to use these methods to analyze data where the model characteristic function is simple and has a closed form but the density function is complicated; these situations often occur in practice.

3) The methods are less simulation oriented and consequently faster in computing time to implement.

4) The estimators obtained have good efficiencies for the models being considered, but more numerical and simulation work is needed to confirm the efficiencies using different parametric models and larger-scale simulations. In addition, further studies are needed on adding sample moments to make the chi-square goodness-of-fit test consistent without creating extensive numerical difficulties in obtaining the matrix $\stackrel{^}{S}$ used for the quadratic form of the GMM objective function.

Acknowledgements

The helpful and constructive comments of a referee, which led to an improvement of the presentation of the paper, and the support from the editorial staff of Open Journal of Statistics in processing the paper are all gratefully acknowledged.

Appendix

Table A1. Overall estimate asymptotic relative efficiencies of the GMM estimators vs MOM estimators.

Simulation studies for symmetric GNL distributions.

Table A2. Overall estimate asymptotic relative efficiencies of the GMM estimators vs ML estimators.

Simulation studies for normal distributions.

Conflicts of Interest

The authors declare no conflicts of interest.

[1] Reed, W.J. (2007) Brownian-Laplace Motion and Its Use in Financial Modelling. Communications in Statistics, Theory and Methods, 36, 473-484. https://doi.org/10.1080/03610920601001766

[2] Kotz, S., Kozubowski, T.J. and Podgorski, K. (2001) The Laplace Distribution and Generalizations. Birkhauser, Boston. https://doi.org/10.1007/978-1-4612-0173-1

[3] Luong, A. and Bilodeau, C. (2017) Simulated Minimum Hellinger Distance Estimation for Some Financial and Actuarial Models. Open Journal of Statistics, 7, 743-759. https://doi.org/10.4236/ojs.2017.74052

[4] Luong, A. (2017) Likelihood and Quadratic Distance Methods for the Generalized Asymmetric Laplace Distribution for Financial Data. Open Journal of Statistics, 7, 347-368. https://doi.org/10.4236/ojs.2017.72025

[5] Luong, A. (2020) Generalized Method of Moments and Generalized Estimating Functions Based on Generating Function for Count Models. Open Journal of Statistics, 10, 516-539. https://doi.org/10.4236/ojs.2020.103031

[6] Feuerverger, A. and McDunnough, P. (1981) On the Efficiency of Empirical Characteristic Function. Journal of the Royal Statistical Society, Series B, 43, 20-27. https://doi.org/10.1111/j.2517-6161.1981.tb01143.x

[7] Groparu-Cojocaru, I. and Doray, L.G. (2013) Inference for the Generalized Normal Laplace Distribution. Communications in Statistics—Simulation and Computation, 42, 1989-1997. https://doi.org/10.1080/03610918.2012.687064

[8] Koutrouvelis, I.A. (1980) Regression Type Estimation of the Parameters of Stable Laws. Journal of the American Statistical Association, 75, 918-928. https://doi.org/10.1080/01621459.1980.10477573

[9] Tran, K.C. (1998) Estimating Mixtures of Normal Distributions via Empirical Characteristic Function. Econometric Reviews, 17, 167-183. https://doi.org/10.1080/07474939808800410

[10] Carrasco, M. and Florens, J.P. (2000) Generalization of GMM to a Continuum of Moment Conditions. Econometric Theory, 16, 797-834. https://doi.org/10.1017/S0266466600166010

[11] Carrasco, M. and Kotchoni, R. (2017) Efficient Estimation Using the Characteristic Function. Econometric Theory, 33, 479-526. https://doi.org/10.1017/S0266466616000025

[12] Martin, V., Hurn, S. and Harris, D. (2013) Econometric Modelling with Time Series: Specification, Estimation and Testing. Cambridge University Press, Cambridge.

[13] Hayashi, F. (2000) Econometrics. Princeton University Press, Princeton.

[14] Hamilton, J.D. (1994) Time Series Analysis. Princeton University Press, Princeton.

[15] Godambe, V.P. and Thompson, M.E. (1989) An Extension of Quasi-Likelihood Estimation. Journal of Statistical Planning and Inference, 22, 137-152. https://doi.org/10.1016/0378-3758(89)90106-7

[16] Morton, R. (1981) Efficiency of Estimating Equations and the Use of Pivots. Biometrika, 68, 227-233. https://doi.org/10.1093/biomet/68.1.227

[17] Yuan, K.-H. and Jennrich, R.I. (1998) Asymptotics of Estimating Equations under Natural Conditions. Journal of Multivariate Analysis, 65, 245-260. https://doi.org/10.1006/jmva.1997.1731

Copyright © 2020 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.