A First Order Stationary Branching Negative Binomial Autoregressive Model with Application

Bakary Traore^{1}, Bonface Miya Malenje^{2}, Herbert Imboga^{2}

^{1}Department of Mathematics, Pan African University, Institute of Basic Science, Technology and Innovation (PAUSTI), Nairobi, Kenya.

^{2}Department of Statistics and Actuarial Sciences, Jomo Kenyatta University of Agriculture and Technology (JKUAT), Nairobi, Kenya.

**DOI:** 10.4236/ojs.2022.126046

In the area of time series modelling, many real-life applications involve the analysis of count time series data. The distributional characteristics and the dependence structure are the major issues that arise when specifying a modelling strategy for such data. Owing to the numerous applications, there is a need for models that can capture these features; however, accounting for both aspects simultaneously complicates model specification. In this paper, an alternative statistical model is proposed that deals with discreteness, overdispersion and serial correlation over time. In particular, we adopt a branching mechanism to develop a first-order stationary negative binomial autoregressive model. Inference is based on maximum likelihood estimation, and a simulation study is conducted to evaluate the performance of the proposed approach. As an illustration, the model is applied to a real-life dataset in crime analysis.

Keywords

Branching Process, Negative Binomial, Time Series of Count Data, Serial Dependence, Overdispersion

Share and Cite:

Traore, B., Malenje, B. and Imboga, H. (2022) A First Order Stationary Branching Negative Binomial Autoregressive Model with Application. *Open Journal of Statistics*, **12**, 810-826. doi: 10.4236/ojs.2022.126046.

1. Introduction

In the area of time series modelling, many real-life applications involve the analysis of count time series data, which has motivated researchers to develop models for counts observed over time. Such applications include monthly polio counts (see e.g. [1] [2]), traffic accident counts (see e.g. [3] [4]), daily asthma presentations at a hospital (see e.g. [2]), crime count analysis (see e.g. [5] [6]), stock market applications (see e.g. [7] [8]) and sandstorm counts (see e.g. [9]), among others in fields such as demography, economics, meteorology, sociology, epidemiology, finance and education. For such data, the Poisson and binomial marginal distributions are among the most widely used for developing serially dependent models. A large number of count time series are characterized by overdispersion, underdispersion, equidispersion or an excess of zeros. To handle these issues, [10] discussed modelling time series of counts with overdispersion, and [11] proposed an INAR(1) process for count time series exhibiting equidispersion, underdispersion or overdispersion. Beyond that, several modelling strategies for serial dependence have been proposed, including autoregressive AR(1) models (see e.g. [12]), integer-valued autoregressive (INAR(1)) models (see e.g. [13]), hidden Markov models (HMM) (see e.g. [14] [15]), generalized linear models (GLM) (see e.g. [16]) and thinning-based models (see e.g. [17]). Many autoregressive models have been proposed; for example, [18] discussed the autoregressive conditional Poisson (ACP) models, which can handle overdispersion, while the integer autoregressive framework remains the popular approach for preserving the discrete nature of the data and has been exploited by various authors in both univariate and multivariate settings.
Among them are Bayesian INAR models, new geometric INAR models, zero-truncated Poisson INAR models, zero-modified geometric INAR models, etc. [19] introduced a new stationary first-order integer-valued autoregressive process with geometric marginal distributions based on negative binomial thinning; conditional least squares, Yule-Walker and maximum likelihood methods are used to estimate the parameters. [20] investigated negative binomial time series models based on binomial thinning and two other expectation thinning operations, and showed how they differ in conditional variance or heteroscedasticity. [16] discussed the approach based on generalized linear models and the class of integer autoregressive processes, which provides a natural extension of the traditional ARMA methodology. [21] proposed a new autoregressive time series model for counts with the Poisson-negative binomial (PNB) distribution, which is the convolution of Poisson and negative binomial random variables; they also introduced the geometric PNB and geometric semi-PNB distributions. [11] presented a first-order non-negative integer-valued autoregressive model for stationary count data with Bernoulli-geometric marginals, based on a new type of generalized thinning operator; the maximum likelihood method is used to estimate the model parameters. More recently, [22] studied the threshold negative binomial autoregressive model, an observation-driven model for time series of counts which allows for overdispersion and negative serial dependence in the observations. In this paper, however, we develop a new approach motivated by an offspring generating process, which leads to a flexible negative binomial modelling approach based on a branching process: a recursive dependence mechanism used in place of a thinning process, thereby avoiding scalar multiplication in the autoregressive model.
The resulting model can handle overdispersed count time series data and their serial dependence, and it is applied to a real-life dataset.

The rest of the paper is organized as follows: Section 2 outlines the approach adopted in developing the stationary first-order branching negative binomial (bNB) autoregressive model as well as the procedure for inference; Section 3 provides the results of the simulation study for the stationary bNB autoregressive model; Section 4 presents the results from the application of the model to real-life data; finally, Section 5 concludes the work, followed by the acknowledgements.

2. Methods and Materials

In this section, the modelling approach adopted for this study is discussed. Specific attention is given to the negative binomial distribution, which is able to capture overdispersion, a common feature of count observations. Further, the specific autoregressive mechanism for the serial dependence is also presented. In particular, we consider a branching mechanism, a concept derived from the fields of demography and epidemiology, whereby a finite number of elements is considered as the starting stage of the process. With the passage of time, each element can either vanish with probability
${p}_{0}$ or produce *k* new elements with probability
${p}_{k}$, where
$k=1,2,\cdots $. Each of these *k* elements behaves in the same way as their parents do. Let
${X}_{n}$ represent the population size following *n* such incidents. The process
$\left\{{X}_{n}\mathrm{,}n\ge 0\right\}$ is a branching process, which is a Markov chain. Branching processes are used to model many problems in science and engineering, including population growth, epidemic spread and nuclear fission.
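To make the mechanism concrete, the offspring dynamics described above can be sketched as a simple Galton-Watson simulation. This is an illustrative Python sketch (the paper's own implementation is in R); the offspring distribution $p_0=0.3$, $p_1=0.4$, $p_2=0.3$ and all names are hypothetical choices, not taken from the paper:

```python
import numpy as np

def galton_watson(n_steps, offspring_probs, x0, rng):
    """Galton-Watson branching process: each of the X_n current elements
    independently produces k offspring with probability p_k; X_{n+1} is the
    total number of offspring. Extinction (X_n = 0) is absorbing."""
    ks = np.arange(len(offspring_probs))
    path = [x0]
    for _ in range(n_steps):
        if path[-1] == 0:
            path.append(0)
            continue
        offspring = rng.choice(ks, size=path[-1], p=offspring_probs)
        path.append(int(offspring.sum()))
    return path

rng = np.random.default_rng(7)
# p_0 = 0.3 (vanish), p_1 = 0.4, p_2 = 0.3: mean offspring number 1.0 (critical case)
path = galton_watson(50, [0.3, 0.4, 0.3], x0=10, rng=rng)
```

Each element behaves independently of its siblings, which is exactly the Markov-chain property of $\{X_n, n \ge 0\}$ noted above.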

Overall, this idea is adopted to propose a model for stationary time series of count data, and real data examples are considered. The software package R is used in this work for the simulation study and the application.

2.1. Model Specification

The classical equation for an *AR*(1) process
$\left\{{X}_{n}\right\}$ is

${X}_{n}=\alpha {X}_{n-1}+{E}_{n}$ (1)

where $\left\{{E}_{n}\right\}$ is a sequence of independent and identically distributed random variables, and $\alpha \in \left(\mathrm{0,1}\right)$ if $\left\{{X}_{n}\right\}$ is positive. However, Equation (1) does not guarantee non-negative integer values. To overcome this issue, some of the researchers mentioned in Section 1 replaced the scalar multiplication in Equation (1) with a thinning operation, so that Equation (1) becomes

${X}_{n}=\alpha \ast {X}_{n-1}+{E}_{n}$ (2)

The operator
$\ast $ is defined as
$\alpha \ast X={\displaystyle {\sum}_{i=0}^{N\left(X\right)}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{W}_{i}$ where
${W}_{i}$ are i.i.d. random variables following a discrete law and, for each fixed non-negative integer value *x* of *X*,
$N\left(x\right)$ is a binomial random variable with parameters *x* and
$\lambda $. The
${E}_{n}$ are i.i.d. negative binomial random variables, see [12].
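The thinned recursion of Equation (2) is easy to simulate. The following Python sketch (illustrative only; the paper works in R, and the Poisson innovations and function names are assumptions) uses the classical binomial thinning operator, the special case in which the $W_i$ are Bernoulli:

```python
import numpy as np

def binomial_thinning(alpha, x, rng):
    """Classical binomial thinning: alpha * x = sum of x i.i.d. Bernoulli(alpha) draws."""
    return rng.binomial(x, alpha)

def simulate_inar1(n, alpha, innovations, rng):
    """INAR(1) recursion X_t = alpha * X_{t-1} + E_t with integer-valued output."""
    x = np.empty(n, dtype=int)
    x[0] = innovations[0]
    for t in range(1, n):
        x[t] = binomial_thinning(alpha, x[t - 1], rng) + innovations[t]
    return x

rng = np.random.default_rng(42)
e = rng.poisson(2.0, size=500)       # illustrative Poisson innovations
x = simulate_inar1(500, 0.5, e, rng)
```

Because thinning replaces the scalar product $\alpha X_{n-1}$ with a random sum of indicators, every value of the process is a non-negative integer by construction.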

The negative binomial distribution $NB\left(m\mathrm{,}\pi \right)$ is usually parametrized with pmf

$p\left(x\right)=\frac{\Gamma \left(m+x\right)}{\Gamma \left(m\right)x!}{\pi}^{m}{\left(1-\pi \right)}^{x},x\in {\mathbb{N}}_{0}=\left\{0,1,2,\cdots \right\}$ (3)
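The parametrization in Equation (3) coincides with that of `scipy.stats.nbinom` with `n = m` and `p = π`, which gives a quick numerical check of the pmf and of the overdispersion property (a sketch with illustrative parameter values, not values from the paper):

```python
import numpy as np
from scipy.stats import nbinom

m, pi = 3.0, 0.4                 # illustrative NB(m, pi) parameters
x = np.arange(0, 500)
pmf = nbinom.pmf(x, m, pi)       # Gamma(m+x)/(Gamma(m) x!) * pi^m * (1-pi)^x

mean = nbinom.mean(m, pi)        # m (1 - pi) / pi
var = nbinom.var(m, pi)          # m (1 - pi) / pi^2
# since pi < 1, the variance exceeds the mean: the NB law is overdispersed
```

This overdispersion is precisely the feature of count data the model is designed to capture.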

For any $\lambda >0$ there is exactly one process ${Y}_{t}$ with $NB\left(m\mathrm{,}\pi \right)$ marginal distributions that is stationary, Markov and time-reversible, has an infinitely divisible marginal distribution, and has correlation $\text{Corr}\left({Y}_{t-1}\mathrm{,}{Y}_{t}\right)=\mathrm{exp}\left(-\lambda \right)$ (see the theorem in [23]).

As an alternative to the thinning process (Equation (2)), we define an observation-driven model ${Y}_{t}$ with a negative binomial marginal distribution, built as a recursive scheme from ${Y}_{t-1}$ to ${Y}_{t}$ using the branching mechanism, where the binomial law is involved; the model is defined as follows:

${Y}_{t}={N}_{t-1}+{E}_{t}$ (4)

where

${N}_{t-1}~Bin\left({Y}_{t-1}\mathrm{,}\frac{\rho \pi}{1-\rho +\rho \pi}\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{E}_{t}~NB\left(m+{N}_{t-1}\mathrm{,}\frac{\pi}{1-\rho +\rho \pi}\right)$

We denote the obtained model as $bNB\left(m\mathrm{,}\pi \mathrm{,}\rho \right)$.

In terms of the branching process, ${N}_{t-1}$ arises from ${Y}_{t-1}$ according to the binomial law (Bin) with probability $\frac{\rho \pi}{1-\rho +\rho \pi}$. The dependence parameter $\rho $ determines how many offspring are generated from the parent ${Y}_{t-1}$, which in turn specifies the dependence between ${Y}_{t}$ and ${Y}_{t-1}$ in the autoregressive process.

The error component ${E}_{t}$ has a negative binomial distribution, which ensures that the marginal distribution of the resulting process is negative binomial.

The distributions of the two components are given below:

$P\left({N}_{t-1}=k\right)=\left(\begin{array}{c}{Y}_{t-1}\\ k\end{array}\right){\left(\frac{\rho \pi}{1-\rho +\rho \pi}\right)}^{k}{\left(1-\frac{\rho \pi}{1-\rho +\rho \pi}\right)}^{{Y}_{t-1}-k}$ (5)

$\begin{array}{c}P\left({E}_{t}=r\right)=\left(\begin{array}{c}\left(m+{N}_{t-1}\right)+r-1\\ r\end{array}\right){\left(\frac{\pi}{1-\rho +\rho \pi}\right)}^{\left(m+{N}_{t-1}\right)}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\times {\left(1-\frac{\pi}{1-\rho +\rho \pi}\right)}^{r}\end{array}$ (6)
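The recursion in Equations (4)-(6) can be simulated directly by drawing $N_{t-1}$ from the binomial law of Equation (5) and $E_t$ from the negative binomial law of Equation (6). The sketch below is an illustrative Python version (function name, seed and parameter values are assumptions; the paper's code is in R); the sample mean should stay close to the $NB(m,\pi)$ marginal mean $m(1-\pi)/\pi$:

```python
import numpy as np

def simulate_bnb(n, m, pi, rho, rng):
    """Simulate the bNB(m, pi, rho) recursion Y_t = N_{t-1} + E_t of Equation (4)."""
    d = 1 - rho + rho * pi
    p_branch = rho * pi / d          # binomial branching probability, Equation (5)
    p_err = pi / d                   # NB success probability of E_t, Equation (6)
    y = np.empty(n, dtype=int)
    y[0] = rng.negative_binomial(m, pi)          # start from the NB(m, pi) marginal
    for t in range(1, n):
        n_branch = rng.binomial(y[t - 1], p_branch)
        y[t] = n_branch + rng.negative_binomial(m + n_branch, p_err)
    return y

rng = np.random.default_rng(0)
y = simulate_bnb(5000, m=3.0, pi=0.4, rho=0.5, rng=rng)
# marginal NB(3, 0.4): mean m(1-pi)/pi = 4.5, variance m(1-pi)/pi^2 = 11.25
```

A quick check of the mean confirms the stationarity claim: $E[Y_t] = \rho\mu + m(1-\rho)(1-\pi)/\pi = \mu$, so the branching step leaves the $NB(m,\pi)$ marginal intact.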

2.2. Model Estimation

The process ${Y}_{t}$ has marginal parameters $m\mathrm{,}\pi $ and dependence parameter $\rho $. These parameters are estimated via maximum likelihood estimation.

2.2.1. Likelihood Function

Let $T=\left\{{t}_{0}<{t}_{1}<\cdots <{t}_{n}\right\}$ be an increasing set of times and $y=\left\{{y}_{0},{y}_{1},\cdots ,{y}_{n}\right\}\subset \mathbb{N}$ an arbitrary set of non-negative integers. Then the joint pmf for $Y~bNB\left(m\mathrm{,}\pi \mathrm{,}\rho \right)$ is the product of the marginal and the conditionals.

$P\left[Y=y|m,\pi ,\rho \right]=p\left({y}_{o}\right){\displaystyle \underset{t=1}{\overset{n}{\prod}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}P\left({y}_{t}|{y}_{t-1}\right)$ (7)

where $m>0$, $\pi \in \left[\mathrm{0,1}\right]$ and $\rho \in \left[\mathrm{0,1}\right]$. Using the marginal and conditional probabilities, we can write Equation (7) explicitly.

Conditional or transition probability mass function (pmf)

$\begin{array}{l}p\left({y}_{t}\mathrm{|}{y}_{t-1}\right)=P\left[{Y}_{t}={y}_{t}\mathrm{|}{Y}_{t-1}={y}_{t-1}\right]\\ ={\displaystyle \underset{{N}_{t-1}=0}{\overset{\infty}{\sum}}}\left(\begin{array}{c}{y}_{t-1}\\ {N}_{t-1}\end{array}\right){\left(\frac{\rho \pi}{1-\rho +\rho \pi}\right)}^{{N}_{t-1}}{\left(\frac{1-\rho}{1-\rho +\rho \pi}\right)}^{{y}_{t-1}-{N}_{t-1}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\times \left(\begin{array}{c}m+{N}_{t-1}+{E}_{t}-1\\ {E}_{t}\end{array}\right){\left(\frac{\pi}{1-\rho +\rho \pi}\right)}^{m+{N}_{t-1}}{\left(\frac{\left(1-\rho \right)\left(1-\pi \right)}{1-\rho +\rho \pi}\right)}^{{E}_{t}}\\ =\frac{{y}_{t-1}\mathrm{!}\Gamma \left(m+{y}_{t}\right){\pi}^{m}{\left(1-\pi \right)}^{{y}_{t}}{\left(1-\rho \right)}^{{y}_{t}+{y}_{t-1}}}{{\left(1-\rho +\rho \pi \right)}^{m+{y}_{t-1}+{y}_{t}}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\times {\displaystyle \underset{{N}_{t-1}=0}{\overset{\infty}{\sum}}}\frac{1}{{N}_{t-1}\mathrm{!}\left({y}_{t}-{N}_{t-1}\right)\mathrm{!}\left({y}_{t-1}-{N}_{t-1}\right)\mathrm{!}\Gamma \left(m+{N}_{t-1}\right)}{\left[\frac{\rho {\pi}^{2}}{{\left(1-\rho \right)}^{2}\left(1-\pi \right)}\right]}^{{N}_{t-1}}\\ =\frac{\Gamma \left(m+{y}_{t}\right)}{\Gamma \left(m\right){y}_{t}!}\frac{{\pi}^{m}{\left(1-\rho \right)}^{{y}_{t}+{y}_{t-1}}{\left(1-\pi \right)}^{{y}_{t}}}{{\left(1-\rho +\rho \pi \right)}^{m+{y}_{t}+{y}_{t-1}}}{}_{2}F{}_{1}\left(-{y}_{t-1},-{y}_{t};m;z\right)\end{array}$ (8)

where
$z=\frac{\rho {\pi}^{2}}{{\left(1-\rho \right)}^{2}\left(1-\pi \right)}$ and
${}_{2}F{}_{1}\left(a\mathrm{,}b\mathrm{;}c\mathrm{;}k\right)$ is Gauss’ hypergeometric function with parameters
$a\mathrm{,}b\mathrm{,}c$ and *k.*
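Because the first two arguments of the hypergeometric function are negative integers, the series terminates and ${}_2F_1$ is a polynomial, so Equation (8) is cheap to evaluate with `scipy.special.hyp2f1`. The sketch below (illustrative Python, with assumed parameter values) computes the transition pmf on the log scale and verifies that it sums to one over $y_t$:

```python
import numpy as np
from scipy.special import gammaln, hyp2f1

def bnb_transition_pmf(yt, ytm1, m, pi, rho):
    """p(y_t | y_{t-1}) of Equation (8), with the prefactor computed on the log scale."""
    d = 1 - rho + rho * pi
    z = rho * pi**2 / ((1 - rho) ** 2 * (1 - pi))
    log_coef = (gammaln(m + yt) - gammaln(m) - gammaln(yt + 1)
                + m * np.log(pi) + (yt + ytm1) * np.log(1 - rho)
                + yt * np.log(1 - pi) - (m + yt + ytm1) * np.log(d))
    return np.exp(log_coef) * hyp2f1(-ytm1, -yt, m, z)

# the conditional pmf over y_t should sum to one for any fixed y_{t-1}
probs = [bnb_transition_pmf(yt, 4, m=3.0, pi=0.4, rho=0.5) for yt in range(300)]
```

Truncating the sum at 300 is safe here because the conditional pmf decays geometrically in $y_t$.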

Marginal distribution

The marginal distribution for ${y}_{o}$ is:

$p\left({y}_{o}\right)=\frac{\Gamma \left(m+{y}_{o}\right)}{\Gamma \left(m\right){y}_{o}\mathrm{!}}{\pi}^{m}{\left(1-\pi \right)}^{{y}_{o}}$ (9)

Hence, by substituting the conditional probabilities (Equation (8)) and the marginal distribution (Equation (9)) into Equation (7), the likelihood is as follows:

$\begin{array}{l}P\left[Y=y|m,\pi ,\rho \right]\\ =\frac{\Gamma \left(m+{y}_{o}\right)}{\Gamma \left(m\right){y}_{o}!}{\pi}^{m}{\left(1-\pi \right)}^{{y}_{o}}{\displaystyle \underset{t=1}{\overset{n}{\prod}}}\frac{\Gamma \left(m+{y}_{t}\right)}{\Gamma \left(m\right){y}_{t}!}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\times \frac{{\pi}^{m}{\left(1-\rho \right)}^{{y}_{t-1}+{y}_{t}}{\left(1-\pi \right)}^{{y}_{t}}}{{\left(1-\rho +\rho \pi \right)}^{m+{y}_{t-1}+{y}_{t}}}{}_{2}F{}_{1}\left(-{y}_{t-1},-{y}_{t};m;z\right)\\ ={\displaystyle \underset{t=0}{\overset{n}{\prod}}}\left\{\frac{\Gamma \left(m+{y}_{t}\right){\pi}^{m}}{\Gamma \left(m\right){y}_{t}!}\right\}\frac{{\left(1-\pi \right)}^{{\displaystyle {\sum}_{t=0}^{n}{y}_{t}}}{\left(1-\rho \right)}^{{\displaystyle {\sum}_{t=1}^{n}\left({y}_{t-1}+{y}_{t}\right)}}}{{\left(1-\rho +\rho \pi \right)}^{nm+{\displaystyle {\sum}_{t=1}^{n}\left({y}_{t-1}+{y}_{t}\right)}}}{\displaystyle \underset{t=1}{\overset{n}{\prod}}}{}_{2}F{}_{1}\left(-{y}_{t-1},-{y}_{t};m;z\right)\\ ={\displaystyle \underset{t=0}{\overset{n}{\prod}}}\left\{\frac{\Gamma \left(m+{y}_{t}\right){\pi}^{m}}{\Gamma \left(m\right){y}_{t}!}\right\}{\displaystyle \underset{t=1}{\overset{n}{\prod}}}{}_{2}F{}_{1}\left(-{y}_{t-1},-{y}_{t};m;z\right)\frac{{\left(1-\pi \right)}^{{y}_{+}}{\left(1-\rho \right)}^{2{y}_{+}-{y}_{o}-{y}_{n}}}{{\left(1-\rho +\rho \pi \right)}^{nm+2{y}_{+}-{y}_{o}-{y}_{n}}}\end{array}$ (10)

where ${y}_{+}={\displaystyle \underset{t=0}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{y}_{t}$ and ${y}_{o}$ stands for the initial value of the branching process.

2.2.2. Log-Likelihood Function

We derive Equation (11) by taking the natural logarithm of Equation (10):

$\begin{array}{l}\mathrm{log}L\left(m,\pi ,\rho |Y=y\right)\\ ={\displaystyle \underset{t=0}{\overset{n}{\sum}}}\mathrm{log}\Gamma \left(m+{y}_{t}\right)-{\displaystyle \underset{t=0}{\overset{n}{\sum}}}\mathrm{log}{y}_{t}!+\left(n+1\right)m\mathrm{log}\pi +{\displaystyle \underset{t=1}{\overset{n}{\sum}}}\mathrm{log}{}_{2}{F}_{1}\left(-{y}_{t-1},-{y}_{t};m;z\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}-\left(n+1\right)\mathrm{log}\Gamma \left(m\right)+{y}_{+}\mathrm{log}\left(1-\pi \right)+\left(2{y}_{+}-{y}_{o}-{y}_{n}\right)\mathrm{log}\left(1-\rho \right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}-\left(nm+2{y}_{+}-{y}_{o}-{y}_{n}\right)\mathrm{log}\left(1-\rho +\rho \pi \right)\end{array}$ (11)
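For illustration, the log-likelihood above can be implemented and maximized numerically. The following Python sketch is not the paper's R code: the simulated series, true values $(m,\pi,\rho)=(3,0.4,0.5)$, starting values, bounds and optimizer settings are all assumptions made for this example.

```python
import numpy as np
from scipy.special import gammaln, hyp2f1
from scipy.optimize import minimize

def neg_loglik(params, y):
    """Negative of the bNB log-likelihood (including the constant -sum log y_t!)."""
    m, pi, rho = params
    n = len(y) - 1
    yplus = y.sum()
    d = 1 - rho + rho * pi
    z = rho * pi**2 / ((1 - rho) ** 2 * (1 - pi))
    ll = (gammaln(m + y).sum() - gammaln(y + 1).sum()
          + (n + 1) * m * np.log(pi) - (n + 1) * gammaln(m)
          + np.log(hyp2f1(-y[:-1], -y[1:], m, z)).sum()
          + yplus * np.log(1 - pi)
          + (2 * yplus - y[0] - y[-1]) * np.log(1 - rho)
          - (n * m + 2 * yplus - y[0] - y[-1]) * np.log(d))
    return -ll if np.isfinite(ll) else np.inf  # penalize numerically invalid proposals

# simulate a bNB(3, 0.4, 0.5) path via Equations (4)-(6), then fit by MLE
rng = np.random.default_rng(1)
m0, pi0, rho0 = 3.0, 0.4, 0.5
d0 = 1 - rho0 + rho0 * pi0
y = [rng.negative_binomial(m0, pi0)]
for _ in range(1000):
    nb = rng.binomial(y[-1], rho0 * pi0 / d0)
    y.append(nb + rng.negative_binomial(m0 + nb, pi0 / d0))
y = np.array(y)

res = minimize(neg_loglik, x0=np.array([2.0, 0.5, 0.3]), args=(y,),
               method="L-BFGS-B",
               bounds=[(0.1, 50.0), (0.05, 0.95), (0.01, 0.9)])
m_hat, pi_hat, rho_hat = res.x
```

With a series of length 1000 the three estimates should land close to the true values, in line with the consistency reported in the simulation study.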

3. Simulation Study

Monte Carlo experiments are conducted to evaluate the performance of the estimators of the branching negative binomial parameters. Each experiment entails generating *n* time series of size *N* from the model and then estimating the parameter vector
$\theta =\left(m\mathrm{,}\pi \mathrm{,}\rho \right)$. For each combination (*θ*, *N*), we compute the mean, bias and mean-squared error (MSE) of the estimates, given by

$\stackrel{^}{\theta}=\frac{1}{n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{^}{\theta}}_{i},Bias\left(\stackrel{^}{\theta}\right)=\frac{1}{n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\left({\stackrel{^}{\theta}}_{i}-\theta \right)\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}MSE\left(\stackrel{^}{\theta}\right)=\frac{1}{n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}{\left({\stackrel{^}{\theta}}_{i}-\theta \right)}^{2}$

where
${\stackrel{^}{\theta}}_{i}$ is the estimated parameter vector values from the *i*^{th} simulated series.
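The three summary statistics above are straightforward to compute from a stack of replicate estimates. The sketch below is illustrative Python (the paper uses R), and the synthetic normal "estimates" are a placeholder, not the paper's simulation output:

```python
import numpy as np

def summarize(estimates, theta_true):
    """Mean, bias and MSE (per parameter) over n simulated estimates."""
    est = np.asarray(estimates)
    mean = est.mean(axis=0)
    bias = (est - theta_true).mean(axis=0)
    mse = ((est - theta_true) ** 2).mean(axis=0)
    return mean, bias, mse

rng = np.random.default_rng(0)
theta = np.array([3.0, 0.4, 0.5])                      # true (m, pi, rho)
fake_estimates = theta + rng.normal(0.0, 0.05, size=(1000, 3))  # placeholder draws
mean, bias, mse = summarize(fake_estimates, theta)
# identities: mean = theta + bias, and MSE = variance + bias^2 >= bias^2
```

The decomposition MSE = variance + bias² is why a shrinking MSE with growing *N* indicates consistent estimation.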

Using the software package R, we generate n = 1000 Monte Carlo replicates of length *N* = 250, 500 and 1000 from the bNB model, then calculate the mean, bias and mean-squared error of the estimators. Table 1 shows the simulation results for the model. The estimation of the parameter *m* is least precise for a sample size of *N* = 250, and the performance improves with increasing sample size. The bias and MSE decrease as the sample size increases, which confirms that the parameter vector
$\theta $ can be consistently estimated by the likelihood method.

3.1. Simulated Data

Figures 1-4 display the time series plots and corresponding autocorrelation functions of data simulated by varying each of the parameters in turn. When parameter *m* increases, the range of the observations increases (Figure 2), while it decreases when parameter
$\pi $ increases (Figure 3). When parameter
$\rho $ increases, the dependence in the data becomes stronger, as shown in Figure 4. Across Figures 1-3, the main difference is in the time series plots, while the ACF and PACF remain almost the same, which is consistent with an autoregressive model of order one.

3.2. Simulation Results

Figures 5-8 provide a graphical inspection of the results from the simulation experiment. They depict the effect of different levels of temporal dependence and sample size on the performance of the developed model. The results are consistent with those reported in Table 1. Overall, the performance of the model is better when the dependence is increased, and increasing the sample size improves the performance (median estimates are closer to the true values).

4. Application to Real-Life Data

In this section, the branching negative binomial model is applied to a New South Wales (NSW) police reports dataset provided by the NSW Bureau of Crime Statistics and Research, see http://www.bocsar.nsw.gov.au/. The entire dataset is organized by offence type, month and Local Government Area (LGA). In particular, we study the monthly number of *Domestic violence related assault* reports in the city of Kempsey, Australia, over the period January 1995 to June 2022. Table 2 shows the summary statistics of the dataset; the sample mean and variance are 17.78 and 39.93837 respectively, which indicates that the data are overdispersed. Figure 9 depicts the time series plot and the corresponding autocorrelation functions (ACF and PACF).

Figure 9 displays the real-life data considered for the application of the branching negative binomial autoregressive model. The time series plot

Table 1. Summary statistics for the estimator for different parameter values
$\theta =\left(m,\pi ,\rho \right)$ and different sample sizes *N*. These statistics are obtained from 1000 Monte Carlo replications of the developed model.

Figure 1. Time series and autocorrelation functions plots by varying sample sizes respectively to 250, 500 and 1000.

Figure 2. Time series and autocorrelation functions plots by varying parameter *m* respectively to 5, 15 and 25.

Figure 3. Time series and autocorrelation functions plots by varying probability $\pi $ respectively to 0.3, 0.5 and 0.9.

Figure 4. Time series and autocorrelation functions plots by varying dependence parameter $\rho $ respectively to 0.3, 0.5 and 0.9.

Figure 5. Boxplots with results from the simulation experiment for *T *= 250. The red line corresponds to the true parameter values, while the black line is the median.

Figure 6. Boxplots with results from the simulation experiment for *T *= 1000.

of the crime count data shows that the data are stationary over the time range considered. The ACF shows some serial dependence between consecutive observations, and from the partial ACF we can see that the order of the autoregressive model is one. Therefore, the selected data match this first-order branching negative binomial autoregressive model well.

By fitting the developed model to the crime count data, we obtain the parameter estimates and standard errors displayed in Table 3.

Figure 7. Boxplots with results from the simulation experiment for *T *= 250.

Figure 8. Boxplots with results from the simulation experiment for *T *= 1000.

Table 2. Summary statistics for the monthly number of domestic violence related assaults reported in Kempsey from January 1995 to June 2022.

These estimated parameters have been used to fit the branching negative binomial model to the crime data; see Figure 10, which shows that the model fits these crime data well.

Table 4, displayed below, summarizes the data generated by the model. We

Table 3. Parameter estimates obtained by fitting the model to the crime data.

Figure 9. Time series of the monthly number of domestic violence related assaults reported in Kempsey from January 1995 to June 2022, and the autocorrelation functions.

Figure 10. Time series plots of Crime data in blue and its Estimates values in red from the model.

Table 4. Summary statistics of the estimated crime data obtained from the branching negative binomial model.

see good agreement with the summary of the crime data in Table 2.

5. Conclusion and Suggestions

In this paper, we introduced the branching negative binomial (bNB) autoregressive model as an alternative model for time series of count data. It allows for overdispersion and serial dependence in count observations. The simulation study provides a performance evaluation with maximum likelihood as the estimation method for the model parameters. Finally, crime count data from Kempsey, a city in Australia, are used to demonstrate the usefulness of the developed model. Suggestions for future work include extending the model to account for higher-order dependence, allowing for covariates, and multivariate extensions.

Acknowledgements

First and foremost, I would like to express my sincere gratitude to my supervisors, Dr. BONFACE MIYA MALENJE and Dr. HERBERT IMBOGA, for their support, encouragement, and insightful guidance during my study at Pan African University, Institute of Basic Science, Technology and Innovation (PAUSTI). They guided me to sharpen my thinking and brought my academic work to a higher level. Without them, I could not have reached this point. I want to thank the African Union Commission for funding my master's studies. I particularly thank all the faculty members of the Mathematics Department of PAUSTI for their help in many ways to improve my knowledge. Finally, I would like to extend my special and sincere thanks to Dr. IBRAHIMA BAKAYOKO from the University of N'Zerekore (UZ), Guinea, for not only the basic knowledge but also the wonderful advice and motivation that he provided to me during my undergraduate studies, and Dr. TAMBA NICOLAS MILLIMONO from the Institute of Science and Technology of Mamou (ISTM), Guinea, for the support and motivation that he is giving me to succeed in my scientific career. Last but not least, I want to thank God for sustaining me throughout the previous years and for placing wonderful people around me.

Conflicts of Interest

The authors declare no conflicts of interest.

[1] Zeger, S.L. (1988) A Regression Model for Time Series of Counts. Biometrika, 75, 621-629. https://doi.org/10.1093/biomet/75.4.621

[2] Davis, R.A., Dunsmuir, W.T. and Wang, Y. (2000) On Autocorrelation in a Poisson Regression Model. Biometrika, 87, 491-505. https://doi.org/10.1093/biomet/87.3.491

[3] Brännäs, K. and Johansson, P. (1994) Time Series Count Data Regression. Communications in Statistics—Theory and Methods, 23, 2907-2925. https://doi.org/10.1080/03610929408831424

[4] Quddus, M.A. (2008) Time Series Count Data Models: An Empirical Application to Traffic Accidents. Accident Analysis and Prevention, 40, 1732-1741. https://doi.org/10.1016/j.aap.2008.06.011

[5] Saridakis, G. (2004) Violent Crime in the United States of America: A Time-Series Analysis between 1960-2000. European Journal of Law and Economics, 18, 203-221. https://doi.org/10.1023/B:EJLE.0000045082.09601.b2

[6] Borowik, G., Wawrzyniak, Z.M. and Cichosz, P. (2018) Time Series Analysis for Crime Forecasting. 2018 26th International Conference on Systems Engineering (ICSEng), Sydney, 18-20 December 2018, 1-10. https://doi.org/10.1109/ICSENG.2018.8638179

[7] Banerjee, D. (2014) Forecasting of Indian Stock Market Using Time-Series ARIMA Model. 2014 2nd International Conference on Business and Information Management (ICBIM), IEEE, Durgapur, 9-11 January 2014, 131-135. https://doi.org/10.1109/ICBIM.2014.6970973

[8] Idrees, S.M., Alam, M.A. and Agarwal, P. (2019) A Prediction Approach for Stock Market Volatility Based on Time Series Data. IEEE Access, 7, 17287-17298. https://doi.org/10.1109/ACCESS.2019.2895252

[9] Alqawba, M. and Diawara, N. (2021) Copula-Based Markov Zero-Inflated Count Time Series Models with Application. Journal of Applied Statistics, 48, 786-803. https://doi.org/10.1080/02664763.2020.1748581

[10] Weiß, C.H. (2009) Modelling Time Series of Counts with Overdispersion. Statistical Methods and Applications, 18, 507-519. https://doi.org/10.1007/s10260-008-0108-6

[11] Bourguignon, M. and Weiß, C.H. (2017) An INAR(1) Process for Modeling Count Time Series with Equidispersion, Underdispersion and Overdispersion. Test, 26, 847-868. https://doi.org/10.1007/s11749-017-0536-4

[12] Al-Osh, M.A. and Aly, E.E.A. (1992) First Order Autoregressive Time Series with Negative Binomial and Geometric Marginals. Communications in Statistics—Theory and Methods, 21, 2483-2492. https://doi.org/10.1080/03610929208830925

[13] Al-Osh, M.A. and Alzaid, A.A. (1987) First-Order Integer-Valued Autoregressive (INAR(1)) Process. Journal of Time Series Analysis, 8, 261-275. https://doi.org/10.1111/j.1467-9892.1987.tb00438.x

[14] Blunsom, P. (2004) Hidden Markov Models. Lecture Notes, 15, 48.

[15] Olteanu, M. and Ridgway, J. (2012) Hidden Markov Models for Time Series of Counts with Excess Zeros. European Symposium on Artificial Neural Networks, Bruges, 25-27 April 2012, 133-138. https://hal.archives-ouvertes.fr/hal-00655588

[16] Fokianos, K. (2012) Count Time Series Models. In: Handbook of Statistics, Vol. 30, Elsevier, Amsterdam, 315-347. https://doi.org/10.1016/B978-0-444-53858-1.00012-0

[17] Weiß, C.H. (2021) Stationary Count Time Series Modelings. Wiley Interdisciplinary Reviews: Computational Statistics, 13, e1502. https://doi.org/10.1002/wics.1502

[18] Heinen, A. (2003) Modelling Time Series Count Data: An Autoregressive Conditional Poisson Model. https://ssrn.com/abstract=1117187 https://doi.org/10.2139/ssrn.1117187

[19] Ristić, M.M., Bakouch, H.S. and Nastić, A.S. (2009) A New Geometric First-Order Integer-Valued Autoregressive (NGINAR(1)) Process. Journal of Statistical Planning and Inference, 139, 2218-2226. https://doi.org/10.1016/j.jspi.2008.10.007

[20] Zhu, R. and Joe, H. (2010) Negative Binomial Time Series Models Based on Expectation Thinning Operators. Journal of Statistical Planning and Inference, 140, 1874-1888. https://doi.org/10.1016/j.jspi.2010.01.031

[21] Jose, K.K. and Mariyamma, K.D. (2016) A Note on an Integer Valued Time Series Model with Poisson-Negative Binomial Marginal Distribution. Communications in Statistics—Theory and Methods, 45, 123-131. https://doi.org/10.1080/03610926.2013.826979

[22] Liu, M., Li, Q. and Zhu, F. (2019) Threshold Negative Binomial Autoregressive Model. Statistics, 53, 1-25. https://doi.org/10.1080/02331888.2018.1546307

[23] Wolpert, R.L. and Brown, L.D. (2011) Stationary Infinitely-Divisible Markov Processes with Non-Negative Integer Values. http://www2.stat.duke.edu/~rlw/talks/rlw-ldb.pdf


Copyright © 2023 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.