An Approximation Method for a Maximum Likelihood Equation System and Application to the Analysis of Accidents Data
1. Introduction
Approximation methods for Maximum Likelihood (ML) systems of equations are of interest and are motivated in this paper by the need to find estimation methods that are simple and easy to implement in the specific field of the statistical evaluation of the impact of a road safety measure. In practice, the estimation methods dedicated to this evaluation depend both on the nature of the measure and on the available data. Methods based on the combination of frequencies have received considerable attention [1] [2] [3] and, for most of them, we are faced with the estimation of unknown parameters which are often functionally dependent.
Many approximation methods for maximum likelihood estimation need to solve systems of linear or non-linear equations, with or without constraints [4] - [9]. The Newton-Raphson method and Fisher scoring are certainly the most commonly used approximation methods. They consist in updating the whole parameter vector using the iterative scheme

$$\theta^{(k+1)} = \theta^{(k)} + \left[ I\big(\theta^{(k)}\big) \right]^{-1} \nabla \ell\big(\theta^{(k)}\big), \quad (1)$$

where $\theta^{(k)}$ is the estimate of the vector parameter at step $k$, $\ell$ is the log-likelihood function, $\nabla \ell$ is the gradient of $\ell$ and $I(\theta^{(k)})$ is the observed or the expected information matrix. Both methods require the computation of second-order partial derivatives and a matrix inversion at each iteration, which can be very costly. Some authors, such as Wang [10], have proposed quadratic approximations and extended Fisher scoring. The main point is to use quadratic approximations to the log-likelihood function and optimize these approximations within the constrained parameter space.
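As a concrete illustration of the scheme (1), here is a minimal one-dimensional sketch in R, estimating the success probability of a binomial sample; the example and all its names are ours and only illustrate the generic update, not the method developed in this paper.

```r
# Newton-Raphson update (1) in dimension one: p <- p + score(p) / info(p).
# Minimal sketch for the MLE of a binomial success probability.
newton_binomial <- function(x, n, p0 = 0.5, tol = 1e-8, maxit = 100) {
  p <- p0
  for (k in seq_len(maxit)) {
    score <- x / p - (n - x) / (1 - p)        # gradient of the log-likelihood
    info  <- x / p^2 + (n - x) / (1 - p)^2    # observed information
    p_new <- p + score / info                 # iterative scheme (1)
    if (abs(p_new - p) < tol) break
    p <- p_new
  }
  p
}
newton_binomial(x = 37, n = 100)   # converges to the MLE x/n = 0.37
```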
Within the framework of crash data analysis, different iterative estimation methods have been proposed [2] [11]. For example, Mkhadri et al. [11] propose a Minorization-Maximization (MM) algorithm for the maximum likelihood estimation of the parameter vector of a multinomial distribution modelling crash data. Their proposed MM algorithm cycles through the components of the parameter vector and updates one component at a time, which leads to closed-form expressions of the parameters. They claim that their MM algorithm is simple to implement without any matrix inversion, and that constraints are integrated easily.
Despite the above advantages, the choice of the starting value $\theta^{(0)}$ remains a major issue, since a value of $\theta^{(0)}$ relatively far from the true unknown value of the vector parameter can lead to erroneous solutions or to non-convergence. In addition, it must be noted that obtaining explicit expressions of standard errors is generally not easy.
In this paper, we present an estimation method without matrix inversion, based on a linear approximation of the likelihood equations in a neighborhood of the constrained maximum likelihood estimator. We obtain closed-form approximations of solutions and standard errors. Then, we propose a partial linear approximation (PLA) algorithm which cycles through the components of the vector parameter and updates one component at a time. The choice of the initial solution is automated and standard errors are obtained in closed form. The PLA is compared to some of the best available algorithms in R and MATLAB software through a simulation study and applied to real crash data.
The remainder of the paper is organized as follows. Section 2 is devoted to the statistical model and the main assumptions used to get closed-form approximations and standard errors. The proposed estimation method and the method for computing standard errors are presented in Section 3. The general framework of the proposed algorithm is also described. In Section 4, we give an illustration of our results using a crash data model. The numerical performance of the proposed algorithm is examined in Section 5 through a simulation study, while a real-data application is given in Section 6. The appendix is devoted to the technical details of the main results.
2. Statistical Model and Main Assumptions
2.1. Statistical Model
Let $X = (X_1, \ldots, X_N)^T$ be a random vector with $N$ components and $p(\theta) = (p_1(\theta), \ldots, p_N(\theta))^T$ be a vector of probabilities such that $\sum_{i=1}^{N} p_i(\theta) = 1$, where $\theta$ is a parameter vector. It is assumed that the vector $X$ has the multinomial distribution $\mathcal{M}(n, p(\theta))$, where $n$ is a known integer. The basic principle of the multinomial distribution $\mathcal{M}(n, p(\theta))$ consists in distributing $n$ items in $N$ categories or classes ($N$ being the number of components of vector $X$). The probability for an item to fall in a class is called class probability, with the sum of all class probabilities equal to 1. Here, the class probabilities $p_i(\theta)$ depend on the unknown vector parameter $\theta$.

Given a vector of integers $x = (x_1, \ldots, x_N)^T$ such that $\sum_{i=1}^{N} x_i = n$, the probability function related to $X$ is defined by

$$P(X = x) = \frac{n!}{\prod_{i=1}^{N} x_i!} \prod_{i=1}^{N} \big( p_i(\theta) \big)^{x_i}. \quad (2)$$
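In base R, the probability function (2) is readily available (under the reconstructed notation above, with $n$ items over $N$ classes):

```r
# Evaluating the multinomial probability function (2) with dmultinom().
p <- c(0.2, 0.5, 0.3)   # class probabilities, summing to 1
x <- c(2, 5, 3)         # observed counts; sum(x) = n = 10
dmultinom(x, size = sum(x), prob = p)               # value of (2)
dmultinom(x, size = sum(x), prob = p, log = TRUE)   # log-likelihood at x
```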
2.2. ML Estimation and Main Assumptions
Assumption 1. The vector parameter $\theta$ is partitioned as $\theta = (\theta_1, \beta^T)^T$, where $\theta_1$ is a real parameter, $\beta = (\beta_1, \beta_2, \ldots)^T$ is a vector and $\beta_j > 0$ for all $j$.

Assumption 2. The unknown vector $\beta$ is subject to a linear constraint $h(\beta) = 0$, where $h$ is a continuously differentiable real-valued function of $\beta$.
Let $x = (x_1, \ldots, x_N)^T$ be a vector of observed data and $\ell(\theta)$ be the logarithm of the probability density function defined by Equation (2). The constrained maximum likelihood estimator $\hat{\theta}$ is solution to the optimization problem

$$\hat{\theta} = \underset{\theta \,:\, h(\beta) = 0}{\arg\max}\ \ell(\theta). \quad (3)$$

Problem (3) is equivalent to the maximization of the function

$$\ell_{\lambda}(\theta) = \ell(\theta) + \lambda\, h(\beta), \quad (4)$$

where $\lambda$ is the Lagrange multiplier. The information matrix linked to the constrained maximum likelihood estimator $\hat{\theta}$ is

$$I(\theta) = \begin{pmatrix} \gamma & b^T \\ b & \Sigma \end{pmatrix}, \quad (5)$$

where $\gamma = -\mathbb{E}\big[\partial^2 \ell / \partial \theta_1^2\big]$, $b = -\mathbb{E}\big[\partial^2 \ell / \partial \theta_1 \partial \beta\big]$ and $\Sigma$ is a square matrix whose entries are $-\mathbb{E}\big[\partial^2 \ell / \partial \beta_j \partial \beta_k\big]$. We also assume that the following conditions are verified:
Assumption 3. The score component $\partial \ell / \partial \theta_1$ can be written by means of the inner product $\langle \cdot, \cdot \rangle$ of $\beta$ with a data vector and of a constant, in such a way that the solution in $\theta_1$ of the equation $\partial \ell / \partial \theta_1 = 0$ depends on $\theta$ only through the sub-vector $\beta$.

Assumption 4. For any $\theta_1$, there exists a non-singular square matrix $A(\theta_1)$ such that the non-linear system formed by the remaining likelihood equations is approximated by the linear system $A(\theta_1)\, \beta = v(\theta_1)$, where $v(\theta_1)$ is a vector whose components are obtained from the observed data.

Assumption 5. There exists a function $\varphi$ such that the equation $\partial \ell / \partial \theta_1 = 0$ is equivalent to $\theta_1 = \varphi(\beta)$.

Assumption 6. There exist two strictly positive real numbers, a non-singular diagonal matrix $D$ and a vector $w$ such that the blocks of the information matrix (5) can be expressed through these quantities in a diagonal-plus-rank-one form.

Assumption 3 specifies the form of the score component $\partial \ell / \partial \theta_1$. In particular, the solution of $\partial \ell / \partial \theta_1 = 0$ is only a function of the sub-vector $\beta$. Assumptions 4 and 5 enable us to get $\theta_1$ from $\beta$ and conversely. Assumption 6 enables us to transform the Fisher information matrix in order to use classical results on matrix inversion with Schur's complement [17].
3. The Estimation Method
3.1. Partial Linear Approximation Principle
The general problem of finding the constrained maximum likelihood estimator has been discussed by many authors [12] [13] [14]. The classical approach is based on a Newton-type algorithm and computes the components of $\hat{\theta}$ all at once. Except for some few simple cases, it is not generally possible to get explicit expressions of the components of $\hat{\theta}$. One shows the following lemma (we refer the reader to the appendix for a proof).

Lemma 1. The constrained maximum likelihood estimator $\hat{\theta}$, provided it exists, is solution to the non-linear system

$$\begin{cases} \dfrac{\partial \ell}{\partial \theta_1}(\hat{\theta}) = 0, \\[4pt] \dfrac{\partial \ell}{\partial \beta_j}(\hat{\theta}) + \hat{\lambda}\, \dfrac{\partial h}{\partial \beta_j}(\hat{\beta}) = 0, \quad j \geq 1, \end{cases} \quad (6)$$

where $\hat{\lambda}$ is the value of the Lagrange multiplier obtained in closed form from the likelihood equations (see the appendix).

Our approach consists in dividing Equation (6) into two parts: one concerning the first component $\theta_1$ of $\theta$ and the other one concerning the sub-vector $\beta$.
Theorem 1. Under assumptions 3 - 5, the constrained MLE $\hat{\theta} = (\hat{\theta}_1, \hat{\beta}^T)^T$ is given by:

$$\hat{\theta}_1 = \varphi(\hat{\beta}), \qquad \hat{\beta} = A^{-1}(\hat{\theta}_1)\, v(\hat{\theta}_1).$$

The non-obvious part of the proof consists in the determination of $\hat{\beta}$ by inverting the matrix $A(\hat{\theta}_1)$ linked to the linear system in Assumption 4. This result, based on the inversion of partitioned matrices, will not be demonstrated in this paper. We refer the reader to classical papers on the Schur complement [15] [16].
From Theorem 1, it is seen that the MLE $\hat{\theta}$ is a fixed point of the map defined by $(\theta_1, \beta) \mapsto \big(\varphi(\beta),\, A^{-1}(\theta_1)\, v(\theta_1)\big)$. We can then build an iterative algorithm to estimate $\hat{\theta}$. The classical fixed-point method, which consists in simultaneously updating $\theta_1$ and $\beta$, may be hard to implement because of the link between $\theta_1$ and $\beta$. We propose instead to alternate between updating $\theta_1$ holding $\beta$ fixed and vice-versa. Starting from a given $\beta^{(0)}$, we compute $\theta_1^{(0)} = \varphi(\beta^{(0)})$, then $\beta^{(1)}$ and $\theta_1^{(1)}$, and so on. The process is repeated until a stopping criterion is satisfied. For example, we can stop the iterations when successive values of the log-likelihood satisfy the condition $\big| \ell(\theta^{(k+1)}) - \ell(\theta^{(k)}) \big| < \varepsilon$, where $\varepsilon > 0$ is a fixed tolerance.
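The alternating scheme can be written as a short generic skeleton; the functions `phi` and `solve_beta` below are hypothetical placeholders for the model-specific updates of Theorem 1, and the whole sketch only illustrates the cycling principle.

```r
# Generic skeleton of the alternating fixed-point scheme described above.
# `phi` maps beta to theta1 (Assumption 5); `solve_beta` solves the linear
# system of Assumption 4 for a given theta1. Both are placeholders.
alternate_fit <- function(phi, solve_beta, loglik, beta0,
                          eps = 1e-8, maxit = 500) {
  beta <- beta0
  theta1 <- phi(beta)
  ll_old <- loglik(theta1, beta)
  iter <- 0
  for (k in seq_len(maxit)) {
    beta   <- solve_beta(theta1)   # update beta holding theta1 fixed
    theta1 <- phi(beta)            # update theta1 holding beta fixed
    ll_new <- loglik(theta1, beta)
    iter <- k
    if (abs(ll_new - ll_old) < eps) break  # stop on the log-likelihood
    ll_old <- ll_new
  }
  list(theta1 = theta1, beta = beta, iterations = iter)
}
```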
The estimation process may be completed by the computation of standard errors with Theorem 2 below.

Theorem 2. Under assumptions 3 - 6, the asymptotic variances of the components of $\hat{\theta}$ are given by Equation (7) for $\hat{\theta}_1$ and by Equation (8) for the components of $\hat{\beta}$, where the real values $d_j$ (resp. $w_j$) are the diagonal elements (resp. the components) of the matrix $D$ (resp. of the vector $w$) introduced in Assumption 6.
The proof is given in the appendix. It stems from the results of N'Guessan and Langrand [17] .
3.2. General Framework of the Partial Linear Approximation Algorithm
Algorithm 1 (The partial linear approximation algorithm).

Step 0 (Initialization) Given the observed data, a tolerance $\varepsilon > 0$ and an initial value $\beta^{(0)}$, compute $\theta_1^{(0)} = \varphi(\beta^{(0)})$.

Step 1 (Loop for computing $\hat{\theta}$) For a given $k \geq 0$,

a) Compute $\beta^{(k+1)} = A^{-1}(\theta_1^{(k)})\, v(\theta_1^{(k)})$ and $\theta_1^{(k+1)} = \varphi(\beta^{(k+1)})$.

b) If $\big|\ell(\theta^{(k+1)}) - \ell(\theta^{(k)})\big| \geq \varepsilon$, then replace $k$ by $k+1$ and return to Step 1. Else, set $\hat{\theta}_1 = \theta_1^{(k+1)}$, $\hat{\beta} = \beta^{(k+1)}$ and go to Step 2.

Step 2 (Computation of standard errors)

a) Compute the asymptotic variance of $\hat{\theta}_1$ by Equation (7).

b) For $j = 1, 2, \ldots$, compute the asymptotic variances of the $\hat{\beta}_j$ by Equation (8).
The aim of this paper is not to conduct a theoretical study of the convergence of the proposed algorithm. We rather focus on the numerical aspects of this convergence through an application model. Nevertheless, we can notice that the estimation of $\hat{\theta}$ using our algorithm does not require any matrix inversion. It is thus reasonable to expect that the computation of the constrained maximum likelihood estimator is improved in terms of computation time.
4. Application to the Combination of Crash Data
4.1. Statistical Model
We apply the above algorithm to estimate the parameters of a statistical model used to assess the effect of a road safety measure applied to an experimental site presenting $r$ mutually exclusive accident types (fatal accidents, seriously injured people, slightly injured people, material damage, etc.) over a fixed period. Let us consider the random vector $X = (X_{11}, \ldots, X_{1r}, X_{21}, \ldots, X_{2r})^T$ where $X_{1j}$ and $X_{2j}$ respectively represent the number of accidents of type $j$ registered on the experimental site before and after the application of the road safety measure. In order to take into account some external factors (such as traffic flow, speed limit variation, weather conditions, regression to the mean effect, etc.), the experimental site is linked to a control area where the safety measure was not directly applied. The accident data for the control area over the same period are given by the non-random vector $z = (z_1, \ldots, z_r)^T$ where $z_j$ denotes the ratio of the number of accidents of type $j$ registered in the period after to the number of accidents of type $j$ registered in the period before. Following N'Guessan et al. [18], we assume that the vector $X$ has the multinomial distribution $\mathcal{M}(n, p(\theta))$ where $n$ is the total number of crashes recorded at the experimental site and $p(\theta)$ is a vector of class probabilities whose components are

$$p_{1j}(\theta) = \frac{\beta_j}{1 + \theta_1 \langle z, \beta \rangle}, \qquad p_{2j}(\theta) = \frac{\theta_1 z_j \beta_j}{1 + \theta_1 \langle z, \beta \rangle}, \qquad j = 1, \ldots, r. \quad (9)$$

By construction, the parameter vector $\theta = (\theta_1, \beta^T)^T$ of this model satisfies the conditions $\theta_1 > 0$, $\beta_j > 0$ and the linear constraint

$$h(\beta) = 1 - \sum_{j=1}^{r} \beta_j = 0, \quad (10)$$

where $\langle z, \beta \rangle = \sum_{j=1}^{r} z_j \beta_j$. The scalar $\theta_1$ denotes the unknown average effect of the road safety measure while each $\beta_j$ denotes the risk of having an accident of type $j$ on a site having the same characteristics as the experimental site. This model is a special case of the multinomial model proposed by N'Guessan et al. [18], which was applied simultaneously to several sites.
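A small helper encoding the class probabilities (9), as reconstructed above (a sketch assuming this parametrisation; the function name is ours):

```r
# Class probabilities (9): theta1 = mean effect, beta = accident risks,
# z = control coefficients. Returns (p_11,...,p_1r, p_21,...,p_2r).
class_probs <- function(theta1, beta, z) {
  denom <- 1 + theta1 * sum(z * beta)           # 1 + theta1 * <z, beta>
  c(beta / denom, theta1 * z * beta / denom)    # sums to 1 when sum(beta) = 1
}
class_probs(theta1 = 0.8, beta = c(0.2, 0.3, 0.5), z = c(1.1, 0.9, 1.0))
```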
4.2. Cyclic Estimation of the Average Effect and the Different Accident Risks
The log-likelihood is specified up to an additive constant by

$$\ell(\theta) = \sum_{j=1}^{r} (x_{1j} + x_{2j}) \log \beta_j + \sum_{j=1}^{r} x_{2j} \log(\theta_1 z_j) - n \log\big(1 + \theta_1 \langle z, \beta \rangle\big), \quad (11)$$

where $n = \sum_{j=1}^{r} (x_{1j} + x_{2j})$. Different iterative methods can be used to compute the constrained MLE $\hat{\theta}$. Most of them look for stationary points of the Lagrangian $\ell_{\lambda}(\theta) = \ell(\theta) + \lambda h(\beta)$. N'Guessan et al. [18] showed that a stationary point of $\ell_{\lambda}$ must be the solution to the following system of non-linear equations:

$$\begin{cases} \theta_1 \langle z, \beta \rangle \displaystyle\sum_{j=1}^{r} x_{1j} = \displaystyle\sum_{j=1}^{r} x_{2j}, \\[4pt] n (1 + \theta_1 z_j)\, \beta_j - \theta_1 (x_{1j} + x_{2j}) \langle z, \beta \rangle = x_{1j} + x_{2j}, \quad j = 1, \ldots, r, \end{cases} \quad (12)$$

where $\langle z, \beta \rangle = \sum_{j=1}^{r} z_j \beta_j$ and the second set of equations is obtained after elimination of the Lagrange multiplier.
The main idea proposed in this paper consists in neglecting the non-linear dependence on $\theta_1$, i.e. treating $\theta_1$ as fixed, so that we can write the remaining equations of (12) as the linear system of equations

$$\big( D(\theta_1) - \theta_1\, c\, z^T \big)\, \beta = c, \quad (13)$$

where $c = (x_{11} + x_{21}, \ldots, x_{1r} + x_{2r})^T$, $D(\theta_1)$ is a diagonal $r \times r$ matrix with diagonal elements $n(1 + \theta_1 z_j)$ and $z = (z_1, \ldots, z_r)^T$. Drawing our inspiration from the work by N'Guessan [19], we show that the linear system (13) has a unique solution, obtained explicitly through the Sherman-Morrison formula. The components of the constrained MLE $\hat{\theta}$ can then be computed as follows.

Corollary 1. The components of $\hat{\theta} = (\hat{\theta}_1, \hat{\beta}^T)^T$ are given by

$$\hat{\theta}_1 = \frac{\sum_{j=1}^{r} x_{2j}}{\langle z, \hat{\beta} \rangle \sum_{j=1}^{r} x_{1j}}, \quad (14)$$

$$\hat{\beta}_j = \frac{x_{1j} + x_{2j}}{n (1 + \hat{\theta}_1 z_j) \big(1 - \hat{\theta}_1 T(\hat{\theta}_1)\big)}, \qquad j = 1, \ldots, r, \quad (15)$$

where $T(\hat{\theta}_1) = \sum_{k=1}^{r} \dfrac{z_k (x_{1k} + x_{2k})}{n (1 + \hat{\theta}_1 z_k)}$.
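Under the formulas of Corollary 1 as reconstructed above, one cycle of the closed-form updates can be sketched in a few lines of R; the function names are ours.

```r
# Closed-form updates of Corollary 1. x1, x2, z are vectors of length r.
update_theta1 <- function(beta, x1, x2, z) {
  sum(x2) / (sum(z * beta) * sum(x1))              # Equation (14)
}
update_beta <- function(theta1, x1, x2, z) {
  n <- sum(x1) + sum(x2)
  c12 <- x1 + x2
  Tq <- sum(z * c12 / (n * (1 + theta1 * z)))      # the quantity T(theta1)
  c12 / (n * (1 + theta1 * z) * (1 - theta1 * Tq)) # Equation (15)
}
```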
Applying Theorem 2's results, we can give the asymptotic variances of the constrained MLE $\hat{\theta}$.

Corollary 2. The asymptotic variances of the components of the constrained maximum likelihood estimator $\hat{\theta}$ are given by Equation (16) for $\hat{\theta}_1$ and by Equation (17) for the components $\hat{\beta}_j$.

The technical proof uses the Schur complement approach and stems from [17]. One shows (see the appendix) that the elements of the asymptotic information matrix $I(\theta)$ linked to $\hat{\theta}$ are obtained from the second-order partial derivatives of the log-likelihood (11), as summarized in Equations (18) and (19). Setting the diagonal matrix $D$ and the vector $w$ accordingly, we show that Assumption 6 is satisfied. We then apply Theorem 2 to get the results of Corollary 2.
4.3. Practical Aspect of the Partial Linear Approximation Algorithm
Algorithm 2.

Step 0 (Initialization) Given the data $(x, z)$ and a tolerance $\varepsilon > 0$, set the initial value $\theta_1^{(0)}$ and compute $\beta^{(0)}$ by Equation (15).

Step 1 (Loop for computing $\hat{\theta}$) For a given $k \geq 0$,

a) Compute $\theta_1^{(k+1)}$ by Equation (14) using $\beta^{(k)}$.

b) For $j = 1, \ldots, r$, compute $\beta_j^{(k+1)}$ by Equation (15) using $\theta_1^{(k+1)}$.

c) If $\big|\ell(\theta^{(k+1)}) - \ell(\theta^{(k)})\big| \geq \varepsilon$, then replace $k$ by $k+1$ and return to Step 1. Else, set $\hat{\theta}_1 = \theta_1^{(k+1)}$, $\hat{\beta} = \beta^{(k+1)}$ and go to Step 2.

Step 2 (Computation of standard errors)

a) Compute the asymptotic variance of $\hat{\theta}_1$ by Equation (16).

b) For $j = 1, \ldots, r$, compute the asymptotic variances of the $\hat{\beta}_j$ by Equation (17).
The partial linear approximation algorithm for computing the constrained maximum likelihood estimator $\hat{\theta}$ of the model presented in subsection 4.1 stems from the cyclic algorithm of N'Guessan and Geraldo [20]. The PLA proceeds as follows: Step 1 estimates $\theta$ by alternating between its two components $\theta_1$ and $\beta$. To start the procedure, we initialize $\theta_1^{(0)}$, from which $\beta^{(0)}$ follows by Equation (15). But we could also initialize $\beta^{(0)}$ directly from the problem's data and get $\theta_1^{(0)}$ by Equation (14). The process is repeated until the stopping criterion is satisfied. We note that our algorithm is automated and can be started as soon as the problem's data $(x, z)$ are entered.

We can also note that the second partial derivatives of the log-likelihood function are no longer used in our algorithm. The aim of this paper is not to carry out a theoretical study of the convergence of the proposed algorithm. We rather focus on the numerical aspects of this convergence using simulated datasets.
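Putting the pieces together, Algorithm 2 can be sketched compactly in base R; this reuses the update functions given after Corollary 1 and is an illustration under the reconstructed model, not the authors' reference implementation.

```r
# Sketch of Algorithm 2: cyclic updates (14) then (15), stopped on the
# log-likelihood criterion. Requires update_theta1() and update_beta() above.
pla_fit <- function(x1, x2, z, theta1 = 1, eps = 1e-8, maxit = 500) {
  n <- sum(x1) + sum(x2)
  loglik <- function(th, b) {
    sum((x1 + x2) * log(b)) + sum(x2 * log(th * z)) -
      n * log(1 + th * sum(z * b))                # Equation (11)
  }
  beta <- update_beta(theta1, x1, x2, z)          # Step 0
  ll_old <- loglik(theta1, beta)
  iter <- 0
  for (k in seq_len(maxit)) {
    theta1 <- update_theta1(beta, x1, x2, z)      # Step 1 a), Equation (14)
    beta   <- update_beta(theta1, x1, x2, z)      # Step 1 b), Equation (15)
    ll_new <- loglik(theta1, beta)
    iter <- k
    if (abs(ll_new - ll_old) < eps) break         # Step 1 c)
    ll_old <- ll_new
  }
  list(theta1 = theta1, beta = beta, iterations = iter)
}
```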
5. Numerical Results with Simulated Datasets
5.1. Data Simulation Principle
For a given value of $r$ (the number of crash types), we generate the components of vector $z$ from a uniform random variable. The true value of $\theta_1$, denoted $\theta_1^0$, is fixed, and the true value of $\beta$, denoted $\beta^0$, such that $\sum_{j=1}^{r} \beta_j^0 = 1$, comes from uniform random variables normalized so that the components sum to one. Using those values, we compute the true probabilities $p(\theta^0)$ linked to the multinomial distribution presented in subsection 4.1. Finally, one generates the total number $n$ of crash data from a Poisson distribution and then randomly shares it between the before and after periods using the probabilities $p_{1j}(\theta^0)$ and $p_{2j}(\theta^0)$. The observed values $x_{1j}$ and $x_{2j}$, such that $n = \sum_{j=1}^{r} (x_{1j} + x_{2j})$, are then found.
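The simulation scheme of this subsection can be sketched as follows; the uniform supports and the Poisson mean are illustrative assumptions, since the paper's exact values are not reproduced here.

```r
# Data simulation sketch (subsection 5.1). Uses class_probs() from Section 4.
simulate_crash_data <- function(r, theta1_0, mean_n = 1000) {
  z <- runif(r, 0.5, 1.5)               # control coefficients (assumed support)
  u <- runif(r)
  beta_0 <- u / sum(u)                  # true risks, normalised to sum to 1
  p <- class_probs(theta1_0, beta_0, z) # true class probabilities, Eq. (9)
  n <- rpois(1, mean_n)                 # total number of crashes
  counts <- as.vector(rmultinom(1, size = n, prob = p))
  list(x1 = counts[1:r], x2 = counts[(r + 1):(2 * r)],
       z = z, theta1_0 = theta1_0, beta_0 = beta_0)
}
```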
5.2. Numerical Results
This subsection deals with the numerical convergence of the partial linear approximation algorithm. As usual in the study of iterative algorithms, we analyse the influence of the initial solution, the number of iterations, the computation time (CPU time) and the mean squared error. The performances of the partial linear approximation algorithm are compared to those of some classical optimization methods available in R and MATLAB software. The computations presented in this section were executed on a PC with an AMD E-350 1.6 GHz processor.
The methods selected for comparison are the Newton-Raphson method, the Nelder-Mead (NM) simplex algorithm [21], the quasi-Newton BFGS algorithm (named after its authors Broyden, Fletcher, Goldfarb and Shanno [22] [23] [24] [25]), the Interior Point (IP) algorithm [26], the Levenberg-Marquardt (LM) algorithm [27] [28] and Trust Region (TR) algorithms [29]. In our work, the BFGS and NM algorithms are implemented using the package developed by Varadhan [30].
The simulation process was performed on many simulated crash datasets. For each one, small and large values of $n$ were considered. The results presented here correspond to a fixed number $r$ of accident types and a fixed true value $\theta_1^0$, with one small and one large value of $n$.
Three different ways of setting the initial value $\beta^{(0)}$ were considered:

1) Uniform initialisation: $\beta_j^{(0)} = 1/r$, $j = 1, \ldots, r$.

2) Proportional initialisation: $\beta_j^{(0)} = (x_{1j} + x_{2j})/n$, $j = 1, \ldots, r$.

3) Random initialisation: for $j = 1, \ldots, r$, $\beta_j^{(0)} = u_j / \sum_{k=1}^{r} u_k$, where each $u_j$ is randomly generated from a uniform distribution $\mathcal{U}(0, 1)$.

By combining the two values of $n$ and the three ways to initialize $\beta^{(0)}$, we get six scenarios and for each one, we performed 1000 replications. The stopping criterion is the condition $\big|\ell(\theta^{(k+1)}) - \ell(\theta^{(k)})\big| < \varepsilon$.
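For illustration, the three rules can be coded as follows; the exact formulas behind the uniform and proportional labels are inferred from their names and should be read as assumptions.

```r
# Three ways of initialising beta^(0) for the simulation study.
init_beta <- function(method = c("uniform", "proportional", "random"),
                      x1, x2) {
  r <- length(x1)
  switch(match.arg(method),
    uniform      = rep(1 / r, r),
    proportional = (x1 + x2) / (sum(x1) + sum(x2)),
    random       = { u <- runif(r); u / sum(u) })
}
```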
Tables 1-6 present a few of the results obtained for an overall 6000 simulations. All computation times are given in seconds and the duration ratio of an algorithm is defined as the ratio between the mean computation time of this algorithm and the mean computation time of the PLA (therefore the duration ratio of the partial linear approximation algorithm always equals 1). The computation time depends on the computer used to perform the simulations while the duration ratio is computer-free and therefore more useful.

Table 1. Results for uniform initialisation of $\beta^{(0)}$ (small value of $n$).

Table 2. Results for uniform initialisation of $\beta^{(0)}$ (large value of $n$).

Table 3. Results for proportional initialisation of $\beta^{(0)}$ (small value of $n$).

Table 4. Results for proportional initialisation of $\beta^{(0)}$ (large value of $n$).
To analyse the convergence, we used the mean squared error (MSE) defined as

$$\mathrm{MSE} = \frac{1}{1000} \sum_{m=1}^{1000} \big\| \hat{\theta}_{(m)} - \theta^0 \big\|^2, \quad (20)$$

where $\hat{\theta}_{(m)}$ denotes the estimate obtained at the $m$-th replication.

Table 5. Results for random initialisation of $\beta^{(0)}$ (small value of $n$).

Table 6. Results for random initialisation of $\beta^{(0)}$ (large value of $n$).
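A direct transcription of the criterion (20), as reconstructed above:

```r
# MSE of Equation (20): average squared Euclidean distance between the
# replicated estimates (one per row) and the true parameter vector.
mse <- function(estimates, theta_true) {
  mean(rowSums(sweep(estimates, 2, theta_true)^2))
}
```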
One can see that the MSE decreases when $n$ (the total number of road crashes) increases. The PLA is at least as efficient as the other algorithms. It always converges to reasonable and acceptable parameter vector estimates and the estimate gets closer to the true value as $n$ increases.

For a small value of $n$ (see Table 1, Table 3 and Table 5), the estimate $\hat{\theta}$ produced by the PLA is relatively close to the true parameter vector and quite close to those of the other methods. More generally, all the compared methods have an MSE of the same order of magnitude. However, the estimates produced by the BFGS and Nelder-Mead methods can be very far from the true values (see Table 5). In the case of random initialisation, the MSE for the Nelder-Mead and BFGS algorithms is 20 times greater than those of the other algorithms.

When $n$ increases, the MSE of the PLA decreases markedly, so the PLA is also efficient. Unsurprisingly, when $n$ is very large, the estimates produced by the other algorithms are closer to the true values than those of the PLA. This is expected because the other algorithms work with the exact gradient. However, the Nelder-Mead and BFGS algorithms produce estimates very far from the truth: the order of magnitude of their MSE (Table 4 and Table 6) remains well above those of the PLA and the other methods.
To analyse the influence of the initial guess, we considered the mean number of iterations and the amplitude of the iterations (i.e. the difference between the maximum and the minimum number of iterations). An increase in the amplitude of iterations suggests a greater influence of the initial solution $\beta^{(0)}$. The results given in Tables 1-6 suggest that the PLA is stable and robust to initial guesses of the parameter being estimated. For the 6000 replications, the number of iterations needed by the PLA to converge lies between 2 and 4. In other words, setting the initial guess to 6000 different values chosen in the parameter space does not disturb the PLA. This performance is as good as those of Newton-Raphson and Levenberg-Marquardt and by far better than those of BFGS, Nelder-Mead and Interior Point, whose numbers of iterations vary respectively from 1 to 15, 1 to 21 and 3 to 31.
As far as the computation time is concerned, it can be noticed that all the CPU time ratios are greater than 1, which means that none of the compared algorithms is faster than the PLA. On average, the PLA is 4 to 5 times quicker than Newton-Raphson, 239 to 345 times quicker than the BFGS algorithm, 354 to 395 times quicker than Nelder-Mead, 27 to 31 times quicker than Levenberg-Marquardt, 33 to 39 times quicker than Trust Region algorithms and 311 to 383 times quicker than Interior Point algorithms. This can be an important factor when larger values of $n$ are considered.
6. Real-Data Analysis
We apply the partial linear approximation algorithm to data concerning changes applied to road markings on a rural Nord-Pas-de-Calais road (France) [31]. This road lay-out consisted in what is called "Italian marking". The road markings were modified in order to make overtaking impossible in both directions at the same time. On a short distance, there are two lanes in the first direction (overtaking is then allowed in that direction) and only one in the other direction. Both directions are separated by a white line. A bit farther away, the system is inverted so that overtaking is possible in the opposite direction, and so on. The data recorded over eight years (four years before and four years after) on the marked area are given in Table 7.

Note that the number of fatal crashes (Fatal) decreased from four (in the before period) to one after the lay-out change. Indeed, there were four accidents with at least one person killed in the period before the road works and one crash with at least one person killed in the period after. The number of slight crashes (no serious bodily injuries involved) recorded over the same period was more than halved. For the same lengths of time, on a portion of National Road 17 used as a control area, the accident records are given in Table 8.

The values given in Table 8 are obtained by dividing, for each accident type, the number of crashes after the changes by the number of crashes before. On the whole, a decrease can be noted in comparison with the control area accident numbers between the 4-year period after the changes and the 4-year period before. All the algorithms can be applied to Table 7 and Table 8. Since the simulation results suggested that the partial linear approximation algorithm converges after a few iterations and remains steady, we only present the estimations related to the partial linear approximation algorithm (Table 9).
Table 7. Crash data for the experimental zone.

Table 8. Crash data for the control zone.

Table 9. Pas-de-Calais crash data estimated using the partial linear approximation algorithm.

Parameters $\beta_1$, $\beta_2$ and $\beta_3$ enable us to assess the risks for each type of accident on the test area over the eight years during which the road markings' effects are studied. The estimated values $\hat{\beta}_1$, $\hat{\beta}_2$ and $\hat{\beta}_3$ reported in Table 9 give the respective proportions of fatal, serious and slight crashes recorded in the test area as compared to the crashes recorded in the control area over these eight years.

The estimated mean efficiency index $\hat{\theta}_1$ is 0.7054. It corresponds to an average decrease in proportion of about $29.5\%$ of the whole set of accidents in the test area as compared to the average trend in the control area. The significance of the mean effect for this type of lay-out may also be tested by using the confidence interval at the $95\%$ level associated to the parameter $\theta_1$. This confidence interval reveals a bracket of values whose lower (resp. upper) bound is strictly superior to 0 (resp. 1). Indeed, since the value 1 belongs to this interval, we cannot reject the hypothesis $H_0 : \theta_1 = 1$ against $H_1 : \theta_1 \neq 1$ with a type-1 error of $5\%$. Even if, in the case studied here, we notice a decrease of about $29.5\%$ in the average accident number in the test area, the above test shows that the mean efficiency index is not significantly different from 1, so that we cannot conclude that this type of road marking is efficient. In practice, an analysis according to periods, recorded data and control area should be carried out in order to get more appropriate conclusions.
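The test described above amounts to checking whether 1 lies in a Wald-type confidence interval; here is a minimal sketch, assuming asymptotic normality of $\hat{\theta}_1$ and a standard error computed from Equation (16) (the numerical value used below is hypothetical).

```r
# 95% Wald confidence interval for theta1; se_theta1 comes from Theorem 2.
wald_ci <- function(theta1_hat, se_theta1, level = 0.95) {
  zq <- qnorm(1 - (1 - level) / 2)
  c(lower = theta1_hat - zq * se_theta1,
    upper = theta1_hat + zq * se_theta1)
}
wald_ci(0.7054, se_theta1 = 0.17)  # hypothetical se; the interval contains 1,
                                   # so H0: theta1 = 1 is not rejected at 5%
```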
7. Concluding Remarks
We propose in this paper, under assumptions, a principle of 2-block splitting of a constrained Maximum Likelihood (ML) system of equations linked to a parameter vector. We obtain analytical approximations of the components of the constrained ML estimator. The asymptotic standard errors are also obtained in closed form.

We then build an iterative algorithm by initializing the first component of the parameter vector, without inverting (at each iteration) the information matrix or the Hessian matrix. Our partial linear approximation algorithm cycles through the components of the parameter vector and updates one component at a time. It is very simple to program and the constraints are integrated easily. To implement our algorithm, we use a particular version of the multinomial model of N'Guessan et al. [18] used to estimate the average effect of a road safety measure and the different accident risks when road conditions are improved. We prove that the assumptions are all satisfied and we obtain simple expressions of the estimators and their asymptotic variances. Afterwards, we give a practical version of our algorithm.

The numerical performance of the proposed algorithm is examined through a simulation study and compared to those of classical methods available in R and MATLAB software. The choice of these other methods is dictated not only by the fact that they are relevant and integrated in most statistical software, but also by the fact that some need second-order derivatives and others do not. The comparisons suggest that the partial linear approximation algorithm is not only competitive in statistical accuracy and computational speed with the best currently available algorithms, but also undisturbed by the initial guess.

The link between the numerical performance of our algorithm and the particular model used in this paper may seem to limit the competitiveness of our algorithm with regard to the other methods considered here. However, this particular choice of model not only shows the feasibility of our algorithm but also represents a good basis for approaching, under additional conditions, more general models.

The simulation results obtained on the particular model considered in this paper suggest that our algorithm may be extended to other families of multinomial models (such as the model of N'Guessan et al. [18]). Drawing our inspiration from [32], we may also prove the asymptotic strong consistency of the estimator obtained from our algorithm. This perspective will give a wider interest to our algorithm in the context of large-scale use.
Acknowledgements
We thank the Editor and the anonymous referee for their remarks and suggestions, which enabled a substantial improvement of this paper. This article was partially written while the second author was visiting Lille 1 University (France).
8. Appendix
8.1. Proof of Lemma 1
Proof. Problem (3) is equivalent to maximizing the function

$$\ell_{\lambda}(\theta) = \ell(\theta) + \lambda\, h(\beta), \quad (21)$$

where $\lambda$ is the Lagrange multiplier. Any solution $(\hat{\theta}, \hat{\lambda})$ must satisfy

$$\nabla \ell_{\hat{\lambda}}(\hat{\theta}) = 0. \quad (22)$$

From Assumption 3, system (22) is equivalent to a system of the form

$$\frac{\partial \ell}{\partial \theta_1}(\hat{\theta}) = 0, \qquad \frac{\partial \ell}{\partial \beta_j}(\hat{\theta}) + \hat{\lambda}\, \frac{\partial h}{\partial \beta_j}(\hat{\beta}) = 0, \quad (23)$$

where the second set of equations runs over the components of $\beta$. After multiplication by $\beta_j$ and summation over the index $j$, we get a scalar equation in $\hat{\lambda}$, which is equivalent to an explicit expression of $\hat{\lambda}$. We obtain (6) by substitution of $\hat{\lambda}$ in (23). □
8.2. Proof of Theorem 2
Proof. Under Assumption 6, the matrix $I(\theta)$ is non-singular and, after some matrix manipulations based on the Schur complement, we get its inverse in partitioned form: the upper-left entry is a scalar giving the asymptotic variance of $\hat{\theta}_1$, and the lower-right block is the asymptotic variance-covariance matrix of $\hat{\beta}$. Using the diagonal-plus-rank-one structure of Assumption 6, the inverse of this lower-right block is obtained explicitly, its diagonal elements giving the asymptotic variances of the components of $\hat{\beta}$. After straightforward calculation, we get the expressions (7) and (8), where the $d_j$ and the $w_j$ are as in Theorem 2. The results of Theorem 2 follow from a direct calculation. □
8.3. Proof of Corollary 2
Proof. Taking the second derivatives of the negative of the log-likelihood (11) with respect to the components of $\theta_1$ and $\beta$, we obtain the entries of the information matrix $I(\theta)$ given by Equation (18). Now, setting the scalar block, the diagonal matrix $D$ and the vector $w$ of Assumption 6 from these entries, we obtain the components of the matrix $\Sigma$ in a diagonal-plus-rank-one form, where the diagonal elements of $D$ and the components of $w$ are expressed through the quantities defined in Equation (19). Using these expressions of the elements of $D$ and $w$ and after straightforward calculation, we deduce the expression of the asymptotic variance of $\hat{\theta}_1$ given by Corollary 2. In the same way, we obtain the asymptotic variances of the components $\hat{\beta}_j$. □