A Study of EM Algorithm as an Imputation Method: A Model-Based Simulation Study with Application to a Synthetic Compositional Data

Yisa Adeniyi Abolade, Yichuan Zhao

Department of Mathematics and Statistics, Georgia State University, Atlanta, Georgia, USA.

**DOI:** 10.4236/ojmsi.2024.122002


Compositional data, which carry only relative information, are a crucial kind of data in machine learning and related fields. They are typically recorded as closed data, i.e., data that sum to a constant such as 100%. The linear regression model is a widely used statistical technique for uncovering relationships between variables of interest; its parameter estimates, usually obtained by maximum likelihood estimation (MLE), support tasks such as future prediction and partial effects analysis of the independent variables. However, data quality is a significant challenge in machine learning, and many datasets contain missing observations, which can make data recovery costly and time-consuming. To address this issue, the expectation-maximization (EM) algorithm has been suggested as a solution for situations involving missing data. The EM algorithm iteratively finds maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models that depend on unobserved variables or data. Using the current parameter estimate as input, the expectation (E) step constructs the expected log-likelihood function; the maximization (M) step then finds the parameters that maximize the expected log-likelihood determined in the E step. This study examined how well the EM algorithm performs on a synthetic compositional dataset with missing observations, using both ordinary least square and robust least square regression versions. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-Nearest Neighbor (k-NN) and mean imputation ( $\stackrel{\xaf}{x}$ ), in terms of Aitchison distances and covariances.

Keywords

Compositional Data, Linear Regression Model, Least Square Method, Robust Least Square Method, Synthetic Data, Aitchison Distance, Maximum Likelihood Estimation, Expectation-Maximization Algorithm, k-Nearest Neighbor, Mean Imputation

Share and Cite:

Abolade, Y. and Zhao, Y. (2024) A Study of EM Algorithm as an Imputation Method: A Model-Based Simulation Study with Application to a Synthetic Compositional Data. *Open Journal of Modelling and Simulation*, **12**, 33-42. doi: 10.4236/ojmsi.2024.122002.

1. Introduction

Compositional data consist exclusively of relative information: the parts describe a broader whole. They are typically recorded as closed data, i.e., data that sum to a constant such as 100%. An illustrative instance in medicine is the examination of the constituent elements of bodily fluids such as blood and urine. The statistical linear model is frequently employed to uncover latent associations among random variables of interest because of its user-friendly nature and interpretability. In machine learning and its associated disciplines, ensuring data quality is a significant challenge: since machine learning algorithms rely solely on data, the quality of the information they produce depends crucially on the quality of the underlying data. One significant data-quality concern is the presence of missing data, particularly in compositional datasets. The linear regression model is a widely employed statistical modeling technique used across various applications to ascertain relationships between variables of interest. Maximum likelihood estimation (MLE) is commonly employed to estimate the parameters of linear regression by determining the values that maximize the likelihood function given the observed data. The fitted model can then be used for partial effects analysis of the independent variables as well as for making predictions about future outcomes.

However, many datasets exhibit missing observations. Participants may decline to answer a survey question, files may be destroyed, or data may be inadequately preserved. Restarting data collection and recovery in such cases incurs financial expense and takes a significant amount of time, so the problem of estimation from incomplete data demands attention. The expectation-maximization (EM) algorithm has been proposed as a potential solution for scenarios with missing data due to its robust convergence properties. It is an iterative technique for finding maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models that involve unobserved variables or unobserved data. The E-step constructs the expected value of the complete-data log-likelihood, using the current parameter estimate and the conditional distribution of the latent variables or missing data given the observed data. The M-step then estimates the parameters that maximize the expected log-likelihood obtained in the E-step, and these updated estimates are used in the subsequent E-step.

Despite the wide investigation of the EM algorithm as an imputation tool, there is a lack of knowledge regarding its effectiveness on compositional data. This study investigates the performance of the EM method on a synthetic compositional dataset with missing observations. Two regression versions of the EM imputation, least square and its robust variant, are utilized and evaluated. The EM technique is applied in simulation studies by iterating on a compositional dataset with randomly missing data and outliers, assuming a normal distribution. The effectiveness of the EM method was evaluated by comparing its results with two commonly employed imputation techniques, namely k-Nearest Neighbor (k-NN) and mean imputation ( $\stackrel{\xaf}{x}$ ), in terms of Aitchison distance and covariance [1]. Based on the conducted trials, the robust variant of the EM algorithm exhibited superior performance compared to the alternative imputation strategies.

2. Methodology

2.1. Linear Regression Model

We consider a linear regression model with a one-dimensional predictor and a one-dimensional response. Suppose we have *n* observations in our dataset. We define the predictor
$X=\left({x}_{1},{x}_{2},\cdots ,{x}_{n}\right)$ , and the response
$Y=\left({y}_{1},{y}_{2},\cdots ,{y}_{n}\right)$ . For the *i*^{th} observation, we assume that
${y}_{i}$ and
${x}_{i}$ are related by the linear regression model in Equation (1):

${y}_{i}={\beta}_{0}+{\beta}_{1}{x}_{i}+\u03f5,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\u03f5~NID\left(0,{\sigma}^{2}\right)$ (1)

We assume that
${x}_{i}~N\left(\alpha ,{\delta}^{2}\right)$ ,* i.i.d*. Under such assumptions, the conditional distribution of *Y* given *X* is
$\left[Y|X\right]~N\left({\beta}_{0}+{\beta}_{1}X,{\sigma}^{2}\right)$ . Then we can write down the joint probability density of *X* and *Y* given by

$\begin{array}{c}f\left({y}_{i},{x}_{i}\right)=f\left({y}_{i}|{x}_{i}\right)f\left({x}_{i}\right)\\ =\frac{1}{\sigma \sqrt{2\pi}}{\text{e}}^{-\frac{1}{2{\sigma}^{2}}{\left({y}_{i}-{\beta}_{0}-{\beta}_{1}{x}_{i}\right)}^{2}}\times \frac{1}{\delta \sqrt{2\pi}}{\text{e}}^{-\frac{1}{2{\delta}^{2}}{\left({x}_{i}-\alpha \right)}^{2}}\end{array}$ (2)
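The data-generating model (1) and the joint density (2) can be sketched directly in code. The parameter values below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values (assumptions, not values from the paper).
beta0, beta1, sigma2 = 1.0, 2.0, 0.25   # intercept, slope, error variance
alpha, delta2 = 0.0, 1.0                # mean and variance of the predictor

n = 1000
x = rng.normal(alpha, np.sqrt(delta2), size=n)                    # x_i ~ N(alpha, delta^2)
y = beta0 + beta1 * x + rng.normal(0.0, np.sqrt(sigma2), size=n)  # Equation (1)

def joint_density(y_i, x_i):
    """Joint density f(y_i, x_i) = f(y_i | x_i) * f(x_i), as in Equation (2)."""
    f_y_given_x = np.exp(-(y_i - beta0 - beta1 * x_i) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    f_x = np.exp(-(x_i - alpha) ** 2 / (2 * delta2)) / np.sqrt(2 * np.pi * delta2)
    return f_y_given_x * f_x
```

The factorization $f(y_i, x_i) = f(y_i \mid x_i) f(x_i)$ mirrors the two exponential factors in Equation (2).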

2.2. Missing Values

The data is not completely observed in many real-world scenarios. We expand our model so that the response values *Y* are fully observed while only *m* of the predictor values are observed (*i.e.*,
$n-m$ predictor values are missing). We can arrange the dataset so that the first *m* observations are fully observed.

${X}_{comp}=\left({x}_{1},\cdots ,{x}_{m},{x}_{m+1},\cdots ,{x}_{n}\right)=\left({X}_{obs},{X}_{miss}\right)$ (3)

The complete-data log-likelihood for the model thus decomposes into the observed and missing parts, as in Equation (4).

$\begin{array}{c}L\left(\theta ;X,Y\right)={\displaystyle \underset{i=1}{\overset{n}{\sum}}L\left(\theta ;{x}_{i},{y}_{i}\right)}\\ =-n\mathrm{log}\sqrt{2\pi {\sigma}^{2}}-n\mathrm{log}\sqrt{2\pi {\delta}^{2}}-\frac{1}{2{\sigma}^{2}}{\displaystyle \underset{i=1}{\overset{m}{\sum}}{\left({y}_{i}-{\beta}_{0}-{\beta}_{1}{x}_{i}\right)}^{2}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-\frac{1}{2{\sigma}^{2}}{\displaystyle \underset{i=m+1}{\overset{n}{\sum}}{\left({y}_{i}-{\beta}_{0}-{\beta}_{1}{x}_{i}\right)}^{2}}-\frac{1}{2{\delta}^{2}}{\displaystyle \underset{i=1}{\overset{m}{\sum}}{\left({x}_{i}-\alpha \right)}^{2}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-\frac{1}{2{\delta}^{2}}{\displaystyle \underset{i=m+1}{\overset{n}{\sum}}{\left({x}_{i}-\alpha \right)}^{2}}\end{array}$ (4)

where $\theta =\left({\beta}_{0},{\beta}_{1},{\sigma}^{2},\alpha ,{\delta}^{2}\right)\in {\mathbb{R}}^{5}$

2.3. The EM Algorithm Formulation

The issue with the above calculation is that *X _{miss}* is not observed and needs to be estimated. One reasonable approach is to replace each
${x}_{m+1},\cdots ,{x}_{n}$ by its conditional expectation given the observed data.

1) *E-step*:

$\begin{array}{l}E\left[\underset{i=m+1}{\overset{n}{{\displaystyle \sum}}}{\left({y}_{i}-{\beta}_{0}-{\beta}_{1}{x}_{i}\right)}^{2}\right]\\ =\underset{i=m+1}{\overset{n}{{\displaystyle \sum}}}\left({\left({y}_{i}-{\beta}_{0}\right)}^{2}+{\beta}_{1}^{2}E\left({X}_{i}^{2}|{y}_{i},{\theta}^{*}\right)-2{\beta}_{1}\left({y}_{i}-{\beta}_{0}\right)E\left({X}_{i}|{y}_{i},{\theta}^{*}\right)\right)\end{array}$ * *

$\begin{array}{l}E\left[\underset{i=m+1}{\overset{n}{{\displaystyle \sum}}}{\left({x}_{i}-\alpha \right)}^{2}\right]\\ =\underset{i=m+1}{\overset{n}{{\displaystyle \sum}}}\left(E\left({X}_{i}^{2}|{y}_{i},{\theta}^{*}\right)-2\alpha E\left({X}_{i}|{y}_{i},{\theta}^{*}\right)+{\alpha}^{2}\right)\end{array}$ (5)

where
$E\left[{x}_{i}|{y}_{i},{\theta}^{*}\right]$ and
$E\left[{x}_{i}^{2}|{y}_{i},{\theta}^{*}\right]$ are the first and second conditional moments, respectively. Since *X* and *Y* have a bivariate normal distribution, we can derive the conditional of *X *given *Y* and
${\theta}^{*}$

$E\left[{X}_{i}|{y}_{i},{\theta}^{*}\right]~N\left(\alpha +\frac{{\beta}_{1}{\delta}^{2}}{{\sigma}^{2}+{\beta}_{1}^{2}{\delta}^{2}}\left({y}_{i}-{\beta}_{0}-{\beta}_{1}\alpha \right),\frac{{\sigma}^{2}{\delta}^{2}}{{\sigma}^{2}+{\beta}_{1}^{2}{\delta}^{2}}\right)$ (6)

Then we can easily find the conditional first and second moments of *X _{miss}* given ${y}_{i}$ and ${\theta}^{*}$ :

${M}_{i}^{1}=\alpha +\frac{{\beta}_{1}{\delta}^{2}}{{\sigma}^{2}+{\beta}_{1}^{2}{\delta}^{2}}\left({y}_{i}-{\beta}_{0}-{\beta}_{1}\alpha \right)$ (7)

${M}_{i}^{2}={\left(\alpha +\frac{{\beta}_{1}{\delta}^{2}}{{\sigma}^{2}+{\beta}_{1}^{2}{\delta}^{2}}\left({y}_{i}-{\beta}_{0}-{\beta}_{1}\alpha \right)\right)}^{2}+\frac{{\sigma}^{2}{\delta}^{2}}{{\sigma}^{2}+{\beta}_{1}^{2}{\delta}^{2}}$ (8)

With these terms computed above, the E-step formulation is shown in Equation (9) below.

$\begin{array}{c}Q\left(\theta ,{\theta}^{*}\right)=E\left[L\left(\theta ;X,Y\right)|{X}_{obs},Y,{\theta}^{*}\right]\\ =-2n\mathrm{log}\sqrt{2\pi}-n\mathrm{log}\delta -n\mathrm{log}\sigma \\ \text{\hspace{0.17em}}-\frac{1}{2{\sigma}^{2}}\underset{i=1}{\overset{m}{{\displaystyle \sum}}}{\left({y}_{i}-{\beta}_{0}-{\beta}_{1}{x}_{i}\right)}^{2}-\frac{1}{2{\delta}^{2}}\underset{i=1}{\overset{m}{{\displaystyle \sum}}}{\left({x}_{i}-\alpha \right)}^{2}\\ \text{\hspace{0.17em}}-\frac{1}{2{\sigma}^{2}}\underset{i=m+1}{\overset{n}{{\displaystyle \sum}}}\left({\left({y}_{i}-{\beta}_{0}\right)}^{2}+{\beta}_{1}^{2}{M}_{i}^{2}-2{\beta}_{1}\left({y}_{i}-{\beta}_{0}\right){M}_{i}^{1}\right)\\ \text{\hspace{0.17em}}-\frac{1}{2{\delta}^{2}}\underset{i=m+1}{\overset{n}{{\displaystyle \sum}}}\left({M}_{i}^{2}-2\alpha {M}_{i}^{1}+{\alpha}^{2}\right)\end{array}$ (9)

where ${M}_{i}^{1}$ , ${M}_{i}^{2}$ are given in Equations (7) and (8).

2) *M-step*:

The M-step maximizes
$Q\left(\theta ,{\theta}^{*}\right)$ calculated in the E-step. Solving
$\frac{\partial Q\left(\theta ,{\theta}^{*}\right)}{\partial \theta}=0$ , we get the following results. The updated estimate of
${\beta}^{\prime}$ is just the OLS solution to the model, *i.e.*,
${\beta}^{\prime}={\left({X}^{\text{T}}X\right)}^{-1}\left({X}^{\text{T}}Y\right)$

$\left[\begin{array}{c}{{\beta}^{\prime}}_{0}\\ {{\beta}^{\prime}}_{1}\end{array}\right]={\left[\begin{array}{cc}n& {\displaystyle {\sum}_{i=1}^{n}\stackrel{\u02dc}{{x}_{i}^{*}}}\\ {\displaystyle {\sum}_{i=1}^{n}\stackrel{\u02dc}{{x}_{i}^{*}}}& {\displaystyle {\sum}_{i=1}^{n}\stackrel{\u02dc}{{x}_{i}^{{2}^{*}}}}\end{array}\right]}^{-1}\left[\begin{array}{c}{\displaystyle {\sum}_{i=1}^{n}{y}_{i}}\\ {\displaystyle {\sum}_{i=1}^{n}\stackrel{\u02dc}{{x}_{i}^{*}}\ast {y}_{i}}\end{array}\right]$

where $\stackrel{\u02dc}{{X}^{*}}=\left({X}_{obs},{M}^{1*}\right)\in {\mathbb{R}}^{n}$ and $\stackrel{\u02dc}{{X}^{{2}^{*}}}=\left({X}_{obs}^{2},{M}^{2*}\right)\in {\mathbb{R}}^{n}$ are the completed predictor and squared predictor estimated under the current parameter estimate ${\theta}^{*}$ . Similarly, the other updated parameters are:

${\sigma}^{2}{}^{\prime}=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}\left({\left({y}_{i}-{{\beta}^{\prime}}_{0}\right)}^{2}-2{{\beta}^{\prime}}_{1}\left({y}_{i}-{{\beta}^{\prime}}_{0}\right)\stackrel{\u02dc}{{x}_{i}^{*}}+{\left({{\beta}^{\prime}}_{1}\right)}^{2}\stackrel{\u02dc}{{x}_{i}^{{2}^{*}}}\right)}$

${\alpha}^{\prime}=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}\stackrel{\u02dc}{{x}_{i}^{*}}}$

${\delta}^{2}{}^{\prime}=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}\left(\stackrel{\u02dc}{{x}_{i}^{{2}^{*}}}-2{\alpha}^{\prime}\stackrel{\u02dc}{{x}_{i}^{*}}+{{\alpha}^{\prime}}^{2}\right)}$
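The E-step moments (7)-(8) and the closed-form M-step updates above combine into a complete EM loop. The following Python sketch is a minimal illustration under the paper's model assumptions; the variable names, the initialization from the observed cases, and the convergence tolerance are our own choices:

```python
import numpy as np

def em_linreg(x, y, tol=1e-8, max_iter=500):
    """EM for simple linear regression with missing predictor values.

    x : array with np.nan marking missing entries; y : fully observed response.
    E-step uses the conditional moments (7)-(8); M-step uses the closed-form
    maximizers of Q(theta, theta*).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y)
    miss = np.isnan(x)

    # Initialize theta = (beta0, beta1, sigma2, alpha, delta2) from observed cases.
    alpha, delta2 = x[~miss].mean(), x[~miss].var()
    beta1 = np.cov(x[~miss], y[~miss])[0, 1] / delta2
    beta0 = y[~miss].mean() - beta1 * x[~miss].mean()
    sigma2 = np.mean((y[~miss] - beta0 - beta1 * x[~miss]) ** 2)

    theta = np.array([beta0, beta1, sigma2, alpha, delta2])
    for _ in range(max_iter):
        # --- E-step: conditional moments of X_miss given y_i (Equations (7)-(8)) ---
        m1, m2 = x.copy(), x.copy() ** 2
        k = beta1 * delta2 / (sigma2 + beta1 ** 2 * delta2)   # coefficient in (7)
        v = sigma2 * delta2 / (sigma2 + beta1 ** 2 * delta2)  # conditional variance in (8)
        m1[miss] = alpha + k * (y[miss] - beta0 - beta1 * alpha)
        m2[miss] = m1[miss] ** 2 + v

        # --- M-step: closed-form maximizers of Q(theta, theta*) ---
        A = np.array([[n, m1.sum()], [m1.sum(), m2.sum()]])
        b = np.array([y.sum(), (m1 * y).sum()])
        beta0, beta1 = np.linalg.solve(A, b)                  # OLS normal equations
        sigma2 = np.mean((y - beta0) ** 2 - 2 * beta1 * (y - beta0) * m1 + beta1 ** 2 * m2)
        alpha = m1.mean()
        delta2 = np.mean(m2) - alpha ** 2

        new = np.array([beta0, beta1, sigma2, alpha, delta2])
        if np.linalg.norm(new - theta) < tol:
            break
        theta = new
    return dict(beta0=beta0, beta1=beta1, sigma2=sigma2, alpha=alpha, delta2=delta2)
```

When no predictors are missing, the E-step moments reduce to $x_i$ and $x_i^2$, and the loop reproduces ordinary least squares in a single iteration.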

2.4. Convergence of EM Algorithm

We now discuss the convergence of the EM algorithm in a more general setting. Suppose we have a dataset of *m* independent examples and want to fit a parametric model $p\left(x,z;\theta \right)$ to the dataset; the log-likelihood function is:

$\begin{array}{c}l\left(\theta \right)={\displaystyle {\sum}_{i=1}^{m}\mathrm{log}p\left({x}^{\left(i\right)};\theta \right)}\\ ={\displaystyle {\sum}_{i=1}^{m}\mathrm{log}{\displaystyle {\sum}_{{z}^{\left(i\right)}}p\left({x}^{\left(i\right)},{z}^{\left(i\right)};\theta \right)}}\end{array}$

where *z* are the latent random variables. Explicitly finding the maximum likelihood estimate of the parameter
$\theta $ is quite hard, but if
${z}^{\left(i\right)}$ is observed, the estimation would be easy. Let
${Q}_{i}\left(z\right)\ge 0$ ,
${\sum}_{z}{Q}_{i}\left(z\right)=1$ . Since
$f\left(x\right)=\mathrm{log}\left(x\right)$ is a concave function and by Jensen’s Inequality, we get the lower-bound of
$l\left(\theta \right)$ :

${\displaystyle {\sum}_{i}\mathrm{log}p\left({x}^{\left(i\right)};\theta \right)}={\displaystyle {\sum}_{i}\mathrm{log}{\displaystyle {\sum}_{{z}^{\left(i\right)}}p\left({x}^{\left(i\right)},{z}^{\left(i\right)};\theta \right)}}$ (10)

$={\displaystyle {\sum}_{i}\mathrm{log}}{\displaystyle {\sum}_{{z}^{\left(i\right)}}{Q}_{i}\left({z}^{\left(i\right)}\right)\frac{p\left({x}^{\left(i\right)},{z}^{\left(i\right)};\theta \right)}{{Q}_{i}\left({z}^{\left(i\right)}\right)}}$ (11)

$\ge {\displaystyle {\sum}_{i}{\displaystyle {\sum}_{{z}^{\left(i\right)}}{Q}_{i}\left({z}^{\left(i\right)}\right)\mathrm{log}\frac{p\left({x}^{\left(i\right)},{z}^{\left(i\right)};\theta \right)}{{Q}_{i}\left({z}^{\left(i\right)}\right)}}}$ (12)

Note that this inequality holds for any distribution
${Q}_{i}$ ; it gives a lower bound on
$l\left(\theta \right)$ . Later we will show that
$l\left(\theta \right)$ increases monotonically with successive iterations of EM when the lower bound is tight at *θ*. We know that Jensen's Inequality holds with equality if the random variable is constant. So, it suffices to satisfy:

$\frac{p\left({x}^{\left(i\right)},{z}^{\left(i\right)};\theta \right)}{{Q}_{i}\left({z}^{\left(i\right)}\right)}=c$

$i.e.\text{\hspace{0.17em}}\text{\hspace{0.05em}}{Q}_{i}\left({z}^{\left(i\right)}\right)\propto p\left({x}^{\left(i\right)},{z}^{\left(i\right)};\theta \right)$

where *c* is a constant that does not depend on
${z}^{\left(i\right)}$ . Under this assumption, since
${\sum}_{z}{Q}_{i}\left(z\right)=1$ , we get:

${Q}_{i}\left({z}^{\left(i\right)}\right)=\frac{p\left({x}^{\left(i\right)},{z}^{\left(i\right)};\theta \right)}{{\displaystyle {\sum}_{z}p\left({x}^{\left(i\right)},z;\theta \right)}}=\frac{p\left({x}^{\left(i\right)},{z}^{\left(i\right)};\theta \right)}{p\left({x}^{\left(i\right)};\theta \right)}=p\left({z}^{\left(i\right)}|{x}^{\left(i\right)};\theta \right)$

So the *Q _{i}*’s are just the posterior distributions of the latent variables ${z}^{\left(i\right)}$ given the observed data and the current parameter estimate. The EM algorithm then iterates as follows:

While $\Vert {\theta}^{\left(t\right)}-{\theta}^{\left(t-1\right)}\Vert >\u03f5$ do

*E-step*:

*Compute*
${Q}_{i}^{\left(t\right)}\left({z}^{\left(i\right)}\right):=p\left({z}^{\left(i\right)}|{x}^{\left(i\right)};{\theta}^{\left(t\right)}\right)$

*M-step*

*Compute *

${\theta}^{\left(t+1\right)}:=\mathrm{arg}{\mathrm{max}}_{\theta}{\displaystyle {\sum}_{i}{\displaystyle {\sum}_{{z}^{\left(i\right)}}{Q}_{i}^{\left(t\right)}\left({z}^{\left(i\right)}\right)\mathrm{log}\frac{p\left({x}^{\left(i\right)},{z}^{\left(i\right)};\theta \right)}{{Q}_{i}^{\left(t\right)}\left({z}^{\left(i\right)}\right)}}}$

*End *

Consider that:

$l\left({\theta}^{\left(t+1\right)}\right)\ge {\displaystyle {\sum}_{i}{\displaystyle {\sum}_{{z}^{\left(i\right)}}{Q}_{i}^{\left(t\right)}\left({z}^{\left(i\right)}\right)\mathrm{log}\frac{p\left({x}^{\left(i\right)},{z}^{\left(i\right)};{\theta}^{\left(t+1\right)}\right)}{{Q}_{i}^{\left(t\right)}\left({z}^{\left(i\right)}\right)}}}$ (13)

$\ge {\displaystyle {\sum}_{i}{\displaystyle {\sum}_{{z}^{\left(i\right)}}{Q}_{i}^{\left(t\right)}\left({z}^{\left(i\right)}\right)\mathrm{log}\frac{p\left({x}^{\left(i\right)},{z}^{\left(i\right)};{\theta}^{\left(t\right)}\right)}{{Q}_{i}^{\left(t\right)}\left({z}^{\left(i\right)}\right)}}}$ (14)

$=l\left({\theta}^{\left(t\right)}\right)$ (15)

Inequality (13) holds because the Jensen lower bound (12) holds for any distribution ${Q}_{i}^{\left(t\right)}$ and any $\theta $ . Inequality (14) holds by the definition of ${\theta}^{\left(t+1\right)}$ as the maximizer of the lower bound. Equality (15) holds because the lower bound in (13) is tight at $\theta ={\theta}^{\left(t\right)}$ under our choice of ${Q}_{i}^{\left(t\right)}$ . So, the sequence ${\left\{l\left({\theta}^{\left(t\right)}\right)\right\}}_{t}$ is increasing and bounded above (by 0 in the discrete case, since each probability is at most 1). Hence, in the EM algorithm the log-likelihood converges monotonically.
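To make the generic E- and M-steps above concrete, the following Python sketch instantiates them for a two-component one-dimensional Gaussian mixture, an illustrative choice of $p(x,z;\theta)$ rather than the model used elsewhere in this paper. Here both steps have closed forms, and the log-likelihood increases monotonically as shown above:

```python
import numpy as np

def em_gmm_1d(x, iters=200):
    """Generic EM from the pseudocode above, instantiated for a two-component
    1-D Gaussian mixture. E-step: Q_i(z) = p(z | x_i; theta) (responsibilities);
    M-step: closed-form weighted maximizers of the expected log-likelihood."""
    x = np.asarray(x, float)
    pi = 0.5                               # mixing weight of component 0
    mu = np.array([x.min(), x.max()])      # simple deterministic initialization
    var = np.array([x.var(), x.var()])
    for _ in range(iters):
        # E-step: posterior responsibility of component 0 for each x_i
        d0 = pi * np.exp(-(x - mu[0]) ** 2 / (2 * var[0])) / np.sqrt(2 * np.pi * var[0])
        d1 = (1 - pi) * np.exp(-(x - mu[1]) ** 2 / (2 * var[1])) / np.sqrt(2 * np.pi * var[1])
        r = d0 / (d0 + d1)
        # M-step: responsibility-weighted updates (argmax of the lower bound)
        pi = r.mean()
        mu = np.array([(r * x).sum() / r.sum(), ((1 - r) * x).sum() / (1 - r).sum()])
        var = np.array([(r * (x - mu[0]) ** 2).sum() / r.sum(),
                        ((1 - r) * (x - mu[1]) ** 2).sum() / (1 - r).sum()])
    return pi, mu, var
```

With data drawn from two well-separated normals, the recovered component means approach the true means.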

3. Application

This section covers the model-based simulation study applying the EM algorithm with least square and robust least square regression to compositional data. We also analyze the robustness and efficiency of the EM approach and compare its output to two other commonly used imputation techniques, k-Nearest Neighbor (k-NN) and mean imputation ( $\stackrel{\xaf}{x}$ ), for addressing missing data.

3.1. Data Description

Compositional data is a unique kind of non-negative data that contains the pertinent information not in the actual data values but rather in the ratios between the variables. An observation $x=\left({x}_{1},\cdots ,{x}_{D}\right)$ is a D-part composition if, and only if, ${x}_{i}>0$ , $i=1,\cdots ,D$ , and according to Aitchison [2] , the ratios between the components include all the important information.

${S}^{D}=\left\{\left[{x}_{1},\cdots ,{x}_{D}\right]:{x}_{i}>0\left(i=1,\cdots ,D\right),{x}_{1}+\cdots +{x}_{D}=1\right\}$ and
$\left({x}_{1},\cdots ,{x}_{D}\right)=\frac{\left({w}_{1},\cdots ,{w}_{D}\right)}{{w}_{1}+\cdots +{w}_{D}}$ , where ${w}_{1}+\cdots +{w}_{D}$ is the total weight and $\left({w}_{1},\cdots ,{w}_{D}\right)$ are the component weights. According to Aitchison [2] , compositional data are not directly represented in Euclidean space. The Aitchison distance
${d}_{A}$ is a suitable way to measure distances between D-part compositions on their sample space, known as the simplex [3] . According to Egozcue *et al*. [4] , the isometric log-ratio (*ilr*) transformation is used to convert the D-dimensional simplex into the real space
${\mathbb{R}}^{D-1}$ . With this transformation, the Aitchison distance can be expressed as
${d}_{A}\left(x,y\right)={d}_{E}\left(ilr\left(x\right),ilr\left(y\right)\right)$ , where
${d}_{E}$ denotes the Euclidean distance. The data in this simulation is generated by a normal distribution on the simplex, denoted by
${\mathbb{N}}_{s}^{D}\left(\mu ,\Sigma \right)$ (Mateu-Figueras, Pawlowsky-Glahn, and Egozcue) [5] . We generated 10,000 realizations of a random variable [6]
$X~{\mathbb{N}}_{s}^{4}\left(\mu ,\Sigma \right)$ with
$\mu ={\left(0,2,3\right)}^{\text{T}}$ and
$\Sigma =\left({\left(1,-0.5,1.4\right)}^{\text{T}},{\left(-0.5,1,-0.6\right)}^{\text{T}},{\left(1.4,-0.6,2\right)}^{\text{T}}\right)$ [7] .
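A sketch of how such data can be generated: draw from a multivariate normal in ilr coordinates and map back to the simplex with the inverse ilr transform. The basis construction below is one standard orthonormal choice; the paper does not specify which contrast basis was used, so treat it as an assumption (distances are basis-invariant):

```python
import numpy as np

def ilr_basis(D):
    """Orthonormal basis V (D x D-1) with V.T @ 1 = 0, one standard choice
    for the ilr transform; any orthonormal contrast basis gives the same distances."""
    V = np.zeros((D, D - 1))
    for j in range(1, D):
        V[:j, j - 1] = np.sqrt(j / (j + 1)) / j
        V[j, j - 1] = -np.sqrt(j / (j + 1))
    return V

def ilr(x, V):
    """ilr coordinates of compositions x (rows sum to 1)."""
    return np.log(x) @ V

def ilr_inv(z, V):
    """Map ilr coordinates back to the simplex (closure of exp of clr)."""
    w = np.exp(z @ V.T)
    return w / w.sum(axis=1, keepdims=True)

def aitchison_dist(x, y, V):
    """d_A(x, y) = Euclidean distance between ilr coordinates."""
    return np.linalg.norm(ilr(x, V) - ilr(y, V), axis=1)

# Simulate N_s^4(mu, Sigma): normal in the 3-D ilr space, mapped to the 4-part simplex.
rng = np.random.default_rng(0)
mu = np.array([0.0, 2.0, 3.0])
Sigma = np.array([[1.0, -0.5, 1.4],
                  [-0.5, 1.0, -0.6],
                  [1.4, -0.6, 2.0]])
V = ilr_basis(4)
X = ilr_inv(rng.multivariate_normal(mu, Sigma, size=10000), V)
```

Because $V^{\mathrm T}V = I$ and $\mathbf{1}^{\mathrm T}V = 0$, `ilr` and `ilr_inv` are mutually inverse, and `aitchison_dist` realizes ${d}_{A}\left(x,y\right)={d}_{E}\left(ilr\left(x\right),ilr\left(y\right)\right)$.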

3.2. Experimental Design

· To assess the effectiveness of the EM algorithm, we look at the results using least squares (LS) and its robust version (RLS) across a range of missing-data rates (contamination levels of 5% and 10%) and outlier rates (1%, 3%, 5%, and 10%), expressed in terms of Aitchison distance (*d _{A}*) [8] .

· We also examine the EM algorithm’s output in terms of covariances across the same rates of outliers (1%, 3%, 5%, and 10%) and missing data (contamination levels of 5% and 10%).
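As a minimal, self-contained sketch of this evaluation pipeline, the fragment below injects missing values into synthetic compositions, imputes them with the arithmetic mean (a stand-in for the four methods compared in the paper; the generator is a simple lognormal rather than the simplex normal), and scores the result by Aitchison distance via the clr transform, which is an isometry equivalent to the ilr form:

```python
import numpy as np

rng = np.random.default_rng(0)

def closure(w):
    """Rescale positive rows to sum to 1 (back onto the simplex)."""
    return w / w.sum(axis=1, keepdims=True)

# Hypothetical setup: 4-part compositions with 5% of entries missing at random.
n, D, na_rate = 500, 4, 0.05
X = closure(rng.lognormal(size=(n, D)))      # synthetic compositions (illustrative)
mask = rng.random((n, D)) < na_rate

X_na = X.copy()
X_na[mask] = np.nan
col_means = np.nanmean(X_na, axis=0)
X_imp = np.where(mask, col_means, X_na)      # arithmetic-mean imputation (xMean)
X_imp = closure(X_imp)                       # re-close imputed rows to the simplex

# Aitchison distance between true and imputed rows, via the clr transform.
clr = lambda x: np.log(x) - np.log(x).mean(axis=1, keepdims=True)
d_A = np.linalg.norm(clr(X) - clr(X_imp), axis=1)
print("mean Aitchison distance after imputation:", d_A.mean())
```

Substituting k-NN or the EM-based imputations for the mean-imputation line yields the corresponding rows of the comparison tables.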

3.3. Results and Analysis

The detailed results for all experiments are discussed in this section (Table 1, Table 2, Figure 1, Figure 2).

Table 1. Performance metrics for different imputation methods in terms of distance.

^{1}Epsilon denotes the outlier rate (1%, 3%, 5% and 10%); ^{2}NArate denotes the missing rate (contamination level at 5% and 10%); ^{3}xMean denotes the arithmetic mean imputation method; ^{4}kNN denotes the k-Nearest Neighbor imputation method; ^{5}LS denotes the least square regression version of the EM algorithm; ^{6}RLS denotes the robust least square regression version of the EM algorithm; *Bold numbers indicate the best performing imputation method for a given epsilon and missing rate.

Figure 1. Performance for different imputation methods in terms of distance.

Table 2. Performance metrics for different imputation methods in terms of Covariance.

^{1}Epsilon denotes the outlier rate (1%, 3%, 5% and 10%); ^{2}NArate denotes the missing rate (contamination level at 5% and 10%); ^{3}xMean denotes the arithmetic mean imputation method; ^{4}kNN denotes the k-Nearest Neighbor imputation method; ^{5}LS denotes the least square regression version of the EM algorithm; ^{6}RLS denotes the robust least square regression version of the EM algorithm; *Bold numbers indicate the best performing imputation method for a given epsilon and missing rate.

Figure 2. Performance for different imputation methods in terms of Covariance.

4. Conclusions

In this study, the missing values are postulated to follow a random pattern. Consequently, both the least square and robust least square EM algorithm approaches are employed to address missing compositional data. Simulated synthetic compositional data were utilized as the practical dataset in our study. It has been shown that as the occurrence of missing and outlier data decreases, the Aitchison distance approaches zero; distance values close to zero represent the most efficacious imputation strategies. The robust least square variant of the EM method yields the smallest values in this regard, making it the best performing algorithm.

We also compared the Expectation-Maximization (EM) method to arithmetic mean imputation and k-Nearest Neighbor imputation and found that the robust EM version performs best in terms of covariances. In this study, we provided evidence supporting the effectiveness of all strategies under low missing rates, specifically those below 5%. The EM algorithm demonstrates improved relative performance as the rate of missing data increases. However, when the incidence of missing data exceeds 10%, most imputation methods become inadequate for generating a dependable approximation, exacerbating the difficulty of recovering the data.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

[1] Aitchison, J. (2002) Simplicial Inference. In: Viana, M.A.G. and Richards, D.S.P., Eds., Contemporary Mathematics Series, Vol. 287: Algebraic Methods in Statistics and Probability, American Mathematical Society, Providence, 1-22.

[2] Aitchison, J. (1986) The Statistical Analysis of Compositional Data. Chapman & Hall, London.

[3] Aitchison, J. (1989) Measures of Location of Compositional Data Sets. Mathematical Geology, 21, 787-790. https://doi.org/10.1007/BF00893322

[4] Martín-Fernández, J.A., Egozcue, J.J., Olea, R.A., et al. (2021) Units Recovery Methods in Compositional Data Analysis. Natural Resources Research, 30, 3045-3058. https://doi.org/10.1007/s11053-020-09659-7

[5] Pawlowsky, V., Olea, R.A. and Davis, J.C. (1995) Estimation of Regionalized Compositions: A Comparison of Three Methods. Mathematical Geosciences, 27, 105-127. https://doi.org/10.1007/BF02083570

[6] Weltje, G.J. (1997) End-Member Modeling of Compositional Data: Numerical-Statistical Algorithms for Solving the Explicit Mixing Problem. Mathematical Geosciences, 29, 503-549. https://doi.org/10.1007/BF02775085

[7] Rehder, U. and Zier, S. (2001) Comment on “Logratio Analysis and Compositional Distance” by Aitchison et al. (2000). Journal of Mathematical Geology, 32, 741-763.

[8] Dempster, A.P., Laird, N.M. and Rubin, D.B. (1977) Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society, 39, 1-22. https://doi.org/10.1111/j.2517-6161.1977.tb01600.x


Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.