Optimal Estimation of High-Dimensional Covariance Matrices with Missing and Noisy Data

Abstract

The estimation of covariance matrices plays an important role in statistics and many other fields. In real applications, data are frequently high-dimensional, noisy, and incomplete, yet most existing studies are based on complete data. This paper studies the optimal estimation of high-dimensional covariance matrices based on missing and noisy samples under the $\ell_1$ norm. First, a model with sub-Gaussian additive noise is presented. The generalized sample covariance matrix is then modified to define a hard thresholding estimator, and the minimax upper bound is derived. After that, the minimax lower bound is derived, and it is concluded that the estimator presented in this article is rate-optimal. Finally, a numerical simulation is performed. The results show that, for missing samples with sub-Gaussian noise, if the true covariance matrix is sparse, the hard thresholding estimator outperforms the traditional estimation method.


1. Introduction

The covariance matrix is a key component in various fields, particularly statistics. In many statistical problems, however, the covariance matrix is unknown, so estimating it is extremely important; such estimates are frequently used in signal processing, genomics, financial mathematics, and other domains. When the dimension $p$ is fixed and the sample size $n$ is sufficiently large, the sample covariance matrix is commonly used to estimate the true covariance matrix. However, with advancements in information technology and related fields, estimating large covariance matrices has become increasingly challenging: high dimensionality and noise can significantly impact the effectiveness of the sample covariance matrix as an estimator of the true covariance matrix. Moreover, in the era of big data, missing data are a common occurrence, making research on the estimation of high-dimensional covariance matrices based on missing and noisy data essential.

Bickel and Levina [1] proposed thresholding, a commonly used method for estimating high-dimensional covariance matrices, and proved its consistency under the operator norm; however, its optimality was not discussed. Cai and Zhou studied the optimal estimation of sparse covariance matrices under the operator norm and Bregman divergence loss, and proved that the thresholding estimator achieves the optimal convergence rate under the spectral norm; see [2]. Cai and Zhou [3] provided the optimal estimation of sparse covariance matrices under the $\ell_1$ norm loss. The thresholding described above is also referred to as hard thresholding, and its counterpart is soft thresholding [4] [5]. On this basis, Rothman, Levina, and Zhu [6] proposed generalized thresholding and proved its consistency, and Cai and Liu [7] proposed adaptive thresholding. The adaptive estimation of high-dimensional sparse precision matrices was studied by Cai, Liu, and Zhou [8]. For bandable covariance matrices, [3], [9], and [10] conducted in-depth research.

In the case of missing data, Cai and Zhang [11] assumed that the missingness is independent of the data values and studied the optimal estimation of two classes of covariance matrices. Qi [12] explored the optimal estimation of sparse covariance matrices under the $\ell_1$ norm and the Frobenius norm, respectively; in addition, the lower bound for estimating bandable covariance matrices under the spectral norm based on noisy and missing data was studied there, although its optimality was not considered. Shi [13] studied the optimal estimation of bandable covariance matrices based on missing and noisy sample data.

It is clear that existing research on estimating high-dimensional covariance matrices is primarily based on complete data, so related research on models with missing data and noise remains critical. The articles listed above were a major source of inspiration for this paper's topic and methods, and this paper provides corresponding results for the aforementioned issues. Sparse covariance matrices are widely employed in a variety of applications, including genomics, so it is necessary to investigate the estimation of this kind of covariance matrix. The research in this paper helps practitioners better estimate high-dimensional covariance matrices when the sample is noisy and incomplete, making it easier for many fields to extract useful information from high-dimensional data, and it provides a reliable theoretical basis for doing so.

The remaining sections of this paper are organized as follows. Section 2 provides the concepts and background knowledge of covariance matrix estimation that serve as the theoretical foundation for the research. In Section 3, we study the optimal estimation of sparse covariance matrices with missing and noisy data. In Section 4, a numerical simulation experiment is performed to investigate the performance of the estimator presented in Section 3. Section 5 summarizes the research content and discusses remaining problems.

2. Theoretical Basis

This paper will primarily study the optimal estimation of covariance matrices under the $\ell_1$ norm. For a matrix $A = (a_{ij})_{m\times n}$, $\sigma(A) = \sqrt{\lambda(A^H A)}$ denotes the singular values of $A$, where $\lambda(A^H A)$ denotes the eigenvalues of $A^H A$. The operator norm of $A$ is defined as

$$\|A\|_a = \max_{\|x\|_a = 1,\ x\in\mathbb{R}^n}\|Ax\|_a = \max_{x\neq 0,\ x\in\mathbb{R}^n}\frac{\|Ax\|_a}{\|x\|_a}.$$

There are three common operator norms:

(1) $\ell_1$ norm: $\|A\|_{\ell_1} = \|A\|_1 = \max_{1\le j\le n}\sum_{i=1}^m |a_{ij}|$;

(2) spectral norm: $\|A\|_{sp} = \|A\|_2 = \sigma_1(A) = \max_i \sigma_i(A)$;

(3) $\ell_\infty$ norm: $\|A\|_\infty = \max_{1\le i\le m}\sum_{j=1}^n |a_{ij}|$.

Next, we introduce sub-Gaussian random vectors. If there is a parameter $k > 0$ such that $\mathbb{E}(e^{sX}) \le e^{k^2 s^2/2}$ for all $s \in \mathbb{R}$, the random variable $X$ is called a sub-Gaussian random variable with parameter $k$, written $X \in \mathrm{Sub}(k)$. It is easy to see that sub-Gaussian random variables include mean-zero Gaussian random variables and all bounded random variables with mean 0. For a sub-Gaussian random variable $X$, its sub-Gaussian norm is defined as

$$\|X\|_{\psi_2} = \sup_{p\ge 1} p^{-1/2}\bigl(\mathbb{E}|X|^p\bigr)^{1/p}.$$

A $p$-dimensional random vector $X = (X_1, X_2, \ldots, X_p)^T$ is called a sub-Gaussian random vector if every linear combination of $X_1, X_2, \ldots, X_p$ is sub-Gaussian. That is, there exists $\tau > 0$ such that for any $t > 0$ and any $v \in \mathbb{R}^p$ with $\|v\|_2 = 1$,

$$P\bigl\{|v^T(X - \mathbb{E}X)| > t\bigr\} \le e^{-t^2/(2\tau)}.$$

Assume that a $p$-dimensional random vector $X \in \mathbb{R}^p$ has mean $\mu$ and covariance matrix $\Sigma$. Covariance matrix estimation is the process of computing an estimator $\hat\Sigma$ based on $n$ independent copies $X_1, X_2, \ldots, X_n \in \mathbb{R}^p$ of $X$ and then using $\hat\Sigma$ to estimate $\Sigma$, i.e., making $\hat\Sigma$ approximate $\Sigma$ in a certain sense. In this paper, the minimax risk is used as the standard to measure the estimation performance. Suppose the covariance matrix of $X_1$ belongs to a certain class, and let $\mathcal{A}$ be a specific collection of distributions of $X_1$. Then, under a given matrix norm $\|\cdot\|$, [3] defines the minimax risk of estimating $\Sigma$ over $\mathcal{A}$ as

$$\inf_{\hat\Sigma}\sup_{\mathcal{A}}\mathbb{E}\|\hat\Sigma - \Sigma\|^2.$$

When the vector’s dimension p is smaller than the sample size n , the sample covariance matrix is typically utilized to estimate the true covariance matrix. The sample mean is

$$\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i,$$

and the sample covariance matrix is

$$\hat\Sigma = (\hat\sigma_{ij})_{p\times p} = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})(X_i - \bar{X})^T. \quad (1)$$

However, as noted in Section 1, when the dimension p is substantially larger than the sample size n , utilizing the sample covariance matrix for estimating the true covariance matrix becomes inadequate. Based on the work of Cai et al., this paper will study the optimal estimation of high-dimensional covariance matrices based on missing and noisy data.

The missing completely at random (MCR) model is introduced below. MCR means that the missingness is random and independent of the data values. Suppose $\{X_1, X_2, \ldots, X_n\}$ is a complete random sample from $X$. Introduce the vector $S_k \in \{0, 1\}^p$, $k = 1, 2, \ldots, n$, as the observation index for $X_k$; then

$$S_{jk} = \begin{cases} 1, & X_{jk}\ \text{is observed}, \\ 0, & X_{jk}\ \text{is missing}. \end{cases}$$

Here $S_{jk}$ and $X_{jk}$ denote the $j$th coordinates of the vectors $S_k$ and $X_k$, respectively.

We denote by $X^* = \{X_1^*, X_2^*, \ldots, X_n^*\}$ the sample with missing data, where $X_i^* = (X_{1i}S_{1i}, X_{2i}S_{2i}, \ldots, X_{pi}S_{pi})^T$ is the $i$th observed sample. Additionally, define

$$n_{ij}^* := \sum_{k=1}^n S_{ik}S_{jk}, \quad 1 \le i, j \le p.$$

When $S_{ik}S_{jk} = 1$, the $i$th and $j$th components of the vector $X_k^*$ are observed simultaneously, whereas $S_{ik}S_{jk} = 0$ indicates that they are not observed simultaneously. Thus, $n_{ij}^*$ denotes the number of times the $i$th and $j$th components of $X^*$ are simultaneously observed. For convenience, define

$n_i^* = n_{ii}^*$ and $n_{\min}^* = \min_{i,j} n_{ij}^*$. It is easy to see that $n_{\min}^* \le n_{ij}^* \le \min\{n_i^*, n_j^*\}$. When the sample data are complete, $n_{ij}^* \equiv n$.

For the sample $X^*$ with missing data, we substitute the generalized sample mean and the generalized sample covariance matrix for the traditional sample mean and covariance matrix. The generalized sample mean is defined as

$$\bar{X}^* := (\bar{X}_i^*)_{1\le i\le p}, \quad \bar{X}_i^* = \frac{1}{n_i^*}\sum_{k=1}^n X_{ik}^* = \frac{1}{n_i^*}\sum_{k=1}^n X_{ik}S_{ik},$$

and the generalized sample covariance matrix is defined as

$$\hat\Sigma^* := (\hat\sigma_{ij}^*)_{1\le i,j\le p}, \quad \hat\sigma_{ij}^* = \frac{1}{n_{ij}^*}\sum_{k=1}^n (X_{ik} - \bar{X}_i^*)(X_{jk} - \bar{X}_j^*)S_{ik}S_{jk}. \quad (2)$$
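
As a computational illustration (not part of the original paper), the following minimal Python sketch evaluates the generalized sample mean and the generalized sample covariance matrix of Equation (2) from a masked data matrix; the function and variable names are hypothetical, and every pair of coordinates is assumed to be jointly observed at least once so that no $n_{ij}^*$ is zero.

```python
import numpy as np

def generalized_sample_covariance(X, S):
    """Generalized sample mean and covariance, Equation (2).

    X : (n, p) sample matrix; missing entries may hold any value (e.g. NaN).
    S : (n, p) 0/1 observation mask, S[k, i] = 1 iff X[k, i] is observed.
    Assumes n_min^* >= 1, i.e. every pair of coordinates is jointly observed.
    """
    S = S.astype(float)
    X_obs = np.where(S == 1, X, 0.0)        # X_ik * S_ik with missing entries zeroed
    n_i = S.sum(axis=0)                     # n_i^*: number of observations per coordinate
    x_bar = X_obs.sum(axis=0) / n_i         # generalized sample mean
    n_ij = S.T @ S                          # n_ij^*: pairwise observation counts
    X_c = np.where(S == 1, X - x_bar, 0.0)  # (X_ik - x_bar_i^*) * S_ik
    Sigma_star = (X_c.T @ X_c) / n_ij       # generalized sample covariance, Equation (2)
    return x_bar, Sigma_star, n_ij
```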

3. Covariance Matrix Estimation

This paper assumes that the covariance matrix is sparse, meaning that the majority of its components are zero or negligible and the positions of the non-zero elements are irregular. First, we introduce the parameter space $G_q(\rho, c_{n,p})$ of sparse covariance matrices:

$$G_q(\rho, c_{n,p}) = \Bigl\{\Sigma = (\sigma_{ij})_{1\le i,j\le p} : \max_{1\le j\le p}|\sigma_{[k]j}|^q \le \frac{c_{n,p}}{k}\ \text{for all}\ k,\ \max_i \sigma_{ii} \le \rho\Bigr\},$$

where $0 \le q < 1$, and $\sigma_{[k]j}$ denotes the element with the $k$th largest absolute value in the $j$th column of the matrix $\Sigma$. When $q = 0$, each column of a matrix in $G_q(\rho, c_{n,p})$ has at most $c_{n,p}$ non-zero components; we usually assume $c_{n,p} \ge 1$.
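
To make the definition concrete, here is a small sketch (an assumed helper, not from the paper) that numerically checks whether a given matrix satisfies the two defining conditions of $G_q(\rho, c_{n,p})$:

```python
import numpy as np

def in_Gq(Sigma, q, rho, c_np):
    """Check the two conditions that define G_q(rho, c_{n,p})."""
    p = Sigma.shape[0]
    if np.max(np.diag(Sigma)) > rho:                  # condition: max_i sigma_ii <= rho
        return False
    col_sorted = -np.sort(-np.abs(Sigma), axis=0)     # row k-1 holds |sigma_[k]j| for each column j
    k = np.arange(1, p + 1)
    if q == 0:
        weight = (col_sorted > 0).astype(float)       # q = 0: count non-zero entries per column
    else:
        weight = col_sorted ** q
    # condition: max_{1<=j<=p} |sigma_[k]j|^q <= c_{n,p} / k for every k
    return bool(np.all(weight.max(axis=1) <= c_np / k))
```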

3.1. Noisy Model

Assume that the complete random vector $X \in \mathbb{R}^p$ has covariance matrix $\Sigma_X = (\sigma_{ij})_{p\times p}$. Using a $p$-dimensional random vector $F$ to represent the noisy data, the noisy model can be expressed as

$$F = X + \varepsilon, \quad (3)$$

where $\varepsilon \in \mathbb{R}^p$ represents the noise. In this section, we aim to build a $p\times p$ matrix $\hat\Sigma_F$ based on $n$ independent noisy samples $F_1, F_2, \ldots, F_n \in \mathbb{R}^p$ of $F$, and then use $\hat\Sigma_F$ to estimate the covariance matrix $\Sigma_X$ of the random vector $X$.

The noisy sample with missing data is represented by $F^* = \{F_1^*, F_2^*, \ldots, F_n^*\}$, where $F_i^* = (F_{1i}S_{1i}, F_{2i}S_{2i}, \ldots, F_{pi}S_{pi})^T$ is the $i$th observed sample. The generalized sample mean is defined as

$$\bar{F}^* := (\bar{F}_i^*)_{1\le i\le p}, \quad \bar{F}_i^* = \frac{1}{n_i^*}\sum_{k=1}^n F_{ik}^* = \frac{1}{n_i^*}\sum_{k=1}^n F_{ik}S_{ik},$$

and the generalized sample covariance matrix is defined as

$$\hat\Sigma_F := (\hat\sigma_{ij}^*(F))_{1\le i,j\le p}, \quad \hat\sigma_{ij}^*(F) = \frac{1}{n_{ij}^*}\sum_{k=1}^n (F_{ik} - \bar{F}_i^*)(F_{jk} - \bar{F}_j^*)S_{ik}S_{jk}. \quad (4)$$

Two new assumptions in [12] are presented below.

Assumption 1. The observation indices $S := \{S_1, S_2, \ldots, S_n\}$ can be random or deterministic, but they are independent of the noisy observations $F := \{F_1, F_2, \ldots, F_n\}$.

Assumption 2. The random vectors $F_1, F_2, \ldots, F_n$ are i.i.d., where $F_k = X_k + \varepsilon_k$ and

$$X_k = \Gamma Z_k + \mu, \quad \varepsilon_k = \Gamma_\varepsilon Z_k, \quad k = 1, 2, \ldots, n.$$

Here $\mu$ is a fixed $p$-dimensional mean vector, and $\Gamma, \Gamma_\varepsilon \in \mathbb{R}^{p\times q}$ ($p \le q$) are fixed matrices with $\Gamma\Gamma^T = \Sigma_X$ and $\Gamma_\varepsilon\Gamma_\varepsilon^T = \Sigma_\varepsilon$. The components of the random vector $Z_k = (Z_{1k}, Z_{2k}, \ldots, Z_{qk})^T$ are i.i.d. sub-Gaussian with mean 0 and variance 1: there exists a parameter $\tau > 0$ such that $\mathbb{E}(e^{sZ_{ik}}) \le e^{\tau^2 s^2/2}$ for any $s$, that is, $Z_{ik} \in \mathrm{Sub}(\tau)$.
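
For intuition, the following is a minimal sketch of one admissible way to draw samples under Assumption 2, taking the factors $Z_k$ to be standard Gaussian (one particular sub-Gaussian choice); all names and the chosen random seed are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_assumption2_samples(n, Gamma, Gamma_eps, mu):
    """Draw F_1, ..., F_n with F_k = X_k + eps_k as in Assumption 2.

    Gamma, Gamma_eps : (p, q) loading matrices with Gamma @ Gamma.T = Sigma_X
                       and Gamma_eps @ Gamma_eps.T = Sigma_eps.
    """
    q = Gamma.shape[1]
    Z = rng.standard_normal((n, q))   # i.i.d. factors with mean 0 and variance 1
    X = Z @ Gamma.T + mu              # X_k = Gamma Z_k + mu
    eps = Z @ Gamma_eps.T             # eps_k = Gamma_eps Z_k, as written in Assumption 2
    return X + eps                    # noisy observations F_k, Equation (3)
```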

3.2. Upper Bound for Estimating Sparse Covariance Matrix

The hard thresholding estimator based on complete data was proposed in [1]. When most of the elements in each row or column of the true covariance matrix are close to zero or negligible, the elements of the sample covariance matrix below a certain threshold are set to 0 and the remaining elements are left unaltered, which reduces the estimation error. In [2], it is shown that

$$P\bigl\{|\hat\sigma_{ij} - \sigma_{ij}| \le t\bigr\} \ge 1 - Cp^{-8}$$

for $t = \gamma\sqrt{\log p / n}$, where $C$ is a constant; the threshold is therefore set to $\gamma\sqrt{\log p / n}$.

In this paper, this result is extended to the case of missing and noisy data. According to Lemma 4.6 in [12], if Assumption 1 and Assumption 2 both hold, then there exist two absolute constants $C > 0$ and $c > 0$ such that

$$P\bigl\{|\hat\sigma_{ij}^*(F) - \sigma_{ij}| \le x\bigr\} \ge 1 - C\exp\Bigl\{-c\, n_{\min}^*\min\Bigl(\frac{x^2}{\tau^4\sigma_{ii}\sigma_{jj}},\ \frac{x}{\tau^2\sqrt{\sigma_{ii}\sigma_{jj}}}\Bigr)\Bigr\} \quad (5)$$

for any $x > 0$. Since $\sigma_{ii}\sigma_{jj} \le \rho^2$, the bound can be simplified: there exist constants $C > 0$ and $\gamma > 0$ such that

$$P\bigl\{|\hat\sigma_{ij}^*(F) - \sigma_{ij}| \le x\bigr\} \ge 1 - C\exp\bigl(-8\gamma^{-2} n_{\min}^* x^2\bigr), \quad (6)$$

where $x \le \rho$ and the constants $C$ and $\gamma$ depend only on $\rho$. Note that Inequality (6) can be written as $P\{|\hat\sigma_{ij}^*(F) - \sigma_{ij}| \le x\} \ge 1 - Cp^{-8}$ when $x = \gamma\sqrt{\log p / n_{\min}^*}$.

The hard thresholding estimator $\hat\Sigma_F^{th}$ of a covariance matrix $\Sigma_X \in G_q(\rho, c_{n,p})$ is defined by transforming the generalized sample covariance $\hat\sigma_{ij}^*(F)$ in Equation (4):

$$\hat\Sigma_F^{th} = \bigl(\hat\sigma_{ij}^{th}(F)\bigr)_{p\times p} = \bigl(\hat\sigma_{ij}^*(F)\, I(|\hat\sigma_{ij}^*(F)| \ge \lambda)\bigr)_{p\times p}, \quad \lambda = \gamma\sqrt{\frac{\log p}{n_{\min}^*}}, \quad (7)$$

where $\gamma > 0$ is a constant.
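
A direct sketch of Equation (7) is given below, assuming the generalized sample covariance $\hat\Sigma_F$ and the pairwise counts $n_{ij}^*$ are already available (for instance from the sketch after Equation (2)); the tuning constant `gamma` corresponds to $\gamma$ and is an illustrative default.

```python
import numpy as np

def hard_threshold(Sigma_F, n_ij, gamma=1.0):
    """Hard thresholding estimator of Equation (7)."""
    p = Sigma_F.shape[0]
    n_min = n_ij.min()                           # n_min^*
    lam = gamma * np.sqrt(np.log(p) / n_min)     # lambda = gamma * sqrt(log p / n_min^*)
    return Sigma_F * (np.abs(Sigma_F) >= lam)    # keep entries with |sigma_ij^*(F)| >= lambda
```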

The following Lemma 1 plays an important role in deriving the minimax upper bound; it generalizes Lemma 8 in [8] from complete data to noisy data with missing values.

Lemma 1. Define the event $A_{ij} := \{|\hat\sigma_{ij}^{th}(F) - \sigma_{ij}| > 4\min(|\sigma_{ij}|, \lambda)\}$ with $\lambda = \gamma\sqrt{\log p / n_{\min}^*}$. Then there is a constant $c > 0$, depending only on $\rho$, such that

$$P\{A_{ij}\} \le 2cp^{-9/2}.$$

Proof: First, define the event $B_1 := \{|\hat\sigma_{ij}^{th}(F)| \ge \lambda\}$. It is easy to see that

$$B_1 \subseteq \bigl\{|\hat\sigma_{ij}^{th}(F) - \sigma_{ij}| \ge |\hat\sigma_{ij}^{th}(F)| - |\sigma_{ij}| \ge \lambda - |\sigma_{ij}|\bigr\}, \qquad B_1^c \subseteq \bigl\{|\hat\sigma_{ij}^{th}(F) - \sigma_{ij}| \ge |\sigma_{ij}| - |\hat\sigma_{ij}^{th}(F)| > |\sigma_{ij}| - \lambda\bigr\}. \quad (8)$$

According to the definition of $\hat\sigma_{ij}^{th}(F)$ in Equation (7),

$$|\hat\sigma_{ij}^{th}(F) - \sigma_{ij}| = |\hat\sigma_{ij}^*(F) - \sigma_{ij}|\, I(B_1) + |\sigma_{ij}|\, I(B_1^c). \quad (9)$$

Next, we will prove this lemma in different cases. It can be obtained by simple calculation:

$$P\bigl\{|\hat\sigma_{ij}^{th}(F) - \sigma_{ij}| \le 4\min(|\sigma_{ij}|, \lambda)\bigr\} \ge \begin{cases} 1 - Cp^{-9/2}, & |\sigma_{ij}| < \lambda/4, \\ 1 - Cp^{-8}, & \lambda/4 \le |\sigma_{ij}| \le 2\lambda, \\ 1 - 2Cp^{-8}, & |\sigma_{ij}| > 2\lambda. \end{cases}$$

Therefore, there exists a constant c > 0 , such that

$$P\bigl\{|\hat\sigma_{ij}^{th}(F) - \sigma_{ij}| > 4\min(|\sigma_{ij}|, \lambda)\bigr\} \le 2cp^{-9/2}.$$

Next, we obtain the upper bound for estimating sparse covariance matrices by using the risk properties of the thresholding estimator.

Theorem 1. If Assumption 1 and Assumption 2 hold, $\log p = o(n_{\min}^*)$, and $p \ge n_{\min}^*$, then there is a constant $C > 0$ such that the hard thresholding estimator $\hat\Sigma_F^{th}$ defined by Equation (7) satisfies

$$\sup_{\Sigma_X\in G_q(\rho, c_{n,p})}\mathbb{E}\|\hat\Sigma_F^{th} - \Sigma_X\|_1^2 \le C c_{n,p}^2\Bigl(\frac{\log p}{n_{\min}^*}\Bigr)^{1-q}. \quad (10)$$

Proof: Clearly, $\|\hat\Sigma_F^{th} - \Sigma_X\|_1^2 = \bigl[\max_j \sum_{i=1}^p |\hat\sigma_{ij}^{th}(F) - \sigma_{ij}|\bigr]^2$. On the complement $A_{ij}^c$ of the event $A_{ij}$ we have $|\hat\sigma_{ij}^{th}(F) - \sigma_{ij}| \le 4\min(|\sigma_{ij}|, \lambda)$, and hence

$$\sum_{i=1}^p |\hat\sigma_{ij}^{th}(F) - \sigma_{ij}|\, I(A_{ij}^c) \le 4\sum_{i=1}^p \min(|\sigma_{ij}|, \lambda).$$

Simple calculations show that

$$\sum_{i=1}^p \min(|\sigma_{ij}|, \lambda) = \Bigl(\sum_{i\le k} + \sum_{i>k}\Bigr)\min(|\sigma_{ij}|, \lambda) \le \Bigl(\sum_{i\le k} + \sum_{i>k}\Bigr)\min\Bigl(|\sigma_{[i]j}|,\ \gamma\sqrt{\frac{\log p}{n_{\min}^*}}\Bigr).$$

According to the definition of $G_q(\rho, c_{n,p})$, we know that $\max_{1\le j\le p}|\sigma_{[i]j}|^q \le c_{n,p}/i$, so $|\sigma_{[i]j}| \le (c_{n,p}/i)^{1/q}$. Choose $k$ to satisfy $k = c_{n,p}(n_{\min}^*/\log p)^{q/2}$; then

$$\begin{aligned}\sum_{i=1}^p \min(|\sigma_{ij}|, \lambda) &\le \sum_{i\le k}\min\Bigl(\Bigl(\frac{c_{n,p}}{i}\Bigr)^{1/q},\ \gamma\sqrt{\frac{\log p}{n_{\min}^*}}\Bigr) + \sum_{i>k}\min\Bigl(\Bigl(\frac{c_{n,p}}{i}\Bigr)^{1/q},\ \gamma\sqrt{\frac{\log p}{n_{\min}^*}}\Bigr) \\ &\le k\gamma\sqrt{\frac{\log p}{n_{\min}^*}} + \sum_{i>k}\Bigl(\frac{c_{n,p}}{i}\Bigr)^{1/q} \le C_1 c_{n,p}\Bigl(\frac{\log p}{n_{\min}^*}\Bigr)^{\frac{1-q}{2}}.\end{aligned}$$

Let the matrix $D = (d_{ij})_{p\times p}$ satisfy $d_{ij} = |\hat\sigma_{ij}^{th}(F) - \sigma_{ij}|\, I(A_{ij})$; then

$$\mathbb{E}\|\hat\Sigma_F^{th} - \Sigma_X\|_1^2 \le 2\mathbb{E}\|\hat\Sigma_F^{th} - \Sigma_X - D\|_1^2 + 2\mathbb{E}\|D\|_1^2.$$

It is then straightforward to obtain

$$\begin{aligned}2\mathbb{E}\|\hat\Sigma_F^{th} - \Sigma_X - D\|_1^2 &= 2\mathbb{E}\Bigl[\max_j \sum_{i=1}^p \bigl|\hat\sigma_{ij}^{th}(F) - \sigma_{ij} - (\hat\sigma_{ij}^{th}(F) - \sigma_{ij})\, I(A_{ij})\bigr|\Bigr]^2 \\ &= 2\mathbb{E}\Bigl[\max_j \sum_{i=1}^p \bigl|(\hat\sigma_{ij}^{th}(F) - \sigma_{ij})\, I(A_{ij}^c)\bigr|\Bigr]^2 \le C_2 c_{n,p}^2\Bigl(\frac{\log p}{n_{\min}^*}\Bigr)^{1-q}.\end{aligned}$$

Therefore, we only need to prove that $2\mathbb{E}\|D\|_1^2$ is negligible.

Firstly,

$$\begin{aligned}\mathbb{E}\|D\|_1^2 &= \mathbb{E}\Bigl[\max_j \sum_{i=1}^p |d_{ij}|\Bigr]^2 \le p^2\,\mathbb{E}\bigl(\max_{i,j} d_{ij}\bigr)^2 \\ &= p^2\,\mathbb{E}\Bigl[\bigl(\max_{i,j} d_{ij}^2\bigr)\, I\bigl(A_{ij}\cap\{|\hat\sigma_{ij}^*(F)| \ge \lambda\}\bigr)\Bigr] + p^2\,\mathbb{E}\Bigl[\bigl(\max_{i,j} d_{ij}^2\bigr)\, I\bigl(A_{ij}\cap\{|\hat\sigma_{ij}^*(F)| < \lambda\}\bigr)\Bigr] \\ &\le p^2\,\mathbb{E}\Bigl[\bigl(\max_{i,j} |\hat\sigma_{ij}^*(F) - \sigma_{ij}|\bigr)^2 I(A_{ij})\Bigr] + p^2\,\mathbb{E}\Bigl[\bigl(\max_{i,j} |\sigma_{ij}|\bigr)^2 I(A_{ij})\Bigr] =: E_1 + E_2.\end{aligned}$$

By the Cauchy–Schwarz inequality, $|\sigma_{ij}|^2 \le \sigma_{ii}\sigma_{jj}$, and since $P\{A_{ij}\} \le 2cp^{-9/2}$ and $p \ge n_{\min}^*$, we have

$$E_2 \le p^2\,\bigl(\max_{i,j}\sigma_{ii}\sigma_{jj}\bigr)\, P(A_{ij}) \le p^2\rho^2\cdot 2cp^{-9/2} \le C_3 p^{-5/2} \le \frac{C_4}{n_{\min}^*}.$$

In addition, $E_1 \le p^2 \max_{i,j}\mathbb{E}|\hat\sigma_{ij}^*(F) - \sigma_{ij}|^2\, P(A_{ij})$. From Inequality (5),

$$\begin{aligned}\mathbb{E}|\hat\sigma_{ij}^*(F) - \sigma_{ij}|^2 &= \int |\hat\sigma_{ij}^*(F) - \sigma_{ij}|^2\, dP = \int_0^\infty 2x\, P\bigl\{|\hat\sigma_{ij}^*(F) - \sigma_{ij}| > x\bigr\}\, dx \\ &= \int_0^\rho 2x\, P\bigl\{|\hat\sigma_{ij}^*(F) - \sigma_{ij}| > x\bigr\}\, dx + \int_\rho^\infty 2x\, P\bigl\{|\hat\sigma_{ij}^*(F) - \sigma_{ij}| > x\bigr\}\, dx \\ &\le \rho^2 + \int_\rho^\infty 2x\exp\Bigl(-\frac{C_5 n_{\min}^* x}{\rho}\Bigr)\, dx \le \rho^2 + \frac{\rho^2}{C_5 n_{\min}^*}\exp(-C_5 n_{\min}^*).\end{aligned}$$

From Lemma 1, we have $P\{A_{ij}\} \le 2cp^{-9/2}$, and since $p \ge n_{\min}^*$,

$$\begin{aligned}E_1 &\le p^2 \max_{i,j}\mathbb{E}|\hat\sigma_{ij}^*(F) - \sigma_{ij}|^2\, P(A_{ij}) \le p^2\Bigl(\rho^2 + \frac{\rho^2}{C_5 n_{\min}^*}\exp(-C_5 n_{\min}^*)\Bigr)\cdot 2cp^{-9/2} \\ &= p^2\rho^2\cdot 2cp^{-9/2} + p^2\,\frac{\rho^2}{C_5 n_{\min}^*}\exp(-C_5 n_{\min}^*)\cdot 2cp^{-9/2} \le \frac{C_4}{n_{\min}^*} + \frac{C_6}{n_{\min}^*} \le \frac{C_7}{n_{\min}^*}.\end{aligned}$$

To sum up,

$$\mathbb{E}\|D\|_1^2 \le E_1 + E_2 \le \frac{C_8}{n_{\min}^*} = O\Bigl(\frac{1}{n_{\min}^*}\Bigr).$$

3.3. Lower Bound for Estimating Sparse Covariance Matrix

Before studying the lower bound, we introduce some useful lemmas and notation.

Lemma 2. Assume $P$ and $V$ are two probability measures, with $p$ and $v$ denoting their probability density functions. The total variation distance between $P$ and $V$ is $V(P, V) = 1 - \int\min(dP, dV)$. Define the total variation affinity as $\|P \wedge V\| := \int\min(dP, dV) = \int\min(p(x), v(x))\,dx$. The Kullback–Leibler divergence between $P$ and $V$ is $KL(P\|V) = \int p(x)\log[p(x)/v(x)]\,dx$. Then $\|P \wedge V\|$ and $KL(P\|V)$ satisfy the following inequality:

$$\|P \wedge V\| \ge 1 - \sqrt{\frac{KL(P\|V)}{2}}. \quad (11)$$

Lemma 2, taken from [14], together with Le Cam's lemma and its corollary from [2] [12] introduced below, is an important tool for proving the minimax lower bound.

Lemma 3 (Le Cam). Suppose $\Theta = \{\theta_0, \theta_1, \ldots, \theta_m\}$ is a finite set of parameters. Let $L$ be a loss function and $l_{\min} := \min_{1\le i\le m}\inf_t\bigl[L(t, \theta_0) + L(t, \theta_i)\bigr]$; then

$$\sup_{\theta\in\Theta}\mathbb{E}\bigl[L(\tilde\theta, \theta)\bigr] \ge \frac{1}{2}\, l_{\min}\,\|P_{\theta_0}\wedge\bar P\|.$$

Here $\tilde\theta$ is any estimator of $\theta$ based on observations drawn from the probability measure $P_\theta$ ($\theta\in\Theta$), and $\bar P = \frac{1}{m}\sum_{i=1}^m P_{\theta_i}$.

Lemma 4. Suppose $\tilde\Sigma$ is any estimator of $\Sigma_i$ based on the collection of probability measures $\{P_{\Sigma_0}, P_{\Sigma_1}, \ldots, P_{\Sigma_m}\}$. Then

$$\sup_{1\le i\le m}\mathbb{E}\|\tilde\Sigma - \Sigma_i\|_1 \ge \frac{1}{2}\,\|P_{\Sigma_0}\wedge\bar P\|\,\inf_{1\le i\le m}\|\Sigma_i - \Sigma_0\|_1,$$

where $\bar P = \frac{1}{m}\sum_{i=1}^m P_{\Sigma_i}$.

Before deriving the minimax lower bound, we construct matrices whose off-diagonal elements are all 0 except in the first row and column. Let $\mathcal{H}$ be the collection of $p\times p$ symmetric matrices in which exactly $k$ off-diagonal elements in the first row or column are equal to 1 and all other elements are 0. Let $k = c_{n,p}(n_0/\log p)^{q/2}$. Define

$$G_0 = \bigl\{\Sigma = (\sigma_{ij})_{1\le i,j\le p} : \Sigma = I_p\ \text{or}\ \Sigma = I_p + aH,\ H\in\mathcal{H}\bigr\}, \quad (12)$$

where $I_p$ is the $p\times p$ identity matrix, $a = \delta\sqrt{\log p / n_0}$, and $\delta$ is a constant. Assuming $\rho > 1$ and $0 < \delta < \min\{1, 1/(4M)\}$, it is easy to verify that $G_0 \subseteq G_q(\rho, c_{n,p})$.

Obtaining the lower bound requires two steps: first, the subset of the parameter space constructed above is used to simplify the proof; second, the total variation affinity between the two relevant probability measures is bounded from below.

Theorem 2. Let $p \ge n_0^{\nu}$ ($\nu > 1$) and $\log p = o(n_0)$. Assume $c_{n,p} \le M(n_0/\log p)^{(1-q)/2}$ with $0 \le q < 1$ and $M > 0$. Then for any $1 \le n_0 \le n_{\min}^*$, there exists a constant $c > 0$ such that the minimax lower bound for estimating the covariance matrix $\Sigma_X$ satisfies

$$\inf_{\tilde\Sigma_F}\sup_{\Sigma_X\in G_q(\rho, c_{n,p})}\mathbb{E}\|\tilde\Sigma_F - \Sigma_X\|_1^2 \ge c\, c_{n,p}^2\Bigl(\frac{\log p}{n_0}\Bigr)^{1-q}, \quad (13)$$

where $\tilde\Sigma_F$ is any estimator of $\Sigma_X$ based on the noisy sample with missing data.

Proof: Write $G_0 = \{\Sigma_0, \Sigma_1, \ldots, \Sigma_m\}$ with $m + 1$ elements, where $\Sigma_0$ is the identity matrix and $\Sigma_i$, $i = 1, \ldots, m$, are the non-identity matrices; then $m = \mathrm{Card}(G_0) - 1 = \binom{p-1}{k}$.

Assume that $X_l$, $l = 1, \ldots, n$, are i.i.d. $N(0, \Sigma_i)$ for some $i \in \{0, 1, \ldots, m\}$, with probability measure $P_{\Sigma_i}$ and probability density function $f_i$; that is, $\Sigma_X \in G_0$. Let $F_l \stackrel{\mathrm{i.i.d.}}{\sim} N(0, \Sigma_i(F))$ with $\Sigma_i(F) = \Sigma_i + s^2 I_p$, and let $P_{\Sigma_i(F)}$ denote the corresponding probability measure. Since $G_0 \subseteq G_q(\rho, c_{n,p})$, it is easy to see that

$$\inf_{\tilde\Sigma_F}\sup_{\Sigma_X\in G_q(\rho, c_{n,p})}\mathbb{E}\|\tilde\Sigma_F - \Sigma_X\|_1^2 \ge \inf_{\tilde\Sigma_F}\sup_{\Sigma_X\in G_0}\mathbb{E}\|\tilde\Sigma_F - \Sigma_X\|_1^2.$$

Therefore, to prove Inequality (13), it suffices to prove the following inequality:

$$\inf_{\tilde\Sigma_F}\sup_{\Sigma_X\in G_0}\mathbb{E}\|\tilde\Sigma_F - \Sigma_X\|_1^2 \ge c\, c_{n,p}^2\Bigl(\frac{\log p}{n_0}\Bigr)^{1-q}. \quad (14)$$

Lemma 4 shows that

$$\sup_{\Sigma_X\in G_0}\mathbb{E}\|\tilde\Sigma_F - \Sigma_X\|_1 \ge \sup_{1\le i\le m}\mathbb{E}\|\tilde\Sigma_F - \Sigma_i\|_1 \ge \frac{1}{2}\,\|P_{\Sigma_0(F)}\wedge\bar P(F)\|\,\inf_{1\le i\le m}\|\Sigma_i(F) - \Sigma_0(F)\|_1.$$

Since $a = \delta\sqrt{\log p / n_0}$ and $k = c_{n,p}(n_0/\log p)^{q/2}$, there exists a constant $c_1 > 0$ such that

$$\begin{aligned}\inf_{1\le i\le m}\|\Sigma_i(F) - \Sigma_0(F)\|_1 &= \inf_{1\le i\le m}\|(\Sigma_i + s^2 I_p) - (\Sigma_0 + s^2 I_p)\|_1 = \inf_{1\le i\le m}\|aH\|_1 = ka \\ &= c_{n,p}\Bigl(\frac{n_0}{\log p}\Bigr)^{q/2}\delta\sqrt{\frac{\log p}{n_0}} \ge c_1 c_{n,p}\Bigl(\frac{\log p}{n_0}\Bigr)^{\frac{1-q}{2}}. \quad (15)\end{aligned}$$

Obviously, to prove Inequality (14), we only need to show that there is a constant $c_2 > 0$ such that $\|P_{\Sigma_0(F)}\wedge\bar P(F)\| \ge c_2$.

Since $F_l \stackrel{\mathrm{i.i.d.}}{\sim} N(0, \Sigma_i(F))$, we have

$$KL\bigl(P_{\Sigma_0(F)}\,\|\,P_{\Sigma_i(F)}\bigr) = \frac{1}{2}\Bigl[\mathrm{tr}\bigl(\Sigma_i^{-1}(F)\Sigma_0(F)\bigr) - \log\det\bigl(\Sigma_i^{-1}(F)\Sigma_0(F)\bigr) - p\Bigr].$$

Let $B = \Sigma_0(F) - \Sigma_i(F) = -aH$, so that $\Sigma_0(F) = \Sigma_i(F) + B$. Suppose the eigenvalues of $B\Sigma_i^{-1}(F)$ are $\xi_1, \ldots, \xi_p$; then

$$\mathrm{tr}\bigl(\Sigma_i^{-1}(F)\Sigma_0(F)\bigr) = \mathrm{tr}\bigl[\Sigma_i^{-1}(F)(\Sigma_i(F) + B)\bigr] = \mathrm{tr}\bigl[I_p + B\Sigma_i^{-1}(F)\bigr] = p + \sum_{i=1}^p\xi_i. \quad (16)$$

In addition, we have

$$\log\det\bigl(\Sigma_i^{-1}(F)\Sigma_0(F)\bigr) = \sum_{i=1}^p\log(1+\xi_i) = \mathrm{tr}\bigl(\Sigma_i^{-1}(F)\Sigma_0(F)\bigr) - p - \sum_{i=1}^p\frac{\xi_i^2}{2(1+\theta)}, \quad (17)$$

where $\theta$ is a number between 0 and $\xi_i$. Substituting Equation (16) and Equation (17) into $KL(P_{\Sigma_0(F)}\|P_{\Sigma_i(F)})$, we get

$$KL\bigl(P_{\Sigma_0(F)}\,\|\,P_{\Sigma_i(F)}\bigr) = \frac{1}{2}\Bigl[p + \sum_{i=1}^p\frac{\xi_i^2}{2(1+\theta)} - p\Bigr] = \frac{1}{2}\sum_{i=1}^p\frac{\xi_i^2}{2(1+\theta)} \le \frac{1}{4}\sum_{i=1}^p\xi_i^2.$$

According to Theorem 1.3 in [12] ,

$$\sum_{i=1}^p\xi_i^2 \le \mathrm{tr}\bigl((B\Sigma_i^{-1}(F))^H\, B\Sigma_i^{-1}(F)\bigr) = \|B\Sigma_i^{-1}(F)\|_F^2 \le \|\Sigma_i^{-1}(F)\|_{sp}^2\,\|B\|_F^2 \le 2ka^2.$$

Since $\bar P(F) = \frac{1}{m}\sum_{i=1}^m P_{\Sigma_i(F)}$, the convexity of the Kullback–Leibler divergence gives

$$KL\bigl(P_{\Sigma_0(F)}\,\|\,\bar P(F)\bigr) \le \frac{1}{m}\sum_{i=1}^m KL\bigl(P_{\Sigma_0(F)}\,\|\,P_{\Sigma_i(F)}\bigr) \le \frac{1}{8}.$$

Lemma 2 implies

$$\|P_{\Sigma_0(F)}\wedge\bar P(F)\| \ge 1 - \sqrt{\frac{KL\bigl(P_{\Sigma_0(F)}\,\|\,\bar P(F)\bigr)}{2}} \ge 1 - \sqrt{\frac{1}{16}} = \frac{3}{4}.$$

That is, there exists a constant $c_2 > 0$ such that $\|P_{\Sigma_0(F)}\wedge\bar P(F)\| \ge c_2$.

It is worth noting that Theorem 2 requires $c_{n,p} \le M(n_0/\log p)^{(1-q)/2}$ ($M > 0$), and this condition is necessary. If $c_{n,p} > M(n_0/\log p)^{(1-q)/2}$, then

$$\inf_{\tilde\Sigma_F}\sup_{\Sigma_X\in G_q(\rho, c_{n,p})}\mathbb{E}\|\tilde\Sigma_F - \Sigma_X\|_1^2 \ge \inf_{\tilde\Sigma_F}\sup_{\Sigma_X\in G_q(\rho,\, M(n_0/\log p)^{(1-q)/2})}\mathbb{E}\|\tilde\Sigma_F - \Sigma_X\|_1^2 \ge cM^2.$$

That is, $\Sigma_X$ does not have a consistent estimator in this case.

Theorem 1 and Theorem 2 together show that the estimator $\hat\Sigma_F^{th}$ constructed in this paper is rate-optimal over $G_q(\rho, c_{n,p})$ under the $\ell_1$ norm.

4. Numerical Analysis

Section 3 derived the optimal estimation of sparse covariance matrices based on missing and noisy data. This section uses numerical simulation to compare the performance of the hard thresholding estimator $\hat\Sigma_F^{th}$ defined in Section 3 with that of the traditional estimator.

Some notation is introduced before the numerical simulation begins. Assume the $p$-dimensional Gaussian random vector $X$ has mean $\mu$ and covariance matrix $\Sigma_X$; $n$ is the number of samples and $p$ is their dimension.

Here are the specific steps of numerical simulation.

1) Construct the sparse covariance matrix.

Assume $\mu$ is the zero vector and $\Sigma_X$ is a sparse matrix ($\Sigma_X \in G_q(\rho, c_{n,p})$). Following the construction of the sparse matrix in [11], let

$$\Sigma_X = I_p + (B + B^T)/(\|B + B^T\|_1 + 0.01),$$

where $B = (b_{ij})_{p\times p}$, with $P(b_{ij} = 1) = 0.1$, $P(b_{ij} = 0) = 0.8$, and $P(b_{ij} = -1) = 0.1$.

2) Generate random samples according to the true covariance matrix.

After $\Sigma_X$ is constructed, $n$ random $p$-dimensional samples are first generated from the multivariate normal distribution with mean $\mu$ and covariance matrix $\Sigma_X$. The resulting $n$ samples are then corrupted by additive sub-Gaussian noise, followed by random missing processing. This produces sample data with missing entries and sub-Gaussian noise.

3) Compare the estimation effect of different estimators.

Based on the sample data with missing entries and sub-Gaussian noise, calculate the generalized sample covariance matrix $\hat\Sigma_F$ and the hard thresholding estimator $\hat\Sigma_F^{th}$ according to Equation (4) and Equation (7), respectively. Then compute the error between $\hat\Sigma_F$ and the true matrix $\Sigma_X$, as well as the error between $\hat\Sigma_F^{th}$ and the true matrix $\Sigma_X$, under the given norm.

After fixing the values of $n$ and $p$, repeat steps 1), 2), and 3) above 50 times, and take the mean of the fifty error values as the criterion for evaluating the estimation performance of the different estimators in this setting; the smaller the value, the better the performance. Table 1 shows the experimental results, and a condensed sketch of the procedure is given below.
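
The following is a condensed sketch of steps 1)-3), reusing the hypothetical helpers `generalized_sample_covariance` and `hard_threshold` from the earlier sketches; the noise level, observation probability, random seed, and the choice of the $\ell_1$ operator norm as the error measure are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def l1_norm(A):
    """Matrix l_1 operator norm: maximum absolute column sum."""
    return np.max(np.abs(A).sum(axis=0))

def simulate(n, p, reps=50, noise_sd=0.5, obs_prob=0.8, gamma=1.0):
    err_sample, err_thresh = [], []
    for _ in range(reps):
        # 1) sparse true covariance: Sigma_X = I_p + (B + B^T) / (||B + B^T||_1 + 0.01)
        B = rng.choice([-1.0, 0.0, 1.0], size=(p, p), p=[0.1, 0.8, 0.1])
        M = B + B.T
        Sigma_X = np.eye(p) + M / (l1_norm(M) + 0.01)
        # 2) Gaussian samples, additive noise, then random (MCR) missingness
        X = rng.multivariate_normal(np.zeros(p), Sigma_X, size=n)
        F = X + noise_sd * rng.standard_normal((n, p))
        S = (rng.random((n, p)) < obs_prob).astype(float)
        # 3) generalized sample covariance vs. hard thresholding estimator
        _, Sigma_F, n_ij = generalized_sample_covariance(F, S)
        Sigma_th = hard_threshold(Sigma_F, n_ij, gamma=gamma)
        err_sample.append(l1_norm(Sigma_F - Sigma_X))
        err_thresh.append(l1_norm(Sigma_th - Sigma_X))
    return np.mean(err_sample), np.mean(err_thresh)

# Example: mean errors over 50 repetitions for n = 50, p = 100
# print(simulate(50, 100))
```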

The values of $n$ and $p$ are shown in the first two columns of Table 1, and each entry is the average over 50 runs of steps 1)-3) with $n$ and $p$ fixed. When the true covariance matrix $\Sigma_X$ is sparse, the hard thresholding estimator $\hat\Sigma_F^{th}$ performs substantially better than the generalized sample covariance matrix $\hat\Sigma_F$ under any of the norms considered, especially when $p$ is larger than $n$, that is, when the dimension is high.

Therefore, when the dimension is very small in comparison to the sample size, the sample covariance matrix can be used to estimate the population covariance matrix. When estimating a high-dimensional sparse covariance matrix with sub-Gaussian additive noise and missing data, it is best to choose the hard thresholding estimator $\hat\Sigma_F^{th}$ given in Equation (7). This section thus provides applied statisticians with some guidance on how to select estimation methods.

Table 1. Results of estimating sparse covariance matrix.

5. Summary and Outlook

Covariance matrix estimation is crucial in statistics and many other fields, and with the rapid growth of data-collection technologies, the estimation of high-dimensional covariance matrices has become a topic of sustained interest.

Based on missing and noisy sample data, this paper constructs a hard thresholding estimator $\hat\Sigma_F^{th}$ and studies its optimality. Section 3 shows that the hard thresholding estimator given in this paper is rate-optimal. The numerical simulation in Section 4 demonstrates that the hard thresholding estimator works well when the true covariance matrix is sparse; the case where the true covariance matrix is not sparse has not been discussed.

This paper’s research has limitations and areas that require more investigation:

1) This paper focuses solely on the optimal estimation of sparse covariance matrices based on noisy and missing data. More research is needed on the optimal estimation of other common high-dimensional covariance matrices.

2) If the sub-Gaussian distribution used in this article is replaced with the broader sub-exponential class, the corresponding questions merit further investigation.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Bickel, P.J. and Levina, E. (2008) Covariance Regularization by Thresholding. The Annals of Statistics, 36, 2577-2604.
https://doi.org/10.1214/08-AOS600
[2] Cai, T.T. and Zhou, H.H. (2012) Optimal Rates of Convergence for Sparse Covariance Matrix Estimation. The Annals of Statistics, 40, 2389-2420.
https://doi.org/10.1214/12-AOS998
[3] Cai, T.T. and Zhou, H.H. (2012) Minimax Estimation of Large Covariance Matrices under L1-Norm. Statistica Sinica, 22, 1319-1349.
https://doi.org/10.5705/ss.2010.253
[4] Fan, J. and Li, R. (2001) Variable Selection via Nonconcave Penalized Likelihood and Its Oracle Properties. Journal of the American Statistical Association, 96, 1348-1360.
http://www.jstor.org/stable/3085904
https://doi.org/10.1198/016214501753382273
[5] Zou, H. (2006) The Adaptive Lasso and Its Oracle Properties. Journal of the American Statistical Association, 101, 1418-1429.
https://doi.org/10.1198/016214506000000735
[6] Rothman, A.J., Levina, E. and Zhu, J. (2009) Generalized Thresholding of Large Covariance Matrices. Journal of the American Statistical Association, 104, 177-186.
https://doi.org/10.1198/jasa.2009.0101
[7] Cai, T.T. and Liu, W. (2011) Adaptive Thresholding for Sparse Covariance Matrix Estimation. Journal of the American Statistical Association, 106, 672-684.
http://www.jstor.org/stable/41416401
https://doi.org/10.1198/jasa.2011.tm10560
[8] Cai, T.T., Liu, W. and Zhou, H.H. (2016) Estimating Sparse Precision Matrix: Optimal Rates of Convergence and Adaptive Estimation. The Annals of Statistics, 44, 455-488.
https://doi.org/10.1214/13-AOS1171
[9] Bickel, P.J. and Levina, E. (2008) Regularized Estimation of Large Covariance Matrices. The Annals of Statistics, 36, 199-227.
https://doi.org/10.1214/009053607000000758
[10] Cai, T.T., Zhang, C.H. and Zhou, H.H. (2010) Optimal Rates of Convergence for Covariance Matrix Estimation. The Annals of Statistics, 38, 2118-2144.
https://doi.org/10.1214/09-AOS752
[11] Cai, T.T. and Zhang, A. (2016) Minimax Rate-Optimal Estimation of High-Dimensional Covariance Matrices with Incomplete Data. Journal of Multivariate Analysis, 150, 55-74.
https://doi.org/10.1016/j.jmva.2016.05.002
[12] Qi, X. (2022) Low Rank Matrix Perturbation Analysis and Estimation for Two Classes of Sparse Covariance Matrices. Ph.D. Thesis, Beijing University, Beijing.
[13] Shi, W. (2022) Optimal Estimation of Bandable Covariance Matrices Based on Noised Censored Data. MSc. Thesis, Beijing University, Beijing.
[14] Tsybakov, A.B. (2009) Introduction to Nonparametric Estimation. Springer-Verlag, New York.
https://doi.org/10.1007/b13794
