A Perturbation Analysis of Low-Rank Matrix Recovery by Schatten p-Minimization

Abstract

A number of previous papers have studied the recovery of low-rank matrices from noisy measurements. Combining the noisy and the perturbed cases, we propose a nonconvex Schatten p-norm minimization method to deal with the recovery of fully perturbed low-rank matrices. By utilizing the p-null space property (p-NSP) and the p-restricted isometry property (p-RIP) of the measurement operator, sufficient conditions ensuring stable and accurate reconstruction of a low-rank matrix under full perturbation are derived, and two upper-bound estimates of the recovery error are given. These estimates are characterized by two vital quantities: one involving the best rank-r approximation error and the other concerning the overall noise. Specifically, this paper obtains two new error upper bounds based on the fact that p-RIP and p-NSP enable accurate and stable recovery of low-rank matrices, and to some extent improves the corresponding RIP conditions.

Sun, Z., Wang, H. and Zhu, Z. (2024) A Perturbation Analysis of Low-Rank Matrix Recovery by Schatten p-Minimization. Journal of Applied Mathematics and Physics, 12, 475-487. doi: 10.4236/jamp.2024.122032.

1. Introduction

Low-rank matrix recovery (LMR) is a fast-growing field, attracting much attention in numerous applications such as quantum state tomography [1], deep learning [2], nonlinear system identification [3], computer vision [4], and medical imaging [5]. The low-rank matrix recovery problem is expressed mathematically as

$b = \mathcal{A}(X)$ (1)

where $X \in \mathbb{R}^{m_1 \times m_2}$ is an unknown low-rank (or approximately low-rank) matrix to be recovered, $b \in \mathbb{R}^N$ is a known observation vector, and $\mathcal{A}: \mathbb{R}^{m_1 \times m_2} \to \mathbb{R}^N$ is a given measurement operator (linear mapping) defined by the following formula

$\mathcal{A}(X) = [\operatorname{tr}(X^T B^{(1)}), \operatorname{tr}(X^T B^{(2)}), \ldots, \operatorname{tr}(X^T B^{(N)})]^T$ (2)

where $B^{(1)}, B^{(2)}, \ldots, B^{(N)}$ are called the measurement matrices, $X^T$ is the transpose of $X$, and $\operatorname{tr}(\cdot)$ is the trace function. The main goal of LMR is to recover the low-rank matrix $X$ from the observation vector $b$ and the operator $\mathcal{A}$.
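The following minimal sketch (with our own helper names, not the paper's) illustrates the measurement operator (2) applied to a random rank-2 matrix:

```python
import numpy as np

# Sketch of the measurement operator in (2):
# A(X) = [tr(X^T B^(1)), ..., tr(X^T B^(N))]^T.
def apply_operator(B_list, X):
    return np.array([np.trace(X.T @ B) for B in B_list])

rng = np.random.default_rng(0)
m1, m2, N = 5, 4, 10
B_list = [rng.standard_normal((m1, m2)) for _ in range(N)]       # measurement matrices
X = rng.standard_normal((m1, 2)) @ rng.standard_normal((2, m2))  # rank-2 matrix
b = apply_operator(B_list, X)   # observation vector, model (1)
print(b.shape)                  # (10,)
```

Each entry $\operatorname{tr}(X^T B^{(i)})$ is simply the Frobenius inner product of $X$ with $B^{(i)}$.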

In practice, the linear measurement $b$ is affected by a noise vector $y$. The noisy LMR model is given by

$\hat{b} = \mathcal{A}(X) + y,$ (3)

where $\hat{b}$ is the observed measurement disturbed by the noise vector $y$, and $y$ is additive noise that does not depend on $X$.

Moreover, many LMR models involve the case in which the observed vector $b$ is perturbed by the noise vector $y$ while, at the same time, the linear mapping $\mathcal{A}$ is hampered by $\Phi$, i.e., $\mathcal{A}$ is replaced by $\hat{\mathcal{A}} = \mathcal{A} + \Phi$, which brings about multiplicative noise $\Phi(X)$ associated with $X$. Such completely perturbed problems usually arise in a large number of applications, such as remote sensing [6], source separation [7], and telecommunications [8]. To obtain the optimal solution of this fully perturbed problem, a common approach is to solve a nuclear norm minimization (NNM) problem as described below:

$\min_{X \in \mathbb{R}^{m_1 \times m_2}} \|X\|_* \quad \text{s.t.} \quad \|\hat{\mathcal{A}}(X) - \hat{b}\|_2 \le \hat{\epsilon}_{\mathcal{A},r,b},$ (4)

where $\hat{\epsilon}_{\mathcal{A},r,b} \ge 0$ represents the overall noise and $\|X\|_*$ is the nuclear (trace) norm of the matrix $X$, namely the sum of its singular values. When $m_1 = m_2$ and $X = \operatorname{diag}(x)$ ($x \in \mathbb{R}^{m_1}$) is a diagonal matrix, problem (4) reduces to the compressed sensing problem

$\min_{x \in \mathbb{R}^{m_1}} \|x\|_1 \quad \text{s.t.} \quad \|\hat{A}x - \hat{b}\|_2 \le \hat{\epsilon}_{\mathcal{A},r,b},$ (5)

where $\|x\|_1$ is the $\ell_1$ norm of the vector $x$, i.e., the sum of the absolute values of the entries of $x$.

Chartrand's study [9] revealed that nonconvex variants of (5) can produce accurate reconstruction from fewer measurements. Specifically, the $\ell_1$ norm minimization is substituted with $\ell_p$ norm minimization:

$\min_{x \in \mathbb{R}^{m_1}} \|x\|_p^p \quad \text{s.t.} \quad \|\hat{A}x - \hat{b}\|_2 \le \hat{\epsilon}_{\mathcal{A},r,b},$ (6)

in which $\|x\|_p = \left(\sum_i |x_i|^p\right)^{\frac{1}{p}}$ is the $\ell_p$ quasi-norm of the vector $x$. Even though the $\ell_p$ quasi-norm is not a norm, its $p$-th power $\|\cdot\|_p^p$ satisfies the triangle inequality.
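A quick numeric illustration of this distinction (ours, not from the paper):

```python
import numpy as np

# For 0 < p < 1 the l_p quasi-norm itself violates the triangle
# inequality, while its p-th power ||.||_p^p is subadditive.
def lp(x, p):
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

p = 0.5
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

print(lp(x + y, p), lp(x, p) + lp(y, p))                  # 4.0 vs 2.0: not a norm
print(lp(x + y, p) ** p, lp(x, p) ** p + lp(y, p) ** p)   # 2.0 <= 2.0: subadditive
```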

Numerous studies have focused on recovering the vector $x$ by $\ell_p$ minimization ($0 < p \le 1$) [10] [11] [12]. Chartrand [9] conducted numerical experiments using random and non-random Fourier measurements, which showed that fewer measurements are required for accurate restoration than when $p = 1$. Chartrand [13] extended the Restricted Isometry Property (RIP) result of Candès and Tao [14] to the case of $\ell_p$ minimization. Kong and Xiu [15] explored nonconvex relaxation methods for recovering the vector $x$. The case of (6) where $x$ is free of noise and perturbation (i.e., $\hat{\epsilon}_{\mathcal{A},r,b} = 0$) extends to the matrix setting, referred to as $M_p$-minimization:

$\min_{X \in \mathbb{R}^{n_1 \times n_2}} \|X\|_p^p \quad \text{s.t.} \quad \mathcal{A}(X) = b.$ (7)

The related work [13] considers the scenario of matrix recovery with noise but without perturbation, i.e., where the linear mapping A is not interfered with by Φ. From an applied and practical perspective, it is crucial to investigate the problem of rank-r matrix recovery in the fully perturbed case.

Therefore, we present the fully perturbed model of LMR through nonconvex Schatten p-norm minimization ($0 < p \le 1$):

$\min_{X \in \mathbb{R}^{n_1 \times n_2}} \|X\|_p^p \quad \text{s.t.} \quad \|\hat{\mathcal{A}}(X) - \hat{b}\|_2 \le \hat{\epsilon}_{\mathcal{A},r,b},$ (8)

where $\|X\|_p = \left(\sum_i \sigma_i^p(X)\right)^{\frac{1}{p}}$ ($0 < p \le 1$) is the Schatten p-norm of the matrix $X$, whose singular value decomposition (SVD) is $X = U \operatorname{diag}(\sigma(X)) V^T$, with $\sigma_i(X)$ being the $i$-th singular value of $X$. This model characterizes the problem of rank-$r$ matrix recovery in a fully perturbed scenario, with $\epsilon_{\mathcal{A}}^{(r)}$, $\eta_{\mathcal{A}}^{(r)}$, $\kappa_{\mathcal{A}}$, $\alpha_r$, $\beta_r$, $\epsilon_{\mathcal{A}}$, and $\epsilon_y$ as parameters of the model.
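A minimal sketch (our own helper names) of the Schatten p-norm computed from the singular values:

```python
import numpy as np

# Schatten p-norm ||X||_p = (sum_i sigma_i(X)^p)^(1/p), 0 < p <= 1.
def schatten_p_norm(X, p):
    sigma = np.linalg.svd(X, compute_uv=False)  # singular values of X
    return np.sum(sigma ** p) ** (1.0 / p)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))
print(schatten_p_norm(X, 0.5))  # Schatten 1/2 quasi-norm
print(schatten_p_norm(X, 1.0))  # p = 1 recovers the nuclear norm
```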

Now, unlike the previous concept of the restricted isometry constant, this paper follows the notion of the restricted p-isometry property (p-RIP) given by Zhang et al. [16], which is stated below:

Definition 1.1 (p-RIP of the measurement mapping (or operator) $\mathcal{A}$). For the measurement operator $\mathcal{A}: \mathbb{R}^{m \times n} \to \mathbb{R}^M$, a positive integer $r$ and $0 < p \le 1$, the restricted p-isometry constant (p-RIC) of order $r$, denoted by $\delta_r$, is the smallest constant such that for any matrix $X \in \mathbb{R}^{m \times n}$ with $\operatorname{rank}(X) \le r$,

$(1 - \delta_r)\|X\|_F^p \le \|\mathcal{A}(X)\|_p^p \le (1 + \delta_r)\|X\|_F^p.$ (9)

If $\delta_r \in (0, 1)$, then $\mathcal{A}$ meets the restricted p-isometry property of order $r$.
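As an illustrative sketch (ours, assuming Gaussian measurement matrices), one can probe the ratios $\|\mathcal{A}(X)\|_p^p / \|X\|_F^p$ over random rank-$r$ matrices; after the best rescaling of $\mathcal{A}$, the spread of these ratios gives an empirical estimate of $\delta_r$ over the sample:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, N, r, p = 8, 8, 60, 2, 0.5
B = rng.standard_normal((N, m, n))  # Gaussian measurement matrices B^(i)

def A(X):
    return np.array([np.trace(X.T @ Bi) for Bi in B])

ratios = []
for _ in range(200):
    X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank <= r
    ratios.append(np.sum(np.abs(A(X)) ** p) / np.linalg.norm(X, "fro") ** p)

lo, hi = min(ratios), max(ratios)
# Rescaling A so the ratios are centered at 1 yields the empirical p-RIC
# delta_r = (hi - lo) / (hi + lo) over this sample.
print((hi - lo) / (hi + lo))
```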

The restricted isometry property (RIP) of the measurement operator is an essential tool for LMR theoretical analysis. For accurate LMR (i.e., $y = 0$, $\Phi = 0$, model (1)) or noisy/partly perturbed LMR (i.e., $\Phi = 0$ in (3)), there are sufficient conditions based on the RIP, which include $\delta_k + \theta\delta_{2r+k} < \theta - 1$, where $\theta > 1$ and $k = 2r\theta^{\frac{2}{2-p}}$ ($0 < p \le 1$) [16], and $2\delta_{\max\{r + \frac{3k}{2},\, 2k\}} + \left(\frac{k}{2r}\right)^{\frac{1}{p}-\frac{1}{2}}\delta_{2r+k} < \left(\frac{k}{2r}\right)^{\frac{1}{p}-\frac{1}{2}}$ [17].

Moreover, another crucial tool for analyzing low-rank matrix recovery is the null space property (NSP) of the linear mapping $\mathcal{A}$. Gao, Peng et al. [18] extended the general null space property to obtain the p-null space property (p-NSP) for sparse vectors. Here we further extend it from sparse vector recovery to low-rank matrices; the notion is defined as follows:

Definition 1.2 (p-NSP of the measurement operator $\mathcal{A}$). For the measurement operator $\mathcal{A}: \mathbb{R}^{m \times n} \to \mathbb{R}^M$, if there exist constants $0 < s < 1$ and $v > 0$ such that for any matrix $X \in \mathbb{R}^{m \times n}$,

$\|X_r\|_p^p \le s\|X_{r^c}\|_p^p + v\|\mathcal{A}(X)\|_p^p,$ (10)

then the operator $\mathcal{A}$ fulfills the p-null space property of order $r$.

2. Symbols and Main Results

Before presenting the key results of this paper, we introduce some notation, similar to that of [19], which quantifies the disturbances $\Phi$ and $y$ by different upper bounds:

$\frac{\|\Phi\|_p}{\|\mathcal{A}\|_p} \le \epsilon_{\mathcal{A}}, \qquad \frac{\|\Phi\|_p^{(r)}}{\|\mathcal{A}\|_p^{(r)}} \le \epsilon_{\mathcal{A}}^{(r)}, \qquad \frac{\|y\|_2}{\|b\|_2} \le \epsilon_y.$ (11)

Here $\|\mathcal{A}\|_p = \sup\{\|\mathcal{A}(X)\|_p / \|X\|_F : X \in \mathbb{R}^{m_1 \times m_2},\ X \ne 0\}$ is the Schatten p-norm of the linear mapping $\mathcal{A}$, and $\|\mathcal{A}\|_p^{(r)} = \sup\{\|\mathcal{A}(X)\|_p / \|X\|_F : X \in \mathbb{R}^{m_1 \times m_2},\ X \ne 0,\ \operatorname{rank}(X) \le r\}$ is the corresponding norm of $\mathcal{A}$ restricted to nonzero matrices of rank at most $r$. Furthermore,

$\alpha_r = \frac{\|X_{r^c}\|_F}{\|X_r\|_F}, \qquad \beta_r = \frac{\|X_{r^c}\|_p}{r^{\frac{1}{p}-\frac{1}{2}}\|X_r\|_F}, \qquad \eta_{\mathcal{A}}^{(r)} = \sqrt[p]{\frac{1+\delta_r}{1-\delta_r}}, \qquad \kappa_{\mathcal{A}} = \frac{\|\mathcal{A}\|_p}{\sqrt[p]{1-\delta_r}},$ (12)

where $X_{r^c} = X - X_r$ and $X_r$ stands for the best rank-$r$ approximation of $X$, whose singular values are the $r$ largest singular values of $X$.
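A minimal sketch (our helper names) of the best rank-$r$ approximation via truncated SVD and the ratios $\alpha_r$, $\beta_r$ of (12):

```python
import numpy as np

# Best rank-r approximation X_r via truncated SVD; tail X_rc = X - X_r.
def best_rank_r(X, r):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

p, r = 0.5, 2
rng = np.random.default_rng(2)
X = rng.standard_normal((6, 5))
Xr = best_rank_r(X, r)
Xrc = X - Xr

schatten = lambda M: np.sum(np.linalg.svd(M, compute_uv=False) ** p) ** (1 / p)
alpha_r = np.linalg.norm(Xrc, "fro") / np.linalg.norm(Xr, "fro")
beta_r = schatten(Xrc) / (r ** (1 / p - 1 / 2) * np.linalg.norm(Xr, "fro"))
print(alpha_r, beta_r)
```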

Secondly, two theorems are obtained based on the restricted p-isometry property (p-RIP, Definition 1.1) and the p-null space property (p-NSP, Definition 1.2) defined in Section 1. These theorems derive sufficient conditions guaranteeing stable and accurate recovery of rank-$r$ matrices and offer upper bounds on the recovery error. They are stated as follows:

Theorem 2.1: Let $\mathcal{A}: \mathbb{R}^{m \times n} \to \mathbb{R}^M$ be a linear operator, $\hat{b} \in \mathbb{R}^M$ and $0 < p \le 1$. Let $t > 1$ and $k = 2tr$. Suppose that the restricted p-isometry constants of the linear operator $\mathcal{A}$ fulfill

$(1 + \delta_{2r(t+1)})(1 + \epsilon_{\mathcal{A}}^{(2r(t+1))})^p + t^{\frac{p}{2}-1}\delta_{2tr}(1 + \epsilon_{\mathcal{A}}^{(2tr)})^p < 2 - t^{\frac{p}{2}-1}(1 + \epsilon_{\mathcal{A}}^{(2tr)})^p,$ (13)

that the general matrix $X$ to be recovered meets

$\alpha_r + \beta_r < \frac{1}{\eta_{\mathcal{A}}^{(r)}},$ (14)

and the overall noise is

$\hat{\epsilon}_{\mathcal{A},r,b} = \left(\frac{\epsilon_{\mathcal{A}}^{(r)}\eta_{\mathcal{A}}^{(r)} + \epsilon_{\mathcal{A}}\kappa_{\mathcal{A}}\alpha_r}{1 - \eta_{\mathcal{A}}^{(r)}(\alpha_r + \beta_r)} + \epsilon_y\right)\|b\|_2.$ (15)

Suppose $X \in \mathbb{R}^{m \times n}$ with $\operatorname{rank}(X) \le r$, $\hat{\mathcal{A}} = \mathcal{A} + \Phi$ and $\|\hat{\mathcal{A}}(X) - \hat{b}\|_2 \le \hat{\epsilon}_{\mathcal{A},r,b}$, and let $X^*$ be the optimal solution of the fully perturbed Schatten p-norm minimization problem

$\min_{X \in \mathbb{R}^{m_1 \times m_2}} \|X\|_p^p \quad \text{s.t.} \quad \|\hat{\mathcal{A}}(X) - \hat{b}\|_2 \le \hat{\epsilon}_{\mathcal{A},r,b}.$ (16)

Then the error between $X$ and $X^*$ fulfills

$\|X - X^*\|_p^p \le C(\hat{\epsilon}_{\mathcal{A},r,b})^p + D\|X_{r^c}\|_p^p,$ (17)

where

$C = \frac{2^{p+1}(2rM)^{1-\frac{p}{2}}}{2 - (1 + \delta_{2r(t+1)})(1 + \epsilon_{\mathcal{A}}^{(2r(t+1))})^p - (1 + \delta_{2tr})(1 + \epsilon_{\mathcal{A}}^{(2tr)})^p t^{\frac{p}{2}-1}},$

$D = 2 + \frac{4(1 + \delta_{2tr})t^{\frac{p}{2}-1}(1 + \epsilon_{\mathcal{A}}^{(2tr)})^p}{2 - (1 + \delta_{2r(t+1)})(1 + \epsilon_{\mathcal{A}}^{(2r(t+1))})^p - (1 + \delta_{2tr})(1 + \epsilon_{\mathcal{A}}^{(2tr)})^p t^{\frac{p}{2}-1}}.$

Theorem 2.2: For a given $\epsilon_{\mathcal{A}}$, suppose that the linear measurement operator $\mathcal{A}$ fulfills the p-null space property (p-NSP) with constants $0 < s < 1 - 2v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p$ and $0 < v < \frac{1}{\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p}$, and that conditions (13) and (14) hold. Suppose $X \in \mathbb{R}^{m \times n}$ with $\operatorname{rank}(X) \le r$, $\hat{\mathcal{A}} = \mathcal{A} + \Phi$, and $\|\hat{\mathcal{A}}(X) - \hat{b}\|_2 \le \hat{\epsilon}_{\mathcal{A},r,b}$. Let $X^*$ be the optimal solution of the completely perturbed Schatten p-norm minimization problem

$\min_{X \in \mathbb{R}^{m_1 \times m_2}} \|X\|_p^p \quad \text{s.t.} \quad \|\hat{\mathcal{A}}(X) - \hat{b}\|_2 \le \hat{\epsilon}_{\mathcal{A},r,b};$ (18)

then the error between $X$ and $X^*$ satisfies

$\|X - X^*\|_p^p \le C(\hat{\epsilon}_{\mathcal{A},r,b})^p + D\|X_{r^c}\|_p^p,$ (19)

where

$C = \frac{2^{p+1}vM^{1-\frac{p}{2}}}{1 - s - 2v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p}, \qquad D = \frac{2(1+s)}{1 - s - 2v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p}.$

3. Proof of Key Results

Proofs of our key results are presented in this section. To prove Theorem 2.1 and Theorem 2.2 we need the support of the following five lemmas and their proofs. We start this section with a lemma concerning the Schatten p-norm.

Lemma 3.1: Let $0 < p \le 1$ and presume that $D, E \in \mathbb{R}^{m \times n}$ are matrices with $D^T E = 0$ and $D E^T = 0$. Then

1) $\|D + E\|_p^p = \|D\|_p^p + \|E\|_p^p$; 2) $\|D + E\|_p \ge \|D\|_p + \|E\|_p$.

When $p = 1$, $\|D\|_p^p$ and $\|D\|_p$ are the trace (or nuclear) norm of the matrix $D$.

Lemma 3.2: Let $X^*$ be the optimal solution of problem (8) and set $Z = X - X^*$. For any $p \in (0, 1]$,

$\|Z_{r^c}\|_p^p \le \|Z_r\|_p^p + 2\|X_{r^c}\|_p^p.$ (20)

Proof of Lemma 3.2. Let $Z = X - X^*$, where $X^*$ is the optimal solution of problem (8) and $X$ is feasible, which yields

$\|X\|_p^p \ge \|X^*\|_p^p = \|X - Z\|_p^p.$ (21)

Applying the inverse triangle inequality to (21), we get

$\|X - Z\|_p^p = \|X_r - Z_{r^c} + X_{r^c} - Z_r\|_p^p \ge \|X_r - Z_{r^c}\|_p^p - \|X_{r^c} - Z_r\|_p^p.$ (22)

Again, by Lemma 3.1 and inequality (22), we obtain

$\|X_r - Z_{r^c}\|_p^p - \|X_{r^c} - Z_r\|_p^p = \|X_r\|_p^p + \|Z_{r^c}\|_p^p - \|X_{r^c} - Z_r\|_p^p \ge \|X_r\|_p^p + \|Z_{r^c}\|_p^p - \|X_{r^c}\|_p^p - \|Z_r\|_p^p.$ (23)

Combining (21), (22), (23) and rearranging terms, it is easy to show that

$\|Z_{r^c}\|_p^p \le \|X\|_p^p - \|X_r\|_p^p + \|X_{r^c}\|_p^p + \|Z_r\|_p^p \le \|X - X_r\|_p^p + \|X_{r^c}\|_p^p + \|Z_r\|_p^p = \|Z_r\|_p^p + 2\|X_{r^c}\|_p^p,$ (24)

which finishes the proof of the above lemma.

Lemma 3.3: For any vector $y \in \mathbb{R}^b$,

$\|y\|_p^p \le b^{1-\frac{p}{2}}\|y\|_2^p.$ (25)

Proof of Lemma 3.3. By the definition of the $\ell_p$ norm, $\|y\|_p = \left(\sum_{i=1}^b |y_i|^p\right)^{\frac{1}{p}}$, and exploiting Hölder's inequality, we obtain

$\|y\|_p^p = \sum_{i=1}^b |y_i|^p \cdot 1 \le \left(\sum_{i=1}^b (|y_i|^p)^{\frac{2}{p}}\right)^{\frac{p}{2}}\left(\sum_{i=1}^b 1\right)^{1-\frac{p}{2}} = b^{1-\frac{p}{2}}\|y\|_2^p.$

Therefore, Lemma 3.3 is proved.
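A quick numeric sanity check of (25) (an illustration, not a proof):

```python
import numpy as np

rng = np.random.default_rng(3)
b, p = 10, 0.7
y = rng.standard_normal(b)

lhs = np.sum(np.abs(y) ** p)                     # ||y||_p^p
rhs = b ** (1 - p / 2) * np.linalg.norm(y) ** p  # b^(1-p/2) ||y||_2^p
print(lhs <= rhs)                                # True
```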

Next, given that the restricted isometry constant (RIC) $\delta_r$ and the relative disturbance upper bound $\epsilon_{\mathcal{A}}^{(r)}$ of the unperturbed measurement operator $\mathcal{A}$ are already available, Lemma 3.4 gives the p-RIP condition of the perturbed measurement operator $\hat{\mathcal{A}}$ with its constant $\hat{\delta}_r$.

Lemma 3.4 (p-RIP of the perturbed measurement operator $\hat{\mathcal{A}}$): Assume that the order-$r$ RIC of $\mathcal{A}$ is $\delta_r$, that the upper bound on the relative perturbation corresponding to the operator $\Phi$ is $\epsilon_{\mathcal{A}}^{(r)}$, and fix the constant $\hat{\delta}_{r,\max} = (1 + \delta_r)(1 + \epsilon_{\mathcal{A}}^{(r)})^p - 1$. Then the RIC of $\hat{\mathcal{A}} = \mathcal{A} + \Phi$ satisfies $\hat{\delta}_r \le \hat{\delta}_{r,\max}$, where $\hat{\delta}_r$ is the smallest positive number that obeys

$(1 - \hat{\delta}_r)\|X\|_F^p \le \|\hat{\mathcal{A}}(X)\|_p^p \le (1 + \hat{\delta}_r)\|X\|_F^p$ (26)

for any matrix $X$ of rank at most $r$.

Proof of Lemma 3.4. Inspired by [20], we first define $t_r$ and $\mu_r$ as the smallest nonnegative numbers that satisfy

$(1 - t_r)\|X\|_F^p \le \|\hat{\mathcal{A}}(X)\|_p^p \le (1 + \mu_r)\|X\|_F^p$ (27)

for any matrix $X$ with rank at most $r$.

Using the triangle inequality, (9) and (11), we acquire

$\|\hat{\mathcal{A}}(X)\|_p^p \le \left(\|\mathcal{A}(X)\|_p + \|\Phi(X)\|_p\right)^p \le \left(\sqrt[p]{1+\delta_r} + \|\Phi\|_p^{(r)}\right)^p\|X\|_F^p \le \left(\sqrt[p]{1+\delta_r} + \epsilon_{\mathcal{A}}^{(r)}\|\mathcal{A}\|_p^{(r)}\right)^p\|X\|_F^p \le \left(\sqrt[p]{1+\delta_r} + \epsilon_{\mathcal{A}}^{(r)}\sqrt[p]{1+\delta_r}\right)^p\|X\|_F^p = (1+\delta_r)(1+\epsilon_{\mathcal{A}}^{(r)})^p\|X\|_F^p.$ (28)

Because of the definition of $\mu_r$, this means that

$1 + \mu_r \le (1 + \delta_r)(1 + \epsilon_{\mathcal{A}}^{(r)})^p,$ (29)

and by applying the above inequality (29), we obtain the minimal upper bound

$\mu_r = (1 + \delta_r)(1 + \epsilon_{\mathcal{A}}^{(r)})^p - 1.$ (30)

Similarly, taking advantage of the inverse triangle inequality, combined with the definition of the RIC and (11), yields

$t_r = 1 - (1 - \delta_r)(1 - \epsilon_{\mathcal{A}}^{(r)})^p.$ (31)

Observe carefully that $1 - \mu_r \le 1 - t_r$ and $1 + t_r \le 1 + \mu_r$. Based on the given $\delta_r$ and $\epsilon_{\mathcal{A}}^{(r)}$, we choose $\hat{\delta}_{r,\max} = \mu_r$, the smallest nonnegative constant that makes (27) symmetric. Clearly, the true RIC of $\hat{\mathcal{A}}$ satisfies $\hat{\delta}_r \le \hat{\delta}_{r,\max}$. The proof of Lemma 3.4 is completed.

Finally, the following Lemma 3.5 clarifies that the perturbed measurement operator $\hat{\mathcal{A}}$ also complies with the p-NSP, provided that the constants $s$ and $v$ satisfy specific conditions and the unperturbed measurement operator $\mathcal{A}$ satisfies the p-NSP.

Lemma 3.5: For the given $\epsilon_{\mathcal{A}}$, suppose that the measurement operator $\mathcal{A}$ satisfies the p-null space property with constants $0 < s < 1 - 2v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p$ and $0 < v < \frac{1}{\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p}$, i.e., condition (10) holds for any $X \in \mathbb{R}^{m \times n}$, and fix the two constants $\hat{v} = \frac{v}{1 - v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p}$ and $\hat{s} = \frac{s + v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p}{1 - v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p}$. Then $0 < \hat{s} < 1$ and $\hat{v} > 0$, and the perturbed measurement operator $\hat{\mathcal{A}}$ satisfies the p-NSP.

Proof of Lemma 3.5. Utilizing (11) and the triangle inequality, we have

$\|\mathcal{A}(X)\|_p^p = \|\hat{\mathcal{A}}(X) - \Phi(X)\|_p^p \le \|\hat{\mathcal{A}}(X)\|_p^p + \|\Phi(X)\|_p^p \le \|\hat{\mathcal{A}}(X)\|_p^p + \|\Phi\|_p^p\|X\|_p^p \le \|\hat{\mathcal{A}}(X)\|_p^p + \epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p\|X\|_p^p.$ (32)

Since $\mathcal{A}$ satisfies the p-null space property and $\|X\|_p^p = \|X_r\|_p^p + \|X_{r^c}\|_p^p$ holds, we achieve

$\|X_r\|_p^p \le s\|X_{r^c}\|_p^p + v\|\mathcal{A}(X)\|_p^p \le s\|X_{r^c}\|_p^p + v\|\hat{\mathcal{A}}(X)\|_p^p + v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p\|X\|_p^p \le s\|X_{r^c}\|_p^p + v\|\hat{\mathcal{A}}(X)\|_p^p + v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p\left(\|X_r\|_p^p + \|X_{r^c}\|_p^p\right).$ (33)

Then we rearrange inequality (33) to conclude that

$\|X_r\|_p^p \le \frac{s + v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p}{1 - v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p}\|X_{r^c}\|_p^p + \frac{v}{1 - v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p}\|\hat{\mathcal{A}}(X)\|_p^p.$

Let $\hat{v} = \frac{v}{1 - v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p}$ and $\hat{s} = \frac{s + v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p}{1 - v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p}$. Based on the fact that $v > 0$, to make $\hat{v} > 0$ we need to solve the inequality $\frac{v}{1 - v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p} > 0$, which implies $0 < v < \frac{1}{\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p}$.

In view of $\epsilon_{\mathcal{A}}$ in (11), and since $s > 0$, to make $0 < \hat{s} < 1$ we must solve

$0 < \frac{s + v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p}{1 - v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p} < 1,$

and after sorting out the above inequality, we obtain

$-v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p < s < 1 - 2v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p.$

Combining the above, when the two constants satisfy $0 < v < \frac{1}{\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p}$ and $0 < s < 1 - 2v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p$, we indeed obtain $\hat{v} > 0$ and $0 < \hat{s} < 1$; hence the perturbed measurement operator $\hat{\mathcal{A}}$ obeys the p-NSP. In summary, we have proved Lemma 3.5.

After the previous preparations, we now prove the two theorems. The upper bound on the error between the true matrix $X$ to be recovered and the optimal solution $X^*$ of problem (8) is first derived from the restricted p-isometry property, i.e., Theorem 2.1.

Proof of Theorem 2.1. Let $Z = X - X^*$, where $X$ is the true matrix that we expect to recover and $X^*$ is the optimal solution of problem (8). We apply the block decomposition of the SVD of the matrix $Z$ given by

$U^T Z V = \begin{pmatrix} Z_{11} & Z_{12} \\ Z_{21} & Z_{22} \end{pmatrix},$

where $Z_{ij} \in \mathbb{R}^{m_i \times n_j}$ with $m_1 = n_1 = r$ and $m_2 = n_2 = m - r$, and $Z = Z_r + Z_{r^c}$ with

$Z_r = U\begin{pmatrix} Z_{11} & Z_{12} \\ Z_{21} & 0 \end{pmatrix}V^T, \qquad Z_{r^c} = U\begin{pmatrix} 0 & 0 \\ 0 & Z_{22} \end{pmatrix}V^T;$

it is clear that $\operatorname{rank}(Z_r) \le \operatorname{rank}[(Z_{11}\ Z_{12})] + \operatorname{rank}[(Z_{21}\ 0)] \le 2r$, and that $X_r(Z_{r^c})^T = 0$ and $(X_r)^T Z_{r^c} = 0$.

Let the SVD of $Z_{22} \in \mathbb{R}^{(m-r) \times (m-r)}$ be described by

$Z_{22} = H \operatorname{diag}(\sigma(Z_{22})) G^T,$

where the matrices $H, G \in \mathbb{R}^{(m-r) \times (m-r)}$ are orthogonal, $\sigma(Z_{22}) = (\sigma_1(Z_{22}), \sigma_2(Z_{22}), \ldots, \sigma_{m-r}(Z_{22}))^T$ denotes the vector consisting of the singular values of $Z_{22}$, and $\sigma_1(Z_{22}) \ge \sigma_2(Z_{22}) \ge \cdots \ge \sigma_{m-r}(Z_{22}) \ge 0$. We divide $\sigma(Z_{22})$ into the sum of the vectors $\sigma_{T_i}(Z_{22})$ ($i = 1, 2, \ldots$), where $T_i = \{k(i-1)+1, \ldots, ki\}$ and $T_1 \cup T_2 \cup \cdots \cup T_J = \{1, 2, \ldots, m-r\}$, so that each $T_i$ has cardinality $k = 2tr$ (except possibly $T_J$). Then $Z_{T_1}$ is the portion of $Z_{r^c}$ that corresponds to the $k$ largest singular values, $Z_{T_2}$ is the part that corresponds to the next $k$ largest singular values, and so forth. Obviously, for all $i \ne j$, $Z_{T_i}^T Z_{T_j} = 0$ and $Z_{T_i} Z_{T_j}^T = 0$, and $\operatorname{rank}(Z_{T_i}) \le k$.
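A small sketch of this bookkeeping (our illustration): since the singular values are already sorted, the blocks $T_1, T_2, \ldots$ are simply consecutive slices of size $k$:

```python
import numpy as np

sigma = np.array([9.0, 7.0, 5.0, 3.0, 2.0, 1.0, 0.5])      # nonincreasing
k = 3
blocks = [sigma[i:i + k] for i in range(0, len(sigma), k)]  # T_1, T_2, T_3
for j, T in enumerate(blocks, start=1):
    print(f"T_{j}:", T)
# Every entry of block T_j is at most the average of block T_{j-1},
# which is exactly what inequality (34) below exploits.
```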

The following results can be derived from [17]:

$\|Z_{T_j}\|_F^p \le (2tr)^{\frac{p}{2}-1}\|Z_{T_{j-1}}\|_p^p,$ (34)

$\|Z_r\|_p^p \le (2r)^{1-\frac{p}{2}}\|Z_r\|_F^p \le (2r)^{1-\frac{p}{2}}\|Z_r + Z_{T_1}\|_F^p.$ (35)

According to (34), we get

$\sum_{j \ge 2}\|Z_{T_j}\|_F^p \le k^{\frac{p}{2}-1}\sum_{j \ge 2}\|Z_{T_{j-1}}\|_p^p = k^{\frac{p}{2}-1}\|Z_{r^c}\|_p^p.$ (36)

Applying Lemma 3.2 to (36), we have

$\sum_{j \ge 2}\|Z_{T_j}\|_F^p \le k^{\frac{p}{2}-1}\left(\|Z_r\|_p^p + 2\|X_{r^c}\|_p^p\right) \le k^{\frac{p}{2}-1}(2r)^{1-\frac{p}{2}}\|Z_r + Z_{T_1}\|_F^p + 2k^{\frac{p}{2}-1}\|X_{r^c}\|_p^p = t^{\frac{p}{2}-1}\|Z_r + Z_{T_1}\|_F^p + 2(2tr)^{\frac{p}{2}-1}\|X_{r^c}\|_p^p.$ (37)

Since $\|\hat{\mathcal{A}}(X) - \hat{b}\|_2 \le \hat{\epsilon}_{\mathcal{A},r,b}$ and $X^*$ is also feasible, the triangle inequality gives

$\|\hat{\mathcal{A}}(Z)\|_2 \le \|\hat{\mathcal{A}}(X) - \hat{b}\|_2 + \|\hat{\mathcal{A}}(X^*) - \hat{b}\|_2 \le 2\hat{\epsilon}_{\mathcal{A},r,b}.$

Therefore, by combining the above inequality with Lemma 3.3, we have

$\|\hat{\mathcal{A}}(Z)\|_p^p \le M^{1-\frac{p}{2}}\|\hat{\mathcal{A}}(Z)\|_2^p \le M^{1-\frac{p}{2}}(2\hat{\epsilon}_{\mathcal{A},r,b})^p.$ (38)

Moreover, by Lemma 3.4 and (37), we get

$\|\hat{\mathcal{A}}(Z)\|_p^p = \|\hat{\mathcal{A}}(Z_r + Z_{T_1}) + \hat{\mathcal{A}}(Z_{r^c} - Z_{T_1})\|_p^p \ge \|\hat{\mathcal{A}}(Z_r + Z_{T_1})\|_p^p - \sum_{j \ge 2}\|\hat{\mathcal{A}}(Z_{T_j})\|_p^p \ge (1 - \hat{\delta}_{2r(t+1)})\|Z_r + Z_{T_1}\|_F^p - (1 + \hat{\delta}_{2tr})\sum_{j \ge 2}\|Z_{T_j}\|_F^p \ge \left[(1 - \hat{\delta}_{2r(t+1)}) - (1 + \hat{\delta}_{2tr})t^{\frac{p}{2}-1}\right]\|Z_r + Z_{T_1}\|_F^p - 2(1 + \hat{\delta}_{2tr})(2tr)^{\frac{p}{2}-1}\|X_{r^c}\|_p^p \ge \left[2 - (1 + \delta_{2r(t+1)})(1 + \epsilon_{\mathcal{A}}^{(2r(t+1))})^p - (1 + \delta_{2tr})(1 + \epsilon_{\mathcal{A}}^{(2tr)})^p t^{\frac{p}{2}-1}\right]\|Z_r + Z_{T_1}\|_F^p - 2(1 + \delta_{2tr})(1 + \epsilon_{\mathcal{A}}^{(2tr)})^p(2tr)^{\frac{p}{2}-1}\|X_{r^c}\|_p^p,$ (39)

where the last step substitutes the bounds $1 - \hat{\delta}_{2r(t+1)} \ge 2 - (1 + \delta_{2r(t+1)})(1 + \epsilon_{\mathcal{A}}^{(2r(t+1))})^p$ and $1 + \hat{\delta}_{2tr} \le (1 + \delta_{2tr})(1 + \epsilon_{\mathcal{A}}^{(2tr)})^p$ from Lemma 3.4;

in that case

$\|Z_r + Z_{T_1}\|_F^p \le \frac{M^{1-\frac{p}{2}}(2\hat{\epsilon}_{\mathcal{A},r,b})^p + 2(2tr)^{\frac{p}{2}-1}(1 + \delta_{2tr})(1 + \epsilon_{\mathcal{A}}^{(2tr)})^p\|X_{r^c}\|_p^p}{2 - (1 + \delta_{2r(t+1)})(1 + \epsilon_{\mathcal{A}}^{(2r(t+1))})^p - (1 + \delta_{2tr})(1 + \epsilon_{\mathcal{A}}^{(2tr)})^p t^{\frac{p}{2}-1}}.$ (40)

From (35), (40) and Lemma 3.2, we have that

$\|Z\|_p^p = \|Z_r + Z_{r^c}\|_p^p \le \|Z_r\|_p^p + \|Z_{r^c}\|_p^p \le 2\|Z_r\|_p^p + 2\|X_{r^c}\|_p^p \le 2(2r)^{1-\frac{p}{2}}\|Z_r + Z_{T_1}\|_F^p + 2\|X_{r^c}\|_p^p \le \frac{2^{p+1}(2rM)^{1-\frac{p}{2}}}{2 - (1 + \delta_{2r(t+1)})(1 + \epsilon_{\mathcal{A}}^{(2r(t+1))})^p - (1 + \delta_{2tr})(1 + \epsilon_{\mathcal{A}}^{(2tr)})^p t^{\frac{p}{2}-1}}(\hat{\epsilon}_{\mathcal{A},r,b})^p + \left(2 + \frac{4(1 + \delta_{2tr})t^{\frac{p}{2}-1}(1 + \epsilon_{\mathcal{A}}^{(2tr)})^p}{2 - (1 + \delta_{2r(t+1)})(1 + \epsilon_{\mathcal{A}}^{(2r(t+1))})^p - (1 + \delta_{2tr})(1 + \epsilon_{\mathcal{A}}^{(2tr)})^p t^{\frac{p}{2}-1}}\right)\|X_{r^c}\|_p^p,$ (41)

which finishes the proof of Theorem 2.1.

When the constants satisfy $0 < p \le 1$ and $t > 1$, setting $k = 2tr$, a new RIP condition that can robustly and accurately recover low-rank matrices is obtained, i.e., (13) in Theorem 2.1, and it is slightly weaker than the sufficient condition $\delta_k + \theta\delta_{2r+k} < \theta - 1$ of Zhang et al. [16].

Theorem 2.2 is based on the null space property of the matrix operator, and provides an error-bound estimate between the actual rank-$r$ matrix $X$ to be found and the optimal solution $X^*$ of problem (8). Its proof proceeds as follows.

Proof. According to Lemma 3.5, when $\hat{\mathcal{A}}$ meets the p-null space property (p-NSP), we have

$\|X_r\|_p^p \le \hat{s}\|X_{r^c}\|_p^p + \hat{v}\|\hat{\mathcal{A}}(X)\|_p^p.$ (42)

Let $Z = X - X^*$. Applying Lemma 3.2 and (42) to $Z$, we obtain

$\|(X - X^*)_{r^c}\|_p^p \le \|(X - X^*)_r\|_p^p + 2\|X_{r^c}\|_p^p \le \hat{s}\|(X - X^*)_{r^c}\|_p^p + \hat{v}\|\hat{\mathcal{A}}(X - X^*)\|_p^p + 2\|X_{r^c}\|_p^p,$ (43)

which, after simplifying, further gives

$\|(X - X^*)_{r^c}\|_p^p \le \frac{1}{1 - \hat{s}}\left(\hat{v}\|\hat{\mathcal{A}}(X - X^*)\|_p^p + 2\|X_{r^c}\|_p^p\right).$ (44)

Finally, we conclude from (38), (43) and (44) that

$\|X - X^*\|_p^p \le \|(X - X^*)_r\|_p^p + \|(X - X^*)_{r^c}\|_p^p \le \hat{s}\|(X - X^*)_{r^c}\|_p^p + \hat{v}\|\hat{\mathcal{A}}(X - X^*)\|_p^p + \|(X - X^*)_{r^c}\|_p^p = (1 + \hat{s})\|(X - X^*)_{r^c}\|_p^p + \hat{v}\|\hat{\mathcal{A}}(X - X^*)\|_p^p \le \left(\frac{1 + \hat{s}}{1 - \hat{s}} + 1\right)\hat{v}\|\hat{\mathcal{A}}(X - X^*)\|_p^p + \frac{2(1 + \hat{s})}{1 - \hat{s}}\|X_{r^c}\|_p^p \le \frac{2\hat{v}}{1 - \hat{s}}M^{1-\frac{p}{2}}(2\hat{\epsilon}_{\mathcal{A},r,b})^p + \frac{2(1 + \hat{s})}{1 - \hat{s}}\|X_{r^c}\|_p^p = \frac{2^{p+1}vM^{1-\frac{p}{2}}}{1 - s - 2v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p}(\hat{\epsilon}_{\mathcal{A},r,b})^p + \frac{2(1 + s)}{1 - s - 2v\epsilon_{\mathcal{A}}^p\|\mathcal{A}\|_p^p}\|X_{r^c}\|_p^p,$

which holds for $0 < p \le 1$. This completes the proof of Theorem 2.2.

In brief, the above completes all proofs of the two theorems.

4. Conclusion

We investigate the fully perturbed problem of reconstructing a low-rank matrix through nonconvex Schatten p-norm minimization, and give sufficient recovery conditions along with corresponding upper-bound error estimates. These results show that nonconvex Schatten p-minimization provides a stable and accurate guarantee for reconstructing low-rank matrices in the presence of overall noise. The obtained results have two implications: first, they guide the selection of measurement operators for low-rank matrix reconstruction, i.e., operators that satisfy weaker RIC sufficient conditions can still promote the recovery capacity; second, they provide theoretical support for the error bounds by taking advantage of two properties, namely p-RIP and p-NSP.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Recht, B., Fazel, M. and Parrilo, P.A. (2010) Guaranteed Minimum Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization. SIAM Review, 52, 471-501.
https://doi.org/10.1137/070697835
[2] Chang, X., Zhong, Y., Wang, Y. and Lin, S. (2019) Unified Low-Rank Matrix Estimate via Penalized Matrix Least Squares Approximation. IEEE Transactions on Neural Networks and Learning Systems, 30, 474-485.
https://doi.org/10.1109/TNNLS.2018.2844242
[3] Batselier, K. (2022) Low-Rank Tensor Decompositions for Nonlinear System Identification: A Tutorial with Examples. IEEE Control Systems Magazine, 42, 54-74.
https://doi.org/10.1109/MCS.2021.3122268
[4] Dass, J., Wu, S., Shi, H., et al. (2023) Vitality: Unifying Low-Rank and Sparse Approximation for Vision Transformer Acceleration with a Linear Taylor Attention. 2023 IEEE International Symposium on High-Performance Computer Architecture (HPCA). Montreal, 25 February-01 March 2023, 415-428.
https://doi.org/10.1109/HPCA56546.2023.10071081
[5] Wang, Z., Qian, C., Guo, D., et al. (2022) One-Dimensional Deep Low-Rank and Sparse Network for Accelerated MRI. IEEE Transactions on Medical Imaging, 42, 79-90.
https://doi.org/10.1109/TMI.2022.3203312
[6] Wang, Y., Lin, L., Zhao, Q., et al. (2017) Compressive Sensing of Hyperspectral Images via Joint Tensor Tucker Decomposition and Weighted Total Variation Regularization. IEEE Geoscience and Remote Sensing Letters, 14, 2457-2461.
https://doi.org/10.1109/LGRS.2017.2771212
[7] Gundupalli, S.P., Hait, S. and Thakur, A. (2017) A Review on Automated Sorting of Source-Separated Municipal Solid Waste for Recycling. Waste Management, 60, 56-74.
https://doi.org/10.1016/j.wasman.2016.09.015
[8] Ye, G., Pan, C., Dong, Y., et al. (2021) A Novel Multi-Image Visually Meaningful Encryption Algorithm Based on Compressive Sensing and Schur Decomposition. Transactions on Emerging Telecommunications Technologies, 32, e4071.
https://doi.org/10.1002/ett.4071
[9] Chartrand, R. (2007) Exact Reconstruction of Sparse Signals via Nonconvex Minimization. IEEE Signal Processing Letters, 14, 707-710.
https://doi.org/10.1109/LSP.2007.898300
[10] Lai, M.J., Xu, Y. and Yin, W. (2013) Improved Iteratively Reweighted Least Squares for Unconstrained Smoothed Minimization. SIAM Journal on Numerical Analysis, 51, 927-957.
https://doi.org/10.1137/110840364
[11] Wang, Y., Wang, J. and Xu, Z. (2014) Restricted P-Isometry Properties of Nonconvex Block-Sparse Compressed Sensing. Signal Processing, 104, 188-196.
https://doi.org/10.1016/j.sigpro.2014.03.040
[12] Wang, J., Zhang, J., Wang, W., et al. (2015) A Perturbation Analysis of Nonconvex Block-Sparse Compressed Sensing. Communications in Nonlinear Science and Numerical Simulation, 29, 416-426.
https://doi.org/10.1016/j.cnsns.2015.05.022
[13] Chartrand, R. (2007) Nonconvex Compressed Sensing and Error Correction. 2007 IEEE International Conference on Acoustics, Speech and Signal Processing-ICASSP’07. Honolulu, 15-20 April 2007, III-889-III-892.
https://doi.org/10.1109/ICASSP.2007.366823
[14] Candes, E.J. and Tao, T. (2005) Decoding by Linear Programming. IEEE Transactions on Information Theory, 51, 4203-4215.
https://doi.org/10.1109/TIT.2005.858979
[15] Kong, L.C. and Xiu, N.H. (2011) Exact Low-Rank Matrix Recovery via Nonconvex Mp-Minimization. Optimization.
[16] Zhang, M., Huang, Z.H. and Zhang, Y. (2013) Restricted P-Isometry Properties of Nonconvex Matrix Recovery. IEEE Transactions on Information Theory, 59, 4316-4323.
https://doi.org/10.1109/TIT.2013.2250577
[17] Kong, L. and Xiu, N. (2013) Exact Low-Rank Matrix Recovery via Nonconvex Schatten P-Minimization. Asia-Pacific Journal of Operational Research, 30, Article 1340010.
https://doi.org/10.1142/S0217595913400101
[18] Gao, Y., Peng, J., Yue, S., et al. (2015) On the Null Space Property of lq-Minimization for 0<q≤1 in Compressed Sensing. Journal of Function Spaces, 2015, Article ID 579853.
https://doi.org/10.1155/2015/579853
[19] Herman, M.A. and Strohmer, T. (2010) General Deviants: An Analysis of Perturbations in Compressed Sensing. IEEE Journal of Selected Topics in Signal Processing, 4, 342-349.
https://doi.org/10.1109/JSTSP.2009.2039170
[20] Huang, J., Wang, J., Zhang, F., et al. (2021) Perturbation Analysis of Low-Rank Matrix Stable Recovery. International Journal of Wavelets, Multiresolution and Information Processing, 19, Article 2050091.
https://doi.org/10.1142/S0219691320500915
