L1/2-Regularized Quantile Method for Sparse Phase Retrieval

Abstract

Sparse phase retrieval aims to recover a sparse signal from quadratic measurements. In practice, however, the measurements are often corrupted by outliers and asymmetrically distributed noise. This paper introduces a novel method that combines quantile regression with the L1/2-regularizer, leading to a non-convex, non-smooth, non-Lipschitz optimization problem. We propose an efficient algorithm based on the Alternating Direction Method of Multipliers (ADMM) to solve it. Extensive numerical experiments show that this method can recover sparse signals from fewer measurements and is robust to dense bounded noise and Laplace noise.

Share and Cite:

Shen, S., Xiang, J., Lv, H. and Yan, A. (2022) L1/2-Regularized Quantile Method for Sparse Phase Retrieval. Open Journal of Applied Sciences, 12, 2135-2151. doi: 10.4236/ojapps.2022.1212147.

1. Introduction

Phase retrieval is the problem of recovering an unknown signal $x \in \mathbb{R}^p$ from the following model:

$$y = |Ax|^2 + \epsilon, \qquad (1)$$

where $A = [a_1, a_2, \ldots, a_n]^{\mathrm T} \in \mathbb{R}^{n \times p}$ is the known measurement matrix, $n$ is the number of measurements, $y = [y_1, y_2, \ldots, y_n] \in \mathbb{R}^n$ is the vector of squared-magnitude measurements, $\epsilon = [\epsilon_1, \epsilon_2, \ldots, \epsilon_n] \in \mathbb{R}^n$ is noise or outliers [1], and $|\cdot|^2$ denotes the element-wise absolute-squared value. In particular, when both $A$ and $x$ belong to the real field, the problem is called real-valued phase retrieval [2] [3]. The phase retrieval problem has many important applications, including X-ray crystallography [4], optics [5], astronomy [6], and blind ptychography [7]. Methods for solving PR problems fall roughly into two categories. The first is based on alternating projection, as proposed by Gerchberg and Saxton; examples include Hybrid Input-Output (HIO), Hybrid Projective Reflection, and, in recent years, more advanced alternating projection algorithms. The second consists of convex methods, based on semidefinite programming (SDP) or convex relaxation of the quadratic equations in (1). The most representative is PhaseLift [8], proposed by Candès, Strohmer, and Voroninski, which minimizes the trace (nuclear) norm using an SDP lifting technique. In many applications, especially those related to imaging, the signal $x \in \mathbb{R}^p$ admits a sparse representation under some known, deterministic linear transformation. Without loss of generality, we assume in the rest of this paper that the signal $x$ itself is sparse. In this case, model (1) is called the sparse phase retrieval model.

To solve the sparse problem, $L_p$ ($0 < p < 1$) norm regularization is a common approach in the field of compressed sensing. In a further phase-diagram study, the results of [9] are as follows: 1) as the value of $p$ decreases, the solution obtained by $L_p$ regularization becomes sparser; 2) when $1/2 \le p < 1$, $L_{1/2}$ regularization always yields the sparsest solution, and when $0 < p < 1/2$, there is no significant difference in $L_p$ regularization performance. Therefore, $L_{1/2}$ regularization can be used as a representative of $L_p$ ($0 < p < 1$) regularization.

For asymmetric noise or outliers, some stable results have been obtained. For example, [10] developed a method for minimizing the least squares empirical loss and designed a two-stage algorithm, which starts with a weighted maximal correlation initialization followed by reweighted gradient iterations. However, most of the above methods are built upon the least squares (LS) criterion, which is optimal for Gaussian noise but may be suboptimal when the noise is non-Gaussian or asymmetric. To enhance robustness against asymmetric noise or outliers, the quantile regression (QR) method was introduced, which includes the LAD method as a special case. Compared to the LAD method, the QR method minimizes asymmetrically weighted absolute residuals, which permits a more accurate portrayal of the relationship between the observed covariates and the response variables. It is therefore more appropriate in certain non-Gaussian settings to use the QR method. For these reasons, QR has attracted tremendous interest in the literature. Recently, the penalized QR method has also gained considerable attention in high-dimensional linear models; see, for example, [11] [12] [13] [14] [15].

On the other hand, recall that the L1/2-regularization introduced by [9] and [16] has been widely used in many fields, since it can generate sparser solutions from fewer measurements than L1-regularization. Inspired by these works, we propose a novel method that combines QR with L1/2-regularization, which we call L1/2-regularized quantile regression phase retrieval (L1/2 QR PR). Because the L1/2-regularized problem is non-convex, non-smooth, and non-Lipschitz, it is difficult to solve directly. We therefore design an ADMM algorithm to solve the corresponding optimization problem. Fortunately, all subproblems have closed-form solutions and convergence is guaranteed. Numerical experiments show that the proposed method can recover sparse signals from fewer measurements and is robust to asymmetrically distributed noise such as dense bounded noise and Laplace noise.

The rest of this article is organized as follows. In Section 2, we design a novel algorithm based on ADMM to solve our method L1/2 QR PR and discuss the convergence of the proposed algorithm. In Section 3, we present a large number of numerical experiments demonstrating the effectiveness and robustness of our algorithm. Conclusions and future work are presented in Section 4.

2. Optimization Algorithm

2.1. The Problem Formulation

The optimization problem that we consider is the following:

$$\min_{x \in \mathbb{R}^p} \ \frac{1}{n}\sum_{i=1}^{n} \rho_\tau\big(\langle a_i, x\rangle^2 - y_i\big) + \lambda \|x\|_{1/2}^{1/2}, \qquad (2)$$

where

$$\rho_\tau(t) = \begin{cases} \tau t, & t \ge 0, \\ (\tau - 1)\,t, & \text{otherwise}, \end{cases}$$

$x$, $y_i$, $a_i$, $n$ are as described in (1), $\tau \in (0,1)$, $\langle \cdot, \cdot \rangle$ stands for the inner product, $\lambda > 0$ is the regularization parameter, and $\|x\|_{1/2}^{1/2} = \sum_{i=1}^{p} |x_i|^{1/2}$.
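As a concrete reference, the check loss and objective (2) can be sketched in a few lines of Python (the helper names `check_loss` and `objective` are ours, not from the paper):

```python
import numpy as np

def check_loss(t, tau):
    """Quantile check loss: rho_tau(t) = tau*t for t >= 0, (tau-1)*t otherwise."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0, tau * t, (tau - 1) * t)

def objective(x, A, y, tau, lam):
    """Objective (2): average check loss on squared residuals plus L1/2 penalty."""
    residual = (A @ x) ** 2 - y            # elementwise <a_i, x>^2 - y_i
    return check_loss(residual, tau).mean() + lam * np.sum(np.abs(x) ** 0.5)
```

With tau = 0.5 the check loss reduces to half the absolute value, so (2) contains the L1/2-regularized LAD model of [18] as a special case.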

Noting that $\lim_{\|x\|_2 \to \infty} \|x\|_{1/2}^{1/2} = \infty$ and $\lambda > 0$, we obtain that

$$\lim_{\|x\|_2 \to \infty} \ \frac{1}{n}\sum_{i=1}^{n} \rho_\tau\big((a_i^{\mathrm T} x)^2 - y_i\big) + \lambda \|x\|_{1/2}^{1/2} = \infty,$$

which together with the continuity of the objective function yields that Problem (2) has a bounded solution. To design an appropriate iterative algorithm, we give the following two lemmas.

Lemma 1. (see [16] [17]) The global solution $t^*$ of the following problem has an analytic expression:

$$t^* = \arg\min_{t} \ (t - \alpha)^2 + \mu |t|^{1/2},$$

namely

$$t^* = \begin{cases} f_{\mu,1/2}(\alpha), & |\alpha| > \dfrac{\sqrt[3]{54}}{4}\,\mu^{2/3}, \\[4pt] 0, & \text{otherwise}, \end{cases}$$

where

$$f_{\mu,1/2}(\alpha) = \frac{2}{3}\,\alpha\left(1 + \cos\!\Big(\frac{2\pi}{3} - \frac{2}{3}\,\varphi_\mu(\alpha)\Big)\right)$$

with

$$\varphi_\mu(\alpha) = \arccos\!\left(\frac{\mu}{8}\Big(\frac{|\alpha|}{3}\Big)^{-3/2}\right).$$
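The half-thresholding operator of Lemma 1 translates directly into code; below is a sketch (the helper name `half_threshold` is ours), vectorized over the entries of $\alpha$:

```python
import numpy as np

def half_threshold(alpha, mu):
    """Elementwise global minimizer of (t - alpha)^2 + mu * |t|^{1/2} (Lemma 1)."""
    alpha = np.asarray(alpha, dtype=float)
    out = np.zeros_like(alpha)
    big = np.abs(alpha) > (54.0 ** (1.0 / 3.0) / 4.0) * mu ** (2.0 / 3.0)
    a = alpha[big]
    phi = np.arccos((mu / 8.0) * (np.abs(a) / 3.0) ** (-1.5))
    out[big] = (2.0 / 3.0) * a * (1.0 + np.cos(2.0 * np.pi / 3.0 - (2.0 / 3.0) * phi))
    return out
```

Unlike soft thresholding, entries below the threshold $(\sqrt[3]{54}/4)\mu^{2/3}$ are set exactly to zero, while surviving entries incur only a mild shrinkage.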

Lemma 2. The global solution $h^*$ of the following problem has an analytic expression:

$$h^* = \arg\min_{h} \ \frac{\mu}{2}(h - \alpha_1)^2 + \rho_\tau(h^2 - \alpha_2),$$

where $\alpha_1, \alpha_2 \in \mathbb{R}$ and $\mu > 0$ is a constant. It can be derived that

$$h^* = l_\mu(\alpha_1, \alpha_2),$$

where

$$l_\mu(\alpha_1,\alpha_2) = \begin{cases} \dfrac{\alpha_1}{1 + 2\tau/\mu}, & \text{if } \alpha_2 \le 0,\ \text{or } \alpha_2 > 0 \text{ and } |\alpha_1| > \sqrt{\alpha_2}\,(1 + 2\tau/\mu), \\[4pt] \sqrt{\alpha_2}, & \text{if } \alpha_2 > 0 \text{ and } 0 \le \alpha_1 \le \sqrt{\alpha_2}\,(1 + 2\tau/\mu), \\[4pt] -\sqrt{\alpha_2}, & \text{if } \alpha_2 > 0 \text{ and } -\sqrt{\alpha_2}\,(1 + 2\tau/\mu) \le \alpha_1 \le 0, \end{cases}$$

when $0 < \mu \le 2(1-\tau)$, and

$$l_\mu(\alpha_1,\alpha_2) = \begin{cases} \dfrac{\alpha_1}{1 + 2\tau/\mu}, & \text{if } \alpha_2 \le 0,\ \text{or } \alpha_2 > 0 \text{ and } |\alpha_1| > \sqrt{\alpha_2}\,(1 + 2\tau/\mu), \\[4pt] \dfrac{\alpha_1}{1 - 2(1-\tau)/\mu}, & \text{if } \alpha_2 > 0 \text{ and } |\alpha_1| < \sqrt{\alpha_2}\,\big(1 - 2(1-\tau)/\mu\big), \\[4pt] \sqrt{\alpha_2}, & \text{if } \alpha_2 > 0 \text{ and } \sqrt{\alpha_2}\,\big(1 - 2(1-\tau)/\mu\big) \le \alpha_1 \le \sqrt{\alpha_2}\,(1 + 2\tau/\mu), \\[4pt] -\sqrt{\alpha_2}, & \text{if } \alpha_2 > 0 \text{ and } -\sqrt{\alpha_2}\,(1 + 2\tau/\mu) \le \alpha_1 \le -\sqrt{\alpha_2}\,\big(1 - 2(1-\tau)/\mu\big), \end{cases} \qquad (3)$$

when $\mu > 2(1-\tau)$.

This lemma can be proved in a way similar to [18], so we omit the details.
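Rather than transcribing the piecewise formula (3), one can obtain the same minimizer by evaluating the handful of candidate points that (3) distinguishes, namely the stationary points of the two smooth branches and the kink points $h = \pm\sqrt{\alpha_2}$, and keeping whichever attains the smallest objective value. A sketch (the helper name `quantile_prox` is ours):

```python
import numpy as np

def quantile_prox(alpha1, alpha2, tau, mu):
    """Global minimizer of (mu/2)(h - alpha1)^2 + rho_tau(h^2 - alpha2) (Lemma 2),
    found by enumerating the candidate minimizers rather than coding formula (3)."""
    def f(h):
        t = h * h - alpha2
        return 0.5 * mu * (h - alpha1) ** 2 + (tau * t if t >= 0 else (tau - 1) * t)

    # Branch h^2 > alpha2: (mu/2)(h-a1)^2 + tau*(h^2-a2)  ->  h = a1/(1 + 2*tau/mu)
    cands = [alpha1 / (1.0 + 2.0 * tau / mu)]
    # Branch h^2 < alpha2 has an interior minimum only when mu > 2*(1-tau)
    if mu > 2.0 * (1.0 - tau):
        cands.append(alpha1 / (1.0 - 2.0 * (1.0 - tau) / mu))
    # Kink points where the residual h^2 - alpha2 changes sign
    if alpha2 > 0:
        cands += [np.sqrt(alpha2), -np.sqrt(alpha2)]
    return min(cands, key=f)
```

Because every candidate is evaluated with the true objective, including an off-branch candidate is harmless, and the list is exhaustive: the global minimizer is either an in-branch stationary point or a kink.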

2.2. Solving the Objective with ADMM

We now employ the alternating direction method of multipliers (ADMM) algorithm to solve Problem (2). By introducing a pair of new variables, we reformulate Problem (2) as

$$\min_{x,q,z} \ \frac{1}{n}\sum_{i=1}^{n} \rho_\tau(z_i^2 - y_i) + \lambda \|q\|_{1/2}^{1/2} \quad \text{s.t.} \quad \begin{pmatrix} A \\ I \end{pmatrix} x - \begin{pmatrix} z \\ q \end{pmatrix} = 0, \qquad (4)$$

where z n and q p . Then, the augmented Lagrangian function of the above problem is

$$L_r(x,q,z;\Lambda_1,\Lambda_2) = \frac{1}{n}\sum_{i=1}^{n} \rho_\tau(z_i^2 - y_i) + \lambda \|q\|_{1/2}^{1/2} + \langle \Lambda_1, x - q\rangle + \frac{r}{2}\|x - q\|_2^2 + \langle \Lambda_2, Ax - z\rangle + \frac{r}{2}\|Ax - z\|_2^2, \qquad (5)$$

where Λ 1 , Λ 2 are the Lagrange multipliers, r is a positive constant.

Based on the framework of ADMM, the $(j+1)$-th iteration can be calculated as

$$\begin{aligned} x^{j+1} &= \arg\min_{x} L_r(x, q^j, z^j; \Lambda_1^j, \Lambda_2^j), \\ q^{j+1} &= \arg\min_{q} L_r(x^{j+1}, q, z^j; \Lambda_1^j, \Lambda_2^j), \\ z^{j+1} &= \arg\min_{z} L_r(x^{j+1}, q^{j+1}, z; \Lambda_1^j, \Lambda_2^j), \\ \Lambda_1^{j+1} &= \Lambda_1^j + r(x^{j+1} - q^{j+1}), \\ \Lambda_2^{j+1} &= \Lambda_2^j + r(Ax^{j+1} - z^{j+1}). \end{aligned} \qquad (6)$$

In the following, we discuss the solution to each sub-minimization problem with respect to (w.r.t.) x , q , z .

1) Sub-minimization problem with respect to $x$: This problem can be written as

$$\min_{x \in \mathbb{R}^p} \ \langle \Lambda_1^j, x - q^j\rangle + \frac{r}{2}\|x - q^j\|_2^2 + \langle \Lambda_2^j, Ax - z^j\rangle + \frac{r}{2}\|Ax - z^j\|_2^2. \qquad (7)$$

Notice that, up to constants independent of $x$,

$$\langle \Lambda_1^j, x - q^j\rangle + \frac{r}{2}\|x - q^j\|_2^2 + \langle \Lambda_2^j, Ax - z^j\rangle + \frac{r}{2}\|Ax - z^j\|_2^2 = \langle \Lambda_2^j, Ax\rangle + \langle \Lambda_1^j, x\rangle + \frac{r}{2}\|Ax - z^j\|_2^2 + \frac{r}{2}\|x - q^j\|_2^2 + \text{const}. \qquad (8)$$

By the first-order optimality condition, the optimal solution $x^{j+1}$ is obtained from the linear system

$$(rI + rA^{\mathrm T}A)\,x^{j+1} = r q^j - A^{\mathrm T}\Lambda_2^j + r A^{\mathrm T} z^j - \Lambda_1^j,$$

where $I$ stands for the identity matrix.
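Since $A$ does not change across iterations, the coefficient matrix $rI + rA^{\mathrm T}A$ can be prepared once before the ADMM loop, so each x-update costs only a matrix-vector product. A minimal numpy sketch (sizes and variable names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, r = 60, 16, 1e-2
A = rng.standard_normal((n, p))

# (r*I + r*A^T A) x = r*q - A^T Lam2 + r*A^T z - Lam1.  Dividing by r gives
# (I + A^T A) x = q - Lam1/r + A^T (z - Lam2/r); we pre-invert the left side
# once (in production code a Cholesky factorization would be preferable).
M_inv = np.linalg.inv(np.eye(p) + A.T @ A)

def x_update(q, z, Lam1, Lam2):
    return M_inv @ (q - Lam1 / r + A.T @ (z - Lam2 / r))
```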

2) Sub-minimization problem with respect to $q$: This problem can be written as

$$\min_{q \in \mathbb{R}^p} \ \lambda \|q\|_{1/2}^{1/2} + \langle \Lambda_1^j, x^{j+1} - q\rangle + \frac{r}{2}\|q - x^{j+1}\|_2^2. \qquad (9)$$

By a simple calculation, one can see that the above problem is equivalent to the following minimization:

$$\min_{q \in \mathbb{R}^p} \ \lambda \|q\|_{1/2}^{1/2} + \frac{r}{2}\Big\|q - x^{j+1} - \frac{\Lambda_1^j}{r}\Big\|_2^2. \qquad (10)$$

Per coordinate, this has the form of Lemma 1 with $\mu = 2\lambda/r$. According to Lemma 1, we get

$$q_d^{j+1} = \begin{cases} f_{2\lambda/r,\,1/2}(u_d^j), & |u_d^j| > \dfrac{\sqrt[3]{54}}{4}\Big(\dfrac{2\lambda}{r}\Big)^{2/3}, \\[4pt] 0, & \text{otherwise}, \end{cases}$$

where $u_d^j = (x^{j+1} + \Lambda_1^j/r)_d$ is the $d$-th element of the vector $x^{j+1} + \Lambda_1^j/r$, and $d = 1,2,\ldots,p$.

3) Sub-minimization problem with respect to $z$: This problem can be written as

$$\min_{z \in \mathbb{R}^n} \ \frac{1}{n}\sum_{i=1}^{n} \rho_\tau(z_i^2 - y_i) + \frac{r}{2}\|z - W^j\|_2^2, \qquad (11)$$

where $W^j = A x^{j+1} + \Lambda_2^j/r$. The problem separates across coordinates: for each $i = 1,2,\ldots,n$,

$$\min_{z_i} \ \frac{1}{n}\rho_\tau(z_i^2 - y_i) + \frac{r}{2}(z_i - W_i^j)^2. \qquad (12)$$

Since $\rho_\tau$ is positively homogeneous, the solution of (12) can be written as $l_{nr}\big(1, y_i/(W_i^j)^2\big)\,W_i^j$ based upon Lemma 2. For ease of presentation, we introduce the notation $L_{nr}(1, y/|W|^2) \in \mathbb{R}^n$, whose $i$-th element is defined as $l_{nr}\big(1, y_i/(W_i^j)^2\big)$. Therefore, problem (11) is solved by

$$z^{j+1} = L_{nr}\big(1, y/|W^j|^2\big) \odot W^j,$$

where $\odot$ denotes the Hadamard (element-wise) product.

According to the above analysis, the iterative scheme for solving (4) can be given in Algorithm 1.
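Putting the three closed-form updates together, Algorithm 1 can be sketched as follows. This is a simplified reference implementation under our reading of the updates above, not the authors' code: the proximal helpers are condensed versions of Lemmas 1 and 2, with the z-step solved by candidate enumeration rather than the explicit case formula (3).

```python
import numpy as np

def half_thresh(u, mu):
    # Lemma 1: elementwise minimizer of (t - u)^2 + mu * |t|^{1/2}
    out = np.zeros_like(u)
    big = np.abs(u) > (54.0 ** (1.0 / 3.0) / 4.0) * mu ** (2.0 / 3.0)
    a = u[big]
    phi = np.arccos((mu / 8.0) * (np.abs(a) / 3.0) ** (-1.5))
    out[big] = (2.0 / 3.0) * a * (1.0 + np.cos(2.0 * np.pi / 3.0 - (2.0 / 3.0) * phi))
    return out

def z_prox(w, y, tau, mu):
    # Lemma 2 coordinatewise: minimizer of (mu/2)(z - w)^2 + rho_tau(z^2 - y),
    # found by comparing the candidate minimizers of each smooth branch and
    # the kink points z = +/- sqrt(y).
    c1 = w / (1.0 + 2.0 * tau / mu)
    c2 = w / (1.0 - 2.0 * (1.0 - tau) / mu) if mu > 2.0 * (1.0 - tau) else c1
    root = np.sqrt(np.maximum(y, 0.0))
    C = np.stack([c1, c2, root, -root])
    T = C ** 2 - y
    F = 0.5 * mu * (C - w) ** 2 + np.where(T >= 0, tau * T, (tau - 1.0) * T)
    return C[np.argmin(F, axis=0), np.arange(w.size)]

def l12_qr_pr(A, y, tau=0.5, lam=1e-4, r=1e-2, iters=300, x0=None):
    n, p = A.shape
    x = np.zeros(p) if x0 is None else np.asarray(x0, dtype=float).copy()
    q, z = x.copy(), A @ x
    Lam1, Lam2 = np.zeros(p), np.zeros(n)
    M_inv = np.linalg.inv(np.eye(p) + A.T @ A)  # prepare once, reuse every iteration
    for _ in range(iters):
        x = M_inv @ (q - Lam1 / r + A.T @ (z - Lam2 / r))   # x-update
        q = half_thresh(x + Lam1 / r, 2.0 * lam / r)        # q-update (Lemma 1)
        z = z_prox(A @ x + Lam2 / r, y, tau, n * r)         # z-update (Lemma 2)
        Lam1 += r * (x - q)                                 # dual updates
        Lam2 += r * (A @ x - z)
    return q
```

Started from a point near the true signal, as the initialization of Section 3 provides, the iterates stay in the basin of the global minimizer.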

2.3. Convergence Analysis

Algorithm 1. L1/2QR PR: ADMM method for solving (4).

As discussed above, each subproblem of the proposed algorithm is well defined. Therefore, we now discuss its convergence. Consider the $q$-subproblem

$$\min_{q \in \mathbb{R}^p} \ \frac{1}{2}\|q - u\|_2^2 + \mu \|q\|_{1/2}^{1/2}, \qquad (16)$$

where $\mu > 0$. According to [19], the definition of a first-order stationary point of (16) is given as follows.

Definition 1. Let $\hat q$ be a vector in $\mathbb{R}^p$ and $Q = \mathrm{Diag}(\hat q)$, where $\mathrm{Diag}(\hat q)$ denotes the $p \times p$ diagonal matrix whose diagonal is formed by the vector $\hat q$. The vector $\hat q$ is a first-order stationary point of (16) if

$$Q(\hat q - u) + \frac{\mu}{2}|\hat q|^{1/2} = 0.$$
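The closed form of Lemma 1 can be checked against this stationarity condition numerically. Note that (16) carries a factor $1/2$ on the quadratic term, so its solution is the half-thresholding operator of Lemma 1 applied with $2\mu$. A small consistency check (using our own helper, not code from the paper):

```python
import numpy as np

def half_threshold(alpha, mu):
    # Lemma 1: elementwise minimizer of (t - alpha)^2 + mu * |t|^{1/2}
    alpha = np.asarray(alpha, dtype=float)
    out = np.zeros_like(alpha)
    big = np.abs(alpha) > (54.0 ** (1.0 / 3.0) / 4.0) * mu ** (2.0 / 3.0)
    a = alpha[big]
    phi = np.arccos((mu / 8.0) * (np.abs(a) / 3.0) ** (-1.5))
    out[big] = (2.0 / 3.0) * a * (1.0 + np.cos(2.0 * np.pi / 3.0 - (2.0 / 3.0) * phi))
    return out

u = np.array([2.0, -1.5, 0.05])
mu = 0.5
q_hat = half_threshold(u, 2.0 * mu)   # solves (16): (1/2)||q-u||^2 + mu*||q||_{1/2}
residual = q_hat * (q_hat - u) + (mu / 2.0) * np.sqrt(np.abs(q_hat))
# residual vanishes: trivially on zero coordinates, by stationarity elsewhere
```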

Similar to [20], we introduce the Karush-Kuhn-Tucker (KKT) conditions of the Lagrangian $L_r(x,q,z;\Lambda_1,\Lambda_2)$ in (5) as follows:

$$\begin{cases} \nabla_x L_r(\tilde{x},\tilde{q},\tilde{z};\tilde{\Lambda}_1,\tilde{\Lambda}_2) = 0, \\ r\tilde{Q}\big(\tilde{q} - \tilde{x} - \tilde{\Lambda}_1/r\big) + \dfrac{\lambda}{2}|\tilde{q}|^{1/2} = 0, \\ \nabla_z L_r(\tilde{x},\tilde{q},\tilde{z};\tilde{\Lambda}_1,\tilde{\Lambda}_2) = 0, \\ \nabla_{\Lambda_1} L_r(\tilde{x},\tilde{q},\tilde{z};\tilde{\Lambda}_1,\tilde{\Lambda}_2) = 0, \\ \nabla_{\Lambda_2} L_r(\tilde{x},\tilde{q},\tilde{z};\tilde{\Lambda}_1,\tilde{\Lambda}_2) = 0, \end{cases} \qquad (17)$$

where $(\tilde{x},\tilde{q},\tilde{z},\tilde{\Lambda}_1,\tilde{\Lambda}_2)$ is a saddle point, $\nabla$ represents the partial derivative, $\tilde{Q} = \mathrm{Diag}(\tilde{q})$, and $\tilde{q}$ is a first-order stationary point of the $q$-subproblem (10).

Since the Lagrangian $L_r(x,q,z;\Lambda_1,\Lambda_2)$ is nonconvex w.r.t. $x, q, z$, we analyze the first-order optimality conditions directly. Using the subproblem (12) for the variable $z$, the above KKT conditions corresponding to these three variables can be described as

$$A^{\mathrm H}\tilde{\Lambda}_2 + \tilde{\Lambda}_1 = 0,$$

$$\frac{\lambda}{2}|\tilde{q}|^{1/2} - \tilde{Q}\tilde{\Lambda}_1 = 0,$$

$$\Big(\frac{2}{nr}\,\mathrm{sign}\big(|\tilde{z}_m|^2 - y_m\big)\Big)\tilde{z}_m + \tilde{z}_m - \Big(\frac{\tilde{\Lambda}_2}{r} + A\tilde{x}\Big)_m \ni 0, \quad m = 1,2,3,\ldots,n,$$

$$\tilde{z} = A\tilde{x},$$

$$\tilde{x} = \tilde{q},$$

where

$$\mathrm{sign}(\bar{x}) = \begin{cases} 1, & \bar{x} > 0, \\ -1, & \bar{x} < 0, \\ 0, & \bar{x} = 0. \end{cases}$$

Similar to Theorem 2.4 in [18], we show that our proposed algorithm converges to a saddle point satisfying the KKT conditions.

Theorem 3. Assume that the successive differences of the two multipliers, $\{\Lambda_1^j - \Lambda_1^{j-1}\}$ and $\{\Lambda_2^j - \Lambda_2^{j-1}\}$, converge to zero and that $\{x^j\}$ is bounded. Then there exists a subsequence of the iterative sequence generated by Algorithm 1 that converges to an accumulation point satisfying the KKT conditions of the saddle-point problem (5).

3. Numerical Experiments

All simulations were performed on a 64-bit laptop running Windows 11 with an AMD A8-6410 APU and 4 GB of RAM. To demonstrate the efficiency of our proposed method, we calculate the relative error between $x_{\mathrm{orig}}$ and $\hat x$ as

$$\text{Relative error} := \min_{c = \pm 1} \frac{\|x_{\mathrm{orig}} - c\,\hat x\|_2}{\|x_{\mathrm{orig}}\|_2},$$

where $x_{\mathrm{orig}}$ is the true signal and $\hat x$ is the solution obtained by a solver. In all experiments, we carry out 100 Monte Carlo runs.
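This metric is a one-liner; the minimum over $c = \pm 1$ accounts for the global sign ambiguity, since $y = |Ax|^2$ is invariant to $x \mapsto -x$ in the real case (the helper name is ours):

```python
import numpy as np

def relative_error(x_orig, x_hat):
    """Relative error up to the global sign ambiguity c = +/-1."""
    x_orig, x_hat = np.asarray(x_orig, float), np.asarray(x_hat, float)
    return min(np.linalg.norm(x_orig - c * x_hat) for c in (1.0, -1.0)) / np.linalg.norm(x_orig)
```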

3.1. Experimental Parameters and Initialization

We take $p = 128$ in all experiments and generate the true signal as a Gaussian random sparse vector. The rows of the measurement matrix $A$ are generated i.i.d. as $a_i \sim \mathcal{N}(0, I)$. The regularization parameter $\lambda$ is fixed at $10^{-4}$, and the penalty parameter $r$ is chosen as $10^{-2}$.

For nonconvex problems, ADMM can converge to different (and in particular, non-optimal) points, depending on the initial values and the penalty parameter [21]. We take a Wirtinger flow initialization [22] with initial point $q^0$ obeying $\mathrm{dist}(q^0, x) \le \frac{1}{8}\|x\|$, where $\mathrm{dist}(q, x) = \min_{c = \pm 1}\|q - cx\|_2$. Hence the proposed algorithm converges from a neighborhood of the global minimizer.
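A standard spectral initialization of this type, the scaled leading eigenvector of the weighted covariance $\frac{1}{n}\sum_i y_i a_i a_i^{\mathrm T}$ as in Wirtinger flow, can be sketched as follows. Sizes mirror the experiments below; this is our illustration, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, s = 512, 128, 8
x_true = np.zeros(p)
support = rng.choice(p, s, replace=False)
x_true[support] = rng.standard_normal(s)
A = rng.standard_normal((n, p))
y = (A @ x_true) ** 2

# Spectral initialization: direction from the leading eigenvector of
# (1/n) * sum_i y_i a_i a_i^T, scale from sqrt(mean(y)) since E[mean(y)] = ||x||^2.
Y = (A.T * y) @ A / n
w, V = np.linalg.eigh(Y)        # eigh returns eigenvalues in ascending order
x0 = np.sqrt(y.mean()) * V[:, -1]
```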

Like many phase retrieval methods, we use a spectral initial value to achieve better recovery. To evaluate the efficiency of our method, we compare the successful recovery rate with the five algorithms listed in Table 1.

Table 1. Comparison of reconstruction methods.

3.2. Success Rate Comparisons

We compare the recovery success rates of L1/2QR PR at different values of τ in the noise-free case and noisy case including dense bounded noise and Laplace noise.

1) When not disturbed by noise, let τ = 0.1, 0.2, 0.4, 0.5, 0.6, 0.8, 0.9. We report the success rates in Table 2, where a recovery is considered successful if the relative error is less than 0.0001.

2) Under dense bounded noise, let τ = 0.1, 0.2, 0.4, 0.5, 0.6, 0.8, 0.9. The entries of the dense bounded noise are generated independently from $U(0, \eta_{\max})$, where $\eta_{\max}/\|x\|_2 = 0.01$. We say that the recovery is successful if the relative error is less than 0.001. Since the recovery success rate is 0 for τ = 0.1, we report the other cases in Table 3.

3) Under Laplace noise, let τ = 0.4, 0.5, 0.6, 0.8, 0.9 and n/p = 4, 5, 6. The entries of the Laplace noise are generated from $\mathrm{Laplace}(0, \mu_{\max}/2)$, where $\mu_{\max}\|y\|_2/n = 0.001$. We say that the recovery is successful if the relative error is less than 0.005 and report the success rates in Table 4.
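The two noise models used above can be generated as follows. This is a sketch: the scaling constants mirror the ratios stated in the text, and `x_norm` and `y` here are placeholders rather than quantities from an actual run.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 512
y = rng.standard_normal(n) ** 2      # placeholder for the clean measurements |Ax|^2
x_norm = 3.0                          # placeholder for ||x_orig||_2

# Dense bounded noise: i.i.d. U(0, eta_max) with eta_max / ||x||_2 = 0.01
eta_max = 0.01 * x_norm
noise_bounded = rng.uniform(0.0, eta_max, size=n)

# Laplace noise: Laplace(0, mu_max/2) with mu_max * ||y||_2 / n = 0.001
mu_max = 0.001 * n / np.linalg.norm(y)
noise_laplace = rng.laplace(0.0, mu_max / 2.0, size=n)

y_noisy_bounded = y + noise_bounded
y_noisy_laplace = y + noise_laplace
```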

From Tables 2-4, one can see that the recovery success rate of L1/2QR PR with τ = 0.5 is usually higher than with other values of τ. For Laplace noise, L1/2QR PR with τ = 0.5 and τ = 0.8 both perform well when n/p = 6.

Table 2. Success rate of L1/2QR PR (Without noise).

Table 3. Success rate of L1/2QR PR (Dense bounded noise).

Table 4. Success rate of L1/2QR PR (Laplace noise).

Next, we compare the recovery success rate with the other five algorithms under different measurement fractions (n/p), using the same parameters and recovery success criteria as above and setting τ = 0.5. Figure 1 shows how these algorithms behave without noise. Through repeated experiments, we find that median-MRWF has a better recovery success rate when the measurement fraction (n/p) = 2, 3 with no noise interference in the real field. The recovery success rate of L1/2QR PR is slightly lower than that of median-RWF and median-MRWF, and higher than that of the other algorithms. Figure 1 also shows how these algorithms behave when disturbed by noise: under both dense bounded noise and Laplace noise, the recovery success rate of L1/2QR PR is higher than that of the other five algorithms in the real field.

In the following subsections, we set τ = 0.5, fix the sparsity to s = 8, and use the competing algorithms listed in Table 1: L1/2LAD PR uses the L1-norm loss function, and L0L1PR adds L0 regularization; median-RWF and median-MRAF, based on the L2-norm loss function, are highly robust to outliers through heuristic truncation rules. We compare the relative error against the iteration count t under different measurement fractions (n/p), where x is the true signal and x^(t) is the t-th iterate.

Remark 3.1. We take n = 2p, 3p, 4p, 5p, 6p in the real case. In particular, n = 2p and n = 4p are close to the theoretical sample complexity. For LAD-ADMM to recover stably, we actually need n = 6p.

3.3. Exact Recovery for Noise-Free Data

In the noise-free case, Figure 2 shows that, in the real case, when n = 2p, both L1/2QR PR and L1/2LAD PR have good recovery performance. When n = 3p, 4p, 5p, 6p, L1/2QR PR is slightly better than the other five algorithms.

3.4. Stable Recovery with Dense Bounded Noise

Now, we consider the existence of dense bounded noise. The entries of the dense bounded noise are generated independently from $U(0, \eta_{\max})$, where $\eta_{\max}/\|x\|_2 = 0.001, 0.01$. It can be seen from Figure 3 that L1/2QR PR shows great robustness to dense bounded noise in the real case, while LAD-ADMM performs poorly. L1/2QR PR and L1/2LAD PR have similar performance when n = 4p in the real case, and median-RWF and median-MRAF have similar performance when n ≥ 4p in the real case. Another reasonable observation is that the relative reconstruction error decreases by a factor of 10 as $\eta_{\max}$ shrinks by a factor of 10 for all algorithms.

Figure 1. Recovery success rate. (a) No noise; (b) Dense bounded noise; (c) Laplace noise.

Figure 2. The relative error with respect to the iteration count for LAD-ADMM, median-RWF, median-MRAF, L0L1PR, L1/2LAD PR, and L1/2QR PR with noise-free data in the real case. (a) n = 2p; (b) n = 3p; (c) n = 5p; (d) n = 6p.

3.5. Stable Recovery with Laplace Noise

Finally, we consider the presence of Laplace noise. The entries of the Laplace noise are generated from $\mathrm{Laplace}(0, \mu_{\max}/2)$, where $\mu_{\max}\|y\|_2/n = 0.001, 0.01$. As can be observed in Figure 4, L1/2QR PR and L1/2LAD PR are surprisingly robust to Laplace noise, especially in the real case, for all of n = 2p, 3p, 4p, 5p, 6p. However, the other methods perform poorly: even when n = 6p, LAD-ADMM, L0L1PR, median-RWF, and median-MRAF do not achieve satisfactory recovery. As before, the relative reconstruction error decreases by a factor of 10 as $\mu_{\max}$ shrinks by a factor of 10 for all algorithms.

Figure 3. The relative error with respect to the iteration count for LAD-ADMM, median-RWF, median-MRAF, L0L1PR, L1/2LAD PR, and L1/2QR PR with dense bounded noise in the real case ($\eta = \eta_{\max}/\|x\|_2 = 0.001, 0.01$). (a) n = 2p, η = 0.001; (b) n = 3p, η = 0.001; (c) n = 6p, η = 0.001; (d) n = 2p, η = 0.01; (e) n = 3p, η = 0.01; (f) n = 6p, η = 0.01.

Figure 4. The relative error with respect to the iteration count for LAD-ADMM, median-RWF, median-MRAF, L0L1PR, L1/2LAD PR, and L1/2QR PR with Laplace noise in the real case ($\mu = \mu_{\max}\|y\|_2/n = 0.001, 0.01$). (a) n = 2p, μ = 0.001; (b) n = 3p, μ = 0.001; (c) n = 6p, μ = 0.001; (d) n = 2p, μ = 0.01; (e) n = 3p, μ = 0.01; (f) n = 6p, μ = 0.01.

The simulation results show that the L1/2QR PR algorithm, with its quantile loss function and L1/2 regularization, has two significant advantages over other methods: the ability to recover the signal from fewer measurements, and robustness to asymmetrically distributed noise (such as dense bounded noise and Laplace noise).

4. Conclusion

We proposed the L1/2 QR PR method and designed an efficient algorithm based on the framework of ADMM. A series of numerical experiments shows that the proposed method can recover sparse signals from fewer measurements and is robust to asymmetrically distributed noise such as dense bounded noise and Laplace noise. An interesting future research direction is to consider complex settings, such as Fourier-basis measurements, which arise in applications with coded diffraction patterns [26].

Acknowledgements

The authors would like to thank the editor and anonymous reviewers for their insight and helpful comments and suggestions which greatly improve the quality of the paper. The research was supported by the National Natural Science Foundation of China (NSFC) (11801130), the Natural Science Foundation of Hebei Province (A2019202135), the National Social Science Fund of China (17BSH140), “The Fundamental Research Funds for the Central Universities” in UIBE (16YB05) and “The Fundamental Research Funds for the Central Universities” (2022QNYL29).

NOTES

*Si Shen and Jiayao Xiang are joint first authors.

#Corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Zhang, H., Chi, Y. and Liang, Y. (2018) Median-Truncated Nonconvex Approach for Phase Retrieval with Outliers. IEEE Transactions on Information Theory, 64, 7287-7310.
https://doi.org/10.1109/TIT.2018.2847695
[2] Cai, T.T., Li, X. and Ma, Z. (2016) Optimal Rates of Convergence for Noisy Sparse Phase Retrieval via Thresholded Wirtinger Flow. The Annals of Statistics, 44, 2221-2251.
https://doi.org/10.1214/16-AOS1443
[3] Ohlsson, H. and Eldar, Y.C. (2014) On Conditions for Uniqueness in Sparse Phase Retrieval. 2014 IEEE International Conference on Acoustics, Speech and Signal Processing, Florence, 4-9 May 2014, 1841-1845.
https://doi.org/10.1109/ICASSP.2014.6853917
[4] Hauptman, H.A. (1991) The Phase Problem of X-Ray Crystallography. Reports on Progress in Physics, 54, 1427-1454.
https://doi.org/10.1088/0034-4885/54/11/002
[5] Walther, A. (1963) The Question of Phase Retrieval in Optics. Optica Acta, 10, 41-49.
https://doi.org/10.1080/713817747
[6] Fan, J., Kong, L., Wang, L. and Xiu, N. (2018) Variable Selection in Sparse Regression with Quadratic Measurements. Statistica Sinica, 28, 1157-1178.
https://doi.org/10.5705/ss.202015.0335
[7] Chang, H., Enfedaque, P. and Marchesini, S. (2019) Blind Ptychographic Phase Retrieval via Convergent Alternating Direction Method of Multipliers. SIAM Journal on Imaging Sciences, 12, 153-185.
https://doi.org/10.1137/18M1188446
[8] Candes, E.J., Strohmer, T. and Voroninski, V. (2013) PhaseLift: Exact and Stable Signal Recovery from Magnitude Measurements via Convex Programming. Communications on Pure and Applied Mathematics, 66, 1241-1274.
https://doi.org/10.1002/cpa.21432
[9] Xu, Z.B., Guo, H.L., Wang, Y. and Zhang, H. (2012) Representative of L1/2 Regularization Among Lq (0 ≤ q ≤ 1) Regularizations: An Experimental Study Based on Phase Diagram. Acta Automatica Sinica, 38, 1225-1228. (In Chinese)
https://doi.org/10.3724/SP.J.1004.2012.01225
[10] Zhang, H., Zhou, Y., Liang, Y. and Chi, Y. (2017) A Nonconvex Approach for Phase Retrieval: Reshaped Wirtinger Flow and Incremental Algorithms. Journal of Machine Learning Research, 18, 1-35.
[11] Yan, A. and Song, F. (2019) Adaptive Elastic Net-Penalized Quantile Regression for Variable Selection. Communications in Statistics-Theory and Methods, 48, 5106-5120.
https://doi.org/10.1080/03610926.2018.1508711
[12] Song, Y., Jian, L. and Lin, L. (2016) Robust Exponential Squared Loss-Based Variable Selection for High-Dimensional Single-Index Varying-Coefficient Model. Journal of Computational and Applied Mathematics, 308, 330-345.
https://doi.org/10.1016/j.cam.2016.05.030
[13] Kurnaz, F. S., Hoffmann, I. and Filzmoser, P. (2018) Robust and Sparse Estimation Methods for High Dimensional Linear and Logistic Regression. Chemometrics and Intelligent Laboratory Systems, 172, 211-222.
https://doi.org/10.1016/j.chemolab.2017.11.017
[14] Ciuperca, G. (2017) Adaptive Fused LASSO in Grouped Quantile Regression. Journal of Statistical Theory and Practice, 11, 107-125.
https://doi.org/10.1080/15598608.2016.1258601
[15] Gu, Y., Fan, J., Kong, L., Ma, S. and Zou, H. (2018) ADMM for High-Dimensional Sparse Penalized Quantile Regression. Technometrics, 60, 319-331.
https://doi.org/10.1080/00401706.2017.1345703
[16] Xu, Z., Chang, X., Xu, F. and Zhang, H. (2012) L1/2 Regularization: A Thresholding Representation Theory and a Fast Solver. IEEE Transactions on Neural Networks and Learning Systems, 23, 1013-1027.
https://doi.org/10.1109/TNNLS.2012.2197412
[17] Zeng, J.S., Fang, J. and Xu, Z.B. (2012) Sparse SAR Imaging Based on L1/2 Regularization. Science China Information Sciences, 55, 1755-1775.
https://doi.org/10.1007/s11432-012-4632-5
[18] Kong, L.J., Yan, A.L., Li, Y. and Fan, J. (2022) L1/2-Regularized Least Absolute Deviation Method for Sparse Phase Retrieval. Pacific Journal of Optimization, 18, 213-232.
[19] Dhifallah, O. and Lu, Y.M. (2017) Fundamental Limits of PhaseMax for Phase Retrieval: A Replica Analysis. 2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Curacao, 10-13 December 2017, 1-5.
https://doi.org/10.1109/CAMSAP.2017.8313210
[20] Chang, H., Lou, Y., Duan, Y. and Marchesini, S. (2018) Total Variation-Based Phase Retrieval for Poisson Noise Removal. SIAM Journal on Imaging Sciences, 11, 24-55.
https://doi.org/10.1137/16M1103270
[21] Boyd, S., Parikh, N., Chu, E., Peleato, B. and Eckstein, J. (2010) Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends in Machine Learning, 3, 1-122.
https://doi.org/10.1561/2200000016
[22] Candes, E. J., Li, X. and Soltanolkotabi, M. (2015) Phase Retrieval via Wirtinger Flow: Theory and Algorithms. IEEE Transactions on Information Theory, 61, 1985-2007.
https://doi.org/10.1109/TIT.2015.2399924
[23] Duan, Y., Wu, C., Pang, Z. F. and Chang, H. (2016) L0-Regularized Variational Methods for Sparse Phase Retrieval. ArXiv: 1612.02538.
[24] Jiang, X., So, H.C. and Liu, X. (2020) Robust Phase Retrieval with Outliers. ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, 4-8 May 2020, 5320-5324.
https://doi.org/10.1109/ICASSP40776.2020.9053060
[25] Zhang, Q., Liu, D., Hu, F., Li, A. and Cheng, H. (2021) Median Momentum Reweighted Amplitude Flow for Phase Retrieval with Arbitrary Corruption. Journal of Modern Optics, 68, 374-381.
https://doi.org/10.1080/09500340.2021.1897171
[26] Candes, E.J., Li, X. and Soltanolkotabi, M. (2015) Phase Retrieval from Coded Diffraction Patterns. Applied and Computational Harmonic Analysis, 39, 277-299.
https://doi.org/10.1016/j.acha.2014.09.004

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.