Convergence Analysis of a Kind of Deterministic Discrete-Time PCA Algorithm

Abstract

We propose a generalized adaptive learning rate (GALR) PCA algorithm whose convergence process is guaranteed not to be affected by the selection of the initial value. Using the deterministic discrete time (DDT) method, we give upper and lower bounds for the algorithm's iterates and prove its global convergence. Numerical experiments verify the theory, and the algorithm is effective for both online and offline data. We find that choosing different initial vectors affects the convergence speed, and that under certain exceptional conditions the initial vector can converge to the second or third eigenvector.


Share and Cite:

Zhu, Z., Ye, W. and Kuang, H. (2021) Convergence Analysis of a Kind of Deterministic Discrete-Time PCA Algorithm. Advances in Pure Mathematics, 11, 408-426. doi: 10.4236/apm.2021.115028.

1. Introduction

At present, principal component analysis (PCA) is widely used in data processing, image recognition and even video recognition [1]. In general, QR and SVD decompositions are the most common ways to compute principal components. However, when the data dimension is large, the computational cost of these traditional matrix decompositions is very high, and the approach is not suitable for real applications where data arrive incrementally or online. Adaptive PCA learning algorithms can overcome these problems.

Regarding PCA learning algorithms, Hebbian learning rules were proposed first, but they are unstable and diverge easily. Oja [2] added a decay term to the Hebbian learning rule, yielding Oja's learning rule, which converges to the principal component of the covariance matrix.

Many optimization methods can also obtain the principal components. Xu [3] proposed the LMSER PCA algorithm, which extracts the principal components based on least mean square error reconstruction. Chatterjee [4] proposed an unconstrained objective function and obtained various new adaptive PCA algorithms using the gradient descent, steepest descent, conjugate gradient, and Newton methods; the results showed that the other methods converge faster than gradient descent. When there are outliers in the data, Xu [5] used a Gibbs-distribution derivation and added a coefficient to the learning rate to detect outliers and adjust their weights, called the robust PCA algorithm. Song [6] then used a Cauchy distribution to redefine the weights, and Yang [7] proposed the FRPCA algorithm, which selects the relevant parameters automatically compared to Xu [5].

All of these PCA learning algorithms are described by stochastic discrete-time systems, and convergence is crucial for them to be useful in applications. It is not easy to study the behavior of these PCA algorithms directly. To prove the convergence of Oja's learning algorithm indirectly, the algorithms are traditionally transformed into deterministic continuous time (DCT) systems [8] [9] [10]. This transformation is based on a fundamental theorem for the study of stochastic approximation algorithms [10], relating the discrete-time stochastic model to a DCT formulation. The stochastic approximation theory requires strict assumptions, including that the algorithm's learning rate converges to 0. However, these conditions are difficult to achieve in practice and affect the convergence speed [11] [12], which makes it challenging to apply the DCT method to analyze the PCA algorithm's convergence in practice. Subsequently, Zufiria [12] proposed to study Oja's algorithm through a deterministic discrete time (DDT) system. This DDT system is obtained by applying conditional expectation to Oja's learning algorithm; it preserves the original learning algorithm's discrete-time nature and can shed some light on the behavior under a constant learning rate. Zhang [13] showed that when $\eta =0.618{\lambda }_{1}$, the convergence speed is fastest. Based on this, Lv [14] proposed an adaptive non-zero learning rate algorithm and proved its convergence; a summary is given in [15].

Although Lv [14] proposed an adaptive non-zero learning rate algorithm and proved its convergence, choosing an inappropriate initial value can cause that algorithm's iteration to break down. In this paper we propose a generalized adaptive learning rate (GALR) PCA algorithm which ensures the convergence of the algorithm regardless of the initial value.

2. Oja’s Algorithm, the DCT Method and the DDT Method

2.1. Oja’s Algorithm

Consider a linear single neuron network with input–output relation

$y\left(k\right)={w}^{\text{T}}\left(k\right)x\left(k\right),\left(k=0,1,2,\cdots \right)$ (1)

where y(k) is the network output, the input sequence $\left\{x\left(k\right)|x\left(k\right)\in {R}^{n}\left(k=0,1,2,\cdots \right)\right\}$ is a zero mean stationary stochastic process, each $w\left(k\right)\in {R}^{n}\left(k=0,1,2,\cdots \right)$ is a weight vector. The basic Oja’s PCA learning algorithm for weight updating is described as

$w\left(k+1\right)=w\left(k\right)+\eta \left(k\right)y\left(k\right)\left[x\left(k\right)-y\left(k\right)w\left(k\right)\right]$ (2)

for $k\ge 0$, where $\eta \left(k\right)>0$ is the learning rate. It is very difficult to study the convergence of (2) directly.
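As an illustration, the stochastic rule (2) can be simulated directly. The data model, learning rate, sample size, and dimension below are illustrative assumptions chosen for the demo, not values taken from the paper.

```python
import numpy as np

# Illustrative simulation of Oja's rule (2); the data model, learning rate
# and sample size are assumptions chosen for this sketch.
rng = np.random.default_rng(0)
n = 5
v1 = np.ones(n) / np.sqrt(n)          # intended principal direction
# Zero-mean inputs: isotropic noise plus a stronger component along v1.
X = rng.normal(size=(10000, n)) + 3.0 * rng.normal(size=(10000, 1)) * v1

eta = 0.005                           # small constant learning rate
w = rng.normal(size=n)
w /= np.linalg.norm(w)
for x in X:
    y = w @ x                         # y(k) = w^T(k) x(k), Eq. (1)
    w = w + eta * y * (x - y * w)     # Oja update, Eq. (2)

# w aligns (up to sign) with the principal eigenvector of C = E[x x^T].
print(round(abs(w @ v1) / np.linalg.norm(w), 3))
```

The printed alignment is typically close to 1, illustrating convergence to the principal direction despite the stochastic inputs.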

2.2. DCT Method

To indirectly interpret the convergence of (2), traditionally, the DCT method is used. This method can be described as follows: applying a fundamental stochastic approximation theorem [8] to (2), a DCT system can be obtained

$\frac{\text{d}w\left(t\right)}{\text{d}t}=Cw\left(t\right)-{w}^{\text{T}}\left(t\right)Cw\left(t\right)\cdot w\left(t\right),t\ge 0$ (3)

where $C=E\left[x\left(k\right){x}^{\text{T}}\left(k\right)\right]$ denotes the correlation matrix of $\left\{x\left(k\right)|x\left(k\right)\in {R}^{n}\left(k=0,1,2,\cdots \right)\right\}$. The convergence of this DCT system is then used to interpret the convergence of (2). To transform (2) into the previous DCT system, the following restrictive conditions are required:

1) ${\sum }_{k=1}^{+\infty }{\eta }^{2}\left(k\right)<+\infty$ ;

2) ${\mathrm{sup}}_{k}E\left\{{|y\left(k\right)\left[x\left(k\right)-y\left(k\right)w\left(k\right)\right]|}^{2}\right\}<+\infty$.

Condition 1) clearly implies that $\eta \left(k\right)\to 0$ as $k\to +\infty$. However, in many practical applications, $\eta \left(k\right)$ is often set to a small constant due to round-off limitations and speed requirements [11] [12]. Condition 2) is also difficult to satisfy [12]. Thus, these conditions are unrealistic in practical applications and the previous DCT analysis is not directly applicable [11] [12].

2.3. DDT Method

To overcome the problem of the learning rate converging toward zero, a DDT method was proposed in [12]. Unlike the DCT method, which transforms (2) into a continuous-time deterministic system, the DDT method transforms (2) into a discrete-time deterministic system. The DDT system can be formulated as follows. Applying the conditional expectation operator $E\left\{w\left(k+1\right)|w\left(0\right),x\left(i\right),i\le k\right\}$ to (2) and identifying the conditional expected value as the next iterate in the system [11], a DDT system can be obtained and given as

$w\left(k+1\right)=w\left(k\right)+\eta \left[Cw\left(k\right)-{w}^{\text{T}}\left(k\right)Cw\left(k\right)w\left(k\right)\right]$ (4)

for $k\ge 0$, where $C=E\left[x\left(k\right){x}^{\text{T}}\left(k\right)\right]$ is the correlation matrix. The DDT method has some obvious advantages. First, it preserves the discrete time nature of the original learning algorithm. Second, it does not need to satisfy the unrealistic conditions of the DCT approach, so it allows the learning rate to be constant. The main purpose of this paper is to study the convergence of (4) subject to the learning rate $\eta$ being some constant.
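The DDT iteration (4) is deterministic and easy to simulate directly. In the sketch below, the matrix $C$, the constant rate $\eta$, and $w(0)$ are illustrative choices, not values from the paper.

```python
import numpy as np

# Illustrative run of the DDT system (4); C, eta and w(0) are assumptions
# chosen for this demo.
C = np.diag([3.0, 2.0, 1.0])      # correlation matrix, lambda_1 = 3
eta = 0.2                         # constant learning rate
w = np.array([0.4, 0.5, 0.3])     # w(0) not orthogonal to v_1 = e_1

for _ in range(300):
    w = w + eta * (C @ w - (w @ C @ w) * w)   # DDT iteration, Eq. (4)

print(np.round(w, 4))             # approaches a unit principal eigenvector
```

With a constant learning rate the iterate converges to a unit eigenvector of the largest eigenvalue, here $\pm e_1$, and $w^{\text{T}}Cw$ converges to ${\lambda }_{1}=3$.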

[13] proved the convergence of (4) when $\eta ,{\lambda }_{1},w\left(0\right)$ satisfy certain conditions, and proved that when $\eta =0.618{\lambda }_{1}$ the algorithm has the fastest convergence speed. Based on the proof in [13], [14] proposed an adaptive learning rate PCA algorithm and gave a theoretical proof of its convergence:

$w\left(k+1\right)=w\left(k\right)+\frac{\xi }{{w}^{\text{T}}\left(k\right)Cw\left(k\right)}\left(Cw\left(k\right)-{w}^{\text{T}}\left(k\right)Cw\left(k\right)w\left(k\right)\right)$ (5)

3. The Generalized Adaptive Learning Rate (GALR) PCA Algorithm

In this section, details associated with a convergence analysis of the proposed learning algorithm will be provided systematically.

3.1. GALR PCA Algorithm Formulation

Although [14] has given the convergence proof of (5) under certain conditions, when the data arrive online we cannot obtain the covariance matrix C in advance, which may lead to ${w}^{\text{T}}\left(0\right)Cw\left(0\right)=0$ and break the iteration. We propose the following generalized adaptive learning rate (GALR) PCA algorithm:

$w\left(k+1\right)=w\left(k\right)+\frac{\xi }{{w}^{\text{T}}\left(k\right)Aw\left(k\right)}\left(Cw\left(k\right)-{w}^{\text{T}}\left(k\right)Aw\left(k\right)w\left(k\right)\right)$ (6)

for $k\ge 0$, where $C=E\left[x\left(k\right){x}^{\text{T}}\left(k\right)\right]$ is the correlation matrix, $0<\xi <0.8$, and $A=aC+bI$, where $a,b$ are coefficients chosen so that $a{\lambda }_{i}+b>0$ for every eigenvalue ${\lambda }_{i}$ of C and I is the identity matrix.
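As a sketch, the GALR iteration (6) can be run directly; the matrix $C$, the coefficients $a,b$, the rate $\xi$, and $w(0)$ below are illustrative assumptions. Note that taking $b>0$ makes $A$ positive definite, so the denominator ${w}^{\text{T}}\left(k\right)Aw\left(k\right)$ is positive for any nonzero $w(0)$, even when ${w}^{\text{T}}\left(0\right)Cw\left(0\right)$ would vanish.

```python
import numpy as np

# Sketch of the GALR iteration (6); C, a, b, xi and w(0) are illustrative.
C = np.diag([3.0, 2.0, 1.0])     # correlation matrix, sigma = lambda_1 = 3
a, b = 1.0, 0.5                  # A = aC + bI; b > 0 makes A positive definite
A = a * C + b * np.eye(3)
xi = 0.5                         # 0 < xi < 0.8

w = np.array([0.1, 0.2, 0.9])    # w(0) with z_1(0) != 0
for _ in range(300):
    q = w @ A @ w                # adaptive denominator w^T(k) A w(k)
    w = w + (xi / q) * (C @ w - q * w)   # GALR update, Eq. (6)

print(round(w @ A @ w, 4))       # w^T(k) A w(k) converges to sigma = 3
```

In this run the iterate settles on a multiple of the principal eigenvector with ${w}^{\text{T}}Aw\to \sigma$, consistent with the analysis below.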

For convenience of analysis, some preliminaries are given next. It is known that the correlation matrix C is a symmetric nonnegative definite matrix, so there exists an orthonormal basis of ${R}^{n}$ composed of eigenvectors of C. Let ${\lambda }_{1},\cdots ,{\lambda }_{n}$ be all the eigenvalues of C, ordered as ${\lambda }_{1}\ge \cdots \ge {\lambda }_{n}\ge 0$. Suppose that ${\lambda }_{p}$ is the smallest nonzero eigenvalue, i.e., ${\lambda }_{p}>0$ but ${\lambda }_{j}=0\left(j=p+1,\cdots ,n\right)$. Denote by $\sigma$ the largest eigenvalue of C and suppose that the multiplicity of $\sigma$ is $m\left(1\le m\le p\le n\right)$, so that $\sigma ={\lambda }_{1}=\cdots ={\lambda }_{m}$. Suppose that $\left\{{v}_{i}|i=1,\cdots ,n\right\}$ is an orthonormal basis of ${R}^{n}$ such that each ${v}_{i}$ is a unit eigenvector of C associated with the eigenvalue ${\lambda }_{i}$. Denote by ${V}_{\sigma }$ the eigensubspace of the largest eigenvalue $\sigma$, i.e. ${V}_{\sigma }=span\left\{{v}_{1},\cdots ,{v}_{m}\right\}$. Denoting by ${V}_{\sigma }^{\perp }$ the subspace perpendicular to ${V}_{\sigma }$, clearly ${V}_{\sigma }^{\perp }=span\left\{{v}_{m+1},\cdots ,{v}_{n}\right\}$.

Since the vector set $\left\{{v}_{1},\cdots ,{v}_{n}\right\}$ is an orthonormal basis of ${R}^{n}$, for each $k\ge 0$, $w\left(k\right)\in {R}^{n}$ can be represented as

$w\left(k\right)=\underset{i=1}{\overset{n}{\sum }}{z}_{i}\left(k\right){v}_{i}$ (7)

where ${z}_{i}\left(k\right)\left(i=1,\cdots ,n\right)$ are the coordinates of $w\left(k\right)$ in this basis, and

$Cw\left(k\right)=\underset{i=1}{\overset{n}{\sum }}{\lambda }_{i}{z}_{i}\left(k\right){v}_{i}$ (8)

${w}^{\text{T}}\left(k\right)Aw\left(k\right)=a\underset{i=1}{\overset{n}{\sum }}{\lambda }_{i}{z}_{i}^{2}\left(k\right)+b\underset{i=1}{\overset{n}{\sum }}{z}_{i}^{2}\left(k\right)=\underset{i=1}{\overset{n}{\sum }}\left(a{\lambda }_{i}+b\right){z}_{i}^{2}\left(k\right)$ (9)

Substituting (7) into (6), we have

${z}_{i}\left(k+1\right)=\left[1+\frac{\xi }{{w}^{T}\left(k\right)Aw\left(k\right)}\left({\lambda }_{i}-{w}^{\text{T}}\left(k\right)Aw\left(k\right)\right)\right]{z}_{i}\left(k\right),\left(i=1,\cdots ,n\right)$ (10)

for $k\ge 0$, where $0<\xi <0.8$.

Next, a lemma is presented that will be used for the subsequent algorithm analysis.

Lemma 1. It holds that

$\left\{\begin{array}{l}{\left[1+\xi \left(\frac{\sigma }{s}-1\right)\right]}^{2}s\ge 4\left(1-\xi \right)\xi \sigma ,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}s>0\hfill \\ {\left[1+\xi \left(\frac{\sigma }{s}-1\right)\right]}^{2}s\le \mathrm{max}\left\{\sigma ,{\left[1+\xi \left(\frac{\sigma }{{s}_{\ast }}-1\right)\right]}^{2}{s}_{\ast }\right\},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}s\in \left[{s}_{\ast },\sigma \right]\hfill \end{array}$

where $0<\xi <0.8$ and ${s}_{*}>0$ is a constant.

Its proof can be found in [14].
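As a supplementary sketch (not part of the proof in [14]), the first inequality of Lemma 1 follows from the AM–GM inequality; the computation also identifies the minimizer $s=\frac{\xi \sigma }{1-\xi }$, which reappears as ${s}^{\ast }$ in the proof of Theorem 2:

```latex
\left[1+\xi\left(\frac{\sigma}{s}-1\right)\right]^{2}s
  =\left((1-\xi)\sqrt{s}+\frac{\xi\sigma}{\sqrt{s}}\right)^{2}
  \ge \left(2\sqrt{(1-\xi)\xi\sigma}\right)^{2}
  = 4(1-\xi)\xi\sigma ,\qquad s>0,
```

with equality exactly when $\left(1-\xi \right)\sqrt{s}=\xi \sigma /\sqrt{s}$, i.e. $s=\xi \sigma /\left(1-\xi \right)$.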

3.2. Boundedness

We will analyze the boundedness of (6). The lower and upper bounds will be given in the following theorems.

Lemma 2. Suppose that $0<\xi <0.8$. If $0<{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)<{\lambda }_{p}$, then ${w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)$ is increasing in $k$.

Proof. If ${w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)<{\lambda }_{p}$ it can be checked that

${\left[1+\xi \left(\frac{{\lambda }_{i}}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)\right]}^{2}\ge {\left[1+\xi \left(\frac{{\lambda }_{p}}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)\right]}^{2}\ge 1$ (11)

for $k\ge 0$. Then, from (6), (8), (10), we have

$\begin{array}{c}{w}^{\text{T}}\left(k+1\right)A\text{\hspace{0.17em}}w\left(k+1\right)=\underset{i=1}{\overset{p}{\sum }}{\left[1+\xi \left(\frac{{\lambda }_{i}}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)\right]}^{2}\left(a{\lambda }_{i}+b\right){z}_{i}^{2}\left(k\right)\\ \ge \underset{i=1}{\overset{p}{\sum }}\left(a{\lambda }_{i}+b\right){z}_{i}^{2}\left(k\right)={w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\end{array}$ (12)

$\square$

Theorem 1. If ${w}^{\text{T}}\left(0\right)A\text{\hspace{0.17em}}w\left(0\right)>0$, it holds that

${w}^{\text{T}}\left(k+1\right)A\text{\hspace{0.17em}}w\left(k+1\right)\ge 4\left(1-\xi \right)\xi {\lambda }_{p}>0$ (13)

for all $k\ge 0$, where $0<\xi <0.8$.

Proof. It can be checked that

${\left[1+\xi \left(\frac{{\lambda }_{i}}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)\right]}^{2}\ge {\left[1+\xi \left(\frac{{\lambda }_{p}}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)\right]}^{2}\ge 0$ (14)

Similar to the proof of Lemma 2, we have

$\begin{array}{c}{w}^{\text{T}}\left(k+1\right)A\text{\hspace{0.17em}}w\left(k+1\right)\ge {\left[1+\xi \left(\frac{{\lambda }_{p}}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)\right]}^{2}{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\\ \ge {\mathrm{min}}_{s>0}\left\{{\left[1+\xi \left(\frac{{\lambda }_{p}}{s}-1\right)\right]}^{2}s\right\}\end{array}$ (15)

for $k\ge 0$. By Lemma 1, it holds that

${w}^{\text{T}}\left(k+1\right)A\text{\hspace{0.17em}}w\left(k+1\right)\ge 4\left(1-\xi \right)\xi {\lambda }_{p}>0$ $\square$

The above theorem shows that if ${w}^{\text{T}}\left(0\right)A\text{\hspace{0.17em}}w\left(0\right)\ne 0$, the trajectory starting from $w\left(0\right)$ is bounded below and therefore never converges to zero.

Theorem 2. For any $w\left(0\right)$ with ${w}^{\text{T}}\left(0\right)A\text{\hspace{0.17em}}w\left(0\right)>0$, define the constant

$H\left(w\left(0\right)\right)=\mathrm{max}\left\{\sigma ,{\left[1+\xi \left(\frac{\sigma }{{w}^{\text{T}}\left(0\right)A\text{\hspace{0.17em}}w\left(0\right)}-1\right)\right]}^{2}{w}^{\text{T}}\left(0\right)A\text{\hspace{0.17em}}w\left(0\right)\right\}$ (16)

Then ${w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\le H\left(w\left(0\right)\right)$ holds for all $k\ge 1$.

Proof. We prove this by mathematical induction. Suppose that $0<{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\le H\left(w\left(0\right)\right)$ holds for some $k$. We show that $0<{w}^{\text{T}}\left(k+1\right)A\text{\hspace{0.17em}}w\left(k+1\right)\le H\left(w\left(0\right)\right)$ holds as well. We discuss two cases:

Case 1: $\sigma \le {w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\le H\left(w\left(0\right)\right)$. Then

${\left[1+\xi \left(\frac{{\lambda }_{i}}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)\right]}^{2}\le 1,\text{\hspace{0.17em}}\left(i=1,\cdots ,p\right)$ (17)

It follows that

$\begin{array}{c}{w}^{\text{T}}\left(k+1\right)A\text{\hspace{0.17em}}w\left(k+1\right)=\underset{i=1}{\overset{p}{\sum }}{\left[1+\xi \left(\frac{{\lambda }_{i}}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)\right]}^{2}\left(a{\lambda }_{i}+b\right){z}_{i}^{2}\left(k\right)\\ \le \underset{i=1}{\overset{p}{\sum }}\left(a{\lambda }_{i}+b\right){z}_{i}^{2}\left(k\right)={w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\le H\left(w\left(0\right)\right)\end{array}$ (18)

Case 2: $0<{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\le \sigma$. Then

${\left[1+\xi \left(\frac{{\lambda }_{i}}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)\right]}^{2}\le {\left[1+\xi \left(\frac{\sigma }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)\right]}^{2}$ (19)

It follows that

${w}^{\text{T}}\left(k+1\right)A\text{\hspace{0.17em}}w\left(k+1\right)\le {\left[1+\xi \left(\frac{\sigma }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)\right]}^{2}{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)$ (20)

By Lemma 1 and Lemma 2, denote ${s}^{\ast }=\frac{\xi \sigma }{1-\xi }$.

When ${s}^{\ast }>\sigma$

${w}^{\text{T}}\left(k+1\right)A\text{\hspace{0.17em}}w\left(k+1\right)\le {\left[1+\xi \left(\frac{\sigma }{{w}^{\text{T}}\left(0\right)A\text{\hspace{0.17em}}w\left(0\right)}-1\right)\right]}^{2}{w}^{\text{T}}\left(0\right)A\text{\hspace{0.17em}}w\left(0\right)$ (21)

When ${s}^{\ast }<\sigma$

$\begin{array}{l}{w}^{\text{T}}\left(k+1\right)A\text{\hspace{0.17em}}w\left(k+1\right)\\ \le \left\{\begin{array}{ll}\sigma ,\hfill & {s}^{\ast }<{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\le \sigma \hfill \\ {\left[1+\xi \left(\frac{\sigma }{{w}^{\text{T}}\left(0\right)A\text{\hspace{0.17em}}w\left(0\right)}-1\right)\right]}^{2}{w}^{\text{T}}\left(0\right)A\text{\hspace{0.17em}}w\left(0\right),\hfill & 0<{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\le {s}^{\ast }\hfill \end{array}\right.\end{array}$ (22)

So ${w}^{\text{T}}\left(k+1\right)A\text{\hspace{0.17em}}w\left(k+1\right)\le H\left(w\left(0\right)\right)$ is always true. $\square$
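The lower bound of Theorem 1 can be spot-checked numerically along the GALR iteration (6), together with the convergence of ${w}^{\text{T}}\left(k\right)Aw\left(k\right)$ toward $\sigma$; all parameter choices below are illustrative assumptions.

```python
import numpy as np

# Numerical spot-check of the lower bound (13) along the GALR iteration (6);
# the matrix, coefficients and initial vector are illustrative choices.
lams = np.array([3.0, 3.0, 1.5, 0.5])  # sigma = 3 (multiplicity 2), lambda_p = 0.5
C = np.diag(lams)
a, b, xi = 1.0, 0.2, 0.4
A = a * C + b * np.eye(4)

rng = np.random.default_rng(1)
w = rng.normal(size=4)                 # generic w(0), so w^T(0) A w(0) > 0
lower = 4 * (1 - xi) * xi * lams[-1]   # lower bound of Theorem 1, Eq. (13)

for _ in range(200):
    q = w @ A @ w
    w = w + (xi / q) * (C @ w - q * w)      # Eq. (6)
    assert w @ A @ w >= lower - 1e-12        # Theorem 1 holds at every step

print(round(w @ A @ w, 6))             # w^T(k) A w(k) tends to sigma = 3
```

Here all eigenvalues are positive, so ${\lambda }_{p}$ is simply the smallest one; the lower bound holds at every iterate and the quadratic form settles at $\sigma$.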

3.3. Global Convergence

Lemma 3. If $w\left(0\right)\notin {V}_{\sigma }^{\perp }$, there exist constants ${\theta }_{1}>0$ and ${\Pi }_{1}\ge 0$ such that

$\underset{j=m+1}{\overset{n}{\sum }}{z}_{j}^{2}\left(k\right)\le {\Pi }_{1}\cdot {\text{e}}^{-{\theta }_{1}k}$ (23)

for all $k\ge 0$, where

${\theta }_{1}=\mathrm{ln}{\left(\frac{\xi \sigma +\left(1-\xi \right)H\left(w\left(0\right)\right)}{\xi {\lambda }_{m+1}+\left(1-\xi \right)H\left(w\left(0\right)\right)}\right)}^{2}>0$ (24)

Proof. Since $w\left(0\right)\notin {V}_{\sigma }^{\perp }$ there must exist some $i\left(1\le i\le m\right)$ such that ${z}_{i}\left(0\right)\ne 0$. Without loss of generality, assume that ${z}_{1}\left(0\right)\ne 0$.

From (10), it follows that

${z}_{i}\left(k+1\right)=\left[1+\xi \left(\frac{\sigma }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)\right]{z}_{i}\left(k\right),\text{\hspace{0.17em}}\left(i=1,\cdots ,m\right)$ (25)

${z}_{j}\left(k+1\right)=\left[1+\xi \left(\frac{{\lambda }_{j}}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)\right]{z}_{j}\left(k\right),\text{\hspace{0.17em}}\left(j=m+1,\cdots ,n\right)$ (26)

By Theorem 2, ${w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)$ has an upper bound for all $k\ge 0$. Given any $i,\text{\hspace{0.17em}}\left(i=1,\cdots ,n\right)$, it holds that

$\left[1+\xi \left(\frac{{\lambda }_{i}}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)\right]=1-\xi +\xi \left(\frac{{\lambda }_{i}}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}\right)\ge \left(1-\xi \right)>0$ (27)

for $k\ge 1$. Then from (25) and (26), for each $j\left(j=m+1,\cdots ,n\right)$, it follows that

$\begin{array}{c}{\left[\frac{{z}_{j}\left(k+1\right)}{{z}_{1}\left(k+1\right)}\right]}^{2}={\left[\frac{1+\xi \left(\frac{{\lambda }_{j}}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)}{1+\xi \left(\frac{\sigma }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)}\right]}^{2}\cdot {\left[\frac{{z}_{j}\left(k\right)}{{z}_{1}\left(k\right)}\right]}^{2}\\ ={\left[\frac{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)+\xi \left({\lambda }_{j}-{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\right)}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)+\xi \left(\sigma -{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\right)}\right]}^{2}\cdot {\left[\frac{{z}_{j}\left(k\right)}{{z}_{1}\left(k\right)}\right]}^{2}\\ \le {\left[\frac{\xi {\lambda }_{j}+\left(1-\xi \right)H\left(w\left(0\right)\right)}{\xi \sigma +\left(1-\xi \right)H\left(w\left(0\right)\right)}\right]}^{2}\cdot {\left[\frac{{z}_{j}\left(k\right)}{{z}_{1}\left(k\right)}\right]}^{2}\\ \le {\left[\frac{\xi {\lambda }_{m+1}+\left(1-\xi \right)H\left(w\left(0\right)\right)}{\xi \sigma +\left(1-\xi \right)H\left(w\left(0\right)\right)}\right]}^{2}\cdot {\left[\frac{{z}_{j}\left(k\right)}{{z}_{1}\left(k\right)}\right]}^{2}\\ \le {\left[\frac{{z}_{j}\left(0\right)}{{z}_{1}\left(0\right)}\right]}^{2}\cdot {\text{e}}^{-{\theta }_{1}\left(k+1\right)}\end{array}$ (28)

for all $k\ge 1$.

Since ${w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)$ is bounded, there exists a constant $d>0$ such that ${z}_{1}^{2}\left(k\right)\le d$ for all $k\ge 0$. Then

$\underset{j=m+1}{\overset{n}{\sum }}{z}_{j}^{2}\left(k\right)=\underset{j=m+1}{\overset{n}{\sum }}{\left[\frac{{z}_{j}\left(k\right)}{{z}_{1}\left(k\right)}\right]}^{2}\cdot {z}_{1}^{2}\left(k\right)\le {\Pi }_{1}{\text{e}}^{-{\theta }_{1}k}$ (29)

for $k\ge 1$, where ${\Pi }_{1}=d\underset{j=m+1}{\overset{n}{\sum }}{\left[\frac{{z}_{j}\left(0\right)}{{z}_{1}\left(0\right)}\right]}^{2}\ge 0$ $\square$

Next, for convenience, denote

$P\left(k\right)={\left[1+\xi \left(\frac{\sigma }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)\right]}^{2}{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)$ (30)

$Q\left(k\right)=\underset{i=m+1}{\overset{n}{\sum }}\left(a{\lambda }_{i}+b\right)\left[2\left(1-\xi \right)+\frac{\xi \left({\lambda }_{i}+\sigma \right)}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}\right]\left[\frac{\xi \left(\sigma -{\lambda }_{i}\right)}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}\right]{z}_{i}^{2}\left(k\right)$ (31)

for all $k\ge 0$. Clearly, $P\left(k\right)\ge 0$ and $Q\left(k\right)\ge 0$ for all $k\ge 0$.

Lemma 4. It holds that

${w}^{\text{T}}\left(k+1\right)A\text{\hspace{0.17em}}w\left(k+1\right)=P\left(k\right)-Q\left(k\right)$ (32)

for $k\ge 0$.

Proof. From (9) and (10), it follows that

$\begin{array}{l}{w}^{\text{T}}\left(k+1\right)A\text{\hspace{0.17em}}w\left(k+1\right)=\underset{i=1}{\overset{n}{\sum }}{\left[1+\xi \left(\frac{{\lambda }_{i}}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)\right]}^{2}\left(a{\lambda }_{i}+b\right){z}_{i}^{2}\left(k\right)\\ =\underset{i=1}{\overset{n}{\sum }}{\left[1+\xi \left(\frac{\sigma }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)\right]}^{2}\left(a{\lambda }_{i}+b\right){z}_{i}^{2}\left(k\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }-\underset{i=m+1}{\overset{n}{\sum }}\left(a{\lambda }_{i}+b\right)\left[2\left(1-\xi \right)+\frac{\xi \left({\lambda }_{i}+\sigma \right)}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}\right]\left[\frac{\xi \left(\sigma -{\lambda }_{i}\right)}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}\right]{z}_{i}^{2}\left(k\right)\\ ={\left[1+\xi \left(\frac{\sigma }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)\right]}^{2}{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }-\underset{i=m+1}{\overset{n}{\sum }}\left(a{\lambda }_{i}+b\right)\left[2\left(1-\xi \right)+\frac{\xi \left({\lambda }_{i}+\sigma \right)}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}\right]\left[\frac{\xi \left(\sigma -{\lambda }_{i}\right)}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}\right]{z}_{i}^{2}\left(k\right)\\ =P\left(k\right)-Q\left(k\right)\end{array}$ (33)

$\square$

Lemma 5. If $w\left(0\right)\notin {V}_{\sigma }^{\perp }$, it holds that

$P\left(k-1\right)\ge 4\left(1-\xi \right)\xi \sigma$ (34)

Proof. From (30), it follows that

$\begin{array}{c}P\left(k-1\right)={\left[1+\xi \left(\frac{\sigma }{{w}^{\text{T}}\left(k-1\right)A\text{\hspace{0.17em}}w\left(k-1\right)}-1\right)\right]}^{2}{w}^{\text{T}}\left(k-1\right)A\text{\hspace{0.17em}}w\left(k-1\right)\\ \ge {\mathrm{min}}_{s>0}\left\{{\left[1+\xi \left(\frac{\sigma }{s}-1\right)\right]}^{2}s\right\}\end{array}$ (35)

for $k\ge 1$. By Lemma 1, it holds that

$P\left(k-1\right)\ge 4\left(1-\xi \right)\xi \sigma ,k\ge 1$ $\square$

Lemma 6. There exists a positive constant ${\Pi }_{2}>0$ such that

$Q\left(k\right)\le {\Pi }_{2}\cdot {\text{e}}^{-{\theta }_{1}k}$ (36)

for $k\ge 0$.

Proof. From (31)

$\begin{array}{c}Q\left(k\right)=\underset{i=m+1}{\overset{n}{\sum }}\left(a{\lambda }_{i}+b\right)\left[2\left(1-\xi \right)+\frac{\xi \left({\lambda }_{i}+\sigma \right)}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}\right]\left[\frac{\xi \left(\sigma -{\lambda }_{i}\right)}{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}\right]{z}_{i}^{2}\left(k\right)\\ \le \underset{i=m+1}{\overset{n}{\sum }}\left(a{\lambda }_{i}+b\right)\left[2+\frac{2\xi \sigma }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}\right]\left[\frac{\xi \sigma }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}\right]{z}_{i}^{2}\left(k\right)\\ \le 2\left[1+\frac{\xi \sigma }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}\right]\left[\frac{\xi \sigma }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}\right]\underset{i=m+1}{\overset{n}{\sum }}\left(a{\lambda }_{i}+b\right){z}_{i}^{2}\left(k\right)\end{array}$ (37)

for $k\ge 0$. By Theorem 1 and Lemma 3,

$\begin{array}{c}Q\left(k\right)\le 2\left[1+\frac{\xi \sigma }{4\left(1-\xi \right)\xi {\lambda }_{p}}\right]\left[\frac{\xi \sigma }{4\left(1-\xi \right)\xi {\lambda }_{p}}\right]\underset{i=m+1}{\overset{n}{\sum }}\left(a{\lambda }_{i}+b\right){z}_{i}^{2}\left(k\right)\\ \le 2\left(a{\lambda }_{m+1}+b\right){\Pi }_{1}\left[1+\frac{\sigma }{4\left(1-\xi \right){\lambda }_{p}}\right]\left[\frac{\sigma }{4\left(1-\xi \right){\lambda }_{p}}\right]{\text{e}}^{-{\theta }_{1}k}\le {\Pi }_{2}\cdot {\text{e}}^{-{\theta }_{1}k}\end{array}$ (38)

for $k\ge 0$, where we used $a{\lambda }_{i}+b\le a{\lambda }_{m+1}+b$ for $i\ge m+1$ and

${\Pi }_{2}=2\left(a{\lambda }_{m+1}+b\right){\Pi }_{1}\left[1+\frac{\sigma }{4\left(1-\xi \right){\lambda }_{p}}\right]\left[\frac{\sigma }{4\left(1-\xi \right){\lambda }_{p}}\right]$ $\square$

Lemma 7. If $w\left(0\right)\notin {V}_{\sigma }^{\perp }$, then there exist constants ${\theta }_{2}>0$ and ${\Pi }_{5}>0$ such that

$|\sigma -{w}^{\text{T}}\left(k+1\right)A\text{\hspace{0.17em}}w\left(k+1\right)|\le k{\Pi }_{5}\left[{\text{e}}^{-{\theta }_{2}\left(k+1\right)}+\mathrm{max}\left\{{\text{e}}^{-{\theta }_{2}k},{\text{e}}^{-{\theta }_{1}k}\right\}\right]$ (39)

for all $k\ge 1$, where

$\left\{\begin{array}{l}{\theta }_{2}=-\mathrm{ln}\delta \hfill \\ \delta =\mathrm{max}\left\{{\left(1-\xi \right)}^{2},\frac{\xi }{4\left(1-\xi \right)}\right\}\hfill \end{array}$ (40)

and $0<\xi <0.8$, so that $0<\delta <1$.

Proof. From (30), (31) and Lemma 4, it follows that

$\begin{array}{l}\sigma -{w}^{\text{T}}\left(k+1\right)A\text{\hspace{0.17em}}w\left(k+1\right)=\sigma -P\left(k\right)+Q\left(k\right)\\ =\sigma -{\left[1+\xi \left(\frac{\sigma }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right)\right]}^{2}{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)+Q\left(k\right)\\ =\left(\sigma -{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\right)\left[{\left(1-\xi \right)}^{2}-\frac{{\xi }^{2}\sigma }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}\right]+Q\left(k\right)\\ =\left(\sigma -{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\right)\left[{\left(1-\xi \right)}^{2}-\frac{{\xi }^{2}\sigma }{P\left(k-1\right)-Q\left(k-1\right)}\right]+Q\left(k\right)\\ =\left(\sigma -{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\right)\left[{\left(1-\xi \right)}^{2}-\frac{{\xi }^{2}\sigma }{P\left(k-1\right)}\right]\end{array}$

$\begin{array}{l}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{ }-\frac{\left(\sigma -{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\right){\xi }^{2}\sigma Q\left(k-1\right)}{P\left(k-1\right)\left(P\left(k-1\right)-Q\left(k-1\right)\right)}+Q\left(k\right)\\ =\left(\sigma -{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\right)\left[{\left(1-\xi \right)}^{2}-\frac{{\xi }^{2}\sigma }{P\left(k-1\right)}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{ }-\frac{\left(\sigma -{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\right){\xi }^{2}\sigma }{P\left(k-1\right){w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}Q\left(k-1\right)+Q\left(k\right)\\ =\left(\sigma -{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\right)\left[{\left(1-\xi \right)}^{2}-\frac{{\xi }^{2}\sigma }{P\left(k-1\right)}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{ }-{\xi }^{2}\sigma \left[\frac{\sigma }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}-1\right]\frac{Q\left(k-1\right)}{P\left(k-1\right)}+Q\left(k\right)\end{array}$ (41)

for $k\ge 1$.

Denote

$V\left(k\right)=|\sigma -{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)|$ (42)

for $k\ge 1$. It follows that,

$\begin{array}{c}V\left(k+1\right)\le V\left(k\right)\cdot |{\left(1-\xi \right)}^{2}-\frac{{\xi }^{2}\sigma }{P\left(k-1\right)}|\\ \text{\hspace{0.17em}}+{\xi }^{2}\sigma \left[\frac{\sigma }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}+1\right]\frac{Q\left(k-1\right)}{P\left(k-1\right)}+Q\left(k\right)\\ \text{}\le \mathrm{max}\left\{{\left(1-\xi \right)}^{2},\frac{{\xi }^{2}\sigma }{P\left(k-1\right)}\right\}V\left(k\right)\\ \text{\hspace{0.17em}}+{\xi }^{2}\sigma \left[\frac{\sigma }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}+1\right]\frac{Q\left(k-1\right)}{P\left(k-1\right)}+Q\left(k\right)\end{array}$ (43)

for $k\ge 1$. By Lemma 5, it holds that

$\begin{array}{c}V\left(k+1\right)\le \mathrm{max}\left\{{\left(1-\xi \right)}^{2},\frac{\xi }{4\left(1-\xi \right)}\right\}V\left(k\right)\\ \text{\hspace{0.17em}}+\frac{\xi }{4\left(1-\xi \right)}\left[\frac{\sigma }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}+1\right]Q\left(k-1\right)+Q\left(k\right)\end{array}$ (44)

for $k\ge 1$. Denote

$\delta =\mathrm{max}\left\{{\left(1-\xi \right)}^{2},\frac{\xi }{4\left(1-\xi \right)}\right\}$ (45)

Clearly, $0<\delta <1$ whenever $0<\xi <0.8$. Then

$V\left(k+1\right)\le \delta \cdot V\left(k\right)+\frac{\xi }{4\left(1-\xi \right)}\left[\frac{\sigma }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}+1\right]Q\left(k-1\right)+Q\left(k\right)$ (46)

Denote

${\Pi }_{3}=\frac{\xi }{4\left(1-\xi \right)}\left[\frac{\sigma }{4\left(1-\xi \right)\xi {\lambda }_{p}}+1\right]$ (47)

By Theorem 1 and Lemma 6,

$\begin{array}{c}V\left(k+1\right)\le \delta \cdot V\left(k\right)+{\Pi }_{3}\cdot Q\left(k-1\right)+Q\left(k\right)\\ \le \delta \cdot V\left(k\right)+{\Pi }_{4}\cdot {\text{e}}^{-{\theta }_{1}k}\end{array}$ (48)

where

${\Pi }_{4}={\Pi }_{2}\cdot \left({\Pi }_{3}{\text{e}}^{{\theta }_{1}}+1\right)$ (49)

Denote

$\left\{\begin{array}{l}{\theta }_{2}=-\mathrm{ln}\delta \hfill \\ \delta =\mathrm{max}\left\{{\left(1-\xi \right)}^{2},\frac{\xi }{4\left(1-\xi \right)}\right\}\hfill \end{array}$ (50)

Then,

$\begin{array}{c}V\left(k+1\right)\le {\delta }^{k+1}V\left(0\right)+{\Pi }_{4}\underset{r=0}{\overset{k}{\sum }}{\delta }^{r}{\text{e}}^{-{\theta }_{1}\left(k-r\right)}\\ \le {\delta }^{k+1}V\left(0\right)+k{\Pi }_{4}\cdot \mathrm{max}\left\{{\delta }^{k},{\text{e}}^{-{\theta }_{1}k}\right\}\\ \le k{\Pi }_{5}\left[{\text{e}}^{-{\theta }_{2}\left(k+1\right)}+\mathrm{max}\left\{{\text{e}}^{-{\theta }_{2}k},{\text{e}}^{-{\theta }_{1}k}\right\}\right]\end{array}$ (51)

for all $k\ge 1$. $\square$
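The contraction factor $\delta$ in (40) can be checked numerically over the whole admissible range $0<\xi <0.8$; this is a quick illustrative verification, not part of the proof.

```python
import numpy as np

# Illustrative check that delta = max{(1-xi)^2, xi/(4(1-xi))} in (40) stays
# below 1 for 0 < xi < 0.8, so theta_2 = -ln(delta) > 0.
xis = np.linspace(1e-3, 0.8 - 1e-3, 1000)
delta = np.maximum((1 - xis) ** 2, xis / (4 * (1 - xis)))
assert delta.max() < 1.0
print(round(float(delta.max()), 4))
```

Both branches of the maximum approach 1 only at the excluded endpoints $\xi \to 0$ and $\xi \to 0.8$, which is why the open interval is required.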

Lemma 8. Suppose that there exist constants $\theta >0$ and $\Pi >0$ such that

$\frac{\xi }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}|\left(\sigma -{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\right){z}_{i}\left(k\right)|\le k\cdot \Pi {\text{e}}^{-\theta k},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(i=1,\cdots ,m\right)$ (52)

for $k\ge 1$. Then,

$\underset{k\to \infty }{\mathrm{lim}}{z}_{i}\left(k\right)={z}_{i}^{\ast },\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(i=1,\cdots ,m\right)$ (53)

where ${z}_{i}^{\ast }\left(i=1,\cdots ,m\right)$ are constants.

Proof. When $|{z}_{i}\left(k\right)|\le 1$,

$\begin{array}{l}\frac{\xi }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}|\left(\sigma -{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\right){z}_{i}\left(k\right)|\\ \le \frac{\xi }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}|\left(\sigma -{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\right)|\\ \le k\cdot \Pi {\text{e}}^{-\theta k}\end{array}$ (54)

When $|{z}_{i}\left(k\right)|>1$,

$\begin{array}{l}\frac{\xi }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}|\left(\sigma -{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\right){z}_{i}\left(k\right)|\\ \le \xi \cdot \frac{{z}_{i}\left(k\right)}{\underset{i=1}{\overset{m}{\sum }}\left(a\sigma +b\right){z}_{i}{\left(k\right)}^{2}}|\left(\sigma -{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\right)|\\ \le \frac{\xi }{\left(a\sigma +b\right)}|\left(\sigma -{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\right)|\\ \le k\cdot \Pi {\text{e}}^{-\theta k}\end{array}$ (55)

It can then be shown that each sequence $\left\{{z}_{i}\left(k\right)\right\}$ is a Cauchy sequence; see [14] for the full proof. By the Cauchy Convergence Principle, there exist constants ${z}_{i}^{\ast }\left(i=1,\cdots ,m\right)$ such that

$\underset{k\to \infty }{\mathrm{lim}}{z}_{i}\left(k\right)={z}_{i}^{\ast },\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(i=1,\cdots ,m\right)$ $\square$
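The Cauchy-sequence argument deferred to [14] can be sketched briefly. Assuming, consistently with the derivation above, that the left-hand side of (52) bounds the per-step increment $|{z}_{i}\left(k+1\right)-{z}_{i}\left(k\right)|$, we obtain for any $p\ge 1$

```latex
\bigl|z_{i}(k+p)-z_{i}(k)\bigr|
  \le \sum_{r=k}^{k+p-1}\bigl|z_{i}(r+1)-z_{i}(r)\bigr|
  \le \Pi\sum_{r=k}^{\infty} r\,\mathrm{e}^{-\theta r}
  \;\longrightarrow\; 0 \quad (k\to\infty),
```

because the series $\sum_{r\ge 1} r\,{\text{e}}^{-\theta r}$ converges for $\theta >0$, so its tails vanish uniformly in $p$ and $\left\{{z}_{i}\left(k\right)\right\}$ is Cauchy.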

Theorem 3. Suppose that $0<\xi <0.8$. If $w\left(0\right)\notin {V}_{\sigma }^{\perp }$, then the trajectory of (4) starting from $w\left(0\right)$ converges to a vector in ${V}_{\sigma }$ whose norm is determined by $\sigma$ and the coefficients $a,b$; this limit is an eigenvector corresponding to the largest eigenvalue of the covariance matrix.

Proof. By Lemma 3, there exist constants ${\theta }_{1}>0$ and ${\Pi }_{1}\ge 0$ such that

$\underset{j=m+1}{\overset{n}{\sum }}{z}_{j}^{2}\left(k\right)\le {\Pi }_{1}\cdot {\text{e}}^{-{\theta }_{1}k}$ (56)

for all $k\ge 0$. By Lemma 7, there exist constants ${\theta }_{2}>0$ and ${\Pi }_{2}>0$ such that

$|\sigma -{w}^{\text{T}}\left(k+1\right)A\text{\hspace{0.17em}}w\left(k+1\right)|\le k\cdot {\Pi }_{2}\cdot \left[{\text{e}}^{-{\theta }_{2}\left(k+1\right)}+\mathrm{max}\left\{{\text{e}}^{-{\theta }_{2}k},{\text{e}}^{-{\theta }_{1}k}\right\}\right]$ (57)

for all $k\ge 0$.

Hence there exist constants $\theta >0$ and $\Pi >0$ such that

$\frac{\xi }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}|\left(\sigma -{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)\right){z}_{i}\left(k\right)|\le k\cdot \Pi {\text{e}}^{-\theta k}$ (58)

for $k\ge 0$ and $i=1,\cdots ,m$.

Using Lemmas 3 and 8, it follows that

$\left\{\begin{array}{l}\underset{k\to \infty }{\mathrm{lim}}{z}_{i}\left(k\right)={z}_{i}^{\ast },\text{\hspace{0.17em}}\left(i=1,\cdots ,m\right)\hfill \\ \underset{k\to \infty }{\mathrm{lim}}{z}_{i}\left(k\right)=0,\text{\hspace{0.17em}}\left(i=m+1,\cdots ,n\right)\hfill \end{array}$ (59)

Then,

$\underset{k\to \infty }{\mathrm{lim}}w\left(k\right)=\underset{i=1}{\overset{m}{\sum }}{z}_{i}^{\ast }{v}_{i}\in {V}_{\sigma }$ (60)

When the algorithm converges, we have

$\underset{k\to \infty }{\mathrm{lim}}C\text{ }w\left(k\right)=\underset{k\to \infty }{\mathrm{lim}}{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)w\left(k\right)$ (61)

$\underset{i=1}{\overset{m}{\sum }}\sigma {z}_{i}^{\ast }{v}_{i}=\underset{i=1}{\overset{m}{\sum }}\left(a\sigma +b\right){\left({z}_{i}^{\ast }\right)}^{2}\cdot \underset{i=1}{\overset{m}{\sum }}{z}_{i}^{\ast }{v}_{i}$ (62)

$\underset{i=1}{\overset{m}{\sum }}{\left({z}_{i}^{\ast }\right)}^{2}=\frac{\sigma }{a\sigma +b}$ (63)

Consider the most common case, in which the largest eigenvalue is simple ($m=1$). After the algorithm converges, ${\left({z}_{1}^{\ast }\right)}^{2}=\sigma /\left(a\sigma +b\right)$, so $w={\left[\sigma /\left(a\sigma +b\right)\right]}^{1/2}v$. Finally, we can recover the eigenvector corresponding to the largest eigenvalue:

$v=w/{\left[\sigma /\left(a\sigma +b\right)\right]}^{1/2}=w/{\left[{w}^{\text{T}}A\text{\hspace{0.17em}}w/\left(a{w}^{\text{T}}A\text{\hspace{0.17em}}w+b\right)\right]}^{1/2}$ $\square$

4. Numerical Experiment

Some examples will be provided in this section to illustrate the proposed theory.

4.1. Offline Data Experiment

The numerical experiments mainly observe the convergence behavior of the algorithm. We randomly generate a covariance matrix

$C=\left[\begin{array}{cccccc}1.090719& 0.154061& 0.109432& 0.089424& 0.05406& 0.125653\\ 0.154061& 1.261628& 0.185839& 0.151862& 0.091805& 0.213386\\ 0.109432& 0.185839& 1.132004& 0.10787& 0.065211& 0.151571\\ 0.089424& 0.151862& 0.10787& 1.088148& 0.053288& 0.12386\\ 0.05406& 0.091805& 0.065211& 0.053288& 1.032214& 0.074877\\ 0.125653& 0.213386& 0.151571& 0.12386& 0.074877& 1.174038\end{array}\right]$

According to existing methods, we can directly compute that C’s maximum eigenvalue is 1.778753 and the corresponding maximum eigenvector is [0.341311, 0.579619, 0.411713, 0.336439, 0.203388, 0.472741]. Next, we select three different initial vectors, take $\xi =0.5,a=0.5,b=0.5$, and use $\epsilon =0.0001$ as the stopping criterion (that is, the iteration stops when every component of the vector at step i differs in absolute value from the corresponding component at step i-1 by less than $\epsilon$) to observe the convergence of the algorithm.
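The precise update rule (6) is defined earlier in the paper and is not reproduced in this section. The sketch below therefore assumes the form $w\left(k+1\right)=w\left(k\right)+\frac{\xi }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}\left[Cw\left(k\right)-{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)w\left(k\right)\right]$ with $A=aC+bI$, a reconstruction that is consistent with the fixed-point Equation (61) and with the reported limiting learning rate $\xi /\sigma =0.5/1.778753\approx 0.281096$:

```python
import numpy as np

# Covariance matrix from Section 4.1.
C = np.array([
    [1.090719, 0.154061, 0.109432, 0.089424, 0.054060, 0.125653],
    [0.154061, 1.261628, 0.185839, 0.151862, 0.091805, 0.213386],
    [0.109432, 0.185839, 1.132004, 0.107870, 0.065211, 0.151571],
    [0.089424, 0.151862, 0.107870, 1.088148, 0.053288, 0.123860],
    [0.054060, 0.091805, 0.065211, 0.053288, 1.032214, 0.074877],
    [0.125653, 0.213386, 0.151571, 0.123860, 0.074877, 1.174038],
])

def galr_pca(C, w0, xi=0.5, a=0.5, b=0.5, eps=1e-4, max_iter=1000):
    """Presumed GALR iteration: w <- w + (xi / w^T A w)(C w - (w^T A w) w),
    with A = a*C + b*I, so the adaptive learning rate is xi / (w^T A w)."""
    A = a * C + b * np.eye(C.shape[0])
    w = np.asarray(w0, dtype=float)
    for k in range(max_iter):
        wAw = w @ A @ w                       # adaptive normalizer w^T A w
        w_new = w + (xi / wAw) * (C @ w - wAw * w)
        if np.max(np.abs(w_new - w)) < eps:   # stopping criterion from the paper
            return w_new, k + 1
        w = w_new
    return w, max_iter

w, iters = galr_pca(C, [0.5488, 0.7152, 0.6028, 0.5449, 0.4237, 0.6459])
A = 0.5 * C + 0.5 * np.eye(6)
sigma = w @ A @ w                             # tends to the largest eigenvalue
v = w / np.sqrt(sigma / (0.5 * sigma + 0.5))  # unit eigenvector, as in Theorem 3
```

With the initial vectors of Section 4.1 this sketch reproduces the reported behavior: ${w}^{\text{T}}A\text{\hspace{0.17em}}w$ tends to the largest eigenvalue and the normalization of Theorem 3 recovers the unit eigenvector.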

The first initial vector we randomly generated is [0.5488, 0.7152, 0.6028, 0.5449, 0.4237, 0.6459]. The trajectories of the eigenvector iteration and of the learning rate are shown in Figure 1. The algorithm converged after 23 iterations: ${w}^{\text{T}}A\text{\hspace{0.17em}}w$ converged to the maximum eigenvalue 1.778753, the final iterate gave the maximum eigenvector [0.341436, 0.579398, 0.411745, 0.336571, 0.203655, 0.472685], and the learning rate converged to 0.281096. The error relative to the true maximum eigenvector is less than 0.0001.

The second initial vector we randomly generated is [0.0055, 0.0072, 0.006, 0.0054, 0.0042, 0.0065]. The trajectories of the eigenvector iteration and of the learning rate are shown in Figure 2. The algorithm converged after 27 iterations: ${w}^{\text{T}}A\text{\hspace{0.17em}}w$ converged to the maximum eigenvalue 1.778753, the final iterate gave the maximum eigenvector [0.341423, 0.579428, 0.411735, 0.336547, 0.203620, 0.472697], and the learning rate converged to 0.281096. The error relative to the true maximum eigenvector is again less than 0.0001.

Figure 1. Convergence process of (6) with median initial vector.

Figure 2. Convergence process of (6) with smaller initial vector.

The third initial vector we randomly generated is [1142.75, 1458.86, 1245.25, 1135.28, 904.94, 1327.2]. The trajectories of the eigenvector iteration and of the learning rate are shown in Figure 3. The algorithm converged after 35 iterations: ${w}^{\text{T}}A\text{\hspace{0.17em}}w$ converged to the maximum eigenvalue 1.778753, the final iterate gave the maximum eigenvector [0.341414, 0.579436, 0.411738, 0.336548, 0.203616, 0.472693], and the learning rate converged to 0.281096. The error relative to the true maximum eigenvector is still less than 0.0001.

The three experiments show that the algorithm converges quickly to the maximum eigenvector from different initial values. Naturally, the closer the initial vector is to the maximum eigenvector, the faster the convergence. When the initial value is very large, the initial learning rate is close to 0, and the vector shrinks gradually toward the limit. When the initial value is very small, the initial learning rate is very large, so the vector makes a large initial jump and then gradually converges. Once $\xi$ is fixed, the learning rate after convergence is the same fixed value regardless of the initial vector, which is consistent with our theoretical proof.

Although the theoretical proof only guarantees convergence when $w\left(0\right)\notin {V}_{\sigma }^{\perp }$, many numerical experiments show that an arbitrarily chosen initial vector $w\left(0\right)$ still converges, because the probability that $w\left(0\right)\in {V}_{\sigma }^{\perp }$ is almost 0. We then selected a low-dimensional covariance matrix with integer eigenvectors for experiments and found that when $w\left(0\right)\in {V}_{\sigma }^{\perp }$, $w\left(k\right)$ does not diverge but converges to another eigenvector. Suppose the multiplicity of the largest eigenvalue (denoted by $\sigma$ ) of C is $m\left(1\le m\le n\right)$, i.e., ${\lambda }_{1}={\lambda }_{2}=\cdots ={\lambda }_{m}=\sigma$, and the multiplicity of the second largest eigenvalue (denoted by $\tau$ ) of C is $t\left(1\le t\le n-m\right)$, i.e., ${\lambda }_{m+1}={\lambda }_{m+2}=\cdots ={\lambda }_{m+t}=\tau$. Denote ${V}_{\sigma }^{\perp }=\mathrm{span}\left\{{v}_{m+1},\cdots ,{v}_{n}\right\}$ and ${V}_{\tau }^{\perp }=\mathrm{span}\left\{{v}_{m+t+1},\cdots ,{v}_{n}\right\}$. When $w\left(0\right)\notin {V}_{\sigma }^{\perp }$, $w\left(k\right)$ converges to the largest eigenvector of C; when $w\left(0\right)\in {V}_{\sigma }^{\perp }$ and $w\left(0\right)\notin {V}_{\tau }^{\perp }$, $w\left(k\right)$ converges to the second largest eigenvector of C.
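This behavior can be checked with a toy example. The sketch below again assumes the update form $w\left(k+1\right)=w\left(k\right)+\frac{\xi }{{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)}\left[Cw\left(k\right)-{w}^{\text{T}}\left(k\right)A\text{\hspace{0.17em}}w\left(k\right)w\left(k\right)\right]$ with $A=aC+bI$, reconstructed from the convergence analysis since (6) itself is defined earlier in the paper:

```python
import numpy as np

def galr_step(C, w, xi=0.2, a=0.5, b=0.5):
    # Presumed GALR step: w <- w + (xi / w^T A w)(C w - (w^T A w) w), A = a*C + b*I
    A = a * C + b * np.eye(C.shape[0])
    wAw = w @ A @ w
    return w + (xi / wAw) * (C @ w - wAw * w)

C = np.diag([3.0, 2.0, 1.0])      # sigma = 3 with v1 = e1; tau = 2 with v2 = e2
w = np.array([0.0, 0.7, 0.3])     # w(0) in V_sigma^perp but not in V_tau^perp
for _ in range(300):
    w = galr_step(C, w)
v = w / np.linalg.norm(w)         # direction of the limit
```

The iterate never leaves the invariant subspace spanned by $e_2,e_3$, and the limit direction is $e_2$, the second largest eigenvector, matching the observation above.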

Figure 3. Convergence process of (6) with larger initial vector.

4.2. Comparison of Algorithms (5) and (6)

To compare (5) and (6), we performed further experiments. We randomly generated an initial vector [571.37, 729.43, 622.63, 567.64, 452.47, 663.6]. Both algorithms use $\xi =0.2$; for (6) we use a = 0.001, b = 0.001. We still use $\epsilon =0.0001$ as the stopping criterion. The results are shown in Figure 4 and Figure 5.

From the results, we can see that Algorithm (5) converges after 83 iterations, and Algorithm (6) converges after 69 iterations. Algorithm (6) can quickly converge to near the true value within the first ten iterations, while Algorithm (5) requires more iterations. We found that when we have a larger initial value, choosing smaller coefficients a, b can help the algorithm converge more quickly.

Figure 4. Convergence of algorithm (5) with larger initial vector.

Figure 5. Convergence of algorithm (6) with larger initial vector and smaller a, b.

4.3. Selection of Parameters a, b

The most significant difference between Algorithm (6) and Algorithm (5) is the two newly introduced parameters a and b. Next, we use numerical experiments to examine the influence of a and b on the algorithm’s convergence.

We take [0.5488, 0.7152, 0.6028, 0.5449, 0.4237, 0.6459] as the initial vector, choose $\xi =0.2$, and vary a, b to observe the convergence of algorithm (6). The number of iterations required by algorithm (6) under different parameters is shown in Table 1.

We can see that when one parameter is much larger than the other, the smaller parameter loses its effect, so the two parameters should be kept in the same order of magnitude. This is because, in the experiment, the covariance matrix C and the identity matrix I are of the same order of magnitude; if one coefficient is much larger, the effect of the other is diminished.

From Table 1, we can see that the convergence is fastest when $a,b\in \left[0.01,0.1\right]$. Taking different values within this interval hardly changes the convergence rate, which shows that the algorithm is not sensitive to changes of a and b within the same order of magnitude.

Using [0.0055, 0.0072, 0.006, 0.0054, 0.0042, 0.0065] as the initial vector, we found that convergence is fastest when $a,b\in \left[100,1000\right]$. Using [59.3932, 74.367, 64.2487, 59.0395, 48.1289, 68.1305] as the initial vector, convergence is fastest when $a,b\in \left[{10}^{-6},{10}^{-5}\right]$. This shows that a larger initial vector calls for smaller a, b, and vice versa. In all three cases, $w{\left(0\right)}^{\text{T}}A\text{\hspace{0.17em}}w\left(0\right)\in \left(0.01,1\right)$; that is, when $w\left(0\right)$, a, and b are chosen so that $w{\left(0\right)}^{\text{T}}A\text{\hspace{0.17em}}w\left(0\right)\in \left(0.01,1\right)$, algorithm (6) converges fastest.
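The empirical rule above can be turned into a small helper. This is only an illustrative heuristic of our own, not part of the paper’s algorithm: the function name and the choice $a=b$ are assumptions, and $A=aC+bI$ is inferred from the analysis. The idea is to scale a single coefficient $c=a=b$ so that $w{\left(0\right)}^{\text{T}}A\text{\hspace{0.17em}}w\left(0\right)$ lands inside $\left(0.01,1\right)$:

```python
import numpy as np

def suggest_ab(C, w0, target=0.1):
    # Pick a = b = c so that w0^T (c*C + c*I) w0 = c * w0^T (C + I) w0 = target,
    # placing w0^T A w0 inside the empirically fast interval (0.01, 1).
    w0 = np.asarray(w0, dtype=float)
    q = w0 @ (C + np.eye(C.shape[0])) @ w0
    c = target / q
    return c, c

C = np.diag([2.0, 1.0])
w0 = np.array([100.0, 100.0])     # a very large initial vector -> tiny a, b
a, b = suggest_ab(C, w0)
```

Consistent with the experiments, a large initial vector yields small coefficients, and a small initial vector yields large ones.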

4.4. Online Data Experiment

Next, we consider the case of online data. Suppose the input sequence $\left\{x\left(k\right)|x\left(k\right)\in {R}^{n}\left(k=0,1,2,\cdots \right)\right\}$ is a zero-mean stationary stochastic process. Because the data arrive one by one online, the covariance matrix C is not available in advance. According to [4], C can be estimated recursively as

${C}_{k}=\beta {C}_{k-1}+\frac{\left(x\left(k\right)x{\left(k\right)}^{\text{T}}-\beta {C}_{k-1}\right)}{k}$ (64)

where $\beta$ is a forgetting coefficient: $\beta =1$ when $x\left(k\right)$ comes from a stationary process, and $\beta \in \left(0,1\right)$ otherwise. By the Law of Large Numbers, ${C}_{k}$ converges to C with probability 1.
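Equation (64) is straightforward to implement as a streaming update; a minimal sketch, taking $\beta =1$ for stationary data:

```python
import numpy as np

def update_cov(C_prev, x, k, beta=1.0):
    # Recursive estimate (64): C_k = beta*C_{k-1} + (x x^T - beta*C_{k-1}) / k
    return beta * C_prev + (np.outer(x, x) - beta * C_prev) / k

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))   # zero-mean stream, one sample per iteration
C_k = np.zeros((3, 3))
for k, x in enumerate(X, start=1):
    C_k = update_cov(C_k, x, k)
# with beta = 1, C_k is exactly the running average of x x^T
```

At every iteration the current estimate $C_k$ replaces C in the PCA update, so no covariance matrix is needed in advance.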

We randomly generate data and add one new data point at every iteration. We still use $\epsilon =0.0001$ as the stopping criterion and find that different initial vectors and different $\xi$ affect the convergence speed of the algorithm. The experiments with different initial values behave just as in the offline case.

We studied the influence of the parameter $\xi$ on the convergence of the algorithm using the same initial value; the result is shown in Figure 6. We found that the smaller $\xi$ is, the faster the algorithm converges.

Figure 6. Convergence of algorithms with different $\xi$.

Table 1. Number of iterations with different a, b.

5. Conclusion

In this paper, we proposed a Generalized Adaptive Learning Rate (GALR) PCA algorithm whose convergence is guaranteed to be unaffected by the selection of the initial value. Using the DDT method, we derived upper and lower bounds for the algorithm’s trajectory and proved global convergence; the learning rate no longer needs to tend to zero. Numerical experiments verified the theory, and we discussed the relationship between the initial vector and the parameters a, b. The algorithm is effective for both online and offline data.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

[1] Bouwmans, T., Javed, S., et al. (2018) On the Applications of Robust PCA in Image and Video Processing. Proceedings of the IEEE, 106, 1427-1457. https://doi.org/10.1109/JPROC.2018.2853589

[2] Oja, E. (1982) Simplified Neuron Model as a Principal Component Analyzer. Journal of Mathematical Biology, 15, 267-273. https://doi.org/10.1007/BF00275687

[3] Xu, L. (1993) Least Mean Square Error Reconstruction Principle for Self-Organizing Neural-Nets. Neural Networks, 6, 627-648. https://doi.org/10.1016/S0893-6080(05)80107-8

[4] Chatterjee, C., Kang, Z. and Roychowdhury, V.P. (2000) Algorithms for Accelerated Convergence of Adaptive PCA. IEEE Transactions on Neural Networks, 11, 338-355. https://doi.org/10.1109/72.839005

[5] Xu, L. and Yuille, A.L. (1995) Robust Principal Component Analysis by Self-Organizing Rules Based on Statistical Physics Approach. IEEE Transactions on Neural Networks, 6, 131-143. https://doi.org/10.1109/72.363442

[6] Wang, S., Liang, Y.L. and Ma, F. (1998) An Adaptive Robust PCA Neural Network. The 1998 IEEE International Joint Conference on Neural Networks Proceedings, Vol. 3, 2288-2293. https://doi.org/10.1109/IJCNN.1998.687218

[7] Yang, T.-N. and Wang, S.D. (1999) Robust Algorithms for Principal Component Analysis. Pattern Recognition Letters, 20, 927-933. https://doi.org/10.1016/S0167-8655(99)00060-4

[8] Chen, T.P., Hua, Y.B. and Yan, W.-Y. (1998) Global Convergence of Oja’s Subspace Algorithm for Principal Component Extraction. IEEE Transactions on Neural Networks, 9, 58-67. https://doi.org/10.1109/72.655030

[9] Zhang, Q. and Bao, Z. (1995) Dynamical Systems for Computing the Eigenvectors Associated with the Largest Eigenvalue of a Positive Definite Matrix. IEEE Transactions on Neural Networks, 6, 790-791. https://doi.org/10.1109/72.377989

[10] Zhang, Q. and Leung, Y.-W. (2000) A Class of Learning Algorithms for Principal Component Analysis and Minor Component Analysis. IEEE Transactions on Neural Networks, 11, 529-533. https://doi.org/10.1109/72.839022

[11] Zhang, Q.F. (2003) On the Discrete-Time Dynamics of a PCA Learning Algorithm. Neurocomputing, 55, 761-769. https://doi.org/10.1016/S0925-2312(03)00439-9

[12] Zufiria, P.J. (2002) On the Discrete-Time Dynamics of the Basic Hebbian Neural-Network Node. IEEE Transactions on Neural Networks, 13, 1342-1352. https://doi.org/10.1109/TNN.2002.805752

[13] Yi, Z., Ye, M., Lv, J.C. and Tan, K.K. (2005) Convergence Analysis of a Deterministic Discrete Time System of Oja’s PCA Learning Algorithm. IEEE Transactions on Neural Networks, 16, 1318-1328. https://doi.org/10.1109/TNN.2005.852236

[14] Lv, J.C., Yi, Z. and Tan, K.K. (2006) Global Convergence of Oja’s PCA Learning Algorithm with a Non-Zero-Approaching Adaptive Learning Rate. Theoretical Computer Science, 367, 286-307. https://doi.org/10.1016/j.tcs.2006.07.012

[15] Kong, X.Y., Hu, C.H. and Duan, Z.S. (2017) Principal Component Analysis Networks and Algorithms. Springer Nature, Berlin. https://doi.org/10.1007/978-981-10-2915-8