Synchronization of Stochastic Memristive Neural Networks with Retarded and Advanced Argument

Abstract

In this paper, we discuss the drive-response synchronization problem for two memristive neural networks with retarded and advanced arguments under additive noise. The control law combines a linear time-delay feedback term with a discontinuous feedback term. Stability is proved within the framework of stochastic differential equations. Finally, simulation results verify the correctness of the theoretical analysis.


1. Introduction

Over the past decade, neural networks have shown great application potential in pattern classification and associative memory, and memristive neural networks in particular have attracted extensive attention. In [1] [2] [3], under the influence of memristors, the memristive connection between neurons replaces the traditional resistive one. The dynamics of such networks have been widely studied, and the existence and uniqueness of the equilibrium point have been established. The recurrent memristive neural network (RMNN), proposed in the 1990s, is regarded as a generalization of the recurrent neural network. When each parent node of a recursive neural network is connected to only one child node, its structure is equivalent to that of a fully connected recurrent neural network. Recursive neural networks can also introduce a gating mechanism to learn long-distance dependencies.

Compared with traditional neural networks, and building on earlier research in [4] [5] [6], memristive networks have a larger information storage capacity and can extend the application of neural networks to memory and information processing [7] [8]. In 1990, Pecora and Carroll introduced the master-slave scheme for synchronizing coupled chaotic systems; since then, researchers from many fields have studied the synchronization of chaotic systems extensively. Because time delay is unavoidable in nonlinear complex systems, it is natural to use delayed memristive neural networks (DMNNs) to build brain-like machines that implement the synapses of biological brains. As shown in [3] [9] [10] [11] and the references therein, the synchronization of complex dynamical networks has been studied extensively. Many control methods have been used for synchronization, including feedback control, adaptive control, and impulsive control. Moreover, the memristive neural network (MNN) extends the control problem of neural networks: because of the memristor's characteristics, an MNN becomes a state-dependent switching system. New control laws have been designed to synchronize two MNNs; a linear feedback term is considered in the controller, and a discontinuous feedback term is added to guarantee global synchronization of the two MNNs. Recently, DMNNs have attracted considerable attention for both theoretical interest and practical applications [12] [13] [14]. Theoretical analysis and numerical experiments have demonstrated that MNNs possess more computational power and information capacity, which significantly broadens the application of neural networks in information processing, associative memory, and pattern recognition. Nevertheless, over the past decades the family of state-dependent nonlinear systems has received little attention in the DMNN literature. Considering the development and applications of memristors, we pay particular attention to such nonlinear systems and their generalizations so that memristors can be widely used in emerging technologies.

In this paper, we study the master-slave synchronization of memristive neural networks (MNNs) with retarded and advanced arguments, modeled as stochastic differential equations with additive noise. First, we design a control law consisting of a discontinuous feedback term and a term depending on the deviating argument. A sufficient condition for global synchronization in the mean square is then given as a linear matrix inequality (LMI).

In addition, an extended feedback term is constructed by an adaptive control law, which makes the control gain of the discontinuous feedback term adjustable. As described in [15], formulating the condition as a linear matrix inequality (LMI) makes the results more practical. The remainder of this paper is organized as follows. Section 2 presents the MNN model with random disturbances and some preliminaries; Section 3 establishes the master-slave (drive-response) synchronization under two control laws with retarded and advanced arguments in the mean-square sense; Section 4 concludes the paper.

2. Preliminaries and Model

First, we present the memristor model underlying the RMNN (recurrent memristive neural network) and the DMNN (delayed memristive neural network), and recall some definitions, remarks, and lemmas. Following [13] [14], the hysteresis feature of the memristor is described by

$U(h(t)) = \begin{cases} U^{-}(h(t)), & Dh(t) < 0, \\ U^{+}(h(t)), & Dh(t) > 0, \\ U(h(t^{-})), & Dh(t) = 0, \end{cases} \qquad (1)$

where $h(t)$ is the voltage applied to the memristor, $U(h(t))$ is the memductance of the (voltage-controlled) memristor, $Dh(t)$ is the left Dini derivative of $h(t)$ at $t$, and $U(h(t^{-}))$ denotes the left limit of $U(h(t))$; when $Dh(t) = 0$, the branches $U^{-}(h(t))$ and $U^{+}(h(t))$ coincide with this value. The memductance function may therefore be discontinuous.

As described in [11] [12], only two memristive states are needed, in which the memristor exhibits two completely different equilibrium resistances $R_0$ and $R_1$ with $R_0 \neq R_1$ [3]. The high-resistance state can switch quickly to the low-resistance state, and likewise the low-resistance state can switch quickly back to the high-resistance state, while consuming as little energy as possible. A memristor with this characteristic can therefore be modeled as:

$U(h(t)) = \begin{cases} \check{U}, & Dh(t) < 0, \\ \hat{U}, & Dh(t) > 0, \\ U(h(t^{-})), & Dh(t) = 0, \end{cases} \qquad (2)$

where $\check{U}$ and $\hat{U}$ are constants.
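To make the two-state switching rule (2) concrete, the following minimal sketch (in Python; the memductance levels, threshold logic, and driving voltage are illustrative assumptions, since the paper fixes no numerical values) simulates a memristor driven by a sinusoidal voltage:

```python
import numpy as np

# Sketch of the two-state memductance model (2); U_LOW / U_HIGH are
# hypothetical constants playing the roles of U-check and U-hat.
U_LOW, U_HIGH = 0.8, 1.2

def memductance(prev_U, dh):
    """Switch the memductance according to the sign of the (left Dini-)
    derivative of the applied voltage h(t); hold the state when dh == 0."""
    if dh < 0:
        return U_LOW
    if dh > 0:
        return U_HIGH
    return prev_U

# Example: a sinusoidal voltage drives the state back and forth.
t = np.linspace(0.0, 2.0 * np.pi, 200)
h = np.sin(t)
U = U_LOW
for k in range(1, len(t)):
    U = memductance(U, h[k] - h[k - 1])
print("final memductance:", U)
```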

In an MNN, the memristor replaces the resistor of the traditional neural network [3]. We consider the following stochastic recurrent memristive neural network with retarded and advanced argument [15]:

$dx(t) = \big[-Bx(t) + C(x(t))f(x(t)) + A(x(t))f(x(\gamma(t)))\big]dt + \sigma(t, x(t), x(\gamma(t)))\,d\omega(t), \qquad (3)$

where $x(t) \in \mathbb{R}^n$ is the state of the network; $f(\cdot)$ collects the neuron activation functions; $\gamma(t)$ is the deviating function; and $C(x) = [c_{ij}(f_j(x_j(t)) - x_i(t))]_{n \times n}$ and $A(x) = [a_{ij}(f_j(x_j(\gamma(t))) - x_j(t))]_{n \times n}$ are the two memristive connection weight matrices, the latter associated with the deviating argument. The functions $c_{ij}(\cdot)$ and $a_{ij}(\cdot)$ switch as in (2): $c_{ij}$ represents the synaptic strength at time $t$ and $a_{ij}$ that at $\gamma(t)$, and each weight can switch freely between two values, denoted $\{\grave{c}_{ij}, \acute{c}_{ij}\}$ and $\{\grave{a}_{ij}, \acute{a}_{ij}\}$. We set $\hat{c}_{ij} = \max\{\grave{c}_{ij}, \acute{c}_{ij}\}$, $\check{c}_{ij} = \min\{\grave{c}_{ij}, \acute{c}_{ij}\}$, $\hat{a}_{ij} = \max\{\grave{a}_{ij}, \acute{a}_{ij}\}$, and $\check{a}_{ij} = \min\{\grave{a}_{ij}, \acute{a}_{ij}\}$.

Let $L^2_{\mathcal{F}_0}([-r, 0]; \mathbb{R}^n)$ be the family of $\mathcal{F}_0$-measurable, $\mathbb{R}^n$-valued stochastic processes $\xi(s)$, $s \in [-r, 0]$, such that $\int_{-r}^{0} E\|\xi(s)\|^2\,ds < \infty$, where $E$ denotes the mathematical expectation.

The initial condition of (3) is $x(t) = \varphi(t)$ for $t \in [-r, 0]$, with $\varphi \in L^2_{\mathcal{F}_0}([-r, 0]; \mathbb{R}^n)$. A solution $x(t; \varphi)$ of (3) is a continuous process satisfying $x(s; \varphi) = \varphi(s)$ for all $s \in [-r, 0]$.
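Since the paper's numerical section is not reproduced here, the sketch below shows one way to generate a trajectory of (3) by the Euler-Maruyama method. All parameter values, the entrywise switching rule inside switched(), the linear diffusion term, and the choice of each $\xi_k$ as the interval midpoint are assumptions made only for illustration:

```python
import numpy as np

# Euler-Maruyama sketch for the drive system (3); every constant below is
# an illustrative stand-in, not a value from the paper.
rng = np.random.default_rng(0)

n, dt, T = 2, 1e-3, 10.0
theta = 0.5                                   # grid step theta_{k+1} - theta_k
B = np.diag([1.0, 1.0])
C_hat = np.array([[2.0, -0.1], [-4.0, 3.0]]); C_chk = 0.9 * C_hat
A_hat = np.array([[-1.5, -0.1], [-0.2, -2.5]]); A_chk = 0.9 * A_hat
f = np.tanh                                   # satisfies Assumption 1 with l_i = 1

def switched(W_hat, W_chk, x, x_arg):
    # Memristive weights: choose the upper or lower level entrywise from the
    # sign of f_j(x_j) - x_i, mimicking the switching rule sketched after (3).
    s = f(x_arg)[None, :] - x[:, None]
    return np.where(s <= 0, W_chk, W_hat)

def gamma(t):
    k = np.floor(t / theta)
    return (k + 0.5) * theta                  # xi_k taken as the midpoint

steps = int(T / dt)
x = np.zeros((steps + 1, n))
x[0] = [0.4, 0.6]
for i in range(steps):
    t = i * dt
    # index of x(gamma(t)); gamma(t) may point ahead of t (advanced part),
    # so this explicit scheme clamps it to the current step
    j = min(int(gamma(t) / dt), i)
    xg = x[j]
    drift = (-B @ x[i] + switched(C_hat, C_chk, x[i], x[i]) @ f(x[i])
             + switched(A_hat, A_chk, x[i], xg) @ f(xg))
    noise = 0.1 * x[i] * rng.normal(size=n) * np.sqrt(dt)  # sigma linear in x
    x[i + 1] = x[i] + drift * dt + noise
print("x(T) approx.", x[-1])
```

The clamping of the advanced argument is a simplification; a faithful scheme for the advanced part would iterate over each interval $[\theta_k, \theta_{k+1})$, as the theory in Section 3 assumes.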

Throughout this paper, the following assumptions are used to support our proofs.

Assumption 1. $f_i(0) = g_i(0) = \sigma_i(0) = 0$, and $|f_i(u)| \le \tau_i$ for all $u \in \mathbb{R}$, with $\tau_i > 0$. For all $u, v \in \mathbb{R}$ there exist positive constants $l_i > 0$, $G_i > 0$, $k_i > 0$ such that

$|f_i(v) - f_i(u)| \le l_i |v - u|,$

$|g_i(v) - g_i(u)| \le G_i |v - u|,$

$|\sigma_i(v) - \sigma_i(u)| \le k_i |v - u|.$
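For instance, the common choice $f_i = \tanh$ satisfies Assumption 1 with $l_i = 1$ (and is bounded by $\tau_i = 1$); the following quick numerical check is a sketch of this fact, not part of the paper:

```python
import numpy as np

# Empirical check that tanh is globally Lipschitz with constant 1,
# as required by Assumption 1.
rng = np.random.default_rng(1)
u, v = rng.normal(size=10_000), rng.normal(size=10_000)
ratio = np.abs(np.tanh(v) - np.tanh(u)) / np.abs(v - u)
assert ratio.max() <= 1.0
print("max empirical Lipschitz ratio:", ratio.max())
```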

Assumption 2.

(G1) There exists a constant $\theta^* > 0$ such that $\theta_k - \theta_{k-1} \le \theta^*$ for all $k \in \mathbb{N}$;

(G2) $2\theta^2[(N_1 + N_2)^2 + N_3^2] < 1$ and $6\theta^2(N_1^2 + N_2^2 + N_3^2)\,e^{6\theta^2(N_1^2 + N_3^2)} < 1$;

(G3) $N_4 - \mu N_5 > 0$; moreover, $|f_i(u)| \le \gamma_i$ holds for all $u \in \mathbb{R}$, where $\gamma_i > 0$;

(G4) the matrix $\mathrm{diag}(a_1, a_2, \ldots, a_n) - (|a_{ij}F_j + b_{ij}G_j|)_{n \times n}$ is a nonsingular M-matrix, where

$N_1 = \max_{1 \le i \le n}\Big\{\sum_{j=1}^{n}|B_{ij}|(G_1 + F_j)\Big\}, \quad N_2 = \max_{1 \le i \le n}\Big\{\sum_{j=1}^{n}|a_{ij}|\,G_2\Big\}, \quad N_3 = \max_{1 \le i \le n}\{k_i\}.$

Assumption 3. The noise intensity $\sigma: \mathbb{R}_+ \times \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^{n \times n}$ is uniformly Lipschitz continuous in the trace norm:

$\mathrm{trace}\big[(\sigma(t, v_1, u_1) - \sigma(t, v_2, u_2))^{\mathrm T}(\sigma(t, v_1, u_1) - \sigma(t, v_2, u_2))\big] \le \|M_1(v_1 - v_2)\|^2 + \|M_2(u_1 - u_2)\|^2,$

where $M_1$ and $M_2$ are constant matrices of appropriate dimensions.

Assumption 4. Let $C^{2,1}([-r, +\infty) \times \mathbb{R}^n; \mathbb{R}_+)$ denote the family of all nonnegative functions $V(t, x)$ on $[-r, +\infty) \times \mathbb{R}^n$ that are twice continuously differentiable in $x$ and once in $t$. For $V \in C^{2,1}([-r, +\infty) \times \mathbb{R}^n; \mathbb{R}_+)$, the weak infinitesimal operator $LV$ associated with the error system (6) below is

$LV(t, x) = V_t + V_x\big[-Be(t) + C(y(t))f(y(t)) - C(x(t))f(x(t)) + A(y(t))f(y(\gamma(t))) - A(x(t))f(x(\gamma(t))) + u(t)\big] + \tfrac{1}{2}\,\mathrm{trace}\big[\sigma^{\mathrm T} V_{xx}\,\sigma\big], \qquad (4)$

where $V_x = \partial V(t,x)/\partial x$, $V_t = \partial V(t,x)/\partial t$, $V_{xx} = \big(\partial^2 V(t,x)/\partial x_i \partial x_j\big)_{n \times n}$, and $\sigma = \sigma(t, e(t), e(\gamma(t)))$.

Remark 1. For $k \in \mathbb{N}$ and $t \in [\theta_k, \theta_{k+1})$, the deviating function is $\gamma(t) = \xi_k$ with $\xi_k \in (\theta_k, \theta_{k+1})$. System (1) is retarded on $(\xi_k, \theta_{k+1})$, where $t > \gamma(t)$, and advanced on $[\theta_k, \xi_k)$, where $t < \gamma(t)$; the deviating function $\gamma(t)$ therefore makes (1) a system of mixed type. In a drive-response configuration, two identical RMNNs with different initial conditions are called the drive RMNN and the response RMNN, and they are synchronized when the mean square of their state difference approaches 0 as time elapses. In this article, RMNN (3) is taken as the master (drive) system, and the slave (response) system is:

$dy(t) = \big[-By(t) + C(y(t))f(y(t)) + A(y(t))f(y(\gamma(t))) + u(t)\big]dt + \sigma(t, y(t), y(\gamma(t)))\,d\omega(t). \qquad (5)$

where $u(t) \in \mathbb{R}^n$ is the control vector to be designed. The initial condition of (5) is $y(t) = \phi(t)$, $t \in [-r, 0]$, with $\phi \in L^2_{\mathcal{F}_0}([-r, 0]; \mathbb{R}^n)$. The core of this article is to synchronize the master system with the slave system. Setting $e(t) = y(t) - x(t)$ and subtracting (3) from (5) yields the error system:

$de(t) = \big[-Be(t) + C(y(t))f(y(t)) - C(x(t))f(x(t)) + A(y(t))f(y(\gamma(t))) - A(x(t))f(x(\gamma(t))) + u(t)\big]dt + \sigma(t, e(t), e(\gamma(t)))\,d\omega(t). \qquad (6)$

It is easy to see that system (6) is equivalent to the following integral equation:

$e(t) = e(t_0) + \int_{t_0}^{t}\big[-Be(s) + C(y(s))f(y(s)) - C(x(s))f(x(s)) + A(y(s))f(y(\gamma(s))) - A(x(s))f(x(\gamma(s))) + u(s)\big]ds + \int_{t_0}^{t}\sigma(s, e(s), e(\gamma(s)))\,d\omega(s). \qquad (7)$

For $i = 1, 2, \ldots, n$ and $t \ge t_0$, we also have:

$y_i(t) = y_i(t_0) + \int_{t_0}^{t}\Big[-B_i y_i(s) + \sum_{j=1}^{n} c_{ij}(y_j(s))f_j(y_j(s)) + \sum_{j=1}^{n} a_{ij}(y_j(s))f_j(y_j(\gamma(s))) + I_i\Big]ds + \int_{t_0}^{t}\sigma_i(s, y(s), y(\gamma(s)))\,dB(s).$

Remark 2. Since the right-hand side of system (1) is discontinuous at the points $\theta_k$, classical results on stochastic differential equations do not directly apply to the deviating function $\gamma(t)$. A solution $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^{\mathrm T}$ of system (1) is a continuous function; at each point $\theta_k$, $k \in \mathbb{N}$, the one-sided derivatives of $x(t)$ exist, and $x(t)$ is differentiable on each interval $(\theta_k, \theta_{k+1})$. Here $\sigma(t, y(t), y(\gamma(t))) - \sigma(t, x(t), x(\gamma(t))) = \bar{\sigma}(t, e(t), e(\gamma(t)))$.
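The retarded/advanced classification of Remark 1 is easy to visualize. The snippet below assumes a uniform grid $\theta_k = 0.5k$ with $\xi_k$ at the interval midpoints (values chosen only for illustration):

```python
import numpy as np

# Illustrative deviating argument gamma(t) = xi_k on [theta_k, theta_{k+1});
# theta_k = 0.5k and midpoint xi_k are assumptions, not the paper's data.
theta = 0.5

def gamma(t):
    k = np.floor(t / theta)        # index of the interval containing t
    return (k + 0.5) * theta       # xi_k inside (theta_k, theta_{k+1})

for t in [0.10, 0.20, 0.40, 0.60, 0.85]:
    g = gamma(t)
    kind = "advanced (t < gamma)" if t < g else "retarded (t > gamma)"
    print(f"t = {t:.2f}, gamma(t) = {g:.2f} -> {kind}")
```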

The initial condition of the error system (6) is $e(t) = \psi(t) = \phi(t) - \varphi(t)$ for $t \in [-r, 0]$, with $\psi \in L^2_{\mathcal{F}_0}([-r, 0]; \mathbb{R}^n)$. The control vector $u(t)$ addresses the synchronization problem of RMNNs (3) and (5): synchronization in the mean square means $E\|e(t)\|^2 \to 0$ as $t \to +\infty$, with the influence of noise taken into account. We will also need the following lemma.

Lemma 1. For any $x, y \in \mathbb{R}^n$ and any positive definite matrix $S \in \mathbb{R}^{n \times n}$, $2x^{\mathrm T} y \le x^{\mathrm T} S x + y^{\mathrm T} S^{-1} y$.
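Lemma 1 is a completion-of-squares bound: it follows from $(S^{1/2}x - S^{-1/2}y)^{\mathrm T}(S^{1/2}x - S^{-1/2}y) \ge 0$. A quick numerical sanity check (a sketch with random data):

```python
import numpy as np

# Check 2 x^T y <= x^T S x + y^T S^{-1} y for a positive definite S.
rng = np.random.default_rng(2)
n = 4
M = rng.normal(size=(n, n))
S = M @ M.T + n * np.eye(n)            # positive definite by construction
for _ in range(1000):
    x, y = rng.normal(size=n), rng.normal(size=n)
    lhs = 2 * x @ y
    rhs = x @ S @ x + y @ np.linalg.solve(S, y)   # solve(S, y) = S^{-1} y
    assert lhs <= rhs + 1e-9
print("Lemma 1 verified on 1000 random samples")
```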

3. Main Results

In this section, we give a discontinuous control law under which the response RMNN (5) with time-delay feedback and the drive RMNN (3) are globally exponentially synchronized.

A Time-Delay Control Law with Constant Feedback Gains

The control vector $D(t)$ is designed as:

$D(t) = G_1 e(t) + G_2 e(\gamma(t)) + G_3\,\mathrm{sign}(e(t)), \qquad (8)$

where $G_1$, $G_2$, and $G_3 \in \mathbb{R}^{n \times n}$ are constant gain matrices to be determined later, and $G_3$ is diagonal, $G_3 = \mathrm{diag}\{g_{31}, g_{32}, \ldots, g_{3n}\}$. Substituting (8) into the error system (6) yields:

$de(t) = \big[-(B - G_1)e(t) + G_2 e(\gamma(t)) + G_3\,\mathrm{sign}(e(t)) + C(y(t))f(y(t)) - C(x(t))f(x(t)) + A(y(t))f(y(\gamma(t))) - A(x(t))f(x(\gamma(t)))\big]dt + \sigma(t, e(t), e(\gamma(t)))\,dw(t). \qquad (9)$
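A minimal sketch of the control law (8) follows; the gain matrices are placeholders chosen for illustration, not gains produced by the LMI conditions of Theorem 2 below:

```python
import numpy as np

# Sketch of control law (8): linear feedback in e(t), feedback in the
# deviated state e(gamma(t)), and a discontinuous sign-type term.
# G1, G2, G3 are illustrative placeholders (their signs and magnitudes
# would in practice be fixed by Theorem 2 or Corollary 4).
n = 2
G1 = 3.0 * np.eye(n)
G2 = 0.5 * np.eye(n)
G3 = 0.8 * np.eye(n)

def D(e_now, e_gamma):
    return G1 @ e_now + G2 @ e_gamma + G3 @ np.sign(e_now)

print(D(np.array([0.1, -0.2]), np.array([0.05, 0.0])))
```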

Define the diagonal matrices $Q = \mathrm{diag}\{q_1, q_2, \ldots, q_n\}$ and $J = \mathrm{diag}\{j_1, j_2, \ldots, j_n\}$ with $j_i = 2\sum_{j=1}^{n}\big(|\hat{c}_{ij} - \check{c}_{ij}| + |\hat{a}_{ij} - \check{a}_{ij}|\big)\gamma_j$, and the matrices $\ddot{C} = [\tilde{c}_{ij}]_{n \times n}$ and $\ddot{A} = [\tilde{a}_{ij}]_{n \times n}$ with $\tilde{c}_{ij} \in [\check{c}_{ij}, \hat{c}_{ij}]$ and $\tilde{a}_{ij} \in [\check{a}_{ij}, \hat{a}_{ij}]$. We can now state the following theorem.

Theorem 1. Let conditions (G1) and (G2) of Assumption 2 hold. Then any solution $y(t) = (y_1(t), y_2(t), \ldots, y_n(t))^{\mathrm T}$ of (5) satisfies

$E\|y(\gamma(t))\|^2 \le \mu E\|y(t)\|^2$

for all $t \in [0, +\infty)$, where $\mu$ is the constant determined by (G2) and made explicit at the end of the proof.

Proof. Fix $k \in \mathbb{N}$ and let $t \in [\theta_k, \theta_{k+1})$. It follows that

$y_i(t) = y_i(\xi_k) + \int_{\xi_k}^{t}\Big[-B_i y_i(s) + \sum_{j=1}^{n} c_{ij}(y_j(s))f_j(y_j(s)) + \sum_{j=1}^{n} a_{ij}(y_j(s))f_j(y_j(\xi_k)) + I_i\Big]ds + \int_{\xi_k}^{t}\sigma_i(s, y(s), y(\gamma(s)))\,dB(s),$

then

$E\|y(t)\|^2 = E\Big\{\sum_{i=1}^{n}|y_i(t)|\Big\}^2 \le E\Big\{\sum_{i=1}^{n}|y_i(\xi_k)| + \sum_{i=1}^{n}\Big|\int_{\xi_k}^{t}\Big[-B_i y_i(s) + \sum_{j=1}^{n} c_{ij}(y_j(s))f_j(y_j(s)) + \sum_{j=1}^{n} a_{ij}(y(s))f_j(y_j(\xi_k))\Big]ds\Big| + \sum_{i=1}^{n}\Big|\int_{\xi_k}^{t}\sigma_i(s, y(s), y(\gamma(s)))\,dB(s)\Big|\Big\}^2 \le E\Big\{\|y(\xi_k)\| + \theta N_2\|y(\xi_k)\| + N_1\int_{\xi_k}^{t}\|y(s)\|\,ds + N_3\Big\|\int_{\xi_k}^{t} y(s)\,dB(s)\Big\|\Big\}^2 \le 3(1 + \theta N_2)^2 E\|y(\xi_k)\|^2 + 3\theta(N_1^2 + N_3^2)\int_{\xi_k}^{t} E\|y(s)\|^2\,ds,$

so that, by the Gronwall inequality,

$E\|y(t)\|^2 \le 3(1 + \theta N_2)^2\,e^{3\theta^2(N_1^2 + N_3^2)}\,E\|y(\xi_k)\|^2 = \zeta E\|y(\xi_k)\|^2;$

hence,

$E\|y(\xi_k)\|^2 \le \frac{6}{1 - 6(\theta N_2)^2 - 3\theta^2(N_1^2 + N_3^2)}\,E\|y(t)\|^2 = \mu E\|y(t)\|^2,$

which completes the proof. $\square$

Theorem 2. Let Assumptions 1-3 hold. Under the control law (8), the RMNNs (3) and (5) achieve global asymptotic synchronization in the mean square if there exist a positive real number $\rho$, positive definite diagonal matrices $H = \mathrm{diag}\{h_1, h_2, \ldots, h_n\}$ and $R = \mathrm{diag}\{r_1, r_2, \ldots, r_n\}$, and positive definite matrices $P = [p_{ij}]_{n \times n}$, $O = [o_{ij}]_{n \times n}$ such that:

$N = \begin{bmatrix} \Pi_1 & HG_2 & H\ddot{C} + HR & H\ddot{A} \\ * & \Pi_2 & 0 & 0 \\ * & * & P - 2R & 0 \\ * & * & * & -(1-p)P \end{bmatrix} < 0, \qquad (10)$

$G_3 + J < 0, \qquad (11)$

$Q < \rho I. \qquad (12)$


where $\Pi_1 = H\big((-B + G_1) + (-B + G_1)^{\mathrm T}\big)H + O + \rho M_1^{\mathrm T}M_1$ and $\Pi_2 = \rho M_2^{\mathrm T}M_2 - (1-p)O$.
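Once candidate matrices are fixed, conditions (10)-(12) reduce to definiteness tests that can be checked numerically (or the LMIs can be solved with a semidefinite programming tool). The sketch below assembles a symmetric block matrix of the shape of $N$ from illustrative $2 \times 2$ blocks and tests negative definiteness; none of the block values come from the paper:

```python
import numpy as np

def is_neg_def(M, tol=1e-9):
    M = 0.5 * (M + M.T)                     # symmetrize before the eigen test
    return np.linalg.eigvalsh(M).max() < -tol

# Illustrative 2x2 blocks standing in for the entries of N in (10).
n = 2
Pi1 = -5.0 * np.eye(n); Pi2 = -4.0 * np.eye(n)
HG2 = 0.3 * np.eye(n); HCR = 0.2 * np.eye(n); HA = 0.1 * np.eye(n)
P = np.eye(n); R = 2.0 * np.eye(n); p = 0.5
Z = np.zeros((n, n))
N = np.block([
    [Pi1,   HG2, HCR,       HA],
    [HG2.T, Pi2, Z,         Z],
    [HCR.T, Z,   P - 2 * R, Z],
    [HA.T,  Z,   Z,         -(1 - p) * P],
])
print("N negative definite:", is_neg_def(N))
```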

Proof. Consider the Lyapunov functional:

$V(t) = \sum_{i=1}^{3} V_i(t), \qquad (16)$

where

$V_1(t) = e^{\mathrm T}(t) P e(t), \qquad (17)$

and $H$, $O$, and $P$ are the matrices given above; $x_t = x(t + s)$ for $t \ge 0$, $s \in [-r, 0]$. Applying the weak infinitesimal operator $L$ of the underlying stochastic process to $V_1(t)$ along the closed-loop system (9) under the control law (8), we obtain:

$LV_1(t) = 2e^{\mathrm T}(t)H\big[-(B - G_1)e(t) + G_2 e(\gamma(t)) + G_3\,\mathrm{sign}(e(t)) + \ddot{C}g(e(t)) + \ddot{A}g(e(\gamma(t))) + (C(y(t)) - \ddot{C})f(y(t)) + (\ddot{C} - C(x(t)))f(x(t)) + (A(y(t)) - \ddot{A})f(y(\gamma(t))) + (\ddot{A} - A(x(t)))f(x(\gamma(t)))\big] + \mathrm{trace}\big[\sigma^{\mathrm T}(t, e(t), e(\gamma(t)))\,P\,\sigma(t, e(t), e(\gamma(t)))\big]. \qquad (18)$

Since the activation functions $f_i(\cdot)$ are bounded, we have:

$2e^{\mathrm T}(t)H(C(y(t)) - \ddot{C})f(y(t)) = 2\sum_{i=1}^{n}\sum_{j=1}^{n} e_i(t)\,h_i\,(c_{ij}(y(t)) - \tilde{c}_{ij})\,f_j(y_j(t)) \le 2\sum_{i=1}^{n}\Big(\sum_{j=1}^{n} h_i|\hat{c}_{ij} - \check{c}_{ij}|\gamma_j\Big)|e_i(t)|. \qquad (19)$

Similar estimates give:

$2e^{\mathrm T}(t)H(\ddot{C} - C(x(t)))f(x(t)) \le 2\sum_{i=1}^{n}\Big(\sum_{j=1}^{n} h_i|\hat{c}_{ij} - \check{c}_{ij}|\gamma_j\Big)|e_i(t)|, \qquad (20)$

$2e^{\mathrm T}(t)H(A(y(t)) - \ddot{A})f(y(\gamma(t))) \le 2\sum_{i=1}^{n}\Big(\sum_{j=1}^{n} h_i|\hat{a}_{ij} - \check{a}_{ij}|\gamma_j\Big)|e_i(t)|, \qquad (21)$

and

$2e^{\mathrm T}(t)H(\ddot{A} - A(x(t)))f(x(\gamma(t))) \le 2\sum_{i=1}^{n}\Big(\sum_{j=1}^{n} h_i|\hat{a}_{ij} - \check{a}_{ij}|\gamma_j\Big)|e_i(t)|. \qquad (22)$

Besides, we have that

$2e^{\mathrm T}(t)HG_3\,\mathrm{sign}(e(t)) = 2\sum_{i=1}^{n} h_i g_{3i}|e_i(t)|. \qquad (23)$

Based on Assumption 3 and (12):

$\mathrm{trace}\big[\sigma^{\mathrm T}(t, e(t), e(\gamma(t)))\,H\,\sigma(t, e(t), e(\gamma(t)))\big] \le \rho\,\mathrm{trace}\big[\sigma^{\mathrm T}(t, e(t), e(\gamma(t)))\,\sigma(t, e(t), e(\gamma(t)))\big] \le \rho\big[e^{\mathrm T}(t)M_1^{\mathrm T}M_1 e(t) + e^{\mathrm T}(\gamma(t))M_2^{\mathrm T}M_2 e(\gamma(t))\big]. \qquad (24)$

From Assumption 1 we also obtain:

$g^{\mathrm T}(t)DLe(t) = \sum_{i=1}^{n} g_i(t)\,d_i l_i\,e_i(t) \ge \sum_{i=1}^{n} d_i g_i^2(t) = g^{\mathrm T}(t)Dg(t), \qquad (25)$

which implies that

$g^{\mathrm T}(t)DLe(t) - g^{\mathrm T}(t)Dg(t) \ge 0. \qquad (26)$

Let $\eta = [e^{\mathrm T}(t), e^{\mathrm T}(\gamma(t)), g^{\mathrm T}(e(t)), g^{\mathrm T}(e(\gamma(t)))]^{\mathrm T}$. Combining (10)-(12) and (18)-(26), we obtain:

$LV \le \sum_{i=1}^{2} LV_i(t) + g^{\mathrm T}(t)DLe(t) - g^{\mathrm T}(t)Dg(t) \le \eta^{\mathrm T} N \eta + 2\sum_{i=1}^{n} h_i(g_{3i} + j_i)|e_i(t)| \le 0. \qquad (27)$

Applying the Itô formula, we obtain:

$EV(t) - EV(0) = E\int_{0}^{t} LV(s)\,ds. \qquad (28)$

Based on (27), there exists a positive constant $\lambda_{\max}$ such that:

$E\|e(t)\|^2 \le EV(0) + E\int_{0}^{t} LV(s)\,ds \le EV(0) - \lambda_{\max}\int_{0}^{t} E\|e(s)\|^2\,ds. \qquad (29)$

As in [13] [14] [15] [16], it follows that the equilibrium point of the error system (9) is globally asymptotically stable in the mean square, as discussed in [17]. This completes the proof. $\square$

Remark 1. The analysis techniques in [17] [18] ignore the excitatory and inhibitory effects of neurons. Here, for given $G_1$, $G_2$, $G_3$, the synchronization condition is derived in LMI form while taking both properties of neurons, excitation and inhibition, into account. This has two advantages. First, the conditions on $G_1$, $G_2$, and $G_3$ can be verified by solving the LMIs. Second, global synchronization is obtained without repeatedly adjusting matrices or parameters by trial and error.

Remark 2. In the proof of Theorem 2 we adopted the following decomposition technique:

$C(y(t))f(y(t)) - C(x(t))f(x(t)) = \ddot{C}g(e(t)) + (C(y(t)) - \ddot{C})f(y(t)) + (\ddot{C} - C(x(t)))f(x(t)), \qquad (30)$

and

$A(y(t))f(y(\gamma(t))) - A(x(t))f(x(\gamma(t))) = \ddot{A}g(e(\gamma(t))) + (A(y(t)) - \ddot{A})f(y(\gamma(t))) + (\ddot{A} - A(x(t)))f(x(\gamma(t))). \qquad (31)$

With this decomposition, the results here are worth comparing with previous results on the synchronization of memristive neural network models in [19].
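Identity (30) telescopes because $g(e(t)) = f(y(t)) - f(x(t))$; a quick numerical check with random stand-in matrices (not the paper's data):

```python
import numpy as np

# Verify the decomposition (30) numerically: the C-double-dot terms cancel.
rng = np.random.default_rng(3)
n = 3
f = np.tanh
x, y = rng.normal(size=n), rng.normal(size=n)
Cx, Cy = rng.normal(size=(n, n)), rng.normal(size=(n, n))
C_mid = 0.5 * (Cx + Cy)   # plays the role of C-double-dot; any fixed matrix works

lhs = Cy @ f(y) - Cx @ f(x)
rhs = C_mid @ (f(y) - f(x)) + (Cy - C_mid) @ f(y) + (C_mid - Cx) @ f(x)
assert np.allclose(lhs, rhs)
print("decomposition identity holds")
```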

Remark 3. When the drive system and the response system are in different states, the realized subsystems also differ, since an MNN is a state-dependent switching system. In the proof of Theorem 2, the discontinuous term in the control law is used to offset the mismatch between the two RMNNs that would otherwise produce an anti-synchronization effect; the discontinuous feedback also helps reject disturbances.

Remark 4. To eliminate the chattering caused by the discontinuous control law (8), it can be modified as:

$D(t) = G_1 e(t) + G_2 e(\gamma(t)) + G_3 \frac{e(t)}{|e(t)| + \varepsilon},$

where

$\frac{e(t)}{|e(t)| + \varepsilon} = \Big(\frac{e_1(t)}{|e_1(t)| + \varepsilon_1}, \ldots, \frac{e_n(t)}{|e_n(t)| + \varepsilon_n}\Big)^{\mathrm T}$

and each $\varepsilon_i$, $i = 1, 2, \ldots, n$, is a sufficiently small positive constant.
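A short sketch contrasting the sign term with its smoothed version (the $\varepsilon_i$ values are illustrative):

```python
import numpy as np

# Smoothed replacement for sign(e) from Remark 4: e_i / (|e_i| + eps_i).
# Smaller eps_i tracks sign(e) more closely but reintroduces chattering
# near e = 0; the values below are assumptions for illustration.
eps = np.array([0.01, 0.01])

def smooth_sign(e):
    return e / (np.abs(e) + eps)

for e in [np.array([0.5, -0.5]), np.array([0.01, -0.001])]:
    print("sign:", np.sign(e), " smoothed:", np.round(smooth_sign(e), 3))
```

Letting $H = I_n$ in Theorem 2, we obtain the following corollary: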

Corollary 1. Let Assumptions 1-3 hold. Under the control law (8), the RMNNs (3) and (5) achieve global asymptotic synchronization in the mean square if there exist a positive definite diagonal matrix $R = \mathrm{diag}\{r_1, r_2, \ldots, r_n\}$ and positive definite matrices $P = [p_{ij}]_{n \times n}$, $O = [o_{ij}]_{n \times n}$ such that:

$N = \begin{bmatrix} \Pi_1 & G_2 & \ddot{C} + R & \ddot{A} \\ * & \Pi_2 & 0 & 0 \\ * & * & P - 2R & 0 \\ * & * & * & -(1-p)P \end{bmatrix} < 0, \qquad (32)$

$G_3 + J < 0, \qquad (33)$

where $\Pi_1 = (-B + G_1) + (-B + G_1)^{\mathrm T} + O + M_1^{\mathrm T}M_1$ and $\Pi_2 = M_2^{\mathrm T}M_2 - (1-p)O$.

Corollary 2. Let Assumptions 1-3 hold. Under the control law (8), the RMNNs (3) and (5) achieve global asymptotic synchronization in the mean square if there exist a positive real number $\rho$, positive definite diagonal matrices $H = \mathrm{diag}\{h_1, h_2, \ldots, h_n\}$ and $R = \mathrm{diag}\{r_1, r_2, \ldots, r_n\}$, a positive definite matrix $P \in \mathbb{R}^{n \times n}$, and a diagonal matrix $\bar{G}_3 \in \mathbb{R}^{n \times n}$ such that:

$N = \begin{bmatrix} \Pi_1 & \bar{G}_2 & P\ddot{C} + R & H\ddot{A} \\ * & \Pi_2 & 0 & 0 \\ * & * & P - 2R & 0 \\ * & * & * & -(1-p)P \end{bmatrix} < 0, \qquad (34)$

$\bar{G}_3 + J < 0, \qquad (35)$

and

$H \le \rho I, \qquad (36)$

where $\Pi_1 = -2HB + \bar{G}_1 + \bar{G}_1^{\mathrm T} + R + \rho M_1^{\mathrm T}M_1$ and $\Pi_2 = \rho M_2^{\mathrm T}M_2 - (1-p)O$. Moreover, $G_1 = H^{-1}\bar{G}_1$, $G_2 = H^{-1}\bar{G}_2$, and $G_3 = \bar{G}_3$.

Proof. The corollary follows directly from Theorem 2 by letting $\bar{G}_1 = HG_1$, $\bar{G}_2 = HG_2$, and $\bar{G}_3 = G_3$. $\square$

Remark 5. If we set $G_2 = 0$, the control law (8) is consistent with those in the literature and can still synchronize two MNNs with random disturbance in the mean square. The presence of the retarded and advanced argument in the drive-response MNNs changes the results, and drive-response MNNs become more meaningful due to the deviating function $\gamma(t)$.

Without random disturbance, the drive-response MNNs take the form:

$\frac{dx(t)}{dt} = -Bx(t) + C(x(t))f(x(t)) + A(x(t))f(x(\gamma(t))), \qquad (37)$

and

$\frac{dy(t)}{dt} = -By(t) + C(y(t))f(y(t)) + A(y(t))f(y(\gamma(t))) + u(t). \qquad (38)$

From Theorem 2 we obtain the following corollary:

Corollary 3. Let Assumptions 1 and 3 hold. Under the control law (8), the systems (37) and (38) achieve global asymptotic synchronization if there exist positive definite diagonal matrices $H = \mathrm{diag}\{h_1, h_2, \ldots, h_n\}$ and $R = \mathrm{diag}\{r_1, r_2, \ldots, r_n\}$ and positive definite matrices $P = [p_{ij}]_{n \times n}$, $O = [o_{ij}]_{n \times n}$ such that:

$N = \begin{bmatrix} \Pi_1 & HG_2 & H\ddot{C} + R & H\ddot{A} \\ * & -(1-p)O & 0 & 0 \\ * & * & P - 2R & 0 \\ * & * & * & -(1-p)P \end{bmatrix} < 0, \qquad (39)$

and

$G_3 + J < 0, \qquad (40)$

where $\Pi_1 = H(-B + G_1) + (-B + G_1)^{\mathrm T}H + O$.

Corollary 4. Take $G_1 = \mathrm{diag}\{g_{11}, \ldots, g_{1n}\}$ and $G_2 = 0$ in (8), and let Assumption 1 hold. The control law (8) achieves global exponential synchronization of the MNNs (37) and (38) if there exist positive constants $r_i$ ($i = 1, 2, \ldots, n$) such that:

$g_{1i} > c_i + \sum_{j=1}^{n}\frac{r_j}{r_i}\,l_i\big(|\ddot{c}_{ji}| + |\ddot{a}_{ji}|\big), \qquad (41)$

and

$g_{3i} \ge 2\sum_{j=1}^{n}\big(|\hat{c}_{ij} - \check{c}_{ij}| + |\hat{a}_{ij} - \check{a}_{ij}|\big)\gamma_j. \qquad (42)$

Proof. Consider the function:

$V(t, e_t) = e^{\delta t}\sum_{i=1}^{n}\sum_{j=1}^{n} r_j|\ddot{a}_{ji}| \int_{\gamma(t)}^{t}|g_i(e_i(s))|\,e^{\delta s}\,ds. \qquad (43)$

As noted in Remark 2, following the proof of Theorem 1 and using the decomposition technique of Theorem 2, the corollary is readily verified. $\square$
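Because (41) and (42) are explicit scalar inequalities, candidate gains for Corollary 4 can be computed directly. In the sketch below every matrix and constant is an illustrative stand-in, with the representative values $\ddot{c}_{ji}$, $\ddot{a}_{ji}$ taken as interval midpoints:

```python
import numpy as np

# Explicit gain selection per (41)-(42); all data are illustrative.
n = 2
c = np.array([1.0, 1.0])       # constants c_i in (41)
l = np.array([1.0, 1.0])       # Lipschitz constants of f (Assumption 1)
gam = np.array([1.0, 1.0])     # activation bounds |f_j| <= gamma_j
r = np.array([1.0, 1.0])       # positive weights r_i
C_hat = np.array([[2.0, -0.1], [-4.0, 3.0]]); C_chk = 0.9 * C_hat
A_hat = np.array([[-1.5, -0.1], [-0.2, -2.5]]); A_chk = 0.9 * A_hat
C_mid = 0.5 * (C_hat + C_chk)  # representative c-double-dot values
A_mid = 0.5 * (A_hat + A_chk)

g1 = np.array([c[i] + sum(r[j] / r[i] * l[i] * (abs(C_mid[j, i]) + abs(A_mid[j, i]))
                          for j in range(n)) + 1e-3 for i in range(n)])   # (41)
g3 = np.array([2 * sum((abs(C_hat[i, j] - C_chk[i, j])
                        + abs(A_hat[i, j] - A_chk[i, j])) * gam[j]
                       for j in range(n)) for i in range(n)])             # (42)
print("g1 =", g1)
print("g3 =", g3)
```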

Remark 6. Unlike traditional neural networks, which must be identified as either retarded or advanced, system (1) covers both cases at once. Meanwhile, a comparison with [8] shows that the effect of the deviating function in Theorem 1 is more pronounced in the advanced part than in the retarded part.

Remark 7. Comparing Theorems 1 and 2, it is worth noting that the gain parameters require no special treatment of the MNN, the additive noise, or the deviating function; uncertain disturbances can also be handled by this method.

Remark 8. Retarded and advanced arguments can describe hybrid nonlinear systems with internal switching mechanisms. Using the theory of such differential equations, substitutable, retarded, and advanced parameters can be introduced.

4. Conclusion

This paper studied a class of memristive neural networks with discontinuous neuron activations and deviating arguments. Based on non-smooth analysis, a generalized Lyapunov functional method and an equivalent transformation method were used to design state-feedback and pinning control schemes. Global synchronization results for drive-response memristive networks with discontinuous neuron activations were obtained; the controllers and the non-smooth Lyapunov functionals used here are new. For memristive networks with unbounded discontinuous neuron activations, the main difficulty of synchronization analysis is handling the amplified signals, and the equivalent transformation method is used for this purpose. Unlike previous work [3] [7] [8], the new approach also applies to continuously activated neural networks. Numerical examples further illustrate the feasibility of the obtained results. Using the Filippov solution framework and Lyapunov functions, the chain rules involved in the differentiation are derived, and sufficient conditions are given to ensure complete asymptotic synchronization of the considered model. Numerical simulation verifies the validity of the theoretical results when a discontinuously activated drive-response control system exhibits parameter mismatch between the neural networks. We also briefly discussed the pinning strategy, for example which neurons should be pinned first and how large the control gains should be chosen, so that finite-time synchronization of discrete neural networks can be achieved by a pinning control scheme.

Acknowledgements

This work is supported by the Natural Science Foundation of China under Grants 61976084 and 61773152.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Chua, L. (1971) Memristor—The Missing Circuit Element. IEEE Transactions on Circuit Theory, 18, 507-519.
https://doi.org/10.1109/TCT.1971.1083337
[2] Mathur, N.D. (2008) The Fourth Circuit Element. Nature, 455, E13.
https://doi.org/10.1038/nature07437
[3] Zhang, H.G., Wang, Z.S. and Liu, D.R. (2014) A Comprehensive Review of Stability Analysis of Continuous-Time Recurrent Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 25, 1229-1262.
https://doi.org/10.1109/TNNLS.2014.2317880
[4] Itoh, M. and Chua, L. (2014) Memristor Network. Springer, Berlin.
[5] Pershin, Y.V. and Di Ventra, M. (2010) Experimental Demonstration of Associative Memory with Memristive Neural Networks. Neural Networks, 23, 881-886.
https://doi.org/10.1016/j.neunet.2010.05.001
[6] Cao, J.D. and Wang, J. (2003) Global Asymptotic Stability of a General Class of Recurrent Neural Networks with Time-Varying Delays. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, 50, 34-44.
https://doi.org/10.1109/TCSI.2002.807494
[7] Cao, J.D. and Wang, J. (2005) Global Asymptotic and Robust Stability of Recurrent Neural Networks with Time Delays. IEEE Transactions on Circuits and Systems I: Regular Papers, 52, 417-426.
https://doi.org/10.1109/TCSI.2004.841574
[8] Wang, Z.S., Zhang, H.G. and Jiang, B. (2011) LMI-Based Approach for Global Asymptotic Stability Analysis of Recurrent Neural Networks with Various Delays and Structures. IEEE Transactions on Neural Networks, 22, 1032-1045.
https://doi.org/10.1109/TNN.2011.2131679
[9] Wang, Z.S., Liu, L., Shan, Q.-H. and Zhang, H.G. (2015) Stability Criteria for Recurrent Neural Networks with Time-Varying Delay Based on Secondary Delay Partitioning Method. IEEE Transactions on Neural Networks and Learning Systems, 26, 2589-2595.
https://doi.org/10.1109/TNNLS.2014.2387434
[10] Wang, Z.S., Ding, S.B., Huang, Z.J. and Zhang, H.G. (2015) Exponential Stability and Stabilization of Delayed Memristive Neural Networks Based on Quadratic Convex Combination Method. IEEE Transactions on Neural Networks and Learning Systems, 27, 2337-2350.
https://doi.org/10.1109/TNNLS.2015.2485259
[11] Hu, J. and Wang, J. (2010) Global Uniform Asymptotic Stability of Memristor-Based Recurrent Neural Networks with Time Delays. Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN), IEEE, 1-8.
[12] Wu, A.L. and Zeng, Z.G. (2012) Dynamic Behaviors of Memristor-Based Recurrent Neural Networks with Time-Varying Delays. Neural Networks, 36, 1-10.
[13] Cohen, M.A. and Grossberg, S. (1983) Absolute Stability of Global Pattern Formation and Parallel Memory Storage by Competitive Neural Networks. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13, 815-826.
https://doi.org/10.1109/TSMC.1983.6313075
[14] Duan, L., Huang, L.H. and Fang, X.W. (2017) Finite-Time Synchronization for Recurrent Neural Networks with Discontinuous Activations and Time-Varying Delays. Chaos, 27, Article ID: 013101.
https://doi.org/10.1063/1.4966177
[15] Lu, W.L. and Chen, T.P. (2003) New Conditions on Global Stability of Cohen-Grossberg Neural Networks. Neural Computation, 15, 1173-1189.
https://doi.org/10.1162/089976603765202703
[16] Rosenstein, M.T., Collins, J.J. and De Luca, C.J. (1993) A Practical Method for Calculating Largest Lyapunov Exponents from Small Data Sets. Physica D: Nonlinear Phenomena, 65, 117-134.
https://doi.org/10.1016/0167-2789(93)90009-P
[17] Xu, C.J. and Li, P.L. (2018) Periodic Dynamics for Memristor-Based Bidirectional Associative Memory Neural Networks with Leakage Delays and Time-Varying Delays. International Journal of Control, Automation and Systems, 16, 535-549.
https://doi.org/10.1007/s12555-017-0235-7
[18] Tubiana, J. and Monasson, R. (2017) Emergence of Compositional Representations in Restricted Boltzmann Machines. Physical Review Letters, 118, Article ID: 138301.
https://doi.org/10.1103/PhysRevLett.118.138301
[19] Tang, L.K., Lu, J-A., Lü, J.H. and Yu, X.H. (2012) Bifurcation Analysis of Synchronized Region in Complex Dynamical Networks. International Journal of Bifurcation and Chaos, 22, Article ID: 1250282.
https://doi.org/10.1142/S0218127412502823
