Adaptive Stochastic Synchronization of Uncertain Delayed Neural Networks

Abstract

This paper considers adaptive synchronization of uncertain neural networks with time delays and stochastic perturbations. A general adaptive controller is designed to deal with the difficulties induced by uncertain parameters and stochastic perturbations; the controller is less conservative since its control gains are adjusted automatically according to designed update laws. Based on Lyapunov stability theory and the Barbalat lemma, a sufficient condition for synchronization of the delayed neural networks is obtained by strict mathematical proof. Moreover, the obtained results are more general than most existing results for neural networks with known parameters, with or without stochastic disturbances. Finally, numerical simulations are presented to substantiate the theoretical results.

Share and Cite:

Wu, E., Wang, Y. and Luo, F. (2023) Adaptive Stochastic Synchronization of Uncertain Delayed Neural Networks. Journal of Applied Mathematics and Physics, 11, 2533-2544. doi: 10.4236/jamp.2023.119164.

1. Introduction

Recently, the dynamic behaviors of neural networks (NNs) have received great attention from academic researchers. Following the drive-response synchronization scheme proposed by Pecora and Carroll [1], many works on synchronization of neural networks have appeared, and it has become a hot topic. It should be noticed that the coefficients of the drive and response systems are identical in most existing results on drive-response synchronization of NNs [2] [3] [4]. However, in practical applications, it is unrealistic to assume that the coefficients of the drive-response system are exactly the same, due to various uncertainties. Therefore, it is of great significance to study the synchronization of neural networks with uncertain parameters.

NNs with stochastic disturbances are closer to real situations, since random factors exist in the process of signal transmission. Therefore, many efforts have been devoted to investigating synchronization of NNs with stochastic disturbances. For example, stochastic quasi-synchronization for delayed dynamical networks via intermittent control was studied in [5]. Exponential synchronization of delayed memristor-based neural networks with stochastic perturbation via nonlinear control was investigated in [6]. Fixed-time synchronization of reaction-diffusion fuzzy neural networks with stochastic perturbations was considered in [7]. In [8], synchronization of stochastic memristive neural networks with retarded and advanced argument was investigated. Fixed-time synchronization of neural networks with stochastic perturbations by state feedback control was studied in [9].

It is well known that drive-response NNs cannot achieve synchronization by themselves. Therefore, a suitable control scheme is needed, and the designed controller should be easy to implement and should reduce the control cost. Many different control schemes have been used to study synchronization of networks, such as state feedback control [10] [11], intermittent control [12] [13], and adaptive control [14] [15] [16]. Among them, adaptive control has received special attention because it can automatically adjust the control gains through adaptive laws. For example, adaptive exponential synchronization of delayed chaotic networks was studied in [17], and adaptive exponential synchronization of delayed neural networks with reaction-diffusion terms was investigated in [18]. It should be noted that in [14] [15] [16] [17] [18] the system parameters are known and identical, and there is no random interference in the drive-response systems. From the point of view of practical applications, systems with uncertain parameters and random disturbances are more realistic. However, the methods described above cannot be used to address this challenge. Therefore, it is necessary to find a new method to study the synchronization of delayed neural networks with uncertain parameters and random interference.

Based on the above discussion, this paper mainly focuses on synchronization of delayed NNs with uncertain parameters and stochastic perturbations. The main contributions are: 1) Based on the advantages of the adaptive control strategy, a general update law is proposed to further save communication resources and reduce control costs. 2) The designed controller can overcome the difficulties induced by uncertain parameters, time delays and stochastic disturbances simultaneously. 3) An algebraic analysis method is developed, and a sufficient condition is obtained to guarantee synchronization of the considered neural networks.

Notations. $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space. The superscript $T$ denotes the transpose of a matrix or vector. $I_n$ is the $n \times n$ identity matrix. $\|\cdot\|$ is the Euclidean norm and $|x|$ is the absolute value of $x$. For a vector $x = (x_1, x_2, \dots, x_n)^T \in \mathbb{R}^n$, $\|x\| = \sqrt{x^T x}$. $A = (a_{ij})_{n \times n}$ denotes an $n \times n$ matrix, and $\|A\| = \sqrt{\lambda_{\max}(A^T A)}$, where $\lambda_{\max}(A)$ is the largest eigenvalue of $A$. $\mathrm{diag}\{\cdots\}$ denotes a diagonal matrix. Moreover, let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P)$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t \ge 0}$ satisfying the usual conditions (i.e., the filtration contains all $P$-null sets and is right continuous). Denote by $L^p_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$ the family of all $\mathcal{F}_0$-measurable $C([-\tau, 0]; \mathbb{R}^n)$-valued random variables $\varpi = \{\varpi(s) : -\tau \le s \le 0\}$ such that $\sup_{-\tau \le s \le 0} E\|\varpi(s)\|^p < \infty$, where $E\{\cdot\}$ stands for the mathematical expectation operator with respect to the given probability measure $P$. $\mathrm{Trace}\{\cdot\}$ is the trace of a matrix.
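As a concrete illustration, the matrix norm used throughout, $\|A\| = \sqrt{\lambda_{\max}(A^T A)}$, is the spectral norm, i.e. the largest singular value of $A$; the matrix in the sketch below is an arbitrary illustrative choice.

```python
import numpy as np

# ||A|| = sqrt(lambda_max(A^T A)) is the spectral norm, i.e. the largest
# singular value of A. The matrix here is an arbitrary illustrative choice.
A = np.array([[3.0, 4.0],
              [0.0, 5.0]])
norm_from_eig = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))
norm_spectral = np.linalg.norm(A, 2)  # NumPy's spectral (2-)norm
assert abs(norm_from_eig - norm_spectral) < 1e-10
```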

2. Preliminary Notes

Consider the following uncertain delayed neural network model:

$$dx(t) = [-C(t)x(t) + A(t)f(x(t)) + B(t)f(x(t-\theta))]\,dt + \delta(t, x(t), x(t-\theta))\,dW(t), \quad (1)$$

in which $x(t) = (x_1(t), x_2(t), \dots, x_n(t))^T \in \mathbb{R}^n$ represents the state vector; $\theta > 0$ is the time delay; $C(t)$, $A(t)$, $B(t)$ are uncertain parameter matrices, in which $C(t) = \mathrm{diag}(C_1(t), C_2(t), \dots, C_n(t)) \in \mathbb{R}^{n \times n}$ with $C_i(t) > 0$ the self-inhibition of the $i$th neuron, and $A(t) = (a_{ij}(t))_{n \times n} \in \mathbb{R}^{n \times n}$ and $B(t) = (b_{ij}(t))_{n \times n} \in \mathbb{R}^{n \times n}$ are the non-delayed and delayed connection weight matrices, respectively. $f(x(t))$ is a real-valued activation function; $\delta(t, x(t), x(t-\theta))$ is the noise intensity matrix function. $W(t) = (W_1(t), W_2(t), \dots, W_n(t))^T$ is a vector-form Wiener process defined on a complete probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P)$. The initial condition of system (1) is $x(t) = \gamma(t)$, $t \in [-\theta, 0]$.

Taking NN (1) as the drive system, the controlled response NN is given as follows:

$$dy(t) = [-\bar{C}(t)y(t) + \bar{A}(t)f(y(t)) + \bar{B}(t)f(y(t-\theta)) + U(t)]\,dt + \delta(t, y(t), y(t-\theta))\,dW(t), \quad (2)$$

in which $y(t) = (y_1(t), y_2(t), \dots, y_n(t))^T \in \mathbb{R}^n$ represents the state vector; $\bar{C}(t)$, $\bar{A}(t)$ and $\bar{B}(t)$ are uncertain parameter matrices, in which $\bar{C}(t) = \mathrm{diag}(\bar{C}_1(t), \bar{C}_2(t), \dots, \bar{C}_n(t)) \in \mathbb{R}^{n \times n}$ with $\bar{C}_i(t) > 0$ the self-inhibition of the $i$th neuron, and $\bar{A}(t) = (\bar{a}_{ij}(t))_{n \times n} \in \mathbb{R}^{n \times n}$ and $\bar{B}(t) = (\bar{b}_{ij}(t))_{n \times n} \in \mathbb{R}^{n \times n}$ are the non-delayed and delayed connection weight matrices, respectively. $\delta(t, y(t), y(t-\theta))$ is the noise intensity matrix function. $U(t) = (u_1(t), u_2(t), \dots, u_n(t))^T$ is the controller to be designed. The initial condition of system (2) is $y(t) = \kappa(t)$, $t \in [-\theta, 0]$. The other parameters are the same as in (1).

Let $e(t) = y(t) - x(t)$. The error system obtained from systems (1) and (2) is as follows:

$$de(t) = [-\bar{C}(t)e(t) + \bar{A}(t)g(e(t)) + \bar{B}(t)g(e(t-\theta)) + \varpi(t) + U(t)]\,dt + \delta(t, e(t), e(t-\theta))\,dW(t), \quad (3)$$

in which $\varpi(t) = (C(t) - \bar{C}(t))x(t) + (\bar{A}(t) - A(t))f(x(t)) + (\bar{B}(t) - B(t))f(x(t-\theta))$, $g(e(t)) = f(y(t)) - f(x(t))$, $g(e(t-\theta)) = f(y(t-\theta)) - f(x(t-\theta))$, and $\delta(t, e(t), e(t-\theta)) = \delta(t, y(t), y(t-\theta)) - \delta(t, x(t), x(t-\theta))$. The initial condition is $e(t) = \kappa(t) - \gamma(t)$, $t \in [-\theta, 0]$.

Inspired by [19], the uncertain parameters $A(t)$, $B(t)$, $C(t)$ and $\bar{A}(t)$, $\bar{B}(t)$, $\bar{C}(t)$ in (1) and (2) can be written in the following form:

$$[A(t) \;\; B(t) \;\; C(t)] = [A \;\; B \;\; C] + G F(t) [E_A \;\; E_B \;\; E_C], \quad (4)$$

$$[\bar{A}(t) \;\; \bar{B}(t) \;\; \bar{C}(t)] = [\bar{A} \;\; \bar{B} \;\; \bar{C}] + G F(t) [E_{\bar{A}} \;\; E_{\bar{B}} \;\; E_{\bar{C}}], \quad (5)$$

where $C$, $A$, $B$, $E_A$, $E_B$, $E_C$, $G$, $\bar{C}$, $\bar{A}$, $\bar{B}$, $E_{\bar{A}}$, $E_{\bar{B}}$, $E_{\bar{C}}$ are known constant matrices, $C = \mathrm{diag}(c_1, c_2, \dots, c_n)$, $\bar{C} = \mathrm{diag}(\bar{c}_1, \bar{c}_2, \dots, \bar{c}_n)$ with $c_i > 0$, $\bar{c}_i > 0$, $i = 1, 2, \dots, n$, and $F(t)$ is an unknown matrix satisfying $F^T(t)F(t) \le I$.
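The decomposition (4) can be coded directly. A minimal Python sketch follows, in which all numerical values ($A$, $G$, $E_A$, and the choice $F(t) = \sin(t)\,I$) are illustrative assumptions rather than parameters taken from this paper:

```python
import numpy as np

# Sketch of the uncertainty decomposition A(t) = A + G F(t) E_A from (4).
# All numerical values below are illustrative assumptions.
n = 2
A_nom = np.array([[1.2, 0.5],
                  [6.2, 5.2]])      # nominal constant part (assumed)
G = 0.06 * np.eye(n)                # known scaling matrix (assumed)
E_A = np.eye(n)                     # known structure matrix (assumed)

def F(t):
    # An admissible uncertainty: F(t)^T F(t) <= I holds since |sin t| <= 1.
    return np.sin(t) * np.eye(n)

def A_t(t):
    # Time-varying uncertain matrix A(t) = A + G F(t) E_A.
    return A_nom + G @ F(t) @ E_A

# Admissibility check: I - F(t)^T F(t) must be positive semidefinite.
t0 = 0.7
M = np.eye(n) - F(t0).T @ F(t0)
assert np.all(np.linalg.eigvalsh(M) >= -1e-12)
```

With these assumed choices the perturbation enters only along the diagonal, which loosely mirrors the sinusoidal diagonal perturbations used in the simulation section.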

To proceed our study, the following assumptions are needed.

(H1) There exist positive constants $h_i$, $i = 1, 2, \dots, n$, such that

$$|f_i(u) - f_i(v)| \le h_i |u - v|.$$

(H2) There exist nonnegative constants $S_{i1}$ and $S_{i2}$ such that

$$|x_i(t)| \le S_{i1}, \quad |f_i(x(t))| \le S_{i2}, \quad i = 1, 2, \dots, n.$$

(H3) There exist positive constants $\mu_1$ and $\mu_2$ such that

$$\mathrm{Trace}[\delta^T(t, e(t), e(t-\theta))\,\delta(t, e(t), e(t-\theta))] \le \mu_1 \|e(t)\|^2 + \mu_2 \|e(t-\theta)\|^2.$$

Remark 1. By choosing appropriate parameters for systems (1) and (2), the considered networks (1) and (2) can exhibit chaotic behavior. Moreover, in practical applications, the parameters of the considered model can be obtained by training. Based on the selected parameters and MATLAB tools, the assumptions can be checked numerically. The reasonableness of the parameters and the given assumptions will be verified in Section 4.

Definition 1. If there is a controller $U(t)$ such that the solutions $x(t)$ and $y(t)$ satisfy

$$\lim_{t \to \infty} E[\|y(t) - x(t)\|] = \lim_{t \to \infty} E[\|e(t)\|] = 0$$

for arbitrary initial conditions, then system (3) is globally asymptotically stable; that is to say, system (2) is globally asymptotically synchronized with (1).

Lemma 1. [20] Given any vectors $x, y \in \mathbb{R}^n$ and a positive definite matrix $Q$, the inequality

$$2x^T y \le x^T Q x + y^T Q^{-1} y$$

holds.
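In its standard form, with $Q^{-1}$ weighting the second term, this Young-type inequality can be spot-checked numerically on random instances; the sizes and seed in the sketch below are arbitrary choices:

```python
import numpy as np

# Spot-check of the Young-type inequality 2 x^T y <= x^T Q x + y^T Q^{-1} y
# for random vectors x, y and random positive definite Q (fixed seed).
rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.normal(size=3)
    y = rng.normal(size=3)
    M = rng.normal(size=(3, 3))
    Q = M @ M.T + np.eye(3)              # positive definite by construction
    lhs = 2.0 * (x @ y)
    rhs = x @ Q @ x + y @ np.linalg.solve(Q, y)
    assert lhs <= rhs + 1e-9
```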

Lemma 2. [21] If $f(t) : \mathbb{R} \to \mathbb{R}_+$ is uniformly continuous for $t \ge 0$ and the limit of the integral

$$\lim_{t \to \infty} \int_0^t f(s)\,ds$$

exists and is finite, then $\lim_{t \to \infty} f(t) = 0$.
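As a simple numerical illustration of Lemma 2 (the Barbalat lemma), take $f(t) = e^{-t}$: it is uniformly continuous, its integral over $[0, \infty)$ converges to 1, and indeed $f(t) \to 0$. A short check, with the grid and horizon chosen arbitrarily:

```python
import numpy as np

# Barbalat-lemma illustration with f(t) = exp(-t): the integral of f over
# [0, T] approaches 1 as T grows, and f(T) tends to 0.
t = np.linspace(0.0, 40.0, 400001)
f = np.exp(-t)
# Left Riemann sum approximation of the integral of f over [0, 40].
integral = float(np.sum(f[:-1] * np.diff(t)))
assert abs(integral - 1.0) < 1e-3   # exact value is 1 - exp(-40), about 1
assert f[-1] < 1e-15                # f(40) = exp(-40) is essentially 0
```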

3. Main Results

In this section, based on the Lyapunov stability theorem, a sufficient condition for the synchronization of the considered neural networks is obtained, together with a rigorous mathematical proof.

Theorem 1. Suppose that assumptions (H1) - (H3) are satisfied. Then the neural networks (1) and (2) can achieve global asymptotic synchronization under the following adaptive controller

$$u_i(t) = -l_i(t)e_i(t) - \alpha \beta_i(t)\,\mathrm{sign}(e_i(t)), \quad i = 1, 2, \dots, n, \quad (6)$$

and adaptive law

$$\begin{cases} \dot{l}_i(t) = \varepsilon_i e_i^2(t), \\ \dot{\beta}_i(t) = \xi_i |e_i(t)|, \end{cases}$$

where $\varepsilon_i > 0$, $\xi_i > 0$, $i = 1, 2, \dots, n$, and $\alpha > 1$.
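For implementation, the controller (6) and the update laws can be discretized with a forward Euler step. The sketch below uses values for $\alpha$, $\varepsilon_i$, $\xi_i$ and the initial gains that mirror the choices later used in Section 4, and is otherwise illustrative:

```python
import numpy as np

# Euler discretization of controller (6), u_i = -l_i e_i - alpha beta_i sign(e_i),
# and the adaptive laws l_i' = eps_i e_i^2, beta_i' = xi_i |e_i|.
# Parameter values mirror the simulation section and are otherwise illustrative.
dt, alpha = 0.001, 3.5
eps_i, xi_i = 2.0, 1.5
l_i, beta_i = 0.1, 0.2               # initial gains l_i(0), beta_i(0)

def control(e_i, l_i, beta_i):
    return -l_i * e_i - alpha * beta_i * np.sign(e_i)

def step_gains(e_i, l_i, beta_i):
    return l_i + dt * eps_i * e_i**2, beta_i + dt * xi_i * abs(e_i)

e_i = 0.5                            # a sample synchronization error
u_i = control(e_i, l_i, beta_i)
l_next, beta_next = step_gains(e_i, l_i, beta_i)
# Both gains are nondecreasing, since their update rates are nonnegative.
assert l_next >= l_i and beta_next >= beta_i
assert u_i < 0                       # the feedback opposes a positive error
```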

Proof. Consider the following Lyapunov function:

$$V(t) = V_1(t) + V_2(t) + V_3(t), \quad (7)$$

where

$$V_1(t) = \frac{1}{2} e^T(t)e(t),$$

$$V_2(t) = \frac{1}{2} \int_{t-\theta}^{t} e^T(s) \Gamma e(s)\,ds,$$

$$V_3(t) = \sum_{i=1}^{n} \frac{1}{2\varepsilon_i}(l_i(t) - k_i)^2 + \sum_{i=1}^{n} \frac{1}{2\xi_i}(m_i - \beta_i(t))^2,$$

$\Gamma$ is a positive diagonal matrix, and $k_i$ and $m_i$ are constants to be determined.

Differentiating $V_1(t)$ along the trajectories of (3) by Itô's formula and taking expectations on both sides, one obtains

$$\begin{aligned}
E\left[\frac{dV_1(t)}{dt}\right] &= E\left\{ e^T(t)\left[-\bar{C}(t)e(t) + \bar{A}(t)g(e(t)) + \bar{B}(t)g(e(t-\theta)) + \varpi(t) - l(t)e(t) - \alpha\beta(t)\,\mathrm{sign}(e(t))\right] \right. \\
&\qquad \left. + \frac{1}{2}\,\mathrm{Trace}\left[\delta^T(t, e(t), e(t-\theta))\,\delta(t, e(t), e(t-\theta))\right] \right\} \\
&\le E\left\{ e^T(t)\left[-\bar{C}(t)e(t) + \bar{A}(t)g(e(t)) + \bar{B}(t)g(e(t-\theta)) + \varpi(t) - l(t)e(t) - \alpha\beta(t)\,\mathrm{sign}(e(t))\right] \right. \\
&\qquad \left. + \frac{1}{2}\left(\mu_1\|e(t)\|^2 + \mu_2\|e(t-\theta)\|^2\right) \right\}, \quad (8)
\end{aligned}$$

where

$$l(t) = \mathrm{diag}(l_1(t), l_2(t), \dots, l_n(t)), \quad \beta(t) = \mathrm{diag}(\beta_1(t), \beta_2(t), \dots, \beta_n(t)).$$

By (5), (H1) and Lemma 1, one gets

$$-e^T(t)\bar{C}(t)e(t) = -e^T(t)(\bar{C} + GF(t)E_{\bar{C}})e(t) \le -e^T(t)\bar{C}e(t) + e^T(t)(G^T G + E_{\bar{C}}^T E_{\bar{C}})e(t), \quad (9)$$

$$\begin{aligned}
e^T(t)\bar{A}(t)g(e(t)) &= e^T(t)\bar{A}g(e(t)) + e^T(t)GF(t)E_{\bar{A}}g(e(t)) \\
&\le \frac{1}{2}\left(e^T(t)\bar{A}\bar{A}^T e(t) + g^T(e(t))g(e(t)) + e^T(t)GG^T e(t) + g^T(e(t))E_{\bar{A}}^T E_{\bar{A}}\, g(e(t))\right) \\
&\le \frac{1}{2}\left(e^T(t)\bar{A}\bar{A}^T e(t) + e^T(t)HH^T e(t) + e^T(t)GG^T e(t) + \|E_{\bar{A}}\|\, e^T(t)HH^T e(t)\right), \quad (10)
\end{aligned}$$

and

$$e^T(t)\bar{B}(t)g(e(t-\theta)) \le \frac{1}{2}\left(e^T(t)\bar{B}\bar{B}^T e(t) + e^T(t)GG^T e(t) + e^T(t-\theta)HH^T e(t-\theta) + \|E_{\bar{B}}\|\, e^T(t-\theta)HH^T e(t-\theta)\right). \quad (11)$$

By (9) - (11), one obtains from (8)

$$E\left[\frac{dV_1(t)}{dt}\right] \le E\left\{ e^T(t)(\Omega_1 - l(t))e(t) + e^T(t-\theta)\Omega_2 e(t-\theta) + e^T(t)\left[\varpi(t) - \alpha\beta(t)\,\mathrm{sign}(e(t))\right] \right\}, \quad (12)$$

where $\Omega_1 = -\bar{C} + 2G^T G + E_{\bar{C}}^T E_{\bar{C}} + \frac{1}{2}\left[\bar{A}\bar{A}^T + \bar{B}\bar{B}^T + (\|E_{\bar{A}}\| + 1)HH^T + \mu_1 I_n\right]$, $\Omega_2 = \frac{1}{2}\left[(\|E_{\bar{B}}\| + 1)HH^T + \mu_2 I_n\right]$, and $H = \mathrm{diag}(h_1, h_2, \dots, h_n)$.

Differentiating V 2 ( t ) and V 3 ( t ) along trajectories of (3) and taking the expectations on both sides, one obtains

$$E\left[\frac{dV_2(t)}{dt}\right] = E\left[\frac{1}{2}e^T(t)\Gamma e(t) - \frac{1}{2}e^T(t-\theta)\Gamma e(t-\theta)\right], \quad (13)$$

$$E\left[\frac{dV_3(t)}{dt}\right] = E\left[\sum_{i=1}^{n}(l_i(t) - k_i)e_i^2(t) - \sum_{i=1}^{n}(m_i - \beta_i(t))|e_i(t)|\right]. \quad (14)$$

Taking $\Gamma = (\|E_{\bar{B}}\| + 1)HH^T + \mu_2 I_n$ and $k_i = \lambda_{\max}(\Omega_1 + \Gamma) + 1$, $i = 1, 2, \dots, n$, one derives from (8) and (12) - (14) that

$$E\left[\frac{dV(t)}{dt}\right] \le E\left\{ -e^T(t)e(t) + e^T(t)\left[\varpi(t) - \alpha\beta(t)\,\mathrm{sign}(e(t))\right] - \sum_{i=1}^{n}(m_i - \beta_i(t))|e_i(t)| \right\}. \quad (15)$$

Taking $m_i > (\|\tilde{C}\| + \|G\|\|E_{\tilde{C}}\|)S_{i1} + (\|\tilde{A}\| + \|G\|\|E_{\tilde{A}}\| + \|\tilde{B}\| + \|G\|\|E_{\tilde{B}}\|)S_{i2}$, one obtains from (3) - (5), (15) and (H2) that

$$\begin{aligned}
& e^T(t)\left[\varpi(t) - \alpha\beta(t)\,\mathrm{sign}(e(t))\right] - \sum_{i=1}^{n}(m_i - \beta_i(t))|e_i(t)| \\
&= e^T(t)\left[(C(t) - \bar{C}(t))x(t) + (\bar{A}(t) - A(t))f(x(t)) + (\bar{B}(t) - B(t))f(x(t-\theta))\right] - \sum_{i=1}^{n} m_i|e_i(t)| + (1 - \alpha)\sum_{i=1}^{n}\beta_i(t)|e_i(t)| \\
&\le e^T(t)\left[(\tilde{C} + GF(t)E_{\tilde{C}})x(t) + (\tilde{A} + GF(t)E_{\tilde{A}})f(x(t)) + (\tilde{B} + GF(t)E_{\tilde{B}})f(x(t-\theta))\right] - \sum_{i=1}^{n} m_i|e_i(t)| \\
&\le \sum_{i=1}^{n}|e_i(t)|\left[\|\tilde{C} + GF(t)E_{\tilde{C}}\|S_{i1} + \|\tilde{A} + GF(t)E_{\tilde{A}}\|S_{i2} + \|\tilde{B} + GF(t)E_{\tilde{B}}\|S_{i2}\right] - \sum_{i=1}^{n} m_i|e_i(t)| \\
&\le \sum_{i=1}^{n}|e_i(t)|\left[(\|\tilde{C}\| + \|G\|\|E_{\tilde{C}}\|)S_{i1} + (\|\tilde{A}\| + \|G\|\|E_{\tilde{A}}\| + \|\tilde{B}\| + \|G\|\|E_{\tilde{B}}\|)S_{i2}\right] - \sum_{i=1}^{n} m_i|e_i(t)| \le 0, \quad (16)
\end{aligned}$$

where $\tilde{C} = C - \bar{C}$, $\tilde{A} = \bar{A} - A$, $\tilde{B} = \bar{B} - B$, $E_{\tilde{C}} = E_C - E_{\bar{C}}$, $E_{\tilde{A}} = E_{\bar{A}} - E_A$, $E_{\tilde{B}} = E_{\bar{B}} - E_B$.

Therefore, by (15) and (16), it is known that

$$E\left[\frac{dV(t)}{dt}\right] \le -E\left[e^T(t)e(t)\right]. \quad (17)$$

Integrating (17) from 0 to t yields

$$E[V(t) - V(0)] \le -E\left[\int_0^t e^T(s)e(s)\,ds\right], \quad (18)$$

then

$$E\left[\int_0^t e^T(s)e(s)\,ds\right] \le E[V(0)].$$

Therefore

$$E\left[\lim_{t \to \infty}\int_0^t e^T(s)e(s)\,ds\right] \le E[V(0)].$$

By Lemma 2, we obtain that

$$E\left[\lim_{t \to \infty}\|e(t)\|^2\right] = 0,$$

which means that

$$E\left[\lim_{t \to \infty}\|e(t)\|\right] = E\left[\lim_{t \to \infty}\|y(t) - x(t)\|\right] = 0.$$

According to Definition 1, system (2) is globally asymptotically synchronized with (1). The proof is completed.

Remark 2. Although the parameters of systems (1) and (2) are uncertain and nonidentical, synchronization of the NNs is achieved under the adaptive controller by Theorem 1. Compared with the results of [14] [15] [16] [17] [18], the result obtained in this paper is more general. In other words, the analysis method of Theorem 1 can also be applied to investigate synchronization of drive-response neural networks with known or identical parameters.

4. Numerical Simulations

In this section, numerical examples are given to verify Theorem 1. For system (1), we give the specific parameter conditions. For $x(t) = (x_1(t), x_2(t))^T$, let $f(x) = (\tanh(x_1(t)), \tanh(x_2(t)))^T$, $\theta = 1$, and

$$C(t) = \begin{bmatrix} 2 + \sin t & 0 \\ 0 & 1 + 0.06\cos t \end{bmatrix}, \quad
A(t) = \begin{bmatrix} 1.2 + 0.06\sin t & 0.5 \\ 6.2 & 5.2 + 0.06\cos t \end{bmatrix}, \quad
B(t) = \begin{bmatrix} 1.4 + 0.06\sin t & 0.2 \\ 0.1 & 3.5 + 0.06\cos t \end{bmatrix}.$$

The noise intensity matrix function is $\delta(t, x(t), x(t-\theta)) = \mathrm{diag}(x(t-\theta), x(t))$. The initial condition is $x(t) = (0.2, 0.4)^T$, $t \in [-1, 0]$. Figure 1 shows the chaotic trajectory of system (1) with and without stochastic disturbances.
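The drive trajectory in Figure 1(a) can be reproduced with a simple Euler scheme for the delayed system (noise switched off). The sketch below takes the matrix entries as printed above; since sign conventions may differ in the original typesetting, the qualitative picture should be treated as indicative:

```python
import numpy as np

# Euler simulation of drive system (1) with noise switched off, theta = 1.
# Matrix entries follow the values printed above, taken as indicative.
dt, theta, T = 0.001, 1.0, 20.0
d, N = int(theta / dt), int(T / dt)        # delay in steps, number of steps

def C(t): return np.diag([2 + np.sin(t), 1 + 0.06 * np.cos(t)])
def A(t): return np.array([[1.2 + 0.06 * np.sin(t), 0.5],
                           [6.2, 5.2 + 0.06 * np.cos(t)]])
def B(t): return np.array([[1.4 + 0.06 * np.sin(t), 0.2],
                           [0.1, 3.5 + 0.06 * np.cos(t)]])

# Constant history x(t) = (0.2, 0.4)^T on [-1, 0], stored step by step.
hist = [np.array([0.2, 0.4])] * (d + 1)
for k in range(N):
    t = k * dt
    x, x_del = hist[-1], hist[-1 - d]      # x(t) and x(t - theta)
    dx = -C(t) @ x + A(t) @ np.tanh(x) + B(t) @ np.tanh(x_del)
    hist.append(x + dt * dx)

traj = np.array(hist[d:])
# The self-inhibition -C(t)x keeps the trajectory bounded (tanh is bounded),
# which is consistent with assumption (H2).
assert np.all(np.isfinite(traj)) and np.max(np.abs(traj)) < 100.0
```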

For system (2), let $y(t) = (y_1(t), y_2(t))^T$, $f(y) = (\tanh(y_1(t)), \tanh(y_2(t)))^T$, $\theta = 1$, and

$$\bar{C}(t) = \begin{bmatrix} 1.3 + 0.12\sin t & 0 \\ 0 & 2.8 + 0.06\cos t \end{bmatrix}, \quad
\bar{A}(t) = \begin{bmatrix} 2.5 + 0.12\sin t & 1.2 \\ 1.8 & 2 + 0.12\cos t \end{bmatrix}, \quad
\bar{B}(t) = \begin{bmatrix} 2.8 + 0.12\sin t & 0.1 \\ 0.3 & 6 + 0.12\cos t \end{bmatrix}.$$


Figure 1. Chaotic trajectory of system (1). (a) Without stochastic disturbances; (b) With stochastic disturbances.

The noise intensity matrix function is $\delta(t, y(t), y(t-\theta)) = \mathrm{diag}(y(t-\theta), y(t))$. The initial condition is $y(t) = (1, 1.3)^T$, $t \in [-1, 0]$. Figure 2 shows the chaotic trajectory of system (2) with and without stochastic disturbances.

From Figure 1, it is easy to see that (H2) is satisfied. Besides, (H1) is also satisfied with $h_1 = h_2 = 1$. From (3), one has

$$\mathrm{Trace}\left(\delta^T(t, e(t), e(t-\theta))\,\delta(t, e(t), e(t-\theta))\right) \le \|e(t)\|^2 + \|e(t-\theta)\|^2,$$

therefore, (H3) is satisfied.

The time-step size is taken as 0.001, with $\alpha = 3.5$, $l_1(t) = l_2(t) = 0.1$, $\beta_1(t) = \beta_2(t) = 0.2$ for $t \in [-1, 0]$, $\varepsilon_i = 2$, and $\xi_i = 1.5$. Figure 3 shows the synchronization error. Figure 4 shows the time evolution of the control gains $l_i(t)$ and $\beta_i(t)$, $i = 1, 2$. Therefore, system (2) is globally asymptotically synchronized with (1) under the adaptive controller (6), as guaranteed by Theorem 1.
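The whole closed-loop experiment can be sketched in Python as a deterministic check, with the stochastic terms set to zero for reproducibility. The parameters follow the values listed in this section, and the Euler scheme below is an illustrative re-implementation, not the authors' MATLAB code:

```python
import numpy as np

# Deterministic Euler sketch of drive (1), response (2) and controller (6)
# with the parameters of this section (noise set to zero for reproducibility).
dt, theta, T = 0.001, 1.0, 50.0
d, N = int(theta / dt), int(T / dt)
alpha, eps, xi = 3.5, 2.0, 1.5

def C(t):  return np.diag([2 + np.sin(t), 1 + 0.06 * np.cos(t)])
def A(t):  return np.array([[1.2 + 0.06 * np.sin(t), 0.5],
                            [6.2, 5.2 + 0.06 * np.cos(t)]])
def B(t):  return np.array([[1.4 + 0.06 * np.sin(t), 0.2],
                            [0.1, 3.5 + 0.06 * np.cos(t)]])
def Cb(t): return np.diag([1.3 + 0.12 * np.sin(t), 2.8 + 0.06 * np.cos(t)])
def Ab(t): return np.array([[2.5 + 0.12 * np.sin(t), 1.2],
                            [1.8, 2 + 0.12 * np.cos(t)]])
def Bb(t): return np.array([[2.8 + 0.12 * np.sin(t), 0.1],
                            [0.3, 6 + 0.12 * np.cos(t)]])

xs = [np.array([0.2, 0.4])] * (d + 1)       # drive history on [-1, 0]
ys = [np.array([1.0, 1.3])] * (d + 1)       # response history on [-1, 0]
l = np.array([0.1, 0.1])                    # adaptive gains l_i(0)
beta = np.array([0.2, 0.2])                 # adaptive gains beta_i(0)
e0 = np.linalg.norm(ys[-1] - xs[-1])

for k in range(N):
    t = k * dt
    x, xd = xs[-1], xs[-1 - d]
    y, yd = ys[-1], ys[-1 - d]
    e = y - x
    u = -l * e - alpha * beta * np.sign(e)  # controller (6)
    xs.append(x + dt * (-C(t) @ x + A(t) @ np.tanh(x) + B(t) @ np.tanh(xd)))
    ys.append(y + dt * (-Cb(t) @ y + Ab(t) @ np.tanh(y) + Bb(t) @ np.tanh(yd) + u))
    l = l + dt * eps * e**2                 # update laws
    beta = beta + dt * xi * np.abs(e)

e_final = np.linalg.norm(ys[-1] - xs[-1])
assert np.isfinite(e_final)
assert e_final < e0                         # error shrinks under the controller
```

As the adaptive gains grow, the feedback eventually dominates the parameter mismatch $\varpi(t)$, and the error settles near zero, in line with Figures 3 and 4.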


Figure 2. Chaotic trajectory of system (2). (a) Without stochastic disturbances; (b) With stochastic disturbances.

Figure 3. Synchronization error trajectory of system (3).


Figure 4. Time evolution of the control gains l i ( t ) and β i ( t ) . (a) l i ( t ) ; (b) β i ( t ) .

5. Conclusions

In this paper, synchronization for a class of delayed neural networks with uncertain parameters and stochastic disturbances has been investigated. The designed adaptive controller can suppress the effects of the stochastic perturbations and the uncertain parameters. Based on Lyapunov stability theory and the Barbalat lemma, synchronization criteria have been obtained. Numerical simulations verify the effectiveness of the theoretical results.

It is well known that the sign function can lead to the chattering phenomenon, which damages equipment and induces undesirable effects. Moreover, due to environmental causes, neural networks are often affected by external attacks. Studying the synchronization of neural networks with external attacks and mismatched parameters via chattering-free control is more practical, and this is our future research work.

Acknowledgements

The authors are very grateful to the associate editor and the anonymous referees for their careful reading and valuable suggestions, which have notably improved the quality of this paper.

Funding

This work was supported by the Opening Project of Sichuan Province University Key Laboratory of Bridge Non-destruction Detecting and Engineering Computing (Grant No. 2022QZJ02, No. 2021QYJ03, No. 2020QYJ03).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Pecora, L. and Carroll, T. (1990) Synchronization in Chaotic Systems. Physical Review Letters, 64, 821-824.
https://doi.org/10.1103/PhysRevLett.64.821
[2] Huang, X. and Cao, J. (2006) Generalized Synchronization for Delayed Chaotic Neural Networks: A Novel Coupling Scheme. Nonlinearity, 19, 2797-2811.
http://iopscience.iop.org/0951-7715/19/12/004
https://doi.org/10.1088/0951-7715/19/12/004
[3] Shen, J. and Cao, J. (2011) Finite-Time Synchronization of Coupled Neural Networks via Discontinuous Controllers. Cognitive Neurodynamics, 5, 373-385.
https://doi.org/10.1007/s11571-011-9163-z
[4] Zhou, X., Zhou, W., Yang, J. and Hu, J. (2015) A Novel Scheme for Synchronization Control of Stochastic Neural Networks with Multiple Time-Varying Delays. Neurocomputing, 159, 50-57.
https://doi.org/10.1016/j.neucom.2015.02.031
[5] Pan, L. and Cao, J. (2012) Stochastic Quasi-Synchronization for Delayed Dynamical Networks via Intermittent Control. Communications in Nonlinear Science and Numerical Simulation, 17, 1332-1343.
https://doi.org/10.1016/j.cnsns.2011.07.010
[6] Cheng, H., Zhong, S., Li, X., Zhong, Q. and Cheng, J. (2019) Exponential Synchronization of Delayed Memristor-Based Neural Networks with Stochastic Perturbation via Nonlinear Control. Neurocomputing, 340, 90-98.
https://doi.org/10.1016/j.neucom.2019.02.032
[7] Sadik, H., Abdurahman, A. and Tohti, R. (2023) Fixed-Time Synchronization of Reaction-Diffusion Fuzzy Neural Networks with Stochastic Perturbations. Mathematics, 11, Article 1493.
https://doi.org/10.3390/math11061493
[8] Xian, R. (2021) Synchronization of Stochastic Memristive Neural Networks with Retarded and Advanced Argument. Journal of Intelligent Learning Systems and Applications, 13, 1-14.
https://doi.org/10.4236/jilsa.2021.131001
[9] Abudireman, A., Abudusaimaiti, M., Sun, W., Zhao, J., Zhang, Y. and Abdurahman, A. (2022) Some Further Results on Fixed-Time Synchronization of Neural Networks with Stochastic Perturbations. Journal of Applied Mathematics and Physics, 10, 200-218.
https://doi.org/10.4236/jamp.2022.101015
[10] Liu, Y., Zhu, C., Chu, D. and Li, W. (2018) Synchronization of Stochastic Coupled Systems with Time-Varying Coupling Structure on Networks via Discrete-Time State Feedback Control. Neurocomputing, 285, 104-116.
https://doi.org/10.1016/j.neucom.2018.01.035
[11] Li, X., Ge, Y., Liu, H., Li, H. and Fang, J. (2020) New Results on Synchronization of Fractional-Order Memristor-Based Neural Networks via State Feedback Control. Complexity, 2020, Article ID: 2470972.
https://doi.org/10.1155/2020/2470972
[12] Li, S., Lv, C. and Ding, X. (2020) Synchronization of Stochastic Hybrid Coupled Systems with Multi-Weights and Mixed Delays via Aperiodically Adaptive Intermittent Control. Nonlinear Analysis: Hybrid Systems, 35, Article ID: 100819.
https://doi.org/10.1016/j.nahs.2019.100819
[13] Chen, Y., Wang, Z., Shen, B. and Dong, H. (2019) Exponential Synchronization for Delayed Dynamical Networks via Intermittent Control: Dealing with Actuator Saturations. IEEE Transactions on Neural Networks, 30, 1000-1012.
https://doi.org/10.1109/TNNLS.2018.2854841
[14] Wang, P., Jin, W. and Su, H. (2018) Synchronization of Coupled Stochastic Complex-Valued Dynamical Networks with Time-Varying Delays via Aperiodically Intermittent Adaptive Control. Chaos, 28, Article ID: 043114.
https://doi.org/10.1063/1.5007139
[15] Shahri, E.S.A. (2018) Adaptive Synchronization of Chaotic Genesio–Tesi Systems via A Nonlinear Control. International Journal of Engineering Technology, 7, 136-139.
https://doi.org/10.14419/ijet.v7i3.19.17002
[16] Zhu, L. (2022) Adaptive Generalized Synchronization of Drive-Response Neural Networks with Time-Varying Delay. Applied Mathematics, 13, 19-26.
https://doi.org/10.4236/am.2022.131002
[17] Xiong, W., Xie, W. and Cao, J. (2006) Adaptive Exponential Synchronization of Delayed Chaotic Networks. Physica A: Statistical Mechanics and Its Applications, 370, 832-842.
https://doi.org/10.1016/j.physa.2006.03.002
[18] Sheng, L., Yang, H. and Lou, X. (2009) Adaptive Exponential Synchronization of Delayed Neural Networks with Reaction-Diffusion Terms. Chaos, Solitons & Fractals, 40, 930-939.
https://doi.org/10.1016/j.chaos.2007.08.047
[19] Park, J.H. (2005) Adaptive Synchronization of Hyperchaotic Chen System with Uncertain Parameters. Chaos, Solitons & Fractals, 26, 959-964.
https://doi.org/10.1016/j.chaos.2005.02.002
[20] Qin, S., Xue, X. and Wang, P. (2013) Global Exponential Stability of Almost Periodic Solution of Delayed Neural Networks with Discontinuous Activations. Information Sciences, 220, 367-378.
https://doi.org/10.1016/j.ins.2012.07.040
[21] Popov, V.M. (1973) Hyperstability of Control System. Springer-Verlag, Berlin.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.