
Networked Evolutionary Model of Snow-Drift Game Based on Semi-Tensor Product



*Journal of Applied Mathematics and Physics*, **7**, 726-737. doi: 10.4236/jamp.2019.73050.

1. Introduction

Cooperation widely exists in complex systems ranging from biological to economic and social networks, and cooperative behavior is regarded as a key factor in the evolution of such systems, so research on cooperation is of great significance for group development. In recent years, with the development of network research and new experimental results, cooperation has been studied on different networks across different disciplines. Evolutionary game theory, which grew out of classical game theory, provides an important and effective theoretical framework for studying the evolution of cooperation between competing individuals.

To convert the strategy-profile dynamics of an evolutionary game into a logical dynamic system, a useful tool called the semi-tensor product of matrices was developed. Proposed by Professor Cheng [1] [2] , it provides an effective mathematical tool for systematically analyzing the dynamic process of networked evolutionary games. In recent years, the semi-tensor product of matrices has been applied to the control of Boolean networks [3] and has been widely used in many other fields, such as graph theory, fuzzy control, Boolean function decomposition and fault detection [4] . Using this method, Professor Cheng and his team have also studied the dynamic behavior of networked evolutionary games and the strategy-optimization problem, and have obtained a number of results.

Building on earlier research on the controllability of networked evolutionary games [5] [6] [7] [8] , in this paper we use the semi-tensor product of matrices to study a networked evolutionary model of the snow-drift game with a rewarding and penalty strategy. Unlike the traditional snow-drift game, this model introduces rewards and punishments: cooperators receive an additional payoff, while part of the payoff is deducted from defectors. Based on replicator dynamics, we determine the quantitative relationship between the parameters. Through the strategy updating rules [9] , the dynamic process of the networked evolutionary game is expressed as a logical dynamic system and finally converted into an algebraic form, on the basis of which we discuss the final level of cooperation.

The composition of this paper is as follows: Section 2 gives some preliminaries, including the semi-tensor product of matrices, networked evolutionary games and replicator dynamics. Section 3 develops the dynamic model of networked evolution based on the snow-drift game. In Section 4 we use specific examples to discuss the final level of cooperation among the players, and Section 5 gives a brief conclusion.

2. Preliminaries

2.1. The Semi-Tensor Product of Matrices

For ease of statement, we first introduce some notation:

・ ${M}_{m\times n}$ is the set of $m\times n$ real matrices;

・ $Co{l}_{i}\left(M\right)$ refers to the i-th column of matrix M, $Col\left(M\right)$ is the set of columns of M;

・ The domain of the k-valued logic is marked as ${\mathcal{D}}_{k}\mathrm{:}=\left\{\mathrm{1,2,}\cdots \mathrm{,}k\right\}$ ;

・ ${I}_{n}$ is the identity matrix, ${\delta}_{n}^{i}:=Co{l}_{i}\left({I}_{n}\right)$ refers to the i-th column of ${I}_{n}$ , ${\Delta}_{n}\mathrm{:}=Col\left({I}_{n}\right)$ is the set of columns of ${I}_{n}$ ;

・ $M\in {M}_{m\times n}$ is called a logical matrix if $Col\left(M\right)\subset {\Delta}_{m}$ ; the set of $m\times n$ logical matrices is denoted ${\mathcal{L}}_{m\times n}$ ;

・ Assume $L\in {\mathcal{L}}_{m\times n}$ ; then $L=\left[{\delta}_{m}^{{i}_{1}},{\delta}_{m}^{{i}_{2}},\cdots ,{\delta}_{m}^{{i}_{n}}\right]$ , which can be abbreviated as $L={\delta}_{m}\left[{i}_{1},{i}_{2},\cdots ,{i}_{n}\right].$

Definition 2.1: [8] Let $A\in {M}_{m\times n}$ , $B\in {M}_{p\times q}$ . The semi-tensor product of A and B is defined as:

$A\ltimes B=\left(A\otimes {I}_{\frac{\alpha}{n}}\right)\left(B\otimes {I}_{\frac{\alpha}{p}}\right)$ (1)

where $\alpha $ is the least common multiple of n and p, and $\otimes $ is the tensor product of matrices, i.e. the Kronecker product:

$A\otimes B=\left[\begin{array}{ccc}{a}_{11}B& \cdots & {a}_{1n}B\\ \vdots & \ddots & \vdots \\ {a}_{m1}B& \cdots & {a}_{mn}B\end{array}\right]$ (2)

Definition 2.2: [8] Let $A\in {M}_{p\times s}$ , $B\in {M}_{q\times s}$ . The K-product of A and B is defined as follows:

$A\ast B=\left[Co{l}_{1}\left(A\right)\ltimes Co{l}_{1}\left(B\right),\cdots ,Co{l}_{s}\left(A\right)\ltimes Co{l}_{s}\left(B\right)\right]$ (3)
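As an illustration, the two products above can be sketched in Python with NumPy (a minimal sketch; the names `stp` and `khatri_rao` are our own):

```python
import numpy as np

def stp(A, B):
    """Semi-tensor product (1): (A kron I_{alpha/n})(B kron I_{alpha/p})."""
    n, p = A.shape[1], B.shape[0]
    alpha = np.lcm(n, p)                          # least common multiple of n, p
    return np.kron(A, np.eye(alpha // n)) @ np.kron(B, np.eye(alpha // p))

def khatri_rao(A, B):
    """Column-wise K-product (3) of matrices with the same number of columns."""
    return np.hstack([stp(A[:, [j]], B[:, [j]]) for j in range(A.shape[1])])

# When the inner dimensions already match, the STP reduces to the ordinary product:
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.allclose(stp(A, B), A @ B)
```

For vectors ${\delta}_{2}^{i}$ the STP acts as the Kronecker product, which is how strategy profiles are stacked into a single state below.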

Considering the multi-valued logical function [7] $f\mathrm{:}{\mathcal{D}}_{{k}_{1}}\times {\mathcal{D}}_{{k}_{2}}\times \cdots \times {\mathcal{D}}_{{k}_{n}}\to {\mathcal{D}}_{{k}_{0}}$ , in vector form, f converts into

$f\mathrm{:}{\Delta}_{{k}_{1}}\times {\Delta}_{{k}_{2}}\times \cdots \times {\Delta}_{{k}_{n}}\to {\Delta}_{{k}_{0}}$ (4)

Assume ${x}_{i}\in {\Delta}_{{k}_{i}}$ , $i=1,\cdots ,n$ ; then we set $x:={\ltimes}_{i=1}^{n}{x}_{i}\in {\Delta}_{k}$ , where $k={\prod}_{i=1}^{n}{k}_{i}.$

Using this notation, we write $f\left(x\right):=f\left({x}_{1},\cdots ,{x}_{n}\right)$ . There is a unique logical matrix ${M}_{f}\in {\mathcal{L}}_{{k}_{0}\times k}$ , called the structural matrix of f, such that f takes the vector form:

$f\left({x}_{1},{x}_{2},\cdots ,{x}_{n}\right)={M}_{f}{\ltimes}_{i=1}^{n}{x}_{i}$ (5)

where ${M}_{f}=\left[f\left({\delta}_{k}^{1}\right)\;f\left({\delta}_{k}^{2}\right)\;\cdots \;f\left({\delta}_{k}^{k}\right)\right]$ and $f\left(x\right)={M}_{f}x.$
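For a concrete instance, the structural matrix can be built column by column by evaluating f on every canonical argument tuple (a sketch; `structure_matrix` is a hypothetical helper, using the convention $1\sim {\delta}_{2}^{1}$ , $2\sim {\delta}_{2}^{2}$ adopted later in the paper):

```python
import numpy as np
from itertools import product

def structure_matrix(f, k=2, n=2):
    """M_f whose j-th column is delta_k^{f(args_j)}, with the argument tuples
    enumerated in the order induced by x = x_1 |x| x_2 |x| ... |x| x_n."""
    cols = []
    for args in product(range(1, k + 1), repeat=n):
        col = np.zeros(k)
        col[f(*args) - 1] = 1.0            # place the 1 at row f(args)
        cols.append(col)
    return np.column_stack(cols)

# Logical conjunction with 1 = true, 2 = false gives M_AND = delta_2[1, 2, 2, 2]:
M_and = structure_matrix(lambda a, b: 1 if a == 1 and b == 1 else 2)
```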

2.2. Networked Evolutionary Games

Definition 2.3: [2] A basic networked evolutionary game $\left(\left(N\mathrm{,}E\right)\mathrm{,}G\mathrm{,}\Pi \right)$ consists of three parts:

・ A network $\left(N,E\right)$ , where $N=\left\{1,2,\cdots ,n\right\}$ is the set of nodes and $E\subset N\times N$ is the set of edges;

・ A fundamental network game G, such that if $\left(i,j\right)\in E$ , then players i and j play G with strategies ${x}_{i}\left(t\right)$ and ${x}_{j}\left(t\right)$ respectively;

・ $\Pi $ denotes the strategy updating rule based on local information.

In a network $\left(N,E\right)$ , the set of nodes adjacent to i is called the neighborhood of i and is denoted by $U\left(i\right)$ ; in this paper we assume $i\in U\left(i\right)$ . Ignoring the directionality of the edges, if there exists a path from i to j of length less than or equal to r, then j is called an r-neighbor of i. The set of r-neighbors of i is denoted by ${U}_{r}\left(i\right)$ .

The network used in this paper is an undirected cycle graph, in which every node has the same degree 2; the cycle graph ${S}_{n}$ is shown in Figure 1.

Definition 2.4: [2] Let ${c}_{ij}$ denote the payoff between i and j; then the overall payoff of player i can be expressed as:

${c}_{i}\left(t\right)=\underset{j\in U\left(i\right)\backslash i}{{\displaystyle \sum}}{c}_{ij}\left(t\right),i\in N$ (6)

Figure 1. The cycle graph ${S}_{n}$ , in which every node has degree 2.

The strategy of i at time $\left(t+1\right)$ depends on the information of its neighbors at time t, including their strategies and corresponding payoffs. Let ${x}_{i}\left(t\right)$ be the strategy of player i at time t. In the networked evolutionary game, the strategy updating rule $\Pi $ is expressed as:

${x}_{i}\left(t+1\right)={f}_{i}\left(\left\{{x}_{j}\left(t\right),{c}_{j}\left(t\right)|j\in U\left(i\right)\right\}\right)\text{\hspace{1em}}t\ge 0,i\in N$ (7)

This paper mainly uses the strategy updating rule of unconditional imitation: the strategy of player i at time $\left(t+1\right)$ , ${x}_{i}\left(t+1\right)$ , is selected as the best strategy among the strategies of the neighborhood $U\left(i\right)$ at time t. That is:

${x}_{i}\left(t+1\right)={x}_{{j}^{*}}\left(t\right)$ (8)

where

${j}^{*}=\mathrm{arg}{\mathrm{max}}_{j\in U\left(i\right)}{c}_{j}\left(x\left(t\right)\right)$ (9)

When the player with the best payoff is not unique, we take

${j}^{*}=\mathrm{min}\left\{u|u\in \mathrm{arg}{\mathrm{max}}_{j\in U\left(i\right)}{c}_{j}\left(x\left(t\right)\right)\right\}$ (10)
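The updating rule (8)-(10) can be sketched for a cycle of n players (a minimal sketch; `imitation_step` and the `payoff` callback are our own names, with the tie in (10) broken by the smallest player index):

```python
def imitation_step(x, payoff):
    """One round of unconditional imitation on a cycle: player i copies the
    strategy of the best-paid player in U(i) = {i-1, i, i+1} (0-indexed),
    breaking ties by the smallest player index, as in Eqs. (8)-(10)."""
    n = len(x)
    c = [payoff(i, x) for i in range(n)]           # c_i(t) for every player
    new_x = []
    for i in range(n):
        nbrs = [(i - 1) % n, i, (i + 1) % n]       # neighborhood U(i), i in U(i)
        j_star = min(nbrs, key=lambda j: (-c[j], j))
        new_x.append(x[j_star])
    return new_x
```

For example, with the pairwise payoffs of the Section 4.1 example, the profile (1, 1, 2, 2) is driven to all-cooperation in a single step.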

2.3. Replication Dynamics

Consider a well-mixed population in which each individual can play with every other individual. Each pair of individuals plays according to the payoff matrix $\left(\begin{array}{cc}R& S\\ T& P\end{array}\right)$ . Assume that the proportion of individuals using the cooperative strategy is x and the proportion choosing to defect is y. The payoffs of cooperators and defectors in the population are:

$\{\begin{array}{c}{P}_{C}=Rx+Sy\\ {P}_{D}=Tx+Py\end{array}$ (11)

According to replicator dynamics, the rate of change of the proportion using a strategy is proportional to that proportion and to the strategy's payoff advantage over the population average:

$\{\begin{array}{c}\frac{\text{d}x}{\text{d}t}=x\left({P}_{C}-\phi \right)\\ \frac{\text{d}y}{\text{d}t}=y\left({P}_{D}-\phi \right)\end{array}$ (12)

where $\phi =x{P}_{C}+y{P}_{D}$ is the average payoff of the population; in the evolutionary process, an individual's fitness is closely related to the proportions of individuals adopting the various strategies. From formulas (11) and (12), combined with $x+y=1$ , we obtain the cooperator's replicator dynamic equation:

$\frac{\text{d}x}{\text{d}t}=x\left(1-x\right)\left[\left(R-S-T+P\right)x+S-P\right]$ (13)

The nonlinear differential equation above depends closely on the parameters of the payoff matrix. According to the different dynamical characteristics, we can discuss the following four situations separately:

・ Defection dominates (D dominates C): $T>R>P>S$ ; the payoffs of defectors are better than those of cooperators, as in the Prisoner’s Dilemma;

・ Coexistence (C and D coexist): $T>R>S>P$ , at this time, cooperation and defection are in a symbiotic state, such as snow-drift game and Hawk-Dove game;

・ Bistable situation (C and D are bistable): $R>T>P>S$ , at this time, the player’s optimal strategy is to be consistent with the opponent: choosing cooperation or defection at the same time, such as: Stag Hunt Game;

・ Cooperation dominates (C dominates D): when $T<R$ and $P<S$ , no matter what the opponent chooses, the cooperative strategy is better than the defective strategy.
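Equation (13) and the regimes above can be explored numerically; the sketch below integrates the replicator equation with a forward-Euler step for an illustrative snow-drift ordering $T>R>S>P$ (the parameter values are ours, chosen only to satisfy the ordering):

```python
def replicator_x(R, S, T, P, x0=0.1, dt=0.01, steps=20000):
    """Forward-Euler integration of dx/dt = x(1-x)[(R-S-T+P)x + S - P]."""
    x = x0
    for _ in range(steps):
        x += dt * x * (1 - x) * ((R - S - T + P) * x + S - P)
    return x

# Snow-drift ordering T > R > S > P: the population settles at the interior
# equilibrium x* = (S - P) / (S - P + T - R), where C and D coexist.
x_inf = replicator_x(R=1.0, S=0.5, T=1.2, P=0.0)
```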

3. Model

3.1. Model Description

In the traditional snow-drift game there are two strategies for the players to choose from: cooperation and defection. Consider two drivers on a snowy night, travelling in opposite directions and blocked by the same snowdrift. Assume that the cost of removing the snowdrift to clear the road is c and that a clear road benefits each driver by b, with $b>c$ ; the cost of shoveling is shared equally among the cooperative shovelers, while those who do not contribute are defectors. To encourage players to cooperate, we propose the following setting: a cooperator gains an additional profit $\beta $ , a defector has a payoff $\gamma $ deducted, and when all players cooperate each gains an additional benefit $\alpha $ . The original snow-drift model thus becomes a mutated snow-drift game with a rewarding and penalty strategy. To clarify the framework, the payoff bi-matrix is given in Table 1 (where C means “cooperate” and D means “defect”):

Table 1. Payoff bi-matrix: $\left(C,C\right)\to \left(b-\frac{c}{2}+\alpha ,\;b-\frac{c}{2}+\alpha \right)$ ; $\left(C,D\right)\to \left(b-c+\beta ,\;b-\gamma \right)$ ; $\left(D,C\right)\to \left(b-\gamma ,\;b-c+\beta \right)$ ; $\left(D,D\right)\to \left(0,0\right)$ .

Next we discuss the Nash equilibrium conditions for the mutated snow-drift game. The payoffs of cooperators and defectors in the population are:

$\{\begin{array}{l}{P}_{C}=\left(b-\frac{c}{2}+\alpha \right)x+\left(b-c+\beta \right)\left(1-x\right)=\left(\frac{c}{2}+\alpha -\beta \right)x+b-c+\beta \\ {P}_{D}=\left(b-\gamma \right)x+0\cdot y=\left(b-\gamma \right)x\end{array}$ (14)

with $\phi =x{P}_{C}+y{P}_{D}=x{P}_{C}+\left(1-x\right){P}_{D}$ , the cooperator’s replicator dynamical equation is:

$\frac{\text{d}x}{\text{d}t}=x\left({P}_{C}-\phi \right)=x\left(1-x\right)\left[\left(R-S-T+P\right)x+S-P\right]$ (15)

where $R=b-\frac{c}{2}+\alpha $ , $S=b-c+\beta $ , $T=b-\gamma $ and $P=0$ .

According to the different dynamical characteristics, for the game to remain a snow-drift game after the variation, the following inequality must hold:

$T>R>S>P$ (16)

that is:

$b-\gamma >b-\frac{c}{2}+\alpha >b-c+\beta >0$ (17)

Therefore, the relationship between rewarding and punishment factors is:

$\{\begin{array}{l}b>\gamma \\ \alpha +\gamma <\frac{c}{2}\\ \beta +\gamma <c\end{array}$ (18)

where

$0<\alpha <\frac{c}{2},0<\beta <c,0<\gamma <\frac{c}{2}$
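These constraints are easy to check mechanically; the sketch below (with a helper name of our own, `in_snowdrift_regime`) verifies the inequality chain (17) for a given parameter set:

```python
def in_snowdrift_regime(b, c, alpha, beta, gamma):
    """True iff the rewarded/penalized payoffs keep the snow-drift ordering
    T > R > S > P of (16), equivalently the chain (17)."""
    R = b - c / 2 + alpha   # mutual cooperation
    S = b - c + beta        # cooperating against a defector
    T = b - gamma           # defecting against a cooperator
    P = 0.0                 # mutual defection
    return T > R > S > P

# The parameters used in Section 4.1 satisfy the chain:
ok = in_snowdrift_regime(b=1.5, c=1.0, alpha=0.1, beta=0.4, gamma=0.2)
```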

3.2. Algebraic Formulation

In (7), since ${c}_{j}\left(t\right)$ depends only on the strategies over $U\left(j\right)$ and $U\left(j\right)\subset {U}_{2}\left(i\right)$ for every $j\in U\left(i\right)$ , the dynamics can be rewritten as:

${x}_{i}\left(t+1\right)={f}_{i}\left(\left\{{x}_{j}\left(t\right)|j\in {U}_{2}\left(i\right)\right\}\right)\text{\hspace{1em}}t\ge 0,i\in N$ (19)

We now derive the fundamental evolutionary equation for an arbitrary node. In the cycle ${S}_{n}$ , the neighborhoods of node i are:

$U\left(i\right)=\left\{i-1,i,i+1\right\},\text{\hspace{1em}}{U}_{2}\left(i\right)=\left\{i-2,i-1,i,i+1,i+2\right\}$

Given the strategies of all points in ${U}_{2}\left(i\right)$ , we can compute the payoff of each point in $U\left(i\right)$ ; applying the strategy updating rule to these payoffs then yields the new strategy:

${x}_{i}\left(t+1\right)={f}_{i}\left({x}_{i-2}\left(t\right),{x}_{i-1}\left(t\right),{x}_{i}\left(t\right),{x}_{i+1}\left(t\right),{x}_{i+2}\left(t\right)\right)$ (20)

Let $1\sim {\delta}_{2}^{1}\mathrm{,2}\sim {\delta}_{2}^{2}$ , then in vector form, we obtain that:

${x}_{i}\left(t+1\right)={M}_{{f}_{i}}{x}_{i-2}\left(t\right){x}_{i-1}\left(t\right){x}_{i}\left(t\right){x}_{i+1}\left(t\right){x}_{i+2}\left(t\right)={{M}^{\prime}}_{{f}_{i}}x\left(t\right),\text{\hspace{1em}}i\in N$ (21)

where ${{M}^{\prime}}_{{f}_{i}}\in {\mathcal{L}}_{2\times {2}^{n}}$ is the structural matrix obtained when player i adopts the evolutionary strategy of imitating its neighbors, and $x\left(t\right)={x}_{1}\left(t\right){x}_{2}\left(t\right)\cdots {x}_{n}\left(t\right).$

From formula (3) and the formula above, we obtain the algebraic form of the evolutionary dynamics:

$x\left(t+1\right)={M}_{G}x\left(t\right)$ (22)

where ${M}_{G}={{M}^{\prime}}_{{f}_{1}}\ast {{M}^{\prime}}_{{f}_{2}}\ast \cdots \ast {{M}^{\prime}}_{{f}_{n}},{M}_{G}\in {\mathcal{L}}_{{2}^{n}\times {2}^{n}}$ is called the transition matrix of the game.

3.3. Final Level of Cooperation

Based on the calculation methods for ${M}_{{f}_{i}}$ and ${M}_{G}$ , we can draw the following conclusions:

$\begin{array}{l}Co{l}_{1}\left({M}_{G}\right)={\ltimes}_{i=1}^{n}Co{l}_{1}\left({{M}^{\prime}}_{{f}_{i}}\right)=Co{l}_{1}\left({{M}^{\prime}}_{{f}_{1}}\right)\ltimes Co{l}_{1}\left({{M}^{\prime}}_{{f}_{2}}\right)\ltimes \cdots \ltimes Co{l}_{1}\left({{M}^{\prime}}_{{f}_{n}}\right)={\delta}_{{2}^{n}}^{1}\\ Co{l}_{{2}^{n}}\left({M}_{G}\right)={\ltimes}_{i=1}^{n}Co{l}_{{2}^{n}}\left({{M}^{\prime}}_{{f}_{i}}\right)=Co{l}_{{2}^{n}}\left({{M}^{\prime}}_{{f}_{1}}\right)\ltimes Co{l}_{{2}^{n}}\left({{M}^{\prime}}_{{f}_{2}}\right)\ltimes \cdots \ltimes Co{l}_{{2}^{n}}\left({{M}^{\prime}}_{{f}_{n}}\right)={\delta}_{{2}^{n}}^{{2}^{n}}\end{array}$ (23)

This shows that if all players initially choose to cooperate, they maintain the cooperative strategy forever; conversely, if all players initially defect, they remain defectors.

The matrix ${M}_{G}$ is the transition matrix of the game. For any initial state $z\left(0\right)\in {\Delta}_{{2}^{n}}$ , we have

$z\left(t\right)={M}_{G}^{t}z\left(0\right)$ (24)

We have

$G=\underset{t\to \infty}{\mathrm{lim}}{M}_{G}^{t}$ (25)

Next we discuss G, we can assume that

$H=\left\{{\delta}_{{2}^{n}}^{i}|Co{l}_{i}\left(G\right)={\delta}_{{2}^{n}}^{1},1\le i\le {2}^{n}-1\right\}$ (26)

In other words, for any initial state, if $z\left(0\right)={\delta}_{{2}^{n}}^{i}\in H$ , then

$z\left(\infty \right)=Gz\left(0\right)=G{\delta}_{{2}^{n}}^{i}=Co{l}_{i}\left(G\right)={\delta}_{{2}^{n}}^{1}$ (27)

Therefore, we conclude that if the initial state is chosen as $z\left(0\right)={\delta}_{{2}^{n}}^{i}\in H$ , all players will eventually maintain cooperation.

4. Example

4.1. Cooperation under Normal Circumstances

According to the conditions above, for the snow-drift game with rewarding and penalty strategy we take the following example: $n=4,b=1.5,c=1,\alpha =0.1,\beta =0.4,\gamma =0.2$ ; the payoffs are then as shown in Table 2.

The basic evolutionary equation can be figured out as in Table 3.

Then, according to the strategy updating rule of unconditional imitation, given the profile $\left({x}_{1}\left(t\right),{x}_{2}\left(t\right),{x}_{3}\left(t\right),{x}_{4}\left(t\right)\right)$ at time t, the strategy of each player i at time $t+1$ is given by ${f}_{i}$ $\left(i=1,2,3,4\right)$ , expressed in vector form as:

$\begin{array}{c}{x}_{i}\left(t+1\right)={f}_{i}\left({x}_{1}\left(t\right),{x}_{2}\left(t\right),{x}_{3}\left(t\right),{x}_{4}\left(t\right)\right)\\ ={M}_{i}{x}_{1}\left(t\right){x}_{2}\left(t\right){x}_{3}\left(t\right){x}_{4}\left(t\right),\text{\hspace{1em}}i=1,2,3,4\end{array}$ (28)

Table 2. Payoff bi-matrix for $b=1.5,c=1,\alpha =0.1,\beta =0.4,\gamma =0.2$ : $\left(C,C\right)\to \left(1.1,1.1\right)$ ; $\left(C,D\right)\to \left(0.9,1.3\right)$ ; $\left(D,C\right)\to \left(1.3,0.9\right)$ ; $\left(D,D\right)\to \left(0,0\right)$ .

Table 3. Payoffs → Dynamics.

where

$\begin{array}{l}{M}_{1}={\delta}_{2}\left[1,2,1,1,2,2,1,1,2,1,2,1,1,2,1,2\right]\\ {M}_{2}={\delta}_{2}\left[1,1,2,1,2,2,1,1,2,1,2,1,1,1,2,2\right]\\ {M}_{3}={\delta}_{2}\left[1,2,2,1,2,2,1,2,1,1,2,1,1,1,1,2\right]\\ {M}_{4}={\delta}_{2}\left[1,2,2,1,1,2,1,1,2,1,2,2,1,1,1,2\right]\end{array}$ (29)

Now we have the dynamic situation as

$x\left(t+1\right)={M}_{G}x\left(t\right)$ (30)

where ${M}_{G}={M}_{1}\ast {M}_{2}\ast {M}_{3}\ast {M}_{4},x\left(t\right)={x}_{1}\left(t\right){x}_{2}\left(t\right){x}_{3}\left(t\right){x}_{4}\left(t\right)$ , it is easy to figure out that

${M}_{G}={\delta}_{16}\left[1,12,8,1,15,16,1,3,14,1,16,2,1,9,5,16\right]$ (31)

where

${M}_{G}^{2}={\delta}_{16}\left[1,2,3,1,5,16,1,8,9,1,16,12,1,14,15,16\right]$

${M}_{G}^{3}={\delta}_{16}\left[1,12,8,1,15,16,1,3,14,1,16,2,1,9,5,16\right]$ (32)

${M}_{G}^{4}={\delta}_{16}\left[1,2,3,1,5,16,1,8,9,1,16,12,1,14,15,16\right]$
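The transition matrix (31) can be reproduced from the structural matrices in (29): for logical matrices stored as ${\delta}_{2}\left[\cdots \right]$ index lists, each column of the K-product is obtained by stacking the component indices in binary (a sketch; `k_product` is our own helper):

```python
def k_product(rows):
    """K-product of n logical matrices delta_2[i_1, ..., i_N], given as index
    lists; column j of the result is delta_{2^n} at the mixed-radix index."""
    out = []
    for j in range(len(rows[0])):
        idx = 0
        for r in rows:                 # idx = sum_i 2^{n-i} (r_i - 1)
            idx = 2 * idx + (r[j] - 1)
        out.append(idx + 1)
    return out

# Structural matrices M_1..M_4 from (29):
M1 = [1,2,1,1,2,2,1,1,2,1,2,1,1,2,1,2]
M2 = [1,1,2,1,2,2,1,1,2,1,2,1,1,1,2,2]
M3 = [1,2,2,1,2,2,1,2,1,1,2,1,1,1,1,2]
M4 = [1,2,2,1,1,2,1,1,2,1,2,2,1,1,1,2]
MG = k_product([M1, M2, M3, M4])   # reproduces the index list of (31)
```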

From the above dynamics we conclude that if $x\left(0\right)\in \left\{{\delta}_{16}^{1},{\delta}_{16}^{4},{\delta}_{16}^{7},{\delta}_{16}^{10},{\delta}_{16}^{13}\right\}$ , then $x\left(t\right)={\delta}_{16}^{1}$ for all $t\ge 1$ . In other words, if the initial strategy profile of the four players is one of the following, they will all eventually choose the cooperative strategy:

$\left(\mathrm{1,1,1,1}\right)\mathrm{,}\left(\mathrm{1,1,2,2}\right)\mathrm{,}\left(\mathrm{1,2,2,1}\right)\mathrm{,}\left(\mathrm{2,1,1,2}\right)\mathrm{,}\left(\mathrm{2,2,1,1}\right)$ (33)

From this we conclude that the profiles ultimately maintaining cooperation account for $\frac{5}{16}$ of all initial profiles.
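The set of cooperative initial profiles can also be found by simply iterating the transition map of (31) (a sketch with our own helper name `basin_of_cooperation`):

```python
def basin_of_cooperation(MG, steps=32):
    """Initial states delta_{2^n}^j that reach the all-cooperate state
    delta_{2^n}^1 under x(t+1) = M_G x(t), with M_G as a delta index list."""
    basin = []
    for j0 in range(1, len(MG) + 1):
        j = j0
        for _ in range(steps):
            j = MG[j - 1]          # one evolution step
        if j == 1:                 # delta^1 is a fixed point, so this is the basin
            basin.append(j0)
    return basin

# Transition matrix of (31):
MG = [1,12,8,1,15,16,1,3,14,1,16,2,1,9,5,16]
coop_states = basin_of_cooperation(MG)
```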

4.2. Parameter Discussion

Based on the above, there are two conditions that promote cooperation:

・ $R+S\ge 2T$ : when a cooperator’s two neighbors consist of one cooperator and one defector, while a defector’s two neighbors are both cooperators, if the cooperator’s payoff is greater than or equal to the defector’s, players will be biased towards cooperation, which achieves the purpose of promoting it. That is, $\left(b-\frac{c}{2}+\alpha \right)+\left(b-c+\beta \right)\ge 2\left(b-\gamma \right)$ , i.e. $-\frac{3}{2}c+\alpha +\beta +2\gamma \ge 0$ ;

・ $S>T$ , if in some initial state, the strategy that the cooperator’s neighbors select is a defective strategy, and the defector’s neighbors are all cooperators. At this time, when the cooperator’s income is greater than the defector, the chance of cooperation will increase greatly. That is $b-c+\beta >b-\gamma $ , then $\gamma -c+\beta >0$ .

In order to study the effect of each parameter on the level of cooperation, we change the values of $\alpha ,\beta ,\gamma $ in turn, obtain the final stable states, and then study the proportion of cooperation in the steady state and observe how the cooperation rate changes.

・ Firstly, we change the value of $\alpha $ . Since $\beta ,\gamma $ are unchanged, the first condition gives $\alpha \ge \frac{3}{2}c-\beta -2\gamma $ ; with the initial parameter values this yields $\alpha \ge 0.7$ , so we take $\alpha =0.7$ . The payoff bi-matrix at this time is shown in Table 4.

According to the basic evolutionary equation under this parameter, the final result is:

${M}_{G}={\delta}_{16}\left[1,1,1,1,1,16,1,1,9,1,16,1,1,9,1,16\right]$ (34)

where

${M}_{G}^{2}={\delta}_{16}\left[1,1,1,1,1,16,1,1,9,1,16,1,1,9,1,16\right]$

${M}_{G}^{k}={\delta}_{16}\left[1,1,1,1,1,16,1,1,9,1,16,1,1,9,1,16\right]$ (35)

Therefore, if

$x\left(0\right)\in \left\{{\delta}_{16}^{1}\mathrm{,}{\delta}_{16}^{2}\mathrm{,}{\delta}_{16}^{3}\mathrm{,}{\delta}_{16}^{4}\mathrm{,}{\delta}_{16}^{5}\mathrm{,}{\delta}_{16}^{7}\mathrm{,}{\delta}_{16}^{8}\mathrm{,}{\delta}_{16}^{10}\mathrm{,}{\delta}_{16}^{12}\mathrm{,}{\delta}_{16}^{13}\mathrm{,}{\delta}_{16}^{15}\right\}$ (36)

then $x\left(t\right)={\delta}_{16}^{1}$ for all $t\ge 1$ . That is, if the initial strategy profile of the four players is one of the following, they will eventually all choose to cooperate:

$\begin{array}{l}\left(\mathrm{1,1,1,1}\right)\mathrm{,}\left(\mathrm{1,1,1,2}\right)\mathrm{,}\left(\mathrm{1,1,2,1}\right)\\ \left(\mathrm{1,1,2,2}\right)\mathrm{,}\left(\mathrm{1,2,1,1}\right)\mathrm{,}\left(\mathrm{1,2,2,1}\right)\mathrm{,}\left(\mathrm{1,2,2,2}\right)\\ \left(\mathrm{2,1,1,2}\right)\mathrm{,}\left(\mathrm{2,1,2,2}\right)\mathrm{,}\left(\mathrm{2,2,1,1}\right)\mathrm{,}\left(\mathrm{2,2,2,1}\right)\end{array}$ (37)

The practical significance of this result is that when we increase the reward factor $\alpha $ , the proportion of initial profiles that ultimately maintain cooperation rises to $\frac{11}{16}$ : the probability of cooperation is greatly increased by increasing the benefits of cooperation.

・ Secondly, changing the value of $\beta $ : condition 1 gives $\beta \ge 1$ and condition 2 gives $\beta >0.8$ , so we take $\beta =0.9$ . The payoff bi-matrix at this time is shown in Table 5.

Similarly, we have:

${M}_{G}={\delta}_{16}\left[1,12,8,1,15,1,1,3,14,1,1,2,1,9,5,16\right]$

${M}_{G}^{2}={\delta}_{16}\left[1,2,3,1,5,1,1,8,9,1,1,12,1,14,15,16\right]$

${M}_{G}^{3}={\delta}_{16}\left[1,12,8,1,15,1,1,3,14,1,1,2,1,9,5,16\right]$

${M}_{G}^{4}={\delta}_{16}\left[1,2,3,1,5,1,1,8,9,1,1,12,1,14,15,16\right]$ (38)

Similarly, when

$x\left(0\right)\in \left\{{\delta}_{16}^{1}\mathrm{,}{\delta}_{16}^{4}\mathrm{,}{\delta}_{16}^{6}\mathrm{,}{\delta}_{16}^{7}\mathrm{,}{\delta}_{16}^{10}\mathrm{,}{\delta}_{16}^{11}\mathrm{,}{\delta}_{16}^{13}\right\}$ (39)

then $x\left(t\right)={\delta}_{16}^{1}$ for all $t\ge 1$ , with the corresponding initial profiles:

$\begin{array}{l}\left(\mathrm{1,1,1,1}\right)\mathrm{,}\left(\mathrm{1,1,2,2}\right)\mathrm{,}\left(\mathrm{1,2,1,2}\right)\\ \left(\mathrm{1,2,2,1}\right)\mathrm{,}\left(\mathrm{2,1,1,2}\right)\mathrm{,}\left(\mathrm{2,1,2,1}\right)\mathrm{,}\left(\mathrm{2,2,1,1}\right)\end{array}$ (40)

We can see that, compared with the original case, the proportion of initial profiles that ultimately maintain cooperation has increased to $\frac{7}{16}$ . That is to say, when one player chooses to cooperate and the other chooses to defect, increasing the reward of the cooperator promotes cooperation.

・ Thirdly, changing the value of $\gamma $ : condition 1 gives $\gamma \ge 0.5$ and condition 2 gives $\gamma >0.6$ , so we take $\gamma =0.5$ ; the payoff bi-matrix is shown in Table 6.

Eventually, we have:

${M}_{G}={\delta}_{16}\left[1,1,1,1,1,16,1,1,9,1,16,1,1,9,1,16\right]$

${M}_{G}^{2}={\delta}_{16}\left[1,1,1,1,1,16,1,1,9,1,16,1,1,9,1,16\right]$

${M}_{G}^{k}={\delta}_{16}\left[1,1,1,1,1,16,1,1,9,1,16,1,1,9,1,16\right]$ (41)

Similarly, if

$x\left(0\right)\in \left\{{\delta}_{16}^{1}\mathrm{,}{\delta}_{16}^{2}\mathrm{,}{\delta}_{16}^{3}\mathrm{,}{\delta}_{16}^{4}\mathrm{,}{\delta}_{16}^{5}\mathrm{,}{\delta}_{16}^{7}\mathrm{,}{\delta}_{16}^{8}\mathrm{,}{\delta}_{16}^{10}\mathrm{,}{\delta}_{16}^{12}\mathrm{,}{\delta}_{16}^{13}\mathrm{,}{\delta}_{16}^{15}\right\}$ (42)

then $x\left(t\right)={\delta}_{16}^{1}$ for all $t\ge 1$ ; the initial profiles that ultimately lead to cooperation are as follows:

$\begin{array}{l}\left(\mathrm{1,1,1,1}\right)\mathrm{,}\left(\mathrm{1,1,1,2}\right)\mathrm{,}\left(\mathrm{1,1,2,1}\right)\\ \left(\mathrm{1,1,2,2}\right)\mathrm{,}\left(\mathrm{1,2,1,1}\right)\mathrm{,}\left(\mathrm{1,2,2,1}\right)\mathrm{,}\left(\mathrm{1,2,2,2}\right)\\ \left(\mathrm{2,1,1,2}\right)\mathrm{,}\left(\mathrm{2,1,2,2}\right)\mathrm{,}\left(\mathrm{2,2,1,1}\right)\mathrm{,}\left(\mathrm{2,2,2,1}\right)\end{array}$ (43)

When initially one of two neighboring players cooperates and the other defects, we can increase the punishment to reduce the defector’s gains. In the end, the proportion of initial profiles that maintain cooperation increases to $\frac{11}{16}$ , which also achieves the purpose of promoting cooperation.
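The three parameter variations can be compared directly by counting, for each transition matrix, the initial states absorbed into the all-cooperate state ${\delta}_{16}^{1}$ (a sketch reusing the index lists of (34), (38) and (41); `coop_count` is our own helper):

```python
def coop_count(MG, steps=32):
    """Number of initial states that reach the fixed point delta_16^1 under
    the transition map given by the delta index list MG."""
    count = 0
    for j0 in range(1, len(MG) + 1):
        j = j0
        for _ in range(steps):
            j = MG[j - 1]          # one evolution step
        count += (j == 1)
    return count

MG_alpha = [1,1,1,1,1,16,1,1,9,1,16,1,1,9,1,16]    # alpha = 0.7, from (34)
MG_beta  = [1,12,8,1,15,1,1,3,14,1,1,2,1,9,5,16]   # beta = 0.9, from (38)
MG_gamma = [1,1,1,1,1,16,1,1,9,1,16,1,1,9,1,16]    # gamma = 0.5, from (41)
levels = [coop_count(m) for m in (MG_alpha, MG_beta, MG_gamma)]
```

This gives cooperation levels of 11/16, 7/16 and 11/16 respectively, matching the discussion above.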

Table 4. Payoff bi-matrix with $\alpha =0.7$ : $\left(C,C\right)\to \left(1.7,1.7\right)$ ; $\left(C,D\right)\to \left(0.9,1.3\right)$ ; $\left(D,C\right)\to \left(1.3,0.9\right)$ ; $\left(D,D\right)\to \left(0,0\right)$ .

Table 5. Payoff bi-matrix with $\beta =0.9$ : $\left(C,C\right)\to \left(1.1,1.1\right)$ ; $\left(C,D\right)\to \left(1.4,1.3\right)$ ; $\left(D,C\right)\to \left(1.3,1.4\right)$ ; $\left(D,D\right)\to \left(0,0\right)$ .

Table 6. Payoff bi-matrix with $\gamma =0.5$ : $\left(C,C\right)\to \left(1.1,1.1\right)$ ; $\left(C,D\right)\to \left(0.9,1.0\right)$ ; $\left(D,C\right)\to \left(1.0,0.9\right)$ ; $\left(D,D\right)\to \left(0,0\right)$ .

5. Conclusion

In this paper we have investigated a networked evolutionary model based on the snow-drift game with a rewarding and penalty strategy. Using the semi-tensor product of matrices, the mathematical model of the networked evolutionary game is expressed as a logical dynamic system and then converted into its algebraic form, from which many properties of the game’s evolutionary dynamics are revealed. We have found the following interesting result: increasing the rewards for cooperators and the punishment for defectors promotes cooperation among the players. Nevertheless, many problems concerning our model and conclusions remain worth studying.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

[1] Cheng, D., Qi, H., He, F., Xu, T. and Dong, H. (2014) Semi-Tensor Product Approach to Networked Evolutionary Games. Control Theory and Technology, 12, 198-214. https://doi.org/10.1007/s11768-014-0038-9

[2] Cheng, D., He, F., Qi, H. and Xu, T. (2015) Modeling, Analysis and Control of Networked Evolutionary Games. IEEE Transactions on Automatic Control, 60, 2402-2415. https://doi.org/10.1109/TAC.2015.2404471

[3] Cheng, D., Qi, H. and Li, Z. (2011) Analysis and Control of Boolean Networks: A Semi-Tensor Product Approach. Springer, London. https://doi.org/10.1007/978-0-85729-097-7

[4] Cheng, D., Qi, H. and Zhao, Y. (2012) An Introduction to Semi-Tensor Product of Matrices and Its Applications. World Scientific Publishing Co. Pte. Ltd., Singapore. https://doi.org/10.1142/8323

[5] Wang, J., Zhao, J. and Li, Y. (2017) The Dynamic Processes of the Control Network Evolution Based on Boxed Pig Game with the Strategy of Punishment and Incentive. Mathematics in Practice and Theory, 47, 225-234.

[6] Liu, H., Li, Y., Zhao, J. and Ge, M. (2016) Impact of Keeping Silence in the Final Cooperation Levels of Prisoner’s Dilemma Game. Mathematics in Practice and Theory, 46, 240-247.

[7] Ge, M., Zhao, J. and Li, Y. (2016) Modeling and Analysis of Network Evolution Based on Prisoner’s Dilemma Game with Punishment Strategy. Journal of Systems Science and Mathematical Sciences, 36, 2041-2048.

[8] Xing, H. and Zhao, J. (2016) Formulation of Networked Evolutionary Games with Variation Mechanism. Journal of Shandong University (Natural Science), 51, 103-107.

[9] Li, H., Wang, Y. and Liu, Z. (2012) Existence and Number of Fixed Points of Boolean Transformations via the Semi-Tensor Product Method. Applied Mathematics Letters, 25, 1142-1147. https://doi.org/10.1016/j.aml.2012.02.023

Copyright © 2020 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.