Two Players Game Based on Schrödinger Equation Solution

Abstract

Game theory for two non-cooperating players has been an important topic in recent years. Each player has his own thoughts, and at any given time there may be more than one idea in his mind, which acts as a disturbance. This is very similar to the solution of the Schrödinger equation: at a fixed time, the solution occupies different states with certain probabilities. In this paper, we connect these two ideas and study the relationship between equilibria and properties of the solution.

Share and Cite:

Gao, Y. (2022) Two Players Game Based on Schrödinger Equation Solution. Theoretical Economics Letters, 12, 564-574. doi: 10.4236/tel.2022.122032.

1. Introduction

A physical or a socioeconomic system (described through quantum mechanics or game theory) is composed of n members (particles, subsystems, players, states, etc.). Each member is described by a state or a strategy which is assigned a determined probability $\rho_{ij}$. In evolutionary game theory, the system is defined through a relative-frequencies vector x whose elements can represent the frequency of players playing a determined strategy. The evolution of the density operator is described by the von Neumann equation, which is a generalization of the Schrödinger equation, a basic equation and a basic assumption of quantum mechanics proposed by Schrödinger, an Austrian physicist. This is why people started to use quantum language (the entropy function) to study game theory (Orrell, 2019).

Firstly, Shubik (1999) finds that there are three basic sources of uncertainty in an economy: exogenous, strategic, and quantum. The first involves the acts of nature: weather, earthquakes, and other natural disasters or favorable events over which we have no control. Strategic uncertainty is endogenous and involves our inability to predict the actions of competitors.

Later, Haven et al. (2018) note that in quantum mechanics a state is formalized with a wave function, which is complex valued; that state forms part of a Hilbert space. Position and momentum in quantum physics are real-valued, and one needs to find so-called operators on the Hilbert space which can represent those real quantities. In Drabik (2011), the author introduces the basic concepts of quantum mechanics into the modelling of economic phenomena. Quantum mechanics is a theory describing the behaviour of microscopic objects and is grounded in the principle of wave-particle duality: quantum-scale objects are assumed to exhibit both wave-like and particle-like properties at the same time. However, that paper only lists the physics background and does not give an exact connection with game theory.

These works (Hubbard, 2017; Hidalgo, 2007a, 2007b) focus on entropy (mostly minimax questions) to analyze the iteration of the game. In contrast, we want to analyze game strategies based on the solution of the Schrödinger equation (which also represents the state). We use the distance between two states to represent "good" or "bad" outcomes for the two players, and the "jump" between two different states is exactly a player's strategy for the next round of the game (Samuelson, 1997).

Our paper consists of four sections. In the second section, we give the models of the Schrödinger equation and of game theory separately. In the third section, we give some basic theorems, examples, and proofs. In the last section, we present our conclusions and discussion.

2. Model

2.1. Schrödinger Equation

At the beginning of the twentieth century, experimental evidence suggested that atomic particles were also wave-like in nature. For example, electrons were found to give diffraction patterns when passed through a double slit in a similar way to light waves. It was therefore reasonable to assume that a wave equation could explain the behaviour of atomic particles. Schrödinger was the first person to write down such a wave equation. The eigenvalues of the wave equation were shown to be equal to the energy levels of the quantum mechanical system, and the best test of the equation was solving for the energy levels of the hydrogen atom, which were found to be in accord with Rydberg's law.

In this part, we give the exact Schrödinger equation. For simplicity, the system is closed, which means the two players cannot be affected by outside factors; the potential can only change with people's different thoughts. Also, each player can make optimal choices with no information loss during the decision. The details will be discussed in the next sections.

Assumption 1. At a fixed time $t = t_0$, each player has only two states; i.e., player A has states $i_A$ and $j_A$, and similarly, player B has states $i_B$ and $j_B$. The states represent different solutions of the equation.

Assumption 2. The two players obey the same equation with different initial values. There is no entanglement between these two quantum phenomena.

Schrödinger developed a differential equation for the time development of a wave function. Since the energy operator involves a time derivative, the kinetic energy operator involves space derivatives, and we expect the solutions to be traveling waves, it is natural to try an energy equation. The Schrödinger equation is the operator statement that the kinetic energy plus the potential energy is equal to the total energy.

Traditionally, the Schrödinger equation is used to express the evolution of a quantum particle through its wave function $\phi(x,t)$:

$$i\hbar \frac{\partial \phi(x,t)}{\partial t} = -\frac{\hbar^2}{2m} \nabla^2 \phi(x,t) + v(x,t)\,\phi(x,t), \quad (t,x) \in (0,\infty)\times\mathbb{R}, \qquad \phi(x,0) = \phi_0(x) \tag{1}$$

where $\nabla$ is the gradient operator at x, $\nabla^2$ is the Laplacian, m is the mass, $\hbar$ is the reduced Planck constant, $v(x,t)$ is the real time-dependent potential, and $\phi_0(x)$ is the initial wavefunction.

But here we use this equation to express the movement of a player's choice and simplify the model by setting $m = \hbar = 1$ and taking the potential v to be time-independent. The modified equation for player A then becomes:

$$i\frac{\partial \phi_A(x,t)}{\partial t} = -\frac{1}{2}\Delta \phi_A(x,t) + v_1(x)\,\phi_A(x,t), \qquad \phi_A(x,0) = \phi_0(x) \tag{2}$$

Similarly, for player B, his equation is:

$$i\frac{\partial \phi_B(x,t)}{\partial t} = -\frac{1}{2}\Delta \phi_B(x,t) + v_2(x)\,\phi_B(x,t), \qquad \phi_B(x,0) = \phi_1(x) \tag{3}$$

Since the Schrödinger equation is like the heat equation (the only difference is that the time t changes to it), we know from the fundamental solution of the heat equation that there are also "fundamental solutions" for the Schrödinger equation. After the comparison, for the Schrödinger equation we need to compute $\sqrt{i}$ in the one-dimensional situation. The square root of i actually has two roots, which matches our player's two states at a given time.
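As a quick numerical illustration of the last remark (a minimal sketch, not part of the paper's model), the two complex square roots of i can be checked directly:

```python
import cmath

# The two square roots of i are +-(1 + i)/sqrt(2),
# i.e. e^{i*pi/4} and e^{i*5*pi/4}.
r1 = cmath.exp(1j * cmath.pi / 4)
r2 = -r1

for r in (r1, r2):
    assert abs(r * r - 1j) < 1e-12  # both roots square back to i
```

The existence of exactly two roots mirrors the two states available to each player at a fixed time.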

2.2. Game Theory

Game theory is a set of techniques to study the interaction of "rational" agents in "strategic" settings. Here "rational" means the standard thing in economics: maximizing an objective function subject to constraints; "strategic" means each player cares not only about his own actions, but also about the actions taken by the other player.

Modern game theory became a field of research with the work of John von Neumann. In 1928, he wrote an important paper about two-person zero-sum games. In 1944, he and Oskar Morgenstern published their classic book, Theory of Games and Economic Behavior (Von Neumann & Morgenstern, 1947), which extended the work on zero-sum games and also started cooperative game theory. In the early 1950s, John Nash made his contributions to non-zero-sum games (Nash Jr., 1950) and started bargaining theory. After that, there was an explosion of theoretical and applied work in game theory, and the methodology was well along its way to its current status as a standard tool (Shubik, 1999; Samuelson, 1997; Selten, 1975; Samuelson, 2016).

In our paper we focus on noncooperative game theory, which takes each player's individual actions as primitives, whereas cooperative game theory takes joint actions as primitives. We make the following assumptions about the players:

Assumption 3. The number of players is 2, player A and player B.

Assumption 4. There are no outside factors affecting their strategies.

Assumption 5. Each player is smart enough to make the optimal choice, and there is no information loss when they make a decision. In terms of the Schrödinger equation, the conservation law holds.

Definition 1. Players have no information loss at any time they make a decision if the $L^2$ norm of the solution of their equation, integrated from $-\infty$ to $\infty$, is always the constant 1:

$$\int_{-\infty}^{\infty} |\phi_j(x,t)|^2 \, dx = 1, \qquad j = A, B$$

Since this integral must be finite (unity), we must have $\phi(x,t) \to 0$ as $x \to \pm\infty$ in order for the integral to have any hope of converging to a finite value. The importance of this for solving the time-dependent Schrödinger equation is that we must check whether or not a solution $\phi(x)$ satisfies the normalization condition.
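The normalization condition of Definition 1 can be verified numerically for a concrete state; the Gaussian wave packet below is an illustrative choice, not one derived from the paper's potentials:

```python
import numpy as np

# phi_0(x) = pi^(-1/4) * exp(-x^2 / 2) is a normalized Gaussian;
# |phi_0|^2 should integrate to 1 over the real line, and
# phi_0 -> 0 as |x| -> infinity, as the normalization condition requires.
x = np.linspace(-10.0, 10.0, 200001)
phi0 = np.pi ** -0.25 * np.exp(-x ** 2 / 2)

dx = x[1] - x[0]
norm = np.sum(np.abs(phi0) ** 2) * dx   # Riemann sum of the L2 norm
assert abs(norm - 1.0) < 1e-6
assert abs(phi0[0]) < 1e-12             # decay at the boundary
```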

Definition 2. (Distance Between Different States). Distance between two state i and j is defined as:

$$\mathrm{Dis}(i,j) = \| i - j \|^2$$

where $\|\cdot\|$ denotes the $L^2$ norm.

Definition 3. (Information of Strategy Sets). An information set I is the set of linear combinations of the two solutions at a fixed time $t = t_0$. E.g., for player A at time $t = t_0$, his information set is:

$$I = \{\, a\,\phi_A^i(x, t_0) + b\,\phi_A^j(x, t_0) \,\}$$

Here $\phi_A^i(x,t_0)$ and $\phi_A^j(x,t_0)$ are the basic solutions of the equation. This is a mixed strategy for player A, since he has two pure strategy distributions, with a, b satisfying $|a|^2 + |b|^2 = 1$. This is just like the famous Schrödinger's cat paradox, stated by Schrödinger in 1935. He presented the case of a cat in a box which has a fifty percent chance to survive and a fifty percent chance to die. If we open the box, we find that the cat is either alive or dead, but while the box is closed, there are infinitely many states it can be in. Accordingly, we can explain the strategy: when we make the choice, we have only the two choices A and B. But while we are thinking, no one knows what we are thinking, and we actually have infinitely many thoughts in our own brain.
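The constraint $|a|^2 + |b|^2 = 1$ and the "opening the box" interpretation can be sketched numerically; the amplitudes below are a hypothetical example:

```python
import numpy as np

# A hypothetical mixed strategy a*phi_i + b*phi_j with |a|^2 + |b|^2 = 1.
a, b = 0.6, 0.8j
assert abs(abs(a) ** 2 + abs(b) ** 2 - 1.0) < 1e-12

# "Opening the box" collapses the mixture: pure strategy i is observed
# with probability |a|^2 = 0.36 and j with probability |b|^2 = 0.64.
rng = np.random.default_rng(seed=1)
draws = rng.choice(["i", "j"], size=200_000, p=[abs(a) ** 2, abs(b) ** 2])
freq_i = float(np.mean(draws == "i"))
assert abs(freq_i - 0.36) < 0.01
```

Until the draw is made, only the amplitude pair (a, b) exists; the pure strategy appears only upon "measurement", exactly as in the cat paradox.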

Definition 4. (State Evolution as Strategy Change). When player A starts to change his strategy according to his guess of player B's behavior, the state change is the evolution of the Schrödinger solution. If his initial state is i and the state changes to j after time t, the relation between i and j is

$$j = e^{-iHt}\, i.$$

Of course, since time is a continuous parameter, we obviously have

$$\lim_{t \to 0} j = i$$

Definition 5. (Strictly Dominant Strategy). Similarly to the definition in traditional game theory, a strategy state $A_i$ is a strictly dominant strategy for player A if for all $\bar{A}_i \neq A_i$ and all states $B_j$ of player B, $\mathrm{Dis}(A_i, B_j) < \mathrm{Dis}(\bar{A}_i, B_j)$.

Definition 6. States i and j for A and B form a Nash Equilibrium if and only if their distance is the least, i.e., for any other states $i'$ and $j'$, $\mathrm{Dis}(i,j) \le \mathrm{Dis}(i',j')$. It also has another name: Stable Equilibrium.
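Definition 6 reduces equilibrium-finding to a distance minimization, which can be illustrated with a toy search over a few normalized states (the candidate vectors below are arbitrary examples, not derived from the paper's equations):

```python
import numpy as np

# Toy illustration of Definition 6: the Nash Equilibrium is the pair of
# states with minimal Dis(i, j) = ||i - j||^2.
def normalize(v):
    v = np.asarray(v, dtype=complex)
    return v / np.linalg.norm(v)

states_A = [normalize([1, 0]), normalize([1, 1])]
states_B = [normalize([0, 1]), normalize([1, 1j])]

def dis(u, v):
    return float(np.linalg.norm(u - v) ** 2)

pairs = [(p, q) for p in range(len(states_A)) for q in range(len(states_B))]
best = min(pairs, key=lambda pq: dis(states_A[pq[0]], states_B[pq[1]]))
best_dis = dis(states_A[best[0]], states_B[best[1]])
```

For these particular vectors the minimal squared distance is $2 - \sqrt{2}$, attained by two tied pairs; any tie-breaking rule would do.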

Remark 1. This idea comes from the model of the electron. The interaction between two electrons is related to their distance: the closer they are, the stronger the interaction. If they are far apart, there is very little interaction between them and we do not care about these two electrons. The two players are opponents and "partners": player A is affected by B and player B is affected by A, so there should be a strong interaction between them.

Definition 7. (Uncertainty Principle for Players). Neither player can exactly guess both which strategy his opponent will play in the next step and with what probability.

Remark 2. The uncertainty principle is one of the most famous ideas in physics. It tells us there is a fuzziness in the behavior of quantum particles: we cannot determine a particle's position x and momentum p at the same time. There is a famous inequality derived by Werner Heisenberg:

$$\sigma_x \sigma_p \ge \frac{\hbar}{2}.$$

where $\hbar$ is the reduced Planck constant $h/(2\pi)$. In a two-player game, x represents the opponent's behavior set (which is also the information set) and p represents the probability of taking each decision. The following phenomenon cannot happen: player B always continues his strategy whatever A's strategy is, since then $\Delta x = \Delta p = 0$, a contradiction with the uncertainty principle. In the next section, we give an example and proof of this.

3. Basic Theorem

Theorem 3.1. A player can have at most one strictly dominant strategy.

Proof. Assume player A has two strictly dominant strategies, states $i_1$ and $i_2$. Then for any state $i_A \neq i_1$ and any state j of player B, we have the inequality:

$$\mathrm{Dis}(i_1, j) < \mathrm{Dis}(i_A, j).$$

By the same idea for state $i_2$, we still have the inequality:

$$\mathrm{Dis}(i_2, j) < \mathrm{Dis}(i_A, j).$$

We pick $i_A = i_2$ in the first inequality and $i_A = i_1$ in the second one, which gives a contradiction. □

Remark 3. There can be no strategy state i for player A such that for all states $\bar{i}$ of A and all j of B, $\mathrm{Dis}(i,j) < \mathrm{Dis}(\bar{i},j)$, since taking $\bar{i} = i$ gives a contradiction; the strict inequality can only be required for $\bar{i} \neq i$. The same holds for B.

Theorem 3.2. The game system is closed under the time evolution, which means the $L^2$ norm of the state (solution) is always 1.

Proof. It is immediate from the fact that $e^{-iHt}$ is unitary and therefore preserves the $L^2$ norm. □
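Theorem 3.2 can be checked numerically on a finite-dimensional stand-in for the evolution (the 2x2 Hermitian matrix H below is an arbitrary illustration, not the paper's Hamiltonian):

```python
import numpy as np

# e^{-iHt} is unitary, so it preserves the L2 norm of any state.
H = np.array([[0.0, 1.0], [1.0, 0.0]])
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * 0.7)) @ V.conj().T   # U = e^{-iH * 0.7}

state = np.array([0.6, 0.8j])                         # norm 1
assert abs(np.linalg.norm(state) - 1.0) < 1e-12
assert abs(np.linalg.norm(U @ state) - 1.0) < 1e-12
```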

Theorem 3.3. The time-evolution operator depends only on its endpoints: the initial time $t_0$ and ending time $t_1$; it has no relation to the intermediate states between $t_0$ and $t_1$. From the strategy point of view, the two players make their decisions at a fixed time, and the opponent does not care about the thinking process.

Proof. Assume we have the initial state $i(t_0)$ and start to evolve to time $t_1$. There are two possibilities: 1) directly "jump" from $t_0$ to $t_1$; 2) pass through many "stopping thinking times" $t_2, t_3, \ldots, t_n$ until $t_1$. Compare these two states:

$$e^{-iH(t_1 - t_0)}\, i_{t_0}$$

$$e^{-iH(t_1 - t_n)}\, e^{-iH(t_n - t_{n-1})} \cdots e^{-iH(t_2 - t_0)}\, i_{t_0} = e^{-iH(t_1 - t_0)}\, i_{t_0}$$

They give the same result, which finishes the proof. □
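The composition law used in the proof can be verified numerically (again with an arbitrary 2x2 Hermitian stand-in for H):

```python
import numpy as np

# Evolving through intermediate "stopping thinking times" equals one
# direct jump, since exponentials of the same H compose additively in time.
H = np.array([[1.0, 0.5], [0.5, 2.0]])   # arbitrary Hermitian example

def U(dt):
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T

direct = U(1.0)                          # t0 = 0 straight to t1 = 1
stepped = U(0.3) @ U(0.5) @ U(0.2)       # via intermediate times
assert np.allclose(direct, stepped)
```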

Theorem 3.4. The two players will reach an NE (Nash Equilibrium) within a period of time.

Proof. If we consider the two players evolving separately, i.e., player A goes from state j to state $e^{-iHt_0} j$ and player B goes from state i to state $e^{-iHt_1} i$, then we calculate their distance:

$$\| e^{-iHt_0} j - e^{-iHt_1} i \|^2 = \| e^{-iHt_0} ( j - e^{-iH(t_1 - t_0)} i ) \|^2 = \| j - e^{-iH(t_1 - t_0)} i \|^2. \tag{4}$$

So we can consider only the evolution of state i! Assume we evolve for a time t; then the distance between these two states is:

$$\mathrm{Dis}(j, e^{-iHt} i) = \| j - e^{-iHt} i \|^2 = \| j \|^2 + \| i \|^2 - 2\,\mathrm{Re}\,\langle j, e^{-iHt} i \rangle = 2 - 2\,\mathrm{Re}\,\langle j, e^{-iHt} i \rangle \tag{5}$$

which is a function of t. We want to minimize $\mathrm{Dis}$, so we need to make $\mathrm{Re}\,\langle j, e^{-iHt} i \rangle$ as large as possible. Now we write the states as linear combinations of the energy eigenstates $E_p$:

$$j = \sum_{p=1}^{\infty} \alpha_p E_p$$

$$i = \sum_{p=1}^{\infty} \beta_p E_p$$

Each constant $\alpha_p, \beta_p$ can be expressed as its norm times an exponential phase factor:

$$\alpha_p = |\alpha_p| e^{i\omega_p^1}$$

$$\beta_p = |\beta_p| e^{i\omega_p^2}$$

Then

$$\langle j, e^{-iHt} i \rangle = \Big\langle \sum_{p=1}^{\infty} \alpha_p E_p,\; e^{-iHt} \sum_{p=1}^{\infty} \beta_p E_p \Big\rangle = \Big\langle \sum_{p=1}^{\infty} |\alpha_p| e^{i\omega_p^1} E_p,\; e^{-iHt} \sum_{p=1}^{\infty} |\beta_p| e^{i\omega_p^2} E_p \Big\rangle \tag{6}$$

For fixed p, consider the $p = 1$ case:

$$\begin{aligned} \big\langle |\alpha_1| e^{i\omega_1^1} E_1,\; e^{-iHt} |\beta_1| e^{i\omega_1^2} E_1 \big\rangle &= |\alpha_1||\beta_1| \big\langle e^{i\omega_1^1} E_1,\; e^{-iHt} e^{i\omega_1^2} E_1 \big\rangle \\ &= |\alpha_1||\beta_1|\, e^{i\omega_1^1} e^{-i\omega_1^2} e^{iE_1 t} \\ &= |\alpha_1||\beta_1|\, e^{i(E_1 t + \omega_1^1 - \omega_1^2)} \\ &= |\alpha_1||\beta_1| \big( \cos(E_1 t + \omega_1^1 - \omega_1^2) + i \sin(E_1 t + \omega_1^1 - \omega_1^2) \big) \end{aligned} \tag{7}$$

Returning to Equation (6), we have the final formula:

$$\langle j, e^{-iHt} i \rangle = \sum_{p=1}^{\infty} |\alpha_p||\beta_p| \cos(E_p t + \omega_p^1 - \omega_p^2) + i \sum_{p=1}^{\infty} |\alpha_p||\beta_p| \sin(E_p t + \omega_p^1 - \omega_p^2).$$

Minimizing the distance in Equation (5) is then equivalent to:

$$\max \mathrm{Re}\,\langle j, e^{-iHt} i \rangle = \max \sum_{p=1}^{\infty} |\alpha_p||\beta_p| \cos(E_p t + \omega_p^1 - \omega_p^2) = \sum_{p=1}^{\infty} |\alpha_p||\beta_p|$$

since $\cos\theta \le 1$ for any $\theta$. We use the following lemma to make sure the equality can be attained for special values of t.

Lemma 3.5. There exist infinitely many $t \in [0, \infty)$ such that $\cos(E_p t + \omega_p^1 - \omega_p^2) = 1$ for each p, and the period is related to $E_1$ and $E_2$.

Proof. We focus on $p = 2$; the case $p > 2$ follows by the same extension.

Assume there exist two integers $k_1$ and $k_2$ such that the following equalities are satisfied:

$$E_1 t + \omega_1^1 - \omega_1^2 = 2 k_1 \pi$$

$$E_2 t + \omega_2^1 - \omega_2^2 = 2 k_2 \pi$$

Solving each equation for t and equating the results, we obtain:

$$(2 k_1 \pi + \omega_1^2 - \omega_1^1)\, E_2 = (2 k_2 \pi + \omega_2^2 - \omega_2^1)\, E_1$$

$$2 k_1 E_2 \pi + (\omega_1^2 - \omega_1^1) E_2 = 2 k_2 E_1 \pi + (\omega_2^2 - \omega_2^1) E_1$$

$$k_1 E_2 - k_2 E_1 = \frac{(\omega_2^2 - \omega_2^1) E_1 - (\omega_1^2 - \omega_1^1) E_2}{2\pi}$$

$$k_1 = \Big( k_2 + \frac{\omega_2^2 - \omega_2^1}{2\pi} \Big) \frac{E_1}{E_2} - \frac{\omega_1^2 - \omega_1^1}{2\pi}$$

Obviously we can choose suitable $\omega_1^1, \omega_1^2, \omega_2^1, \omega_2^2$ such that their differences are some integer q times $2\pi$. According to the famous Bohr formula, $E_n = E_1 / n^2$, so $E_1 / E_2$ is a rational number, and we can pick a suitable $k_2$ such that $k_1$ is also an integer. Then we go back to the equation for t to obtain t. Similarly, for $p > 2$, we can still find the least common multiple of the periods. □
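Lemma 3.5 can also be checked numerically: with commensurate energies, the distance of Equation (5) periodically attains its minimum $2 - 2\sum_p |\alpha_p||\beta_p|$. The coefficients and phases below are an illustrative choice, not taken from the paper:

```python
import numpy as np

# Dis(t) = 2 - 2 * sum_p |a_p||b_p| * cos(E_p t + w_p^1 - w_p^2)
# reaches its minimum 2 - 2 * sum_p |a_p||b_p| when all cosines align.
amp = np.array([np.sqrt(0.7 * 0.4), np.sqrt(0.3 * 0.6)])  # |alpha_p| |beta_p|
E = np.array([1.0, 2.0])                                  # rational ratio
dw = np.array([np.pi, 2 * np.pi])                         # w_p^1 - w_p^2

t = np.linspace(0.0, 4 * np.pi, 400001)
dis = 2 - 2 * (amp[:, None] * np.cos(E[:, None] * t + dw[:, None])).sum(axis=0)

# the minimum is attained (here at t = pi), as the lemma promises
assert abs(dis.min() - (2 - 2 * amp.sum())) < 1e-6
```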

Example 1. (For Definition 7). The prisoner's dilemma is a standard example of a game analyzed in game theory, and we use it first as an example of the uncertainty.

1) If A and B each betray the other, each of them serves two years in prison;

2) If A betrays B but B remains silent, A will be set free and B will serve three years in prison (and vice versa);

3) If A and B both remain silent, both of them will serve only one year in prison.

So each player is actually in a dilemma, and no one knows his/her opponent's strategy for the next step. It is a classical application of the "Uncertainty Principle".
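The payoff structure can be written down directly from the three cases above (a small sketch; the encoding 0 = betray, 1 = stay silent is ours):

```python
# Years in prison (A's years, B's years), indexed by (A's move, B's move).
years = {(0, 0): (2, 2), (0, 1): (0, 3), (1, 0): (3, 0), (1, 1): (1, 1)}

# Whatever B does, betraying gives A strictly fewer years than silence,
# so "betray" dominates -- yet (silent, silent) would be better for both.
for b_move in (0, 1):
    assert years[(0, b_move)][0] < years[(1, b_move)][0]
```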

Example 2. (This example is from the lecture notes (Ferguson, 2005).) (Odd and Even) Players A and B simultaneously call out one of the numbers one or two. Player A's name is Odd; he wins if the sum of the numbers is odd. Player B's name is Even; she wins if the sum of the numbers is even. The amount paid to the winner by the loser is always the sum of the numbers in dollars. We choose $X = \{1, 2\}$, $Y = \{1, 2\}$, and the payoffs to A are the following:

              B calls 1    B calls 2
  A calls 1      -2           +3
  A calls 2      +3           -4

Let us analyze the game from player A's point of view. Suppose he calls "one" 3/5ths of the time and "two" 2/5ths of the time at random. In this case,

1) If B calls "one", A loses 2 dollars 3/5ths of the time and wins 3 dollars 2/5ths of the time; on average, he wins 0. It is an even game in the long run.

2) If B calls "two", A wins 3 dollars 3/5ths of the time and loses 4 dollars 2/5ths of the time; on average, he wins 1/5.

Clearly, if A mixes his choices in this given way, the game has two endings: even, or A wins 0.2 dollars on average every time.

1) If we think about the "after a long time even" case, A and B make no change; without loss of generality, the schedule for A is 1, 1, 1, 2, 2. Then B starts to think about whether she can make some change and earn money. So while A is "asleep", she chooses "1" when A calls 1 and "2" when A calls 2, making every sum even. Then she wins every round (2 or 4 dollars), and A is losing! So if such a situation can happen, A should stay alert at each step and make changes that are hard for B to guess.

2) Similarly, for the second situation: B calls "two" and A wins 0.2 dollars on average each time. A is happy since he can earn money without making any change, but B wants to "save" money, since otherwise she will lose 0.2 on average each game. So she will call "one" without any dilemma; in that situation A gets nothing (since the average payoff is 0), so he will try to make some changes. In that case, each player will behave randomly without a fixed strategy.

It satisfies the uncertainty principle.
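The two averages quoted in the example can be verified with a short calculation (the payoff matrix to A is taken from the example's description):

```python
import numpy as np

# Payoff to A (dollars): rows = A calls 1 or 2, columns = B calls 1 or 2.
# A (Odd) wins the sum when it is odd and loses it when it is even.
payoff_A = np.array([[-2.0, 3.0],
                     [ 3.0, -4.0]])
pA = np.array([3 / 5, 2 / 5])          # A's mixed strategy

vs_B1, vs_B2 = pA @ payoff_A
assert abs(vs_B1 - 0.0) < 1e-12        # an even game if B calls "one"
assert abs(vs_B2 - 0.2) < 1e-12        # A wins 1/5 on average if B calls "two"
```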

However, even though player A cannot know player B's strategy, can he guess her probabilities for the next step? Of course he can; this is the following theorem from quantum mechanics.

Theorem 3.6. Player A can guess the probability of player B going from state $j_1$ to $j_2$ at time t:

$$P_{j_1 \to j_2}(t) = |C_2(t)|^2$$

Here $C_2(t)$ is defined as:

$$C_2(t) = -i \int_0^t H'_{21}(t')\, e^{i \omega_0 t'} \, dt'$$

$H_0$ is the initial Hamiltonian operator $-\frac{1}{2}\Delta + v_1$. Player A has a time-dependent perturbation $H'(t)$, with $H'_{12} = \langle j_1 | H' | j_2 \rangle$, $H'_{21} = (H'_{12})^*$, and $\omega_0 = \frac{E_2 - E_1}{\hbar} = E_2 - E_1$ (since $\hbar = 1$). We write $H = H_0 + H'(t)$.

The proof is similar to the one in the reference book (Griffiths, 2007).

Proof. To begin with, let us suppose that there are just two states $j_1, j_2$; then the solution $\phi(t)$ can be expressed as a combination of these two:

$$\phi(t) = C_1(t)\, \phi_1 e^{-i E_1 t} + C_2(t)\, \phi_2 e^{-i E_2 t}.$$

And now since we have the perturbation, the new Schrödinger equation is:

$$H \phi = i \frac{\partial \phi}{\partial t}$$

Then, combining these two and cancelling terms, we obtain

$$C_1 [H' \phi_1] e^{-i E_1 t} + C_2 [H' \phi_2] e^{-i E_2 t} = i K,$$

$$K = \dot{C}_1 \phi_1 e^{-i E_1 t} + \dot{C}_2 \phi_2 e^{-i E_2 t}$$

To isolate $\dot{C}_1$, we use the standard trick: take the inner product with $\phi_1$ and exploit the orthogonality of $\phi_1$ and $\phi_2$ to conclude that:

$$\dot{C}_1 = -i \big[ C_1 H'_{11} + C_2 H'_{12} e^{-i(E_2 - E_1) t} \big]$$

$$\dot{C}_2 = -i \big[ C_2 H'_{22} + C_1 H'_{21} e^{i(E_2 - E_1) t} \big]$$

Then, after simplifying the equations (the diagonal elements $H'_{11}, H'_{22}$ are assumed to vanish):

$$\dot{C}_1 = -i H'_{12}\, e^{-i \omega_0 t}\, C_2$$

$$\dot{C}_2 = -i H'_{21}\, e^{i \omega_0 t}\, C_1$$

Since our $H'$ is "small", we can solve the equations by successive approximations. Suppose the particle starts out in the lower state:

$$C_1(0) = 1, \qquad C_2(0) = 0.$$

After comparing the zeroth order and the first order, we reach our final conclusion (details skipped here):

$$C_2(t) = -i \int_0^t H'_{21}(t')\, e^{i \omega_0 t'} \, dt'$$

which means that each player can, in this sense, guess the opponent's transition probability. □
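As a numerical sanity check of Theorem 3.6, one can evaluate $|C_2(t)|^2$ for a hypothetical constant perturbation $H'_{21} = V$ (our own toy choice), where the integral has the closed form $4V^2 \sin^2(\omega_0 t/2)/\omega_0^2$:

```python
import numpy as np

# First-order transition probability |C_2(t)|^2 for a constant
# perturbation H'_21 = V (hbar = 1). Toy parameters:
V, w0, t = 0.1, 2.0, 3.0

# trapezoid quadrature of C_2(t) = -i * int_0^t V e^{i w0 t'} dt'
tp = np.linspace(0.0, t, 100001)
f = V * np.exp(1j * w0 * tp)
dt = tp[1] - tp[0]
C2 = -1j * (f.sum() - 0.5 * (f[0] + f[-1])) * dt

prob = abs(C2) ** 2
exact = 4 * V ** 2 * np.sin(w0 * t / 2) ** 2 / w0 ** 2
assert abs(prob - exact) < 1e-8    # quadrature matches the closed form
```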

4. Conclusion and Discussions

In this article, we combine the two-player strategy game and the Schrödinger equation for the first time, establish a connection, and successfully explain the evolution of the game using solution states. This transfers the economics problem into a physics question. We also determine "good" or "bad" based on the distance between two states, which is clear and easy to compare, and apply famous quantum-mechanical results to game theory. However, we still cannot exactly transfer the game "language" directly into the initial potentials $v_1, v_2$ or the equation itself, which is a limitation, and we hope eventually to obtain the game strategy entirely from the equation's solutions (states).

However, the distance we defined in the previous section is in the eigenstate basis; when we perform a measurement of the whole system, we need a transformation to the computational basis, and we will also obtain a probability of reaching the exact state, which helps us approximate the opponent's strategy. This is an ongoing project.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Drabik, E. (2011). Classical and Quantum Physics in Selected Economic Models. Foundations of Management, 3, 7-20.
https://doi.org/10.2478/v10238-012-0032-9
[2] Ferguson, T. S. (2005). Game Theory. In Lecture Notes Math 167.
[3] Griffiths, D. J. (2007). Introduction to Quantum Mechanics. Cambridge University Press.
[4] Haven, E., Khrennikov, A., Ma, C., & Sozzo, S. (2018). Introduction to Quantum Probability Theory and Its Economic Applications. Journal of Mathematical Economics, 78, 127-130.
https://doi.org/10.1016/j.jmateco.2018.08.004
[5] Hidalgo, E. G. (2007a). Quantum Econophysics. In AAAI Spring Symposium: Quantum Interaction (pp. 158-163).
[6] Hidalgo, E. G. (2007b). Quantum Games Entropy. Physica A: Statistical Mechanics and Its Applications, 383, 797-804.
https://doi.org/10.1016/j.physa.2007.05.001
[7] Hubbard, W. H. J. (2017). Quantum Economics, Newtonian Economics, and Law. University of Chicago, Coase-Sandor Institute for Law Economics Research Paper No. 799.
https://doi.org/10.2139/ssrn.2926548
[8] Nash Jr., J. F. (1950). Equilibrium Points in n-Person Games. Proceedings of the National Academy of Sciences of the United States of America, 36, 48-49.
https://doi.org/10.1073/pnas.36.1.48
[9] Orrell, D. (2019). Introduction to the Mathematics of Quantum Economics.
[10] Samuelson, L. (1997). Evolutionary Games and Equilibrium Selection. MIT Press.
[11] Samuelson, L. (2016). Game Theory in Economics and Beyond. Journal of Economic Perspectives, 30, 107-130.
https://doi.org/10.1257/jep.30.4.107
[12] Selten, R. (1975). Re-Examination of the Perfectness Concept for Equilibrium Points in Extensive Games. International Journal of Game Theory, 4, 22-55.
https://doi.org/10.1007/BF01766400
[13] Shubik, M. (1999). Quantum Economics, Uncertainty and the Optimal Grid Size. Economics Letters, 64, 277-278.
https://doi.org/10.1016/S0165-1765(99)00095-6
[14] Von Neumann, J., & Morgenstern, O. (1947). Theory of Games and Economic Behavior (2nd ed.). Princeton University Press.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.