1. Introduction
The game of cops and robbers on graphs was introduced independently by Quilliot [1] and Nowakowski and Winkler [2]. In this game, a cop and a robber alternate turns moving from vertex to adjacent vertex on a connected graph G with the cop trying to catch the robber and the robber trying to evade the cop. In 2014, Komarov and Winkler studied a variation of the cop and robber game in which the robber is too inebriated to employ an escape strategy, and at each step he moves to a neighboring vertex chosen uniformly at random [3].
In 2020, Harris, Insko, Prieto-Langarica, Stoisavljevic, and Sullivan introduced another variant of the cop and robber game that they call the tipsy cop and drunken robber game on graphs [4]. Each round of this game consists of independent moves: the robber begins by moving uniformly at random to a vertex adjacent to the one where he began, and this is followed by the cop moving to a vertex adjacent to the one where she began; since the cop is only tipsy, some percentage of her moves are random and some are intentionally directed toward the robber.
In this paper, we generalize the work of Harris et al. to analyze the tipsy cop and tipsy robber game. One inspiration for this study is the biological scenario illustrated in the YouTube video [5] https://www.youtube.com/watch?v=Z_mXDvZQ6dU, where a neutrophil chases a bacterium moving in random directions. While the bacterium's movement seems mostly random, the neutrophil's movement appears slightly more purposeful but a little slower.
Harris et al. modeled the game by having the players alternate turns. In this paper we use a slightly different model of the game that was suggested to us by Dr. Florian Lehner. We let the cop and robber each have any degree of tipsiness, and rather than assume that the players alternate turns and flip a coin to determine whether a player moves randomly or not, we use a spinner wheel to determine the probability of whether the next move will be a sober cop move, a sober robber move, or a tipsy move by either player. We model this scenario on vertex-transitive graphs (both finite and infinite examples) and on non-vertex-transitive graphs (friendship graphs) using the theory of Markov chains. Given a set of initial conditions on each player's tipsiness and their initial distance, the questions we consider are:
● What is the probability that the game, beginning in state i, will be in state j after exactly M rounds?
● What is the probability that the game lasts at least M rounds if the players start distance d away?
● What is the expected number E(d) of rounds the game should last if the players start distance d away?
For some of these graphs we are able to identify when the expected capture time is finite and when the robber is expected to escape.
In Section 2 we introduce our model of the tipsy cop and robber game on both vertex-transitive and non-vertex transitive graphs. In Section 3 we analyze the game on regular trees and compare it to the famous Gambler’s ruin problem [6]. In Section 4 we present our general method for analyzing the game using Markov chains, and we then apply these methods to analyze the game on specific families of graphs in Sections 5-8. In Section 5 we analyze the game on cycle graphs. In Section 6 we analyze the game on Petersen graphs. In Section 7 we analyze the game on friendship graphs. In Section 8 we analyze the game on toroidal grids. In Section 9 we present a model for the game where players start off drunk and sober up as the game progresses, answering Question 6.1 from Harris et al. [4]. Finally, in Section 10 we present a model for the game where the players’ tipsiness increases as a function of the distance between them, thus providing an answer to Question 6.6 from Harris et al. [4]. While we present our methods with specific examples from each family of graphs we analyze, we also include our SageMath (CoCalc) code in the Appendix so that any reader may adapt our methods to their own models.
2. Background and Introduction of Our Model
In this paper we model the tipsy cop and tipsy robber game on a graph G by first placing the cop on one vertex and the robber on another vertex on the graph G. Rather than require the players alternate turns as in previous models of the cops and robbers game, we allow for four possible outcomes in each round of the game: a sober robber move r, a sober cop move c, a tipsy robber move tr, or a tipsy cop move tc, where r + c + tr + tc = 1. The outcome of each round is assigned at random, perhaps by a probability spinner as depicted in Figure 1.
● r is the probability of a sober robber move (in yellow)
● c is the probability of a sober cop move (in green)
● tr is the probability of a tipsy move by the robber (in red)
● tc is the probability of a tipsy move by the cop (in blue)
If the game is played on a non-vertex-transitive graph, the probability of transitioning from one state to another depends on the starting locations of the players. One such graph is a friendship graph, where n copies of the cycle graph C3 are joined at a common vertex (see Figure 2 for an example of a friendship graph with 3 copies of the C3 cycle graph).
A vertex-transitive graph is a graph where every vertex has the same local environment, so that no vertex can be distinguished from any other based on the vertices and edges surrounding it. On a non-vertex-transitive graph a tipsy move by the cop is not necessarily equivalent to a tipsy move by the robber, as it may increase, keep the same, or decrease the distance between the players based on each player's starting state. For example, the first part of Figure 2 gives the possible movements of the cop (in blue) if she is in the center of a friendship graph with 3 triangles. If the cop makes a tipsy move to a green vertex (which accounts for 2/3 of tc moves), the distance between her and the robber (in red) increases to 2. If she moves tipsy to the vertex in orange the distance will not change (1/6 of tc moves), and if she moves to the vertex in red (that move can be either 1/6 of tc moves or the only sober move) she will catch the robber (the distance will be 0). On the other hand, if the cop's starting position is on an outer vertex (second part of Figure 2) there are only two possible outcomes: she moves closer to the robber (in black, either 1/2 of tc moves or the only sober move) or keeps the same distance (in orange, 1/2 of tc moves).
Similarly, the starting state of the robber determines the probability of moving to the next state. The first part of Figure 3 depicts the possible movements of the robber (in red) if he is in the center of a friendship graph with 3 triangles. If the
Figure 1. Probability of each move based on spinner.
Figure 2. Possible cop moves (double circles) on a friendship graph with 3 triangles.
Figure 3. Possible robber moves (single circles) on a friendship graph with 3 triangles.
robber makes a sober move, or a tipsy move (which accounts for 2/3 of tr moves), to a green vertex, the distance between him and the cop (in blue) increases to 2. If he moves tipsy to the vertex in orange (1/6 of tr moves) the distance will not change, and if he moves to the vertex in blue (a different tipsy move, 1/6 of tr moves) he will get caught (the distance will be 0). On the other hand, if the robber's starting position is on an outer vertex (second part of Figure 3) there are only two possible outcomes: he moves closer to the cop (in black, 1/2 of tr moves) or keeps the same distance (in orange, 1/2 of tr moves or a sober move).
We will analyze the friendship graph game in more detail in Section 7.
The game is simpler to model on vertex-transitive graphs. A sober cop move will always decrease the distance between the two (as the cop is chasing the robber). A sober robber move will always increase the distance between the two, or if they are at a maximum distance from each other on a finite graph, he can decide to stay in the same place. Finally, a tipsy move by either player is a random move, and therefore may increase or decrease the distance between the two. When modeling the game on a vertex-transitive graph, a tipsy move by the cop is equivalent to a tipsy move by the robber, so we regroup all tipsy moves together. Every move in the game is guaranteed to fall in one of those three categories hence,
r + c + t = 1 (Figure 4).
● r is the probability of a sober robber move (in yellow)
● c is the probability of a sober cop move (in green)
● t is the probability of a tipsy move by either player (in red)
For example, if the game is played on an infinite path as depicted in Figure 5, the probability that the distance between the cop and robber increases by one in a round is equal to the probability of a sober robber move, since a sober robber will always run from the cop, plus one half of the probability of a tipsy move by either player, since half of all random moves will increase the distance between them. We will employ Markov chains and transformation matrices to model how the players move from one state to another on these graphs.
3. Regular Trees and the Gambler’s Ruin Problem
On an infinite regular tree of degree q, the distance between the two players decreases with probability 1/q and increases with probability (q - 1)/q when either player makes a tipsy move. When the cop makes a sober move, the distance always decreases, and when the robber makes a sober move the distance always increases. Hence, if we assume that the cop calls off the hunt when the distance between the players reaches a specified distance D, then the Markov chain in Figure 6 models the game, where the probability of the distance increasing is p = r + (q - 1)t/q and the probability of the distance decreasing is 1 - p = c + t/q.
Figure 4. Probability of each move based on spinner where tipsy moves are grouped together.
Figure 5. Move of a tipsy cop (blue) and tipsy robber (red).
Figure 6. Markov chain on infinite regular trees.
Those familiar with the gambler's ruin problem will immediately realize that this Markov chain is the same as the gambler's ruin chain with the probability of the gambler winning a round given by p = r + (q - 1)t/q and the probability of the gambler losing a round given by 1 - p = c + t/q.
It is well-known that the expected game length of the gambler's ruin problem is given by the equation E(d) = d(D - d) when p = 1/2 ([7], p. 79) and

E(d) = d/(1 - 2p) - (D/(1 - 2p)) (1 - ((1-p)/p)^d)/(1 - ((1-p)/p)^D)   (1)

when p ≠ 1/2 ([7], p. 79). After finding a common denominator and factoring out 1/(1 - 2p) we can rewrite Equation (1) using our notation as

E(d) = (1/(1 - 2p)) [ d - D (1 - ((qc + t)/(qr + (q-1)t))^d)/(1 - ((qc + t)/(qr + (q-1)t))^D) ]   (2)

When p < 1/2 (favoring cop success) the right hand side of Equation (2) has a finite limit as D → ∞. So if the cop never gives up, we get the formula

E(d) = d/(1 - 2p)   (3)

     = qd/(q(c - r) - (q - 2)t)   (4)

If p ≥ 1/2, then as D → ∞,

E(d) → ∞.   (5)
If P_R(d) and P_C(d) represent the probability of the robber or cop winning respectively when starting the game from distance d away from each other, then of course P_R(d) + P_C(d) = 1. Formulas for P_R(d) and P_C(d) can easily be derived from well-known formulas modeling the gambler's ruin problem. For instance, when p = 1/2, the probability of the robber escaping P_R(d) = d/D is the same as the probability of the gambler's success in the fair gambler's ruin problem, and when p ≠ 1/2, we have

P_R(d) = (1 - ((1-p)/p)^d)/(1 - ((1-p)/p)^D)   (6)

which is the same as the probability of the gambler's success in the unfair gambler's ruin problem ([6], Equation 1.2). Substituting p = r + (q - 1)t/q (transition probability to increase distance) and 1 - p = c + t/q (transition probability to decrease distance) we find P_R(d) to be

P_R(d) = (1 - ((qc + t)/(qr + (q-1)t))^d)/(1 - ((qc + t)/(qr + (q-1)t))^D)   (7)
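The gambler's-ruin formulas above are easy to check numerically. The sketch below (our own illustration, not the paper's code) implements the expected-duration and escape-probability formulas in plain Python; `p` is the per-round probability that the distance increases, and `tree_up_probability` encodes the tree parametrization used in this section:

```python
def ruin_expected_rounds(p, d, D):
    # Expected duration of gambler's ruin started at d with absorbing
    # barriers 0 and D and per-round "up" probability p ([7], p. 79).
    if abs(p - 0.5) < 1e-12:
        return d * (D - d)
    ratio = (1 - p) / p
    return d / (1 - 2 * p) - (D / (1 - 2 * p)) * (1 - ratio ** d) / (1 - ratio ** D)

def ruin_escape_probability(p, d, D):
    # Probability of reaching D before 0 (the robber escaping) from distance d.
    if abs(p - 0.5) < 1e-12:
        return d / D
    ratio = (1 - p) / p
    return (1 - ratio ** d) / (1 - ratio ** D)

def tree_up_probability(r, t, q):
    # On an infinite q-regular tree the distance increases with probability
    # p = r + (q - 1) * t / q: a sober robber move, or most tipsy moves.
    return r + (q - 1) * t / q
```

The closed form can be validated against the first-step recurrence E(d) = 1 + p E(d+1) + (1 - p) E(d-1) with boundary conditions E(0) = E(D) = 0.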
Numerical Examples
As an example, we consider the game on an infinite regular tree, where the proportion of moves follows a fixed distribution of r, c, and t, and the cop calls off the chase if the robber reaches a specified distance D. Table 1 gives the expected game time E(d), and the probability of the robber or cop winning from each possible starting distance d.
We observe that if this game begins at distance D - 1, then it has the lowest expected game time of any possible starting distance. This makes sense as 40% of the time the robber will escape in the first move, or with probability 22.5% either player will move drunkenly to allow the robber to escape in the first move; hence, the robber has a 66.5% chance of escaping in the first move if the game begins at distance D - 1.
Table 1. Expected game times and probabilities of cop or robber win.
4. General Matrix Method for Analyzing Markov Processes
In this section we describe how to use various matrix equations to find the critical data points of the game. These include, the probability of transitioning from one vertex to another in exactly
rounds, the probability the game lasts at least
rounds, and the expected game time. To calculate these values for a finite Markov chain we will use its probability matrix P, where
is the probability of transitioning from state i to state j in one round of the Markov process. We will use these matrix methods repeatedly to analyze various families of graphs in the subsequent sections of this paper.
4.1. Calculating Probability of Moving from State i to State j in M Rounds
The probability of starting in state i and ending in state j in exactly M moves is

e_i P^M e_j^T   (8)

where e_i and e_j^T are the row and column basis vectors respectively associated with state i and state j.
For example, given the Markov chain in Figure 7, the probability matrix for this Markov chain is
(9)
Some probabilities of going from one state to another in M rounds are displayed in Table 2.
Figure 7. Sample Markov chain with four states.
Table 2. Multi-round state transition probabilities.
Note that in this example, state 0 and state 3 are both absorbing states, and that it is possible when calculating e_i P^M e_a^T for an absorbing state a, that the robber reaches the absorbing state a in fewer than M moves, and then sits there for the remaining moves.
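As a concrete illustration of Equation (8), the following sketch computes the (i, j) entry of P^M with numpy. The 4-state chain below is a hypothetical stand-in for the chain of Figure 7 (its entries are our own illustrative values, not the paper's):

```python
import numpy as np

# Hypothetical 4-state chain: states 0 and 3 absorbing, 1 and 2 transient.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.4, 0.0, 0.6, 0.0],
    [0.0, 0.3, 0.0, 0.7],
    [0.0, 0.0, 0.0, 1.0],
])

def transition_probability(P, i, j, M):
    # Equation (8): e_i P^M e_j^T is just the (i, j) entry of P^M.
    return np.linalg.matrix_power(P, M)[i, j]
```

Because state 3 is absorbing, `transition_probability(P, 1, 3, M)` accumulates every path that reaches state 3 in at most M moves and then sits there.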
4.2. Calculating the Probability That the Game Will Last at Least M Rounds
To calculate the probability that the cop will still be actively chasing the robber through M rounds of this game with a starting distance d between the players, we restrict attention to the matrix T modeling only the non-absorbing states of the Markov chain. Note that in this case, T is the matrix obtained from P by removing the columns and rows associated with any absorbing states in P. The probability of the game lasting at least M rounds if the players start at distance d from each other is then given by the following product of matrices

e_d T^M 1   (10)

where e_d denotes a standard basis row vector with 1 in column d and zero elsewhere, and 1 is the column vector with 1 in each entry.
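In code, the survival probability of Equation (10) is a single vector-matrix product. The sketch below uses the 3-state transient block one obtains on C6 (see Section 5) with the illustrative values r = c = 0.25, t = 0.5; these values and the matrix pattern are our assumptions for the example:

```python
import numpy as np

r, c, t = 0.25, 0.25, 0.5  # illustrative move probabilities, r + c + t = 1
T = np.array([              # transient states: distance 1, 2, 3 on C6
    [0.0,       r + t / 2, 0.0      ],
    [c + t / 2, 0.0,       r + t / 2],
    [0.0,       c + t,     r        ],
])

def survival_probability(T, d, M):
    # Equation (10): e_d T^M 1, with distance d mapped to row d - 1.
    return np.linalg.matrix_power(T, M)[d - 1] @ np.ones(T.shape[0])
```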
4.3. Calculating Expectation E(d)
As the number of rounds goes to infinity the likelihood that the game is still going on goes to zero. If we sum the probabilities e_d T^M 1 for all M ≥ 0 we can find the expected number of rounds the game should last. That is,

E(d) = Σ_{M=0}^{∞} e_d T^M 1   (11)

     = e_d (Σ_{M=0}^{∞} T^M) 1   (12)

     = e_d (I - T)^{-1} 1   (13)

where I is the identity matrix.
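Equation (13) is best evaluated by solving the linear system (I - T)x = 1 rather than forming the inverse; a minimal sketch, reusing the same illustrative 3-state transient block as above (our example values, not the paper's):

```python
import numpy as np

def expected_game_length(T, d):
    # Equation (13): E(d) = e_d (I - T)^{-1} 1, computed via a linear solve.
    n = T.shape[0]
    return np.linalg.solve(np.eye(n) - T, np.ones(n))[d - 1]

# Illustrative 3-state transient block (r = c = 0.25, t = 0.5 on C6).
T = np.array([
    [0.0,  0.5,  0.0 ],
    [0.5,  0.0,  0.5 ],
    [0.0,  0.75, 0.25],
])
```

The result can be cross-checked against a long partial sum of Equation (11).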
5. Cycle Graphs
The study of cycle graphs breaks down into two families of cases based on whether the number of vertices n is even or odd. We start by analyzing the case when n is even, and then explain how the odd case differs slightly.
For a cycle graph C_n with n nodes, where n is even, we have the states where the robber and the cop are 0 (cop catches the robber or the robber is tipsy and stumbles upon the cop himself), 1, 2, ..., or n/2 moves away from each other.
We assume that the cop always moves until they are at distance 0 away. Also, if they are at distance n/2 away, and the robber is sober, he will stay at the same location since it is the furthest from the cop. The Markov chain for a cycle graph with n nodes has n/2 + 1 states as shown in Figure 8.
From this Markov chain we find the probability matrix P to be a tridiagonal ((n/2 + 1) × (n/2 + 1)) matrix of the following form: P_{0,0} = 1, P_{n/2,n/2} = r, the upper and lower diagonal entries are P_{i,i+1} = r + t/2 for 1 ≤ i ≤ n/2 - 1, and P_{i,i-1} = c + t/2 for 1 ≤ i ≤ n/2 - 1, together with P_{n/2,n/2-1} = c + t, and finally P_{i,j} = 0 otherwise.
The transition matrix T is derived by removing the first row and column corresponding to the absorbing state of P. Hence, T is an (n/2) × (n/2) matrix with T_{n/2,n/2} = r, T_{n/2,n/2-1} = c + t, T_{i,i+1} = r + t/2 for 1 ≤ i ≤ n/2 - 1, T_{i,i-1} = c + t/2 for 2 ≤ i ≤ n/2 - 1, and T_{i,j} = 0 otherwise.
When n is odd, the only difference in the Markov chain is that the probability of transitioning from state (n - 1)/2 to itself is r + t/2 and the probability of transitioning from (n - 1)/2 to (n - 1)/2 - 1 is c + t/2. Hence T_{(n-1)/2,(n-1)/2} = r + t/2 and T_{(n-1)/2,(n-1)/2-1} = c + t/2, since the robber and cop move all the time.
We may now use our formulas involving T to find the probability that the game lasts at least M rounds as in Subsection 4.2 and the expected game time as in Subsection 4.3. It is also important to note that because our model does not guarantee turns alternate, and this graph is finite where the cop never calls off the chase, E(d) < ∞ for all values of d, c, r, t, and n.
5.1. Numerical Examples
For a cycle graph with 6 nodes we have the states where the robber and the cop are 0 (cop catches the robber or the robber is tipsy and stumbles upon the cop himself), 1, 2 or 3 moves away from each other as shown in Figure 9.
We assume that the cop always moves until they are at distance 0 away. Also, if they are at distance 3 away, and the robber is sober, he will stay at the same location since it is the furthest from the cop. We derive the Markov chain in Figure 10 for a cycle graph with 6 nodes.
We use a stochastic matrix P to represent the transition probabilities of this system. The rows and columns in this matrix are indexed by the possible states
Figure 8. Markov chain for cycle graph with
nodes where
is even.
Figure 9. Distance between cop (blue double circle) and robber (red single circle) on a cycle C6 graph.
listed above, with the pre-transition state as the row and post-transition state as the column. The transition probability matrix is

    P = [ 1        0        0        0       ]
        [ c + t/2  0        r + t/2  0       ]
        [ 0        c + t/2  0        r + t/2 ]   (14)
        [ 0        0        c + t    r       ]

This transition probability matrix P can be restricted to a transient transition matrix T with absorbing state 0 removed:

    T = [ 0        r + t/2  0       ]
        [ c + t/2  0        r + t/2 ]   (15)
        [ 0        c + t    r       ]
Using our general matrix method as in Section 4 we can solve the following numerical example.
5.2. Numerical Examples
We assume the tipsy moves by either player account for half of all moves, so t = 0.5 and r + c = 0.5. Table 3 gives the probability the cop will still be chasing the robber after 7 rounds provided the cop and the robber start at distance d = 1, 2, or 3 as the percentage of the sober moves allocated to the cop and the robber varies.
Table 3 also shows the game will last longest, and the chase has the largest probability of still continuing after 7 rounds, if the starting distance is 3 and the robber takes the maximum percentage of sober moves possible. The accompanying SageMath code for these computations is included in Appendix A.1, so the interested reader can adapt these calculations to model the game for any values of r, c, and t they choose.
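For readers without SageMath, the Table 3 style of computation can be sketched in plain Python with numpy. The tridiagonal pattern follows Section 5; the specific values of r below are our illustrative choices, not the table's:

```python
import numpy as np

def c6_transient_matrix(r, c, t):
    # Transient part of the C6 chain (states: distance 1, 2, 3).
    return np.array([
        [0.0,       r + t / 2, 0.0      ],
        [c + t / 2, 0.0,       r + t / 2],
        [0.0,       c + t,     r        ],
    ])

t = 0.5  # tipsy moves take half of all moves
for r in (0.1, 0.25, 0.4):
    c = 0.5 - r
    T = c6_transient_matrix(r, c, t)
    surv7 = np.linalg.matrix_power(T, 7) @ np.ones(3)  # survive 7 rounds, d = 1, 2, 3
    E = np.linalg.solve(np.eye(3) - T, np.ones(3))     # expected game lengths
    print(f"r = {r:.2f}: P(survive 7 rounds) = {surv7.round(4)}, E = {E.round(3)}")
```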
Table 3. Some survival rates and expected game durations on C6.
6. Petersen Graph
The Petersen graph is a vertex-transitive graph where the cop and robber can only be 0, 1, or 2 moves away from each other as illustrated in Figure 11.
We assume that the cop moves until eventually she captures the robber. Also, if the robber is distance 2 away from the cop, and the robber is sober, he will stay at the same location since it is the furthest from the cop. Hence the Markov chain for the Petersen graph is as depicted in Figure 12.
We use a stochastic matrix P to represent the transition probabilities of this system. The rows and columns in this matrix are indexed by the possible states listed above, with the pre-transition state as the row and the post-transition state as the column.

    P = [ 1        0        0        ]
        [ c + t/3  0        r + 2t/3 ]   (16)
        [ 0        c + t/3  r + 2t/3 ]

To calculate the probability of the game lasting at least M rounds and the expected number of rounds in the game, we remove the absorbing state 0 to get the restricted transition matrix

    T = [ 0        r + 2t/3 ]   (17)
        [ c + t/3  r + 2t/3 ]
We may now use our formulas involving T to find the probability that the game will last at least M rounds as in Subsection 4.2 and the expected number of rounds as in Subsection 4.3.
Numerical Examples
In Table 4 we assume the tipsy moves by either player account for half of all moves, so t = 0.5 and r + c = 0.5. We then calculate the probability the cop
Figure 11. Distance between cop (blue double circle) and robber (red single circle) on a Petersen graph.
Table 4. Some survival rates and expected game durations on the Petersen graph.
will still be chasing the robber after M rounds based on their starting distances d = 1 or 2 and on the percentage of the sober moves allocated to the cop and the robber.
We have published the accompanying SageMath code for these computations in Appendix A.1, so the interested reader can change these calculations to model the game for any values of r, c, and t that they choose.
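On the Petersen graph the effect of a tipsy move can be computed directly from the graph structure, which gives an independent check on the Markov chain. The sketch below (our own check, standard library only) builds the Petersen graph and tallies how a uniformly random move by either player changes the cop-robber distance for representative pairs at distance 1 and 2:

```python
from collections import deque

# Petersen graph: outer 5-cycle (0..4), spokes to inner vertices 5..9,
# and inner edges forming a pentagram.
adj = {v: set() for v in range(10)}
def add_edge(u, v):
    adj[u].add(v); adj[v].add(u)
for i in range(5):
    add_edge(i, (i + 1) % 5)           # outer cycle
    add_edge(i, i + 5)                 # spokes
    add_edge(i + 5, (i + 2) % 5 + 5)   # inner pentagram

def dist(u, v):
    # Breadth-first search distance between u and v.
    seen, frontier = {u}, deque([(u, 0)])
    while frontier:
        x, k = frontier.popleft()
        if x == v:
            return k
        for y in adj[x]:
            if y not in seen:
                seen.add(y); frontier.append((y, k + 1))

def move_distribution(cop, rob):
    # Distribution of the new cop-robber distance after one uniformly random
    # move by either player (both vertices have degree 3, so the 6 possible
    # single-edge moves are equally likely under an even tipsy split).
    out = {}
    moves = [(nc, rob) for nc in adj[cop]] + [(cop, nr) for nr in adj[rob]]
    for c2, r2 in moves:
        d = dist(c2, r2)
        out[d] = out.get(d, 0) + 1 / len(moves)
    return out

print(move_distribution(0, 1))  # representative pair at distance 1
print(move_distribution(0, 2))  # representative pair at distance 2
```

By vertex-transitivity of the Petersen graph, one representative pair per distance suffices.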
7. Friendship Graphs
As friendship graphs are not vertex-transitive, we cannot simply model the game on them with a Markov chain where each state is determined by the distance between the cop and robber, and we must treat tipsy cop moves and tipsy robber moves separately, as the transition probabilities from state to state depend on who is moving. Hence, in this section the spinner we use has four distinct possible outcomes satisfying r + c + tr + tc = 1.
The following notation will be used to model the tipsy cop and tipsy robber game on friendship graphs of n triangles. The states 1e, 1cc, and 1rc all refer to the cop and robber being distance 1 away from each other but, 1e means both players are on the same outer edge, 1cc means the cop is in the center, and 1rc means the robber is in the center as depicted in Figure 13.
Figure 13. All possible states of friendship graphs of n = 2 triangles.
Friendship graphs with 3 and 4 triangles are shown in Figure 14. Since the cop never calls off the chase on this finite graph, the cop will eventually win even if the robber is completely sober throughout.
The Markov chain modeling the game on a friendship graph with n triangles is shown in Figure 15.
From this Markov chain we derive the transition probability matrix, with rows and columns indexed by the states 0, 1e, 1cc, 1rc, 2:

    P = [ 1                   0        0         0         0                 ]
        [ c + tc/2 + tr/2     0        tc/2      r + tr/2  0                 ]
        [ c + tc/(2n) + tr/2  tc/(2n)  r + tr/2  0         (n-1)tc/n         ]   (18)
        [ c + tc/2 + tr/(2n)  tr/(2n)  0         tc/2      r + (n-1)tr/n     ]
        [ 0                   0        c + tc/2  tr/2      r + tc/2 + tr/2   ]

and transition matrix with absorbing state 0 removed

    T = [ 0        tc/2      r + tr/2  0                 ]
        [ tc/(2n)  r + tr/2  0         (n-1)tc/n         ]   (19)
        [ tr/(2n)  0         tc/2      r + (n-1)tr/n     ]
        [ 0        c + tc/2  tr/2      r + tc/2 + tr/2   ]
Sample Calculations
In this subsection we compute the expected game times and the probability of the game lasting through M rounds on a friendship graph with n cliques. Table 5 summarizes the results. In these computations we assume that the cop and robber each get 50% of the moves, so c + tc = r + tr = 0.5, but we vary the tipsiness of the cop and robber. The code for these computations is available in Appendix A.4.
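The four-outcome spinner is also easy to simulate directly, which gives an independent sanity check on the matrix computations. Below is a Monte Carlo sketch on the friendship graph; the greedy distance-based strategies for the sober moves, the starting positions, and the parameter values are our reading of the model, not the paper's code:

```python
import random

def friendship_graph(n):
    # Center vertex 0; triangle k has outer vertices 2k + 1 and 2k + 2.
    adj = {0: set()}
    for k in range(n):
        a, b = 2 * k + 1, 2 * k + 2
        adj[a], adj[b] = set(), set()
        for u, v in ((0, a), (0, b), (a, b)):
            adj[u].add(v)
            adj[v].add(u)
    return adj

def average_game_length(n=3, r=0.2, c=0.3, tr=0.2, tc=0.3,
                        trials=2000, cap=10000):
    adj = friendship_graph(n)
    dist = lambda u, v: 0 if u == v else (1 if v in adj[u] else 2)
    total = 0
    for _ in range(trials):
        cop, rob = 1, 4  # outer vertices of two different triangles
        rounds = 0
        while cop != rob and rounds < cap:
            x = random.random()
            if x < r:        # sober robber: greedily maximize distance
                rob = max(sorted(adj[rob]), key=lambda v: dist(cop, v))
            elif x < r + c:  # sober cop: greedily minimize distance
                cop = min(sorted(adj[cop]), key=lambda v: dist(v, rob))
            elif x < r + c + tr:  # tipsy robber: uniform random step
                rob = random.choice(sorted(adj[rob]))
            else:                 # tipsy cop: uniform random step
                cop = random.choice(sorted(adj[cop]))
            rounds += 1
        total += rounds
    return total / trials
```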
8. Toroidal Grids
A toroidal grid is the Cartesian product of two cycle graphs and represents a grid that can be embedded on the surface of a torus. Since our model does not guarantee turns alternate, and the cop never calls off the chase, the cop will eventually win on any toroidal grid.
The Markov chain for our model on a 7 × 7 toroidal grids is shown in Figure 16.
Figure 14. Examples of friendship graphs with 3 and 4 triangles with cop (blue) and robber (red).
Figure 15. Markov chain for tipsy cop and tipsy robber game on friendship graphs of n cliques.
Figure 16. Markov chain for a 7 × 7 toroidal grid.
Table 5. Some survival rates and expected game durations on friendship graph.
The number of states in our Markov chain model grows considerably as the number of nodes in the toroidal grid grows.
Therefore,
(20)
and
(21)
Numerical Examples
In this section we model the game on a 7 × 7 toroidal grid with a fixed proportion of moves r, c, and t. Table 6 shows the probability of the robber evading the cop through the first 50 rounds of the game as well as the expected game length based on each of the players' possible starting configurations. The SageMath code for these calculations is available to the interested reader in Appendix A.2.
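Since the full Markov chain on the 7 × 7 torus is large, a quick Monte Carlo sketch can sanity-check expected game lengths. The greedy distance-based sober strategies and the parameter values below are our own choices for illustration:

```python
import random

M = N = 7  # 7 x 7 toroidal grid

def torus_dist(a, b):
    # Graph distance on the torus: wrap-around L1 distance.
    dx = min((a[0] - b[0]) % M, (b[0] - a[0]) % M)
    dy = min((a[1] - b[1]) % N, (b[1] - a[1]) % N)
    return dx + dy

def neighbors(v):
    x, y = v
    return [((x + 1) % M, y), ((x - 1) % M, y),
            (x, (y + 1) % N), (x, (y - 1) % N)]

def average_game_length(r=0.2, c=0.3, trials=300, cap=20000):
    # Tipsy probability is the remaining 1 - r - c, split evenly per move.
    total = 0
    for _ in range(trials):
        cop, rob = (0, 0), (3, 3)
        rounds = 0
        while cop != rob and rounds < cap:
            x = random.random()
            if x < r:        # sober robber: greedily increase distance
                rob = max(neighbors(rob), key=lambda v: torus_dist(cop, v))
            elif x < r + c:  # sober cop: greedily decrease distance
                cop = min(neighbors(cop), key=lambda v: torus_dist(v, rob))
            elif random.random() < 0.5:  # tipsy move by the robber
                rob = random.choice(neighbors(rob))
            else:                        # tipsy move by the cop
                cop = random.choice(neighbors(cop))
            rounds += 1
        total += rounds
    return total / trials
```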
9. Modeling the Game Where Players Sober up over Time
Harris et al. asked how to model a tipsy cop and drunk robber game when the cop is sobering up over time ([4], Question 6.1). In this section, we model the game where both players begin completely drunk and sober up as time passes. We define our probability of a tipsy move by either player t as a function of the number of rounds m that have passed, t = f(m). Additionally, we set the proportion of sober cop moves to be c = (a/(a + b))(1 - f(m)) and the proportion of sober robber moves to be r = (b/(a + b))(1 - f(m)), where a and b are any desired integers that determine the proportion of sober moves that is assigned to the cop and to the robber. As we assume the players are sobering up as time passes, we choose to use a function f with the following properties:

f(0) = 1   (22)

lim_{m→∞} f(m) = 0   (23)

With these assumptions, the probability of surviving M rounds, given an initial state d, is given by:

e_d (∏_{m=1}^{M} T_m) 1   (24)

where T_m denotes the transient transition matrix for round m, computed with t = f(m). The expected game time is given by the series:

E(d) = Σ_{M=0}^{∞} e_d (∏_{m=1}^{M} T_m) 1   (25)
We conclude this section with some sample calculations in tables. The code for the calculations is attached in Appendix A.3, so the interested reader can change these calculations to model the game for any values of a, b, or f they choose.
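The round-dependent products in Equations (24) and (25) can be evaluated by accumulating the matrices one round at a time. A sketch on the C6 chain, with the illustrative choice f(m) = 1/(m + 1) (which starts at f(0) = 1 and decays to 0, satisfying the rules above; the function and the matrix pattern are our assumptions):

```python
import numpy as np

def c6_T(r, c, t):
    # Transient C6 chain (distances 1, 2, 3), as in Section 5.
    return np.array([
        [0.0,       r + t / 2, 0.0      ],
        [c + t / 2, 0.0,       r + t / 2],
        [0.0,       c + t,     r        ],
    ])

def survival_sobering(d, M, a=1, b=1, f=lambda m: 1.0 / (m + 1)):
    # Equation (24): e_d (prod_{m=1}^{M} T_m) 1, where round m uses t = f(m)
    # and the sober share 1 - f(m) is split a : b between cop and robber.
    prod = np.eye(3)
    for m in range(1, M + 1):
        t = f(m)
        c = a / (a + b) * (1 - t)
        r = b / (a + b) * (1 - t)
        prod = prod @ c6_T(r, c, t)
    return prod[d - 1] @ np.ones(3)
```

Truncating the sum of these survival probabilities at a large cutoff N approximates the expected game time of Equation (25).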
Table 6. Survival rates and expected durations on 7 × 7 toroidal grid.
Numerical Examples
Assuming the game is played on a cycle graph of six nodes (Figure 9), its accompanying Markov chain is given in Figure 10.
Example 9.1.1. Let the tipsiness function f be given by a formula that follows the rules stated above. Then, Table 7 shows the probability of the game lasting 5 rounds from varying starting positions and proportions of sober moves allotted to each player. It also shows estimates for expected game times from each starting position, where we use the partial sum

Σ_{M=0}^{N} e_d (∏_{m=1}^{M} T_m) 1   (26)

to approximate the infinite sum in Equation (25). We choose the cutoff N large enough that these values approximate the expected game time accurately to at least five digits as we vary the proportion of sober moves allocated to the robber.
The expected game time for 80% was also calculated at a larger cutoff, and found to be the same for at least the first five digits. It seems only at 90% and above is the smaller cutoff not sufficiently large.
Example 9.1.2. Let the tipsiness be given by a function that changes exponentially and follows the rules stated above. Then, Table 8 shows the probability of the game lasting 5 rounds from varying starting positions and proportions of sober moves allocated to each player. It also shows estimates for expected game times from each starting position, where we use the partial sum

Σ_{M=0}^{N} e_d (∏_{m=1}^{M} T_m) 1   (27)
Table 7. Some survival rates and expected game times on C6 when players sober up over time.
*Sum calculated to the smaller cutoff N; **sum calculated to the larger cutoff N.
Table 8. Survival rates and expected duration from different starting states on C6.
*Sum calculated to the smaller cutoff N; **sum calculated to the larger cutoff N.
to approximate the infinite sum in Equation (25). We use one of two cutoff values N because these values are sufficiently large, for their respective proportions of sober moves allocated to the robber, to approximate the expected game time accurately to at least five digits.
The expected game time for 70% was also calculated at a larger cutoff and found to be the same for at least five digits. However, we are certain that for the expected game time at 90% the cutoff is not sufficiently large, yet trying to calculate the sum with any larger cutoff causes the SageMath server to time out.
10. Modeling the Game Where Tipsiness Is a Function of Distance
Mentioning that it may be more biologically realistic, Harris et al. also asked how to model a tipsy cop and drunk robber game where the players' tipsiness is determined by the distance between them ([4], Question 6.6). In this section, we model the tipsy cop and robber game where the players sober up as they get closer to each other. This models the scenario where the cop's ability to track the robber improves the closer they get, and the robber senses that the cop is on his trail so he moves more deliberately. The transition matrix T represents the stochastic matrix where both the cop and robber sober up as they get closer to each other and get more tipsy as the distance between the two increases. If D is the maximum distance the two can be apart on a finite graph (or the distance at which the cop calls off the hunt), then we assume that the tipsiness function t(d) has the following properties:

t(0) = 0   (28)

t(D) = 1   (29)

Additionally, we set the proportion of sober cop moves to be c(d) = (a/(a + b))(1 - t(d)) and the proportion of sober robber moves to be r(d) = (b/(a + b))(1 - t(d)), where a and b are any desired integers that determine what fraction of all sober moves is assigned to the cop and to the robber.
10.1. Linear Increase of Tipsiness
In this scenario, we choose to use the function t(d) = d/D, where d is the distance between the cop and the robber when they start the chase and D is the maximum distance they can be apart; this value D is specified to be either the radius of the graph on finite graphs, or the maximum specified distance before the cop calls off the chase on an infinite graph. The probability of the game lasting at least M rounds when starting in state d is

e_d T^M 1   (30)

The expected game time is

E(d) = e_d (I - T)^{-1} 1   (31)
10.2. Exponential Increase of Tipsiness
Now, we choose to use an exponentially growing function t(d) satisfying the properties above, where d is the distance between the cop and the robber when they start the chase. The probability of the game lasting through M rounds is given by:

e_d T^M 1   (32)

The expected game time is given by:

E(d) = e_d (I - T)^{-1} 1   (33)
We conclude this section with some sample calculations in tables. We have published the accompanying SageMath code for these computations in Appendix A.5, so the interested reader can change these calculations to model the game for any values of a, b, and t(d) that they choose.
10.3. Numerical Example on Cycle Graph
Given a cycle graph of 10 nodes, we will have the possibilities of starting at distances 0 to 5 away with maximum distance D = 5. The accompanying Markov chain (Figure 17) again assumes that the cop always moves and the sober robber chooses not to move if they are furthest away from each other, as noted in Section 5 for even cycle graphs.
Based on the Markov chain we get the transformation matrix based on distance, with entries T_{d,d-1} = c(d) + t(d)/2 and T_{d,d+1} = r(d) + t(d)/2 for 1 ≤ d ≤ 4, together with T_{5,4} = c(5) + t(5) and T_{5,5} = r(5).   (34)
Note that for each distance d, r(d) + c(d) + t(d) = 1. In our model we specify what percentage of all sober moves are given to the robber.
The probability of surviving 20 rounds when starting at distance d apart and the expected duration of the chase E(d) using the linear growth of the tipsiness t(d) = d/5 is shown in Table 9.
Similarly, the probability of surviving 20 rounds when starting distance d apart and the expected duration of the chase E(d) using the exponential growth for tipsiness are shown in Table 10.
Based on the results in the tables, the chase is longer if the tipsiness increases exponentially and the percentage of all sober moves given to the robber is less than 50%. In the case that the percentage of all sober moves given to the robber is greater than 50%, the chase is longer if the tipsiness increases linearly. Assuming the robber is given the choice, the robber should pick a strategy based on the percentage of sober moves assigned to him: if he gets to move more than 50% of the sober moves his tipsiness should increase
Table 9. Sample calculations on C10 when tipsiness increases linearly with distance.
Table 10. Sample calculations on C10 when tipsiness grows exponentially with distance.
linearly; otherwise (if he gets to move 50% or less of all sober moves) he has a better chance of surviving if the tipsiness changes exponentially.
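The distance-dependent chain on C10 can be sketched in code as follows. The tridiagonal pattern follows Section 5 with t replaced by t(d); the entries and default parameters are our own derivation, so treat them as an assumption rather than the paper's exact matrix:

```python
import numpy as np

D = 5  # maximum distance on C10

def transient_matrix(a=1, b=1, tips=lambda d: d / D):
    # Transient states are distances 1..D; the sober share 1 - t(d) is
    # split a : b between cop and robber at each distance d.
    T = np.zeros((D, D))
    for d in range(1, D + 1):
        t = tips(d)
        c = a / (a + b) * (1 - t)
        r = b / (a + b) * (1 - t)
        i = d - 1
        if d < D:
            if i > 0:
                T[i, i - 1] = c + t / 2  # step toward the cop
            T[i, i + 1] = r + t / 2      # step away from the cop
        else:
            T[i, i - 1] = c + t          # at max distance every move comes closer,
            T[i, i] = r                  # unless the sober robber stays put
    return T

T = transient_matrix()
surv20 = np.linalg.matrix_power(T, 20) @ np.ones(D)  # survive 20 rounds
E = np.linalg.solve(np.eye(D) - T, np.ones(D))       # expected game lengths
print("P(survive 20 rounds):", surv20.round(4))
print("E(d):", E.round(3))
```

Swapping the `tips` argument for an exponentially growing function reproduces the comparison between Tables 9 and 10.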
10.4. Infinite Regular Trees
If we play the game on an infinite regular tree of degree q, and the cop calls off the hunt if the robber reaches a distance of D nodes away from the cop, then the matrix P is a (D + 1) × (D + 1) tridiagonal matrix with P_{0,0} = 1, P_{D,D} = 1, and the upper and lower diagonal entries are P_{d,d+1} = r(d) + (q - 1)t(d)/q for 1 ≤ d ≤ D - 1, and P_{d,d-1} = c(d) + t(d)/q for 1 ≤ d ≤ D - 1, and finally P_{i,j} = 0 otherwise. To
obtain the matrix T we remove from P the first and last rows and columns corresponding to both absorbing states.
For example, on an infinite regular tree of degree 4 with maximum specified distance D = 5, the accompanying Markov chain is in Figure 18. Based on the Markov chain we get the transformation matrix based on distance, with entries T_{d,d+1} = r(d) + 3t(d)/4 and T_{d,d-1} = c(d) + t(d)/4 for the transient states 1 ≤ d ≤ 4.   (35)
Note that in this case we remove states 0 and 5 because they are absorbing. As in the numerical example on cycle graphs, we specify what percentage of all sober moves are given to the robber.
10.5. Numerical Example on Infinite Regular Tree with Degree 4 and Maximum Distance 5
The probability that the game continues for 30 rounds when starting distance d apart and the expected duration of the chase when tipsiness increases linearly is shown in Table 11. Note that we assume the cop calls off the chase if the distance reaches 5.
Table 12 shows the probability of a game lasting 30 rounds when starting at distance d apart and the expected duration of the chase when the game takes place on a regular tree of degree 4 and tipsiness is modeled
Figure 18. Markov chain for regular tree with maximum distance of 5.
Table 11. Numerical results on infinite regular tree when tipsiness increases linearly with distance.
Table 12. Some survival times and expected game durations on infinite regular tree of degree 4 when tipsiness increases exponentially with distance.
using the exponential function. Again we assume the cop calls off the chase if the distance reaches 10.
Based on the results in the tables above, the robber should pick the strategy where the tipsiness changes exponentially, since it gives him a higher chance of survival and the chase lasts longer.
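The quantities reported in the tables follow from the standard absorbing-chain formulas: the survival probability is $G_d = e_d T^M \mathbf{1}$ and the expected duration is $E_d = e_d (I-T)^{-1}\mathbf{1}$. A minimal numerical sketch for the degree-4 tree with the exponential tipsiness rule from Appendix A.5 is below; the value of p (the share of sober moves given to the robber) is an assumed illustration, so the printed numbers are not the table entries.

```python
import numpy as np

n, delta, M = 10, 4, 30           # call-off distance, degree, rounds
p = 0.5                           # assumed share of sober moves for the robber

def tipsiness(d):
    # exponential rule from Appendix A.5
    return (1 - 1.2 ** (1 - d)) / (1 + 1.2 ** (1 - d))

T = np.zeros((n - 1, n - 1))      # states are distances 1..n-1
for i in range(1, n):
    t = tipsiness(i)
    r, c = p * (1 - t), (1 - p) * (1 - t)
    if i < n - 1:
        T[i - 1, i] = r + t * (delta - 1) / delta   # distance up
    if i > 1:
        T[i - 1, i - 2] = c + t / delta             # distance down

ones = np.ones(n - 1)
G = np.linalg.matrix_power(T, M) @ ones             # survival probabilities
E = np.linalg.solve(np.eye(n - 1) - T, ones)        # expected durations
for d in range(1, n):
    print(f"d={d}: G_{M}={G[d-1]:.4f}, E={E[d-1]:.2f}")
```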
11. Future Directions and Open Questions
We end this paper by pointing out a few possible areas of future study and posing a few open questions. Modeling the game on infinite grids is considerably more difficult than on infinite regular trees, because there are multiple states for a given distance and it is not immediately clear which strategy the cop should use to decrease the distance, or the robber to increase it.
1) On an infinite grid, what are the optimal strategies for the cop and robber?
2) Who is more likely to win on an infinite grid, if both players employ their optimal strategies?
3) How will the game change if the cop and the robber do not take turns, but instead move simultaneously?
Acknowledgements
We would like to thank Drs. Pamela Harris, Florian Lehner, and Alicia Prieto-Langarica for many helpful and inspiring conversations and suggestions regarding this project. We also thank Drs. Katie Johnson and Shaun Sullivan for reading and providing helpful feedback regarding earlier drafts of this manuscript. This collaboration was supported by the FGCU Seidler Collaboration Fellowship.
Appendix
A. SageMath Code
The code described in the manuscript is provided in this appendix.
A.1. SageMath Code for Petersen Graph
var ('c,t,M,i,p')
#####################################################
def check_rt(r,t):
    check=1
    if r<0:
        check=0
        print('r cannot be negative')
    if t<0:
        check=0
        print('t cannot be negative')
    if t+r>1:
        check=0
        print('r+t cannot be greater than 1')
    return check
#####################################################
def define_P(n=4): #default n=4 can be changed later
    L=[]
    for i in range(n+1):
        L.append([])
        for j in range(n+1):
            L[i].append(0)
    L[0][0]=1 #distance 0 (robber caught) is absorbing
    for i in range(1,n):
        L[i][i+1]=r+(2*t/3)
        L[i][i-1]=c+t/3
    L[n][n]=r+(2*t/3) #at the maximum distance the robber stays put
    L[n][n-1]=c+t/3
    M=matrix(L)
    return M
#####################################################
def check_define_P(n=4):
    if check_rt(r,t)==1:
        return define_P(n)
#####################################################
M=7 #number of rounds
r=.30
t=.40
c=1-t-r
n=10 #default n=4 can be changed here
check_define_P(n)
#####################################################
def define_T(n):
    P=define_P(n)
    T=P.submatrix(1,1,n,n)
    return T
#####################################################
P=define_P(n)
print(P)
print()
T=define_T(n)
print(T)
print()
I = identity_matrix(n)
print(I)
one=[]
for i in range(n):
one.append(1)
N=vector(one)
print(N)
print('c=',c,'r=',r)
for d in range(1,n+1):
    e_d = I.column(d-1)
    print('d=',d,'n=',n,'M=',M)
    G = e_d*(T^M)*N
    print('Surv. prob. of M rounds from')
    print('distance', d, 'is', G)
    E = e_d*((I-T)^(-1))*N
    print('Expected number of rounds from')
    print('distance', d, 'is', E)
    print()
A.2. SageMath Code for 7 × 7 Toroidal Grid
var('r,t,c')
r=0.4
# Proportion of moves that are sober robber moves.
c=0.3
# Proportion of moves that are sober cop moves.
t=0.3
# Proportion of moves that are tipsy moves by
# either player.
M=50
# Number of rounds you want the robber to survive.
#####################################################
if c+r+t != 1:                                      #
    print('Make sure your probabilities sum to 1:') #
    print('Sum = t+r+c = 1 ?')                      #
#####################################################
else:
    v=vector([1,1,1,1,1,1,1,1,1])
    a=vector([1,0,0,0,0,0,0,0,0]) #starting state (3,3)
    b=vector([0,1,0,0,0,0,0,0,0]) #starting state (3,2)
    d=vector([0,0,1,0,0,0,0,0,0]) #starting state (3,1)
    e=vector([0,0,0,1,0,0,0,0,0]) #starting state (3,0)
    f=vector([0,0,0,0,1,0,0,0,0]) #starting state (2,2)
    g=vector([0,0,0,0,0,1,0,0,0]) #starting state (2,1)
    h=vector([0,0,0,0,0,0,1,0,0]) #starting state (2,0)
    i=vector([0,0,0,0,0,0,0,1,0]) #starting state (1,1)
    j=vector([0,0,0,0,0,0,0,0,1]) #starting state (1,0)
    T=matrix([[r+t/2,c+t/2,0,0,0,0,0,0,0],
              [r+t/4,t/4,t/4,0,c+t/4,0,0,0,0],
              [0,r+t/4,t/4,t/4,0,c+t/4,0,0,0],
              [0,0,r+t/2,t/4,0,0,c+t/4,0,0],
              [0,r+t/2,0,0,0,c+t/2,0,0,0],
              [0,0,r+t/4,0,t/4,0,t/4,c+t/4,0],
              [0,0,0,r+t/4,0,t/2,0,0,c+t/4],
              [0,0,0,0,0,r+t/2,0,0,c+t/2],
              [0,0,0,0,0,0,r+t/4,t/2,0]])
    G=vector((T^M)*v)
    E=vector((1-T).inverse()*v)
    print('G_M(3,3)=',G*a)
    print('G_M(3,2)=',G*b)
    print('G_M(3,1)=',G*d)
    print('G_M(3,0)=',G*e)
    print('G_M(2,2)=',G*f)
    print('G_M(2,1)=',G*g)
    print('G_M(2,0)=',G*h)
    print('G_M(1,1)=',G*i)
    print('G_M(1,0)=',G*j)
    print('E(3,3)=',E*a)
    print('E(3,2)=',E*b)
    print('E(3,1)=',E*d)
    print('E(3,0)=',E*e)
    print('E(2,2)=',E*f)
    print('E(2,1)=',E*g)
    print('E(2,0)=',E*h)
    print('E(1,1)=',E*i)
    print('E(1,0)=',E*j)
A.3. SageMath Code for Cycle Graphs Sobering up Over Time
from sage.calculus.calculus import symbolic_sum
var('t,M,N,i,p')
M=5 #number of rounds you want the robber to survive
N=1000 #Nth partial sum of infinite sum.
# Higher is more accurate but takes longer.
k=9 #number of nodes in cycle graph
p=0.51 #proportion of sober moves allocated to robber
#####################################################
# if p > 0.8 then N should be at least 3000.        #
# otherwise N=1000 is sufficient.                   #
#####################################################
if k%2==0:
    n=(k/2)
else:
    n=(k-1)/2
def define_P(n,p,t):
    L=[]
    for i in range(n+1):
        L.append([])
        for j in range(n+1):
            L[i].append(0)
    L[0][0]=1
    for i in range(1,n):
        L[i][i+1]=p*(1-t)+t/2
        # transition probability to increase distance
        L[i][i-1]=(1-p)*(1-t)+t/2
        # transition probability to decrease distance
    if k%2==0: #if cycle graph is even
        L[n][n]=p*(1-t)
        L[n][n-1]=(1-p)*(1-t)+t
    else: #if cycle graph is odd
        L[n][n]=p*(1-t)+t/2
        L[n][n-1]=(1-p)*(1-t)+t/2
    M=matrix(L)
    return M
#####################################################
def define_T(n,p,t):
    P=define_P(n,p,t)
    T=P.submatrix(1,1,n,n)
    return T
#####################################################
# output: n length vector of all ones
#####################################################
def one_vector(n):
    L=[]
    for i in range(n):
        L.append(1)
    return vector(L)
#####################################################
def position_vector(n,d):
    L=[]
    for i in range(n):
        L.append(0)
    L[d]=1
    return vector(L)
#####################################################
T=define_T(n,p,t)
#print(T)
#Uncomment the line above to see Transition Matrix
print('Calculating...')
x=vector(prod(define_T(n,p,4/(3+m))
              for m in range(1,M+1))*one_vector(n))
y=vector(sum(prod(define_T(n,p,4/(3+m))
                  for m in range(1,q))
             for q in range(1,N+1))*one_vector(n))
print('done:')
for i in range(0,n):
    print('G_M('+str(i+1)+')=',x*position_vector(n,i))
    print('E('+str(i+1)+')=',y*position_vector(n,i))
#####################################################
A.4. SageMath Code for Friendship Graphs
var('r,tr,tc,n,c')
tr=.4 #Probability of a tipsy robber move
tc=.4 #Probability of a tipsy cop move
r=.1 #Probability of a sober robber move
c=.1 #Probability of a sober cop move
n=6 #Number of triangles
M=5 #Number of rounds the game lasts
#####################################################
if tc+tr+r+c != 1:                                  #
    print('Make sure your probabilities sum to 1:') #
    print('Sum = tc+tr+r+c = 1 ?')                  #
#####################################################
else:
    T=matrix([[r+tc/2+tr/2,c+tc/2,tr/2,0],
              [tc*((n-1)/n),r+tr/2,0,tc/(2*n)],
              [r+tr*((n-1)/n),0,tc/2,tr/(2*n)],
              [0,tc/2,r+tr/2,0]])
    v=vector([1,1,1,1])
    a=vector([1,0,0,0]) #starting state 2
    b=vector([0,1,0,0]) #starting state 1cc
    s1rc=vector([0,0,1,0]) #starting state 1rc
    # (named s1rc so it does not clobber the probability c)
    d=vector([0,0,0,1]) #starting state 1e
    E=vector((matrix.identity(4)-T).inverse()*v)
    ####################################################
    print('G_M(2)=',a*T^M*v)
    print('G_M(1cc)=',b*T^M*v)
    print('G_M(1rc)=',s1rc*T^M*v)
    print('G_M(1e)=',d*T^M*v)
    print('E(2)=',E*a)
    print('E(1cc)=',E*b)
    print('E(1rc)=',E*s1rc)
    print('E(1e)=',E*d)
A.5. SageMath Code for Regular Tree with Exponential Change of Tipsiness as a Function of Distance
var('c,t,M,i,p')
M=30 #number of rounds you want the robber to survive
#c - proportion of sober cop moves, c=1-t-r
#r - proportion of sober robber moves
#t - proportion of tipsy moves, changes with distance
#D - distance between cop and robber when they start
n=10 #maximum distance between cop and robber
delta=4 #degree of the regular tree
p=1 #proportion of sober moves allocated to robber
######################################################
# input: n is maximum distance between cop and robber
#        before calling off chase
# input: p is proportion of sober moves allocated to
#        robber
# output: transition matrix (including 2 absorbing
#         states)
######################################################
def define_P(n,p):
    L=[]
    for i in range(n+1):
        L.append([])
        for j in range(n+1):
            L[i].append(0)
    T=[0]
    R=[0]
    C=[0]
    L[0][0]=1
    for D in range(1,n+1):
        s=1-(1.2)^(1-D)
        q=1+(1.2)^(1-D)
        T.append(s/q) #tipsiness based on distance
        R.append(p*(1-T[D]))
        # Percentage of sober moves that are robber moves
        # as a function of t based on distance
        C.append(1-T[D]-R[D])
    print('T=',T)
    print('R=',R)
    print('C=',C)
    for i in range(1,n):
        L[i][i+1]=R[i]+(T[i]*((delta-1)/delta))
        # transition probability to increase distance
        L[i][i-1]=C[i]+T[i]*(1/delta)
        # transition probability to decrease distance
    L[n][n-1]=0
    L[n][n]=1
    M=matrix(L)
    return M
######################################################
######################################################
# input: n is maximum distance between cop and robber
#        before calling off chase
# input: p is proportion of sober moves allocated to
#        robber
# output: transition matrix (without absorbing states)
######################################################
def define_T(n,p):
    P=define_P(n,p)
    T=P.submatrix(1,1,n-1,n-1)
    return T
######################################################
######################################################
# input: n the maximum distance between cop and robber
#        before calling off chase
# output: n-1 length vector of all ones
######################################################
def one_vector(n):
    L=[]
    for i in range(n-1):
        L.append(1)
    return vector(L)
######################################################
######################################################
# input: n the maximum distance between cop and robber
#        before calling off chase
# input: d such that d+1 is starting distance
#        between cop and robber
######################################################
def position_vector(n,d):
    L=[]
    for i in range(n-1):
        L.append(0)
    L[d]=1
    return vector(L)
######################################################
P=define_P(n,p)
T=define_T(n,p)
one=one_vector(n)
print()
print('G_d is probability of surviving', M,)
print('rounds starting at distance d.')
print('E_d is expected game length from distance d:')
print()
for d in range(n-1):
    print('starting distance d=', d+1)
    e=position_vector(n,d)
    G_d=e*(T^M)*one
    print('G_d=', float(G_d))
    E_d=e*((1-T).inverse())*one
    print('E_d=', float(E_d))
    print()