# Continuous dispersal on a discrete lattice

Dispersal is a key process in many domains, and particularly in ecology. Individuals move in space, and this movement can be modelled as a random process following some kernel. The dispersal kernel is simply a probability distribution describing the distance travelled in a given time frame. Since space is continuous, it is natural to use a continuous kernel. However, some modelling frameworks are formulated on a lattice, or discrete array of patches.

So how can we implement a continuous kernel in discrete space?

As with many modelling problems, we could approach this in a number of ways. Here is the implementation that I came up with, and I welcome your suggestions, dear reader, for alternatives or improvements to this approach.

1. Draw a random variate d from the dispersal kernel.
2. Draw a uniform random number θ between 0 and 2π, which we will use to choose a direction.
3. Calculate the relative location (in continuous space) of the dispersed individuals using some trig:
   x = d cos(θ)
   y = d sin(θ)
4. Determine the new location on the lattice for each individual by adding the relative x and y positions to the current location, rounding to the nearest integer, and taking the modulo of this integer and the lattice size (a quick sketch of this arithmetic follows the list). This turns the lattice into a torus, with the outer edges looping back on each other. If you remember the old Donkey Kong games, it works the same way: leave out the right side of the screen and you enter from the left.
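
To make the wrap-around in step 4 concrete, here is a minimal sketch of the arithmetic under R's 1-based indexing (the helper name wrap_index is just for illustration); the -1/+1 shuffle keeps indices in the range 1 to size:

```
## Map a continuous displacement onto a lattice of a given size (torus wrap)
wrap_index <- function(current, displacement, size) {
    (round(current + displacement) - 1) %% size + 1
}
wrap_index(50, 2.3, size = 50)  # 2: off the right edge, re-enters from the left
wrap_index(1, -3.0, size = 50)  # 48: off the left edge, re-enters from the right
```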

I implemented this approach in R as a function that takes in a population on a lattice and returns a lattice with the dispersed population; the user can also specify which dispersal kernel to use. The result below uses a negative-exponential kernel on a 50×50 lattice, generated by iterating over the dispersal function:

```
## General function to take in a lattice and disperse
## according to a user-provided dispersal kernel
## Author: Corey Chivers
lat_disp <- function(pop, kernel, ...)
{
    lattice_size <- dim(pop)
    new_pop <- array(0, dim = lattice_size)
    for(i in 1:lattice_size[1])
    {
        for(j in 1:lattice_size[2])
        {
            ## Draw a distance and direction for each individual in this cell
            N <- pop[i, j]
            dist <- kernel(N, ...)
            theta <- runif(N, 0, 2 * pi)
            x <- cos(theta) * dist
            y <- sin(theta) * dist

            ## seq_len(N) skips empty cells (1:N would misfire when N = 0)
            for(k in seq_len(N))
            {
                ## Round to the nearest lattice cell and wrap around the edges (torus)
                x_ind <- (round(i + x[k]) - 1) %% lattice_size[1] + 1
                y_ind <- (round(j + y[k]) - 1) %% lattice_size[2] + 1
                new_pop[x_ind, y_ind] <- new_pop[x_ind, y_ind] + 1
            }
        }
    }
    return(new_pop)
}
```
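
As a quick usage sketch of the function above (the object names here are illustrative): disperse a single occupied cell for one time step with an exponential kernel, and confirm that no individuals are created or lost.

```
## One dispersal step for 10000 individuals starting at the centre of a 50x50 lattice
pop0 <- array(0, dim = c(50, 50))
pop0[25, 25] <- 10000
pop1 <- lat_disp(pop0, kernel = rexp, rate = 1)
sum(pop1) == sum(pop0)  # TRUE: dispersal moves individuals but conserves the total
```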

For comparison, I also ran the same population using a Gaussian kernel, parameterizing both kernels to have a mean dispersal distance of 1.

Here is the result using a Gaussian kernel:

The resulting population after 35 time steps has a smaller range than when using the exponential kernel, highlighting the importance of the shape of the dispersal kernel for spreading populations (remember that in both cases the average dispersal distance is the same).
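
This difference is driven by the kernels' tails: the exponential places far more probability on long-distance jumps. A quick check of the upper tails, using the same parameterizations as the code below (and treating the absolute value of the normal draw as the distance, since the angle already handles direction):

```
## P(dispersal distance > 3) under each kernel
pexp(3, rate = 1, lower.tail = FALSE)               # ~0.0498
2 * pnorm(3, mean = 0, sd = 1, lower.tail = FALSE)  # ~0.0027: P(|d| > 3) for the normal
```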

Code for generating the plots:

```
############## Run and plot #######################

## Custom colour ramp
colours <- c('grey', 'blue', 'black')
cus_col <- colorRampPalette(colors = colours, space = "rgb", interpolate = "linear")

## Initialize population array
Time <- 35
pop <- array(0, dim = c(Time, 50, 50))
pop[1, 25, 25] <- 10000

### Gaussian Kernel ###
par(mfrow = c(1, 1))
for(i in 2:Time)
{
    image(pop[i-1, , ], col = cus_col(100), xaxt = 'n', yaxt = 'n')
    pop[i, , ] <- lat_disp(pop[i-1, , ], kernel = rnorm, mean = 0, sd = 1)
}

## Plot
png('normal_kern.png', width = 800, height = 800)
par(mfrow = c(2, 2), pty = 's', omi = c(0.1, 0.1, 0.5, 0.1), mar = c(2, 0, 2, 0))
times <- c(5, 15, 25, 35)
for(i in times)
    image(pop[i-1, , ],
          col = cus_col(100),
          xaxt = 'n',
          yaxt = 'n',
          useRaster = TRUE,
          main = paste("Time =", i))

mtext("Gaussian Kernel", outer = TRUE, cex = 1.5)
dev.off()

### Exponential Kernel ###
par(mfrow = c(1, 1))
for(i in 2:Time)
{
    image(pop[i-1, , ], col = cus_col(100), xaxt = 'n', yaxt = 'n')
    pop[i, , ] <- lat_disp(pop[i-1, , ], kernel = rexp, rate = 1)
}

## Plot
png('exp_kern.png', width = 800, height = 800)
par(mfrow = c(2, 2), pty = 's', omi = c(0.1, 0.1, 0.5, 0.1), mar = c(2, 0, 2, 0))
times <- c(5, 15, 25, 35)
for(i in times)
    image(pop[i-1, , ],
          col = cus_col(100),
          xaxt = 'n',
          yaxt = 'n',
          useRaster = TRUE,
          main = paste("Time =", i))

mtext("Exponential Kernel", outer = TRUE, cex = 1.5)
dev.off()

############################################################
```

# Uncertainty in Markov chains: fun with snakes and ladders

I love board games. Over the holidays, I came across this interesting post over at Arthur Charpentier’s Freakonometrics blog about the classic game of snakes and ladders. The post is a nice little demonstration of how the game can be formulated completely as a Markov chain, and can be analysed simply using the mathematics of state transitions.

The particular board which was analysed had the following ‘portals’:

```
## Squares where a snake or ladder begins, and the square each one leads to
## (position i of 'ending' pairs with position i of 'starting')
starting <- c(4, 9, 17, 20, 28, 40, 51, 54, 62, 64, 63, 71, 93, 95, 99)
ending   <- c(14, 31, 7, 38, 84, 59, 67, 34, 19, 60, 81, 91, 73, 75, 78)
```
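
Before building the transition matrix, a quick consistency check on these vectors (a small sketch, assuming the objects above): each start square should pair with exactly one end square, and no square should be both a start and an end, which would chain portals together.

```
length(starting) == length(ending)        # TRUE: 15 start/end pairs
length(intersect(starting, ending)) == 0  # TRUE: no square is both a start and an end
```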

Given that a player rolls a six-sided die to determine how many positions to move forward, the transition matrix can be defined as:

```
n <- 100
## From n+1 starting positions (0-100 inclusive) to n+1+6 ending positions
## (the extra 6 columns capture overshooting the last square)
M <- matrix(0, n+1, n+1+6)
rownames(M) <- 0:n
colnames(M) <- 0:(n+6)

## Die-roll probabilities: from each position, the squares 1-6 ahead each get 1/6
for(i in 1:6){ diag(M[, (i+1):(i+1+n)]) <- 1/6 }

## Collect the probability of reaching or overshooting the final square onto square 100
M[, n+1] <- apply(M[, (n+1):(n+1+6)], 1, sum)
M <- M[, 1:(n+1)]

## Reroute any probability of landing on a portal square to that portal's destination
for(i in 1:length(starting))
{
    v <- M[, starting[i]+1]
    ind <- which(v > 0)
    M[ind, starting[i]+1] <- 0
    M[ind, ending[i]+1] <- M[ind, ending[i]+1] + v[ind]
}
```
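
As a quick sanity check (assuming the M, starting, and ending objects above): every row of a proper transition matrix should sum to 1, and after the rerouting step no probability should remain on portal squares.

```
all(abs(rowSums(M) - 1) < 1e-12)    # TRUE: each row is a probability distribution
all(colSums(M)[starting + 1] == 0)  # TRUE: portal start squares are never landed on
```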

In order to calculate the probability distribution of a player’s position after h rolls, the initial position vector (what state is currently occupied) is multiplied by the transition matrix raised to the hth power.

```
### Multiply the transition matrix to get the position distribution ###
powermat <- function(P, h)
{
    Ph <- P
    if(h > 1)
    {
        for(k in 2:h)
        {
            Ph <- Ph %*% P
        }
    }
    return(Ph)
}
### -- ###

## All probability mass starts on square 0
initial <- c(1, rep(0, n))
initial %*% powermat(M, h = 1)
```

You can vary h and get the probability distribution of where a player will be on the board after that many rolls. Neat!

The thing is, this got me wondering… For the first roll, a player can end up in one of six possible positions (for this board: 1, 2, 3, 14, 5, or 6, since square 4 is the foot of a ladder leading to 14), each with a 1/6 chance. We can therefore predict the position of a player after one roll with a good degree of confidence. If we wanted to predict the player's position at roll 3, however, there are many more possible positions (though they are not all equally likely). So we would be less confident predicting the position of a player after 3 rolls, and less and less confident the further out we get.
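
We can watch this spread directly (assuming the initial, M, and powermat objects defined above): counting the squares that carry non-zero probability after each of the first few rolls shows the set of possible positions growing quickly.

```
## Number of board squares a player could occupy after h rolls
sapply(1:5, function(h) sum(initial %*% powermat(M, h) > 0))
```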

However, the game does end (although, unfortunately for those who would like to move on to video games, this is not guaranteed – I leave it to the reader to prove this), and therefore we might expect to be fairly confident in predicting a player's position after 100 rolls (they are probably at the finish line, of course). Which raises the question: how many rolls into a game would you be the least confident in predicting a player's position?

To answer this question, we need a measure of uncertainty that quantifies how well we could predict a player's position. It turns out that the Shannon entropy does just that! The formula is very simple:

H = -∑ᵢ pᵢ log(pᵢ)

```
entropy <- function(p)
{
    ## Sum only over outcomes with p > 0, since p*log(p) tends to 0 as p does
    ind <- which(p > 0)
    return(-sum(p[ind] * log(p[ind])))
}
```

The Shannon entropy measures how much information is missing about the outcome of a random variable. If no information is missing, we know the outcome with p = 1, and the Shannon entropy is 0. If a random variable has n possible outcomes, the Shannon entropy is at its maximum when p = 1/n for each of them.
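
A quick sanity check with the entropy function above: a certain outcome gives zero entropy, while n equally likely outcomes give log(n), the maximum.

```
entropy(c(1, 0, 0))   # 0: the outcome is known, nothing is missing
entropy(rep(1/6, 6))  # log(6) ≈ 1.79: maximal uncertainty over six outcomes
```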

So, back to my question: how many rolls into a game is our uncertainty about a player’s position on the board at a maximum? Keep in mind that we are talking about uncertainty with respect to our predictions before the game is started. Any knowledge of a player’s position at any point in the game would of course change our predictions and associated uncertainty. To answer this question, we just need to calculate the Shannon entropy of the outcome distribution generated by our Markov chain, and find at which point it is maximised.

```
##############################################################
############ Calculate the entropy after n turns #############

## Entropy of the position distribution after each turn,
## using the entropy() function defined above
turns <- 100
ent <- numeric(turns)
for(n in 1:turns)
    ent[n] <- entropy(initial %*% powermat(M, n))

plot(ent, type = 'b', xlab = 'Turn', ylab = 'Entropy')
```
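
The turn of maximum uncertainty can then be read straight off the ent vector:

```
which.max(ent)  # 10: prediction uncertainty peaks at roll 10
```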

Uncertainty is at its maximum at roll 10. We are most certain of a player's position at roll 1, as we might expect, but certainty rises again from roll 33 onward. After that point, we become increasingly certain that the player has reached the finish line.

TL;DR When predicting player positions in snakes and ladders, forecasts are least reliable ten rolls in. Have fun!