Continuous dispersal on a discrete lattice

*Edit: I made a video!

Dispersal is a key process in many domains, and particularly in ecology. Individuals move in space, and this movement can be modelled as a random process following some kernel. The dispersal kernel is simply a probability distribution describing the distance travelled in a given time frame. Since space is continuous, it is natural to use a continuous kernel. However, some modelling frameworks are formulated on a lattice, or discrete array of patches.

So how can we implement a continuous kernel in discrete space?

As with many modelling situations, we could approach this in a number of ways. Here is the implementation that I came up with (a single dispersal event is sketched in code after the list), and I welcome your suggestions, dear reader, for alternatives or improvements to this approach.

  1. Draw a random variate d from the dispersal kernel.
  2. Draw a uniform random number θ between 0 and 2π, which we will use to choose a direction.
  3. Calculate the relative location (in continuous space) of the dispersed individuals using some trig:
         x = cos(θ) d
         y = sin(θ) d
  4. Determine the new location on the lattice for each individual by adding the relative x and y positions to the current location. Round these locations to the nearest integer and take the modulo of this integer and the lattice size. This creates a torus out of the lattice such that the outer edges loop back on each other. If you remember the old Donkey Kong games, you can think of this like how when you leave out the right side of the screen you enter from the left.
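
Putting the four steps together for a single individual looks like this in R (a minimal sketch with illustrative values; the full lattice-wide function follows below):

## One dispersal event on a 50x50 lattice (values assumed for illustration)
lattice_size <- c(50, 50)
i <- 25; j <- 25                      ## current cell
d <- rexp(1, rate = 1)                ## 1. draw a distance from the kernel
theta <- runif(1, 0, 2 * pi)          ## 2. draw a direction
x <- cos(theta) * d                   ## 3. relative location
y <- sin(theta) * d
x_ind <- (round(i + x) - 1) %% lattice_size[1] + 1   ## 4. round and wrap
y_ind <- (round(j + y) - 1) %% lattice_size[2] + 1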

I implemented this approach in R as a function that takes in a population on a lattice and returns the lattice with the dispersed population. The user can also specify which dispersal kernel to use. Here is the result of using a negative-exponential kernel on a 50×50 lattice.

This was created by iterating over the dispersal function:

## General function to take in a lattice population and disperse it
## according to a user-provided dispersal kernel
## Author: Corey Chivers (body reconstructed here from the published fragment)
disperse <- function(pop, kernel = rexp, ...) {
  lattice_size <- dim(pop)
  new_pop <- pop * 0
  for(i in 1:lattice_size[1])
    for(j in 1:lattice_size[2]) {
      N <- pop[i, j]                 ## individuals in cell (i, j)
      if(N == 0) next
      d <- kernel(N, ...)            ## distances from the kernel
      theta <- runif(N, 0, 2 * pi)   ## directions
      x <- cos(theta) * d            ## relative x, y positions
      y <- sin(theta) * d
      for(k in 1:N) {                ## round and wrap the edges (torus)
        x_ind <- (round(i + x[k]) - 1) %% lattice_size[1] + 1
        y_ind <- (round(j + y[k]) - 1) %% lattice_size[2] + 1
        new_pop[x_ind, y_ind] <- new_pop[x_ind, y_ind] + 1
      }
    }
  new_pop
}

For comparison, I also ran the same population using a Gaussian kernel. I defined the parameters of both kernels to have a mean dispersal distance of 1.
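
For concreteness, here is one way the two kernels could be parameterized to share a mean dispersal distance of 1 (my reconstruction; the original parameter values are not shown here). Since distance is non-negative, the Gaussian kernel is taken as the absolute value of a normal draw, i.e. a half-normal with mean σ√(2/π), so σ = √(π/2) gives a mean of 1.

## Reconstructed kernel definitions (assumed), each with mean distance 1
exp_kernel  <- function(n) rexp(n, rate = 1)             ## mean = 1/rate = 1
norm_kernel <- function(n) abs(rnorm(n, 0, sqrt(pi/2)))  ## half-normal: mean = sd*sqrt(2/pi) = 1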

Here is the result using a Gaussian kernel:

The resulting population after 35 time steps has a smaller range than when using the exponential kernel, highlighting the importance of the shape of the dispersal kernel for spreading populations: the exponential kernel's heavier tail produces more long-distance dispersal events (remember that in both cases the mean dispersal distance is the same).

Code for generating the plots:

############## Run and plot #######################

## Custom colour ramp
colours <- c("grey", "blue", "red")   ## palette assumed
cus_col <- colorRampPalette(colors = colours, space = "Lab", interpolate = "linear")

## Initialize population array (50x50 lattice, 35 time steps,
## 100 individuals seeded in the centre cell -- values assumed)
Time <- 35
lattice_size <- c(50, 50)
pop <- array(0, dim = c(lattice_size, Time))
pop[25, 25, 1] <- 100
times <- seq(5, Time, by = 10)        ## time steps to plot

### Normal Kernel ###
for(i in 2:Time)
  pop[,,i] <- disperse(pop[,,i-1], norm_kernel)   ## kernel defined above

## Plot
png('normal_kern.png', width = 800, height = 800)
par(mfrow = c(2, 2), oma = c(0, 0, 2, 0))
for(i in times)
  image(pop[,,i], col = cus_col(100), xaxt = 'n', yaxt = 'n',
        main = paste("Time =", i))
mtext("Gaussian Kernel", outer = TRUE, cex = 1.5)
dev.off()

### Exponential Kernel ###
pop <- array(0, dim = c(lattice_size, Time))      ## re-seed the population
pop[25, 25, 1] <- 100
for(i in 2:Time)
  pop[,,i] <- disperse(pop[,,i-1], exp_kernel)

## Plot
png('exp_kern.png', width = 800, height = 800)
par(mfrow = c(2, 2), oma = c(0, 0, 2, 0))
for(i in times)
  image(pop[,,i], col = cus_col(100), xaxt = 'n', yaxt = 'n',
        main = paste("Time =", i))
mtext("Exponential Kernel", outer = TRUE, cex = 1.5)
dev.off()


Mapping Bike Accidents in R

At last weekend’s Hack Ta Ville event here in Montreal, I joined up with some talented urban planners and web devs to realize Vélobstacles. The idea of the project is to crowdsource information on cycling conditions around the city. As with any crowdsourcing project, we faced the problem of seeding the site with enough data to draw the attention of users and get the ball rolling.

Fortunately, we had access to a dataset of all reported cycling accidents between 2006 and 2010. Once we seeded Vélobstacles with this data, the web devs went to town adding features to the site, and I had outlived my usefulness as a data geek. So I decided to play with the accident data a little and produce some visualizations. I plotted all the accidents on a map and animated it through time. I also calculated and plotted the monthly accident rate using a moving average (roughly as sketched below).
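
For the curious, the moving-average calculation can be done in a few lines of R (a hypothetical sketch; the accidents data frame and its column names are my assumptions, not the project's actual schema):

## Hypothetical sketch: monthly counts smoothed with a 3-month moving average
## ('accidents' is an assumed data frame with a Date column 'date')
monthly <- table(format(accidents$date, "%Y-%m"))
ma <- stats::filter(as.numeric(monthly), rep(1/3, 3), sides = 2)
plot(ma, type = 'l', xlab = "Month", ylab = "Reported accidents (3-month MA)")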

Be sure to select HD quality:

Not surprisingly, the accident rate goes way up in the summer months as Montreal winters are braved on two wheels by only a rarefied few. What is interesting is the mid-summer dip in the accident rate. This dip is notably correlated with Montreal’s much beloved construction holiday – though the causal relationship is unclear. If you have any alternative explanations, or an idea about how to test the construction holiday hypothesis, drop a note in the comments.
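
One simple way to probe the hypothesis (a hedged sketch; the weekly summary table and the holiday's week numbers are assumptions) is an exact Poisson test comparing the holiday weeks against the surrounding summer weeks:

## Hypothetical sketch: is the rate during the construction holiday
## (roughly weeks 29-30) lower than in the nearby summer weeks?
## 'weekly' is an assumed data frame of accident counts per week-of-year
holiday <- weekly$accidents[weekly$week %in% c(29, 30)]
summer  <- weekly$accidents[weekly$week %in% c(25:28, 31:34)]
poisson.test(sum(holiday), T = 2, r = mean(summer), alternative = "less")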

As always, you can get the code on my github page.

The future of Artificial Intelligence – as imagined in 1989

This image comes from the cover of Preliminary Papers of the Second International Workshop on Artificial Intelligence and Statistics (1989). Someone abandoned it in the lobby of my building at school. Whatever for, I’ll never know.

I just love the idea of machine learning/AI/Statistics evoking a robot hand drawing a best fit line through some points on graph paper with a pen.

What other funny, or interesting, metaphorical visual representations of machine learning have you seen? Drop a link in the comments and I’ll get my robot arm to compile the results.


The horror of leftover variance…


After some discussion with Jeremy Fox, and a re-reading of Fox (2006), my understanding of the Price Equation partitioning of the effects of biodiversity loss on ecosystem function has improved. What I thought was a term for residual variance is in fact the context dependent effect. This is analogous to transmission bias in the evolutionary interpretation of the Price Equation.
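
For readers unfamiliar with the evolutionary version: the Price Equation partitions the change in a mean trait z̄ as

    w̄ Δz̄ = Cov(wᵢ, zᵢ) + E(wᵢ Δzᵢ)

where the covariance term captures selection and the expectation term is the transmission bias. The context dependent effect plays the role of that second term in the biodiversity-function partition.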

Walmart Invasion

The process of spatial spread is at the heart of what I do as an invasion biologist. When I came across this dataset of Walmart store openings since 1962, I couldn’t help but see it as an invasion front, advancing much like a biological invasion or an (albeit slow) epidemic.

The video shows monthly store openings (in red) between 1962 and 2006.

You can get the code I wrote for generating this visualization on my github page.

An update on visualizing Bayesian updating

A while ago I wrote this post with some R code to visualize the updating of a beta distribution as the outcomes of Bernoulli trials are observed. The code produced a single plot of this process, with all the curves overlaid on one another. Then John Myles White (co-author of Machine Learning for Hackers) piped up on Twitter and said that he’d like to see it as an animation. Challenge accepted – and with an additional twist.

The video shows how two observers who approach the problem with different beliefs (priors) converge toward the same conclusion about the value of the unknown parameter after making enough observations.
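
The conjugate update behind the animation is simple: a Beta(a, b) prior, after observing s successes in n Bernoulli trials, becomes a Beta(a + s, b + n − s) posterior. Here is an illustrative sketch (mine, not the posted code; the priors and data are made up):

## Illustrative beta-Bernoulli updating (values assumed)
set.seed(1)
obs <- rbinom(100, 1, 0.7)          ## Bernoulli trials with true p = 0.7
n <- length(obs); s <- sum(obs)
curve(dbeta(x, 2 + s, 8 + n - s), 0, 1, xlab = 'p', ylab = 'Density')  ## sceptical prior
curve(dbeta(x, 8 + s, 2 + n - s), add = TRUE, lty = 2)                 ## optimistic prior
## with enough observations, both posteriors concentrate near 0.7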

I’ve posted the code here.

The essence of a handwritten digit

If you haven’t yet discovered Kaggle, the competitive machine learning site, please do so now. I’ll wait.

Great – so, you checked it out, fell in love, and have made it back. I recently downloaded the data for the getting started competition. It consists of 42000 labelled images (28×28) of handwritten digits 0-9. The competition is a straightforward supervised learning problem of OCR (Optical Character Recognition). There are two sample R scripts on the site to get you started; they implement the k-nearest neighbours and Random Forest algorithms.
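
For instance, a Random Forest baseline in the spirit of the sample scripts takes only a few lines (my sketch, not the site's script; the subsample size and number of trees are arbitrary choices for speed):

## A minimal Random Forest baseline (sketch; not the site's sample script)
library(randomForest)
train <- read.csv("../data/train.csv", header = TRUE)
sub <- train[sample(nrow(train), 5000), ]          ## subsample for speed (assumed)
fit <- randomForest(x = sub[, -1], y = factor(sub$label), ntree = 100)
print(fit)                                         ## shows the OOB error estimate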

I wanted to get started by visualizing all of the training data by rendering an average of each character. Visualizing the data is a great first step in developing a model. Here’s how I did it:

## Read in data
train <- read.csv("../data/train.csv", header = TRUE)

## Colour ramp definition (palette assumed)
cus_col <- colorRampPalette(colors = c("white", "black"))

## Plot the average image of each digit
par(mfrow = c(4, 3), pty = 's', mar = c(1, 1, 1, 1))
for(di in 0:9)
{
  z <- colMeans(train[train$label == di, -1])   ## mean pixel values
  z <- matrix(z, nrow = 28, ncol = 28, byrow = TRUE)
  z <- z[,28:1] ##right side up
  image(1:28, 1:28, z, col = cus_col(256), axes = FALSE, main = di)
}

Which gives you:
Notice the wobbly-looking ‘1’. You can see that there is some variance in the angle of the slant, with a tendency toward leaning right. I imagine that this is due to the bias toward right-handed individuals in the sample.

I also wanted to generate a PDF plot of the entire training set, to get an idea of what kinds of anomalous instances I should expect.

If you are interested, dear reader, here is my code to do just that.

pdf('train_digits.pdf')                      ## output file name assumed
par(mfrow = c(4, 4), mar = c(1, 1, 1, 1))    ## 16 digits per page
for(i in 1:nrow(train))
{
  z <- matrix(as.numeric(train[i, -1]), nrow = 28, byrow = TRUE)
  z <- z[,28:1] ##right side up
  image(z, col = cus_col(256), axes = FALSE, main = train$label[i])
}
dev.off()

Which will give you a 2625-page PDF of every character in the training set, which you can, um, casually peruse.

As of the time of writing, the current leading submission has a classification accuracy of 99.27%. There is no cash prize for this competition, but the knowledge gained from taking a stab at it is priceless. So give it a shot!