One of the most straightforward examples of how we can use Bayes' theorem to update our beliefs as we acquire more information is a simple *Bernoulli process*. That is, a process which has only two possible outcomes.

Probably the most familiar example is a coin toss. The outcome of tossing a coin can only be heads or tails (barring the case that the coin lands perfectly on its edge), but there are many other real-world examples of Bernoulli processes. In manufacturing, a widget may come off the production line either working or faulty. We may wish to know the probability that a given widget will be faulty, and we can estimate it using Bayesian updating.
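The conjugate update behind this is simple enough to sketch in a few lines of R: a Beta prior on the success probability, combined with observed counts, gives a Beta posterior. The counts below are made up purely for illustration.

```r
# A Beta(a, b) prior on the success probability p, combined with s observed
# successes and f failures, yields a Beta(a + s, b + f) posterior.
a <- 1; b <- 1   # flat Beta(1, 1) prior
s <- 7; f <- 3   # hypothetical counts: 7 working widgets, 3 faulty
post_a <- a + s
post_b <- b + f
post_mean <- post_a / (post_a + post_b)  # posterior mean of p: 8/12
```

This conjugacy is what makes the simulation below so easy: each new observation just increments one of the two Beta parameters.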

I’ve put together this little piece of R code to help visualize how our beliefs about the probability of success (heads, a functioning widget, etc.) are updated as we observe more and more outcomes.

## Simulate Bayesian binomial updating
sim_bayes <- function(p = 0.5, N = 10, y_lim = 15) {
  success <- 0
  # Start from a flat Beta(1, 1) prior, drawn dashed
  curve(dbeta(x, 1, 1), xlim = c(0, 1), ylim = c(0, y_lim),
        xlab = 'p', ylab = 'Posterior Density', lty = 2)
  legend('topright', legend = c('Prior', 'Updated Posteriors', 'Final Posterior'),
         lty = c(2, 1, 1), col = c('black', 'black', 'red'))
  for (i in 1:N) {
    # Simulate one Bernoulli trial and update the running success count
    if (runif(1, 0, 1) <= p)
      success <- success + 1
    # Posterior after i observations: Beta(successes + 1, failures + 1)
    curve(dbeta(x, success + 1, (i - success) + 1), add = TRUE)
    print(paste(success, "successes and", i - success, "failures"))
  }
  # Overlay the final posterior in red
  curve(dbeta(x, success + 1, (i - success) + 1), add = TRUE, col = 'red', lwd = 1.5)
}
sim_bayes(p = 0.6, N = 90)

The result is a plot of posterior distributions (each of which becomes the prior for the next observation) as we make more and more observations from a Bernoulli process.

With each new observation, the posterior distribution is updated according to Bayes' rule. You can change *p* to see how belief changes for low- or high-probability outcomes, and *N* to see how belief about *p* converges to the true value after many observations.
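That convergence can also be seen numerically rather than visually. As a minimal sketch (assuming a flat Beta(1, 1) prior and, instead of random draws, the expected number of successes at p = 0.6), the posterior standard deviation shrinks as observations accumulate:

```r
# Standard deviation of a Beta(a, b) distribution
post_sd <- function(a, b) sqrt(a * b / ((a + b)^2 * (a + b + 1)))

# Posterior sd after n observations with s = round(0.6 * n) successes,
# under a flat Beta(1, 1) prior; uses expected counts, so no simulation
sds <- sapply(c(10, 100, 1000), function(n) {
  s <- round(0.6 * n)
  post_sd(s + 1, (n - s) + 1)
})
round(sds, 4)  # uncertainty shrinks roughly like 1/sqrt(n)
```

This is why the red final posterior in the plot gets narrower and narrower around the true *p* as *N* grows.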



I also posted this small change to your code on Reddit. I hope this is helpful.

Thanks for posting this; it looks like it might be helpful for showing people how Bayesian updating works as a process. It’s often nice to show how different priors can affect the final result, so I added some code to make the Beta prior parameters arguments that can be specified when calling the function. For example:

sim_bayes(p = 0.2, N = 50, prior_a = 10, prior_b = 10)

This calls the function with a Beta(10, 10) prior centered at 0.5.

sim_bayes <- function(p = 0.5, N = 10, y_lim = 15, prior_a = 1, prior_b = 1) {
  success <- 0
  # Draw the Beta(prior_a, prior_b) prior, dashed
  curve(dbeta(x, prior_a, prior_b), xlim = c(0, 1), ylim = c(0, y_lim),
        xlab = 'p', ylab = 'Posterior Density', lty = 2)
  legend('topright', legend = c('Prior', 'Updated Posteriors', 'Final Posterior'),
         lty = c(2, 1, 1), col = c('black', 'black', 'red'))
  for (i in 1:N) {
    if (runif(1, 0, 1) <= p)
      success <- success + 1
    # Posterior after i observations: Beta(successes + prior_a, failures + prior_b)
    curve(dbeta(x, success + prior_a, (i - success) + prior_b), add = TRUE)
    print(paste(success, "successes and", i - success, "failures"))
  }
  curve(dbeta(x, success + prior_a, (i - success) + prior_b), add = TRUE, col = 'red', lwd = 1.5)
}

