Stan
I don’t mean the weirdo who locks his girlfriend in the boot of his car and drives into a river, after dyeing his hair to look like that other weirdo Eminem (who, I have to admit, for some reason I kind of liked, back in the day).
I mean Stan, the new software for Bayesian analysis developed by Andrew Gelman et al. The announcement of its 1.0 release has been widely discussed in several fora (including, of course, Gelman’s own blog). Predictably, I liked Martyn Plummer’s discussion.
The whole point of Stan is to be particularly efficient for problems involving large hierarchical models, where (possibly high) correlation among the parameters typically creates problems for Gibbs sampling, leading to excruciatingly long compilation and running times, and often to non-convergence.
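To see what goes wrong, here’s a minimal toy sketch (in Python with numpy; the target and all the settings are made up purely for illustration and have nothing to do with Stan’s internals or any real model): a Gibbs sampler for a bivariate normal with correlation 0.99. Each full-conditional update can only move a tiny distance along the ridge of the posterior, so the chain mixes very slowly.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy target: a standard bivariate normal with correlation rho.
# Gibbs alternates draws from the full conditionals
# theta1 | theta2 and theta2 | theta1.
rho = 0.99          # high correlation between the two parameters
n_iter = 5000

theta = np.zeros((n_iter, 2))
for t in range(1, n_iter):
    # Full conditional of theta1 given theta2: N(rho*theta2, 1 - rho^2)
    theta[t, 0] = rng.normal(rho * theta[t - 1, 1], np.sqrt(1 - rho**2))
    # Full conditional of theta2 given the freshly drawn theta1
    theta[t, 1] = rng.normal(rho * theta[t, 0], np.sqrt(1 - rho**2))

# With rho close to 1 the conditional standard deviation is tiny,
# so each update takes only a small step along the ridge and the
# chain shows very high autocorrelation.
ac1 = np.corrcoef(theta[:-1, 0], theta[1:, 0])[0, 1]
print(f"lag-1 autocorrelation of theta1: {ac1:.3f}")
```

For this toy target the lag-1 autocorrelation comes out close to rho² (about 0.98 here), ie the chain barely moves between iterations; in a big hierarchical model the same mechanism is what drags the running time out.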
To solve this problem, Stan uses Hamiltonian Monte Carlo (HMC), which I have to admit I don’t know very well (but plan to read about in the near future). I believe that Mark has been working on this a lot. As Martyn puts it, HMC in effect uses a very clever blocking strategy, updating all the parameters at once (using the gradient of the log-posterior to guide the joint moves), which is what makes it so quick.
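Again just to fix ideas, here’s an equally toy sketch of HMC on the same correlated target: plain leapfrog integration with a fixed step size and number of steps. (Stan itself uses a more sophisticated adaptive variant, the No-U-Turn Sampler, so this is only meant to convey the basic idea, not what Stan actually does.)

```python
import numpy as np

rng = np.random.default_rng(0)

rho = 0.99
# Precision matrix (inverse covariance) of the bivariate normal target
P = np.linalg.inv(np.array([[1.0, rho], [rho, 1.0]]))

def grad_log_post(theta):
    # Gradient of the log-density of N(0, Sigma) is -Sigma^{-1} theta
    return -P @ theta

def hmc_step(theta, eps=0.1, L=30):
    """One HMC transition: sample momentum, run leapfrog, accept/reject."""
    p = rng.standard_normal(2)                # auxiliary momentum
    theta_new, p_new = theta.copy(), p.copy()
    # Leapfrog integration of the Hamiltonian dynamics
    p_new += 0.5 * eps * grad_log_post(theta_new)
    for _ in range(L - 1):
        theta_new += eps * p_new
        p_new += eps * grad_log_post(theta_new)
    theta_new += eps * p_new
    p_new += 0.5 * eps * grad_log_post(theta_new)
    # Metropolis correction: H = potential energy + kinetic energy
    def H(th, mom):
        return 0.5 * th @ P @ th + 0.5 * mom @ mom
    if np.log(rng.uniform()) < H(theta, p) - H(theta_new, p_new):
        return theta_new      # accept the whole joint move
    return theta              # reject, stay put

theta = np.zeros(2)
draws = np.empty((2000, 2))
for t in range(2000):
    theta = hmc_step(theta)
    draws[t] = theta

ac1 = np.corrcoef(draws[:-1, 0], draws[1:, 0])[0, 1]
print(f"lag-1 autocorrelation of theta1: {ac1:.3f}")
```

The key difference from the Gibbs example above is that each transition proposes a move of both parameters at once, following the gradient along the ridge, so the autocorrelation drops dramatically even with rho = 0.99.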
I’ve played around with a couple of very easy examples and Stan does look impressive. What I want to do next is translate one of my own models (originally run in JAGS) to see how things change, and whether I can still do everything I normally do after fitting my models (eg performing a health economic analysis with BCEA). I think a nice example would be the Eurovision model, which is a bit slow using standard Gibbs sampling.
All in all, I think it’s really cool that we now have so many pieces of software for doing Bayesian statistics. I was talking with Marta the other day and we agreed that it’s really exciting to be able to switch from JAGS to INLA (and now to Stan) to deal with all the problems you face in real-world applications. In fact, I think I should find a way to at least mention all of these in my teaching, so that students are aware of all the possibilities.