Kings of Lyon
I am in Lyon for a couple of days for our annual Bayes 20XX/Bayesian Biostatistics conference. I couldn’t be here for the whole time, courtesy of the exam period, but what I did see, I really enjoyed!
There have been quite a number of good talks, including Sara on RDD (well… I probably would say that… But the talk was really good!). Two of Leo’s students presented some fascinating work on scientific reproducibility — Leo told me a while back he was starting a Center for Reproducible Research and clearly he got it going with interesting results.
I also very much enjoyed the talk given by Kelly Moran, who presented her work (which I think is part of her PhD) on verbal autopsy — not the “usual” sort of topic that we’ve historically seen at this conference, but very interesting.
Another talk that caught my attention was the one by Eric-Jan Wagenmakers, the maker of JASP. Eric-Jan’s talk was a bit broader than just showing off the software (although there was quite a lot of that…) and he discussed an interesting example of a paper recently published in the NEJM. The paper discussed a study of Progesterone in Women with Bleeding in Early Pregnancy; this was a large trial (~4000 women) and the interesting bit was that it was strongly characterised as a negative finding, on the basis of a \(p\)-value of 0.08. So he went on to present a sort of Bayesian re-analysis to emphasise the point that the \(p\)-value is driven by the data, but not by the context or existing information. Apparently, the researchers knew enough about the underlying biological mechanisms to determine that Progesterone is most likely not harmful (while clearly the evidence isn’t overwhelming in favour of the hypothesis of it being massively beneficial, basically whichever way you look at it).
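Just to fix ideas (and definitely not a re-creation of Eric-Jan’s actual analysis, which as far as I recall was done in JASP), here’s a minimal sketch of the general kind of Bayesian re-analysis of a two-arm binary outcome he was getting at: put priors on the event probability in each arm, look at the posterior for the risk difference, and summarise the probability of benefit or harm directly. The counts below are made-up placeholders, not the trial’s data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two-arm binary outcome counts (e.g. live birth yes/no).
# These are placeholder numbers for illustration, NOT the trial data.
events_trt, n_trt = 1500, 2000   # "treatment" arm
events_ctl, n_ctl = 1450, 2000   # "control" arm

# Conjugate Beta(1, 1) priors on the event probability in each arm;
# existing/contextual information could be encoded by changing these.
a0, b0 = 1, 1

# Posterior for each arm is Beta(a0 + events, b0 + non-events):
# draw Monte Carlo samples from each
p_trt = rng.beta(a0 + events_trt, b0 + n_trt - events_trt, size=100_000)
p_ctl = rng.beta(a0 + events_ctl, b0 + n_ctl - events_ctl, size=100_000)

# Posterior distribution of the risk difference (treatment - control)
rd = p_trt - p_ctl

print(f"Posterior mean risk difference: {rd.mean():.3f}")
print(f"95% credible interval: {np.percentile(rd, [2.5, 97.5]).round(3)}")
print(f"Pr(benefit, RD > 0): {(rd > 0).mean():.3f}")
print(f"Pr(harm, RD < -0.01): {(rd < -0.01).mean():.3f}")
```

The point is simply that, unlike a single \(p\)-value, this kind of output (probability of benefit, probability of any meaningful harm) speaks directly to the question and can incorporate what was already known about the mechanism through the priors.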
Anyway, Eric-Jan used a live demo of JASP to showcase his point (or perhaps used the point to showcase JASP), which I think was kind of cool. Certainly, the software looks very professional and has quite a few interesting features. I think it’s built on some sort of graphical interface (probably not much different from a Shiny app) and uses R in the background to generate a relatively large set of relevant analyses — in most cases including both a frequentist and a Bayesian version.
Like I said, I think this is cool and it can help people get into actually using Bayesian modelling. But I’m a bit cautious about fits-all-analyses, only-need-to-know-which-menus-to-pick kind of software. Of course, we’ve done things that “hide” Bayesian modelling under the hood ourselves, but in that case it was a highly specialised bit of modelling and I purposely stayed clear of all-purpose programmes…