R in HTA workshop
Today we’ve had our workshop on “R for trial and model-based cost-effectiveness analysis”, at UCL. I really enjoyed the whole day — we had several interesting presentations and very lively discussion. In fact, all presenters have agreed to make their slides available, which I’ll put on the workshop webpage.
One of the cool outputs is that we’ll use that webpage as a sort of “meta-repository”: many people presented their work via their own GitHub repositories, complete with code and documentation, so we thought it wouldn’t make sense to migrate all of these to a new home. Instead, we’ll use the webpage as a signpost, where we’ll collate all the information and point people to the relevant pages and further reading/code.
Many of the talks showed examples of very nice applications of modelling based on R, particularly using Shiny web-apps. I think I’m kind of happy with that (well… “happy” in a nerdy kind of way, at least…), because I really see this as the direction I would like this field to take. One thing I’m not completely sure about, on a more foundational level, is that some people (not necessarily those who presented today) aim at providing “general-purpose” web-apps that people can use over and over. I think perhaps it’s more efficient to simply establish the possibility of using this machinery, while each individual modelling exercise should be specific and share only the background structure with the others (e.g. in terms of using R as the underlying engine).
I think probably the most interesting result I’ve seen is in Dyfrig’s presentation. He talked about the perspective of health technology assessors and reported on a survey they have done with the Evidence Review Groups (who work with NICE to review the models prepared by companies in support of their dossiers). One of them reported that they have nobody able to review and understand a model based on R, which I found very concerning.
We had lots of discussion around this — one kind-of-reassuring point raised was that NICE pre-processes the dossiers and sends each one to an ERG that is able to assess it properly, so in a way, that particular ERG wouldn’t get an R-heavy dossier. While that makes sense, I was also a bit unsatisfied with this, because it makes me implicitly question the validity of that specific ERG, given that they are completely unable to assess a perfectly valid modelling strategy…
In the afternoon, there was a more technical session, which presented methods to deal with value of information (of which I’m of course a massive fan!) and then multistate models, with Chris deputising for Howard, who had a very good excuse to miss the workshop (although some people suggested this may have to do with fears of missing the football because of delayed trains back home… :smirk:).
Finally, we opened up for discussion, which again was very interesting — people had lots of comments. I opened it by saying that we’d keep it short and sweet so they could all run home to watch the England game, but in fact we used all of the allocated 80 minutes!