Surprise?
So: for once, I woke up this morning feeling rather tired after the late night, but also rather upbeat after an election. The final results of the general election are out, and they have produced quite a shock.
Throughout yesterday, it looked as though the final polls were returning an improved majority for the Conservative party; this would have been consistent with the “shy Tory” effect. Even Yougov had presented their latest poll suggesting a seven-point lead and an improved Tory majority. So I guess many people were unprepared for the exit polls, which suggested a very different figure…
First off, I think that the actual results have vindicated Yougov’s model (rather than their poll), which was based on a hierarchical model informed by over 50,000 individual-level records of voting intention, as well as several other covariates. They weren’t spot on, but they were quite close.
Also, the exit polls (based on a sample of over 30,000) were remarkably good. To be fair, however, I think that exit polls are different from pre-election polls: unlike them, they do not ask about “voting intentions”, but about the actual vote that people have just cast.
And now, time for the post-mortem. My final prediction, using all the polls up to June 8th, was as follows:
| Party | mean | sd | 2.5% | median | 97.5% | OBSERVED |
|---|---|---|---|---|---|---|
| Conservative | 346.827 | 3.411262 | 339 | 347 | 354 | 318 |
| Labour | 224.128 | 3.414861 | 218 | 224 | 233 | 261 |
| UKIP | 0.000 | 0.000000 | 0 | 0 | 0 | 0 |
| Lib Dem | 10.833 | 2.325622 | 7 | 11 | 15 | 12 |
| SNP | 49.085 | 1.842599 | 45 | 49 | 51 | 35 |
| Green | 0.000 | 0.000000 | 0 | 0 | 0 | 1 |
| PCY | 1.127 | 1.013853 | 0 | 2 | 3 | 4 |
Not all bad, but not quite spot on either; and, to be fair, less accurate than Yougov’s (as I said, I was hoping they would be closer to the truth than my model, so not too many complaints there!…).
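Incidentally, the seat-by-seat discrepancies can be computed directly from the table above (comparing the posterior medians with the observed results); a quick sketch:

```python
# Seat predictions (posterior medians) vs observed results,
# both taken from the table above.
predicted = {
    "Conservative": 347, "Labour": 224, "UKIP": 0,
    "Lib Dem": 11, "SNP": 49, "Green": 0, "PCY": 2,
}
observed = {
    "Conservative": 318, "Labour": 261, "UKIP": 0,
    "Lib Dem": 12, "SNP": 35, "Green": 1, "PCY": 4,
}

# Signed error (predicted - observed): positive = overestimate
errors = {p: predicted[p] - observed[p] for p in predicted}
for party, err in sorted(errors.items(), key=lambda kv: -abs(kv[1])):
    print(f"{party:12s} {err:+d}")
```

The big misses are, unsurprisingly, the two major parties and the SNP; the Lib Dems, UKIP, the Greens and PCY are all within a couple of seats.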
I’ve thought a bit about the discrepancies and I think a couple of issues stand out:
- I have (together with several other forecasters and, in fact, even Yougov) overestimated the vote and, more importantly, the number of seats won by the SNP. In my case, I think the main issue had to do with the polls I used to build the model. As it happens, the battleground in Scotland was rather different from that in the rest of the country, but what was feeding into my model were data from national polls. I had tried to bump up my prior for the SNP to counter this effect, but most likely this exaggerated the result, producing an estimate that was too optimistic.
- Interestingly, the error for the SNP is 14 seats; 12 of these, I think, have (rather surprisingly) gone to the Tories. So, basically, I got the Tory vote wrong by (347-318)+12 = 41 seats, which, if you re-allocate them to Labour, would have brought my prediction to 224+41 = 265.
- Post-hoc adjustments aside, it is obvious that my model overestimated the result for the Tories, while underestimating Labour’s performance. In this case, I think the problem was that the structure I used was mainly based on the distinction between Leave and Remain areas at last year’s referendum, and of course these were highly related to the vote that had gone to UKIP in 2015. Now: like virtually everybody, I correctly predicted that UKIP would get “zip, nada, zilch” seats. In my case, this was done by combining their poor performance in the polls with a strongly informative prior (which, incidentally, was not strong enough: even combined with the polls, I still overestimated UKIP’s vote share). However, the aggregate data in the polls had consistently tended to indicate that in Leave areas the Tories would make massive gains. What actually happened was that the former UKIP vote split nearly evenly between the two major parties. So, in strongly Leave areas, the Tories gained marginally more than Labour, but not enough to swing and win the marginal Labour seats. Conversely, in Remain areas, Labour did really well (as the polls were suggesting), and this in many cases produced a change of colour in some Conservative marginal seats.
- I missed the Greens’ success in Brighton. This was, I think, down to being a bit lazy and not bothering to tell the model that in Caroline Lucas’ seat the Lib Dems had not fielded a candidate. This in turn meant that the model was predicting a big surge in the Lib Dem vote (because Brighton Pavilion is a strongly Remain area), which would eat into the Greens’ majority. And so my model was predicting a change to Labour, which never happened (again, I’m quite pleased to have got this one wrong, because I really like Ms Lucas!).
- My model correctly guessed that the Conservatives would regain Richmond Park, that the Lib Dems would win back Twickenham, and that Labour would hold Copeland. In comparison to Electoralcalculus’s prediction, I did very well in predicting the number of seats for the Lib Dems. I am not sure about the details of their model, but I am guessing that they had a strong prior to (over)discount the polls, which led to a substantial underestimation. In contrast, I think that my prior for the Lib Dems was spot on.
- Back to Yougov’s model: I think that the main, huge difference has been the fact that they could rely on a very large number of individual-level data. The published polls would only provide aggregated information, which almost invariably cross-tabulates one variable at a time (i.e. voting intention in Leave vs Remain areas, or in London vs other areas, etc., but not both). Being able to analyse the individual-level data (combined, of course, with a sound modelling structure!) has allowed Yougov to capture some of the true underlying trends, which models based on the aggregated polls simply couldn’t, I think.
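The general idea behind this kind of individual-level modelling (multilevel regression with poststratification) can be sketched with a toy example. To be clear, this is not Yougov’s actual model: the cells, the shrinkage factor and all the numbers below are made up purely for illustration. The point is that cell-level estimates get partially pooled towards the national mean, and are then reweighted by the cells’ share of the electorate rather than of the sample:

```python
# Toy MRP-style sketch: estimate a party's vote share within demographic
# "cells" (here, region x referendum vote) by shrinking noisy cell-level
# estimates towards the national mean, then poststratify by cell size.
# All names and numbers are fictional, for illustration only.

# (cell name, respondents in cell, respondents intending to vote for party X)
survey = [
    ("London/Remain", 400, 220),
    ("London/Leave",  100,  30),
    ("North/Remain",  150,  60),
    ("North/Leave",   350, 140),
]

# Each cell's share of the actual electorate (census-style data; fictional).
population_weights = {
    "London/Remain": 0.20, "London/Leave": 0.10,
    "North/Remain":  0.25, "North/Leave":  0.45,
}

# Overall support in the raw sample.
national_rate = sum(k for _, _, k in survey) / sum(n for _, n, _ in survey)

def shrunk_rate(n, k, prior_strength=50):
    """Partial pooling: small cells get pulled towards the national rate."""
    return (k + prior_strength * national_rate) / (n + prior_strength)

# Poststratified national estimate: weight each cell by its share of the
# electorate, not by its (possibly unrepresentative) share of the sample.
estimate = sum(population_weights[c] * shrunk_rate(n, k) for c, n, k in survey)
print(round(estimate, 3))
```

With tens of thousands of respondents, the cells can be much finer (age, education, past vote, constituency, …) while the multilevel structure keeps the small-cell estimates stable, which is exactly what aggregated poll tables cannot give you.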
It’s been a fun process, and all in all, I’m enjoying the outcome…