How the 2015 UK election polls prove real-time is the best type of survey
The 2015 UK election dust has settled, 61.3 million votes have been counted and the UK has a new government, but many so-called specialist pollsters and survey experts are still confused and mystified as to why the pre-election polls differed so much from reality. As the BBC said, a mixture of anger and contempt has rained down on those responsible for such poor predictions. And they were poor predictions: according to David Cowling, Editor of the BBC Political Research Unit, NONE of the 92 polls they monitored predicted the 7% lead the Conservatives would achieve, and 52% actually predicted a Labour lead!
What was it about the 2015 pre-election polls that made them so inaccurate, and why?
Most of those research organisations that have lost face have already started the “clean-up” campaign and set up various independent enquiries and investigations in order to come up with answers. Maybe they will, or maybe the enquiries will just be used to delay and merely kick the can down the road.
Whatever transpires, we at VIRTUATell think we know exactly why there was such a discrepancy, as we deal with millions of multi-channel, multi-type surveys each month and we know the strong points and the weaknesses of each survey channel and how each is best used. We already know the dramatic differences in survey results that can occur depending upon the question wording, the survey type, the survey channel used and, most importantly, the timing of the survey and who you are surveying. If you get any of these wrong, there can be a lot of egg on faces as well as unrepresentative scores upon which decisions could be made!
What experience does VIRTUATell have that can explain why the 2015 UK polls and surveys were so inaccurate? What makes our automated survey insight relevant to an election?
Actually, it’s not that we are clever; it’s simply that we conduct millions of client surveys each month across the telephone, web, email, smartphone and SMS channels and already know the major differences that can occur depending upon the survey channel used and how close to the “event” you conduct the survey, even before taking survey wording into account.
So, what do we know and what are the parallels between our survey experience and the election polls that could have helped the pollsters?
Firstly, we know that real-time surveys conducted as close to the “event” as possible give the least biased results compared to other poll and survey methods! Whether it is an SMS, phone or online survey, it must be conducted immediately after the event upon which you want feedback.
We even measure the gap between the survey invitation and the survey response, because we know that delayed or “sat-on” online, SMS or phone surveys and polls produce biased and inaccurate results. In fact, our experience is that automated telephone surveys conducted within a minute of the “event” give the most accurate feedback.
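To make the idea concrete, here is a minimal sketch of that kind of latency check (a hypothetical Python example, not our production system; the field names invited_at and responded_at and the one-hour threshold are invented for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical survey responses: when the invitation was sent
# and when the respondent actually answered.
responses = [
    {"id": 1, "invited_at": datetime(2015, 5, 7, 10, 0),
     "responded_at": datetime(2015, 5, 7, 10, 2)},
    {"id": 2, "invited_at": datetime(2015, 5, 7, 10, 0),
     "responded_at": datetime(2015, 5, 9, 18, 30)},
]

# Responses delayed beyond this threshold are treated as "sat-on"
# and at risk of recall bias (illustrative cut-off only).
MAX_DELAY = timedelta(hours=1)

for r in responses:
    delay = r["responded_at"] - r["invited_at"]
    flag = "OK" if delay <= MAX_DELAY else "SAT-ON: possible bias"
    print(f"response {r['id']}: delay {delay} -> {flag}")
```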
So which survey conducted before the election was the most accurate? It was the late telephone poll conducted very close to the election, which was only 1% out compared to the actual result. And which was the most accurate survey of all? That was the almost 100% correct exit poll, conducted immediately after each voter had voted (the “event”)!
Secondly, we know that survey wording and the presentation of the question are hugely important and can have a significant effect on the bias of survey results.
As we conduct most of our surveys in real-time and immediately after the “event”, we are almost always asking about actions that have already taken place, just like an exit poll. Our survey questions tend to ask “did you…?” and not “are you going to…?”, and the difference is significant, as the pre-election polls discovered. If the question is about an event that has not yet occurred, intrinsic bias is created by many factors that result in an unstable or unreliable result, including:
- lack of knowledge
- uncertainty and indecision
- late additional information
- embarrassment resulting in an untrue answer
We know that real-time surveys, conducted immediately after the “event”, give the best results, just as the exit poll did.
The final parallel turns out to be nothing more than the data! VIRTUATell knows the huge importance of understanding the data that is used to choose who to survey, and we have a sophisticated “rules engine” and selection methodology to ensure that we are always comparing “like for like”. The 2015 pollsters forgot this very significant fact!
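As a toy illustration of what a like-for-like selection rule can look like (a hypothetical Python sketch, not our actual rules engine; the regions and quota figures are invented), the snippet below draws a quota sample so that each group’s share of the sample matches its share of the target population:

```python
import random

random.seed(7)

# Hypothetical respondent pool; a real rules engine would draw on
# far richer profile data than a single attribute.
pool = [{"id": i, "region": random.choice(["North", "South", "Midlands"])}
        for i in range(5000)]

# Target quotas: the share each region holds in the population
# the sample should mirror (illustrative figures only).
quotas = {"North": 0.30, "South": 0.45, "Midlands": 0.25}
sample_size = 200

sample = []
for region, share in quotas.items():
    matching = [p for p in pool if p["region"] == region]
    sample += random.sample(matching, round(share * sample_size))

print(len(sample), "respondents selected, matching regional quotas")
```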
In the 2015 election there were 13% fewer voters for the 3 main parties compared to the previous election. Also, the number of people polled by the big 10 pollsters varied from just over 1,000 to over 10,000. That alone suggests a huge variation in how representative each polled audience was, and VIRTUATell knows that a poll can consist of 20,000 respondents but, if they are not representative of the general population, it is no more use than a sample of 200.
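A simple simulation shows why size cannot rescue a skewed sample (a hypothetical sketch; the 52% true support figure and the 50% non-response rate among one group are invented for illustration):

```python
import random

random.seed(42)

TRUE_SUPPORT = 0.52  # assumed true share backing party A (invented figure)

def respondent():
    """Draw one voter from the (simulated) population."""
    return random.random() < TRUE_SUPPORT

# Representative sample: every voter equally likely to be polled.
rep = [respondent() for _ in range(200)]

# Biased sample: supporters of party A are half as likely to be reached,
# so the pool skews towards their opponents despite its size.
biased = []
while len(biased) < 20000:
    v = respondent()
    if v and random.random() < 0.5:
        continue  # half of A's supporters never answer
    biased.append(v)

print(f"true support:         {TRUE_SUPPORT:.1%}")
print(f"representative n=200: {sum(rep)/len(rep):.1%}")
print(f"biased n=20,000:      {sum(biased)/len(biased):.1%}")
```

Under these assumptions, the large skewed sample sits well below the true figure no matter how many respondents it contains, while the small representative one hovers around it.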
We are not suggesting that VIRTUATell run the next election polls, but for the best advice and experience in automated customer satisfaction surveys, it’s nice to know we know what we are talking about.
For more information on VIRTUATell’s services, CLICK HERE to contact VIRTUATel.