Can we trust online research methods as much as we do face-to-face methods?
Those who know me will realise that I come from a traditional focus group background (always referred to as ‘group discussions’ before Mr. Blair got hold of them!). I’ve conducted over 1500 focus groups – so you might say I have some experience in this field.
But in the past, I would have been among the first to question the validity of responses gathered online rather than face-to-face.
However, in recent years more and more safeguards and checks have been placed on online responses to weed out the ‘professional online respondent’ (often a student!) and those who simply aren’t giving the survey their attention.
In partnership with our friends at MrQual (and their in-house software), we are now confident that online quantitative surveys and online focus groups get us close to, or even up to, the levels of confidence we have with face-to-face methods.
Currently we check the validity of responses using various methods. For example:
- Checking the speed at which an individual respondent completes the questionnaire (or even a particular question)
- Checking responses to grid or matrix questions for ‘straight-lining’
- Checking that open-ended responses make sense
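To make the three checks above concrete, here is a minimal sketch of how each might be automated. All function names, field names, and thresholds are my own illustrative assumptions, not the actual rules used by any agency or by MrQual’s software.

```python
# Illustrative sketches of the three data-quality checks described above.
# Thresholds and field names are assumptions for demonstration only.

def is_speeder(completion_seconds, median_seconds, ratio=0.4):
    """Flag a respondent who finishes far faster than the median
    completion time (here: under 40% of the median)."""
    return completion_seconds < median_seconds * ratio

def is_straight_liner(grid_answers):
    """Flag a grid/matrix response where every row got the same answer."""
    return len(grid_answers) > 1 and len(set(grid_answers)) == 1

def looks_like_gibberish(open_text, min_words=3):
    """Crude sense check on an open-ended answer: too short, or one
    token repeated throughout. Real checks would be more sophisticated."""
    words = open_text.strip().lower().split()
    return len(words) < min_words or len(set(words)) == 1

def should_remove(respondent, median_seconds):
    """Combine the checks: any single failure flags the respondent
    for removal and replacement."""
    return (
        is_speeder(respondent["seconds"], median_seconds)
        or is_straight_liner(respondent["grid"])
        or looks_like_gibberish(respondent["open_answer"])
    )
```

A respondent who answered a ten-row grid with all 3s, for example, would be caught by the straight-lining check even if their timing and open-ended answers looked fine.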
Typically, 5-10% of respondents who complete a survey will be removed (for any of the three reasons above) and replaced by new respondents.
We also check (unlike many other agencies) that surveys are completed by humans rather than ‘bots’, and we run logic checks across different questions.
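For flavour, here is a hypothetical sketch of what a cross-question logic check and a simple honeypot-style bot check could look like. The rules and field names are invented for illustration; real panel software uses many more signals than this.

```python
# Hypothetical examples of a cross-question logic check and a honeypot
# bot check. All rules and field names are illustrative assumptions.

def fails_logic_check(answers):
    """Example consistency rule: claimed years of driving experience
    cannot exceed (age - 17), assuming a minimum driving age of 17."""
    return answers["years_driving"] > answers["age"] - 17

def looks_like_bot(answers):
    """A honeypot question is hidden from human respondents by the
    survey page, so any answer to it suggests an automated submission."""
    return answers.get("honeypot", "") != ""
```

The point of such checks is that a human skim-reading the survey rarely trips them, while a bot or an inattentive respondent contradicting themselves across questions does.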
At the end of the day, validity can no longer be a reason for preferring face-to-face over online. Of course, online also offers key advantages in cost and speed.
I’m not writing off face-to-face methods; they still have a massive role to play in research. But they should look over their shoulder, because their online counterparts are getting damn close.