Simon Wood
I had a new experience last week – I received my first ever phone call from a robot. Now before those of you of a nervous disposition start thinking Terminator has come true and the machines are rising (albeit very politely), let me explain…
Earlier in the evening I had been frequenting a public house where, ever the researcher, I had noticed their poster advertising a survey on my pub experience. All I had to do was text a number and they would do the rest. So, when I got home, I did indeed text the number. Nothing happened. No survey link was sent to my phone, no SMS survey or the like.
Then about 30 minutes later, my phone started ringing and I received a call from a robot (didn’t catch his/her name) which proceeded to interrogate me for 10 minutes about my trip to the pub. At the end of every question, I had to answer by pressing the appropriate number on my keypad. I even got to leave an answer to an open question – I talked into the phone and the robot recorded everything I said. Well, not quite everything – it cut me off after a minute of my rambling about the quality of the steak and kidney pie, thanked me for my help, and ended the call.
I’ll be honest and say that I didn’t really enjoy the survey – largely because the questionnaire was dull and somewhat overkill in terms of detail. But the question that stuck in my mind was this: can we ever make robotic customer satisfaction surveys – IVR (interactive voice response) – work?
The case for:
Before you dismiss this out of hand, just stop and think about how IVR could help as a survey method. In its favour, it’s certainly cheaper than the average telephone interviewer and can be scaled up and down very easily. It particularly appeals when the budget suggests an ‘economy’ approach but the sample records demand a ‘telephone’ one – e.g. few email addresses but lots of phone numbers.
It can also operate in multiple languages simultaneously without needing separate interviewing teams, allowing you to research the entire world from a single office. And of course there’s no interviewer bias – suddenly telephone interviews are precisely standardised, with the exact wordings you requested used just as you intended.
It can also collect a wide range of data – from numeric/coded answers to open questions – and record every word said in the process, rather than having an interviewer trying to write down your exact words as fast as you say them (not easy when I’m ranting about my dislike of a particular brand or experience). Add to that, because the open question is a recording, you can pick up the emotion and the way the words are said, as well as what was said – and you can listen to it as often as you like. Given the importance often placed on open questions, surely that’s a real bonus.
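To make that concrete, here is a minimal sketch of the two kinds of data such a survey collects: keypad digits mapped to coded scale answers, and a pointer to an audio recording for each open question. It is purely illustrative – the Question class, run_survey and the stub callbacks are all hypothetical, standing in for whatever telephony platform actually drives the call.

```python
# A minimal, hypothetical sketch of a keypad-driven (DTMF) survey flow.
# Not based on any real IVR product; names and structure are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Question:
    text: str                                      # prompt read out to the respondent
    kind: str                                      # "scale" = keypad answer, "open" = recording
    valid_keys: set = field(default_factory=set)   # accepted digits for scale questions

def run_survey(questions, get_keypress, record_audio):
    """Walk through the questionnaire, collecting one answer per question.

    get_keypress(prompt) -> str   : digit the respondent pressed
    record_audio(prompt) -> str   : reference (e.g. file path) to the recorded open-end
    Both callbacks stand in for the telephony layer that actually handles the call.
    """
    answers = {}
    for q in questions:
        if q.kind == "scale":
            key = get_keypress(q.text)
            # Re-prompt once if the digit isn't on the scale, then give up.
            if key not in q.valid_keys:
                key = get_keypress("Sorry, please press a number on the scale. " + q.text)
            answers[q.text] = key if key in q.valid_keys else None
        else:
            # Open question: store a pointer to the audio, not a transcript.
            answers[q.text] = record_audio(q.text)
    return answers

# Example usage with stub callbacks standing in for a live call:
questions = [
    Question("How satisfied were you with your visit, from 1 to 5?", "scale",
             valid_keys=set("12345")),
    Question("Please tell us anything else about your visit after the tone.", "open"),
]
print(run_survey(questions,
                 get_keypress=lambda prompt: "4",
                 record_audio=lambda prompt: "recordings/call_0001_q2.wav"))
```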
Ignoring the research angle for a moment, there’s also the fact that it seems to work in a whole range of other industries – many call centres/helplines automatically direct you through IVR systems to get you to the right person or to collect information about you in advance. Whilst we may not use IVR in research that much, the general population is used to using them.
The case against:
A survey approach that removes bias and is cheap – what’s not to like? Well, unfortunately it’s not quite that simple.
There are certainly some failings that you’d have to live with. Trying to use an 11-point scale on an IVR survey could be interesting – a standard keypad only has the digits 0–9, so unless your respondent has a somewhat unusual phone there simply aren’t enough keys to go round! There could also be problems getting a response depending on the research audience and the types of phones they have. It probably works best on old-fashioned phones where the keypad was separate from the handset and you didn’t have to keep taking the phone away from your ear to press a button – but hardly anyone has those anymore! And as for smartphones, you just have to hope your system can cope with people having to pull up the on-screen keypad before they can answer.
Yet these problems aren’t insurmountable – and let’s be honest, all methods have their own issues that you have to take into account.
Yet the biggest problem I can see is the simple fact that you are being called by a robot – and even if it uses recordings to speak with a human voice, people have a general dislike of IVR systems. In reality, much of this isn’t the fault of the research industry – we simply haven’t used IVR enough to be responsible for killing its reputation. What does seem to have killed it is the plethora of bad IVR systems used by other companies to handle your calls, tell you your bank balance, or tell you the time of a different film to the one you want to see at a different cinema to the one you want to go to. In essence, the types of systems that have become the butt of endless stand-up jokes.
Instinctively, a lot of people – whether clients, researchers or respondents – wince when you mention the idea of IVR. And that instinct, created by bad IVR systems, kills off its chance of becoming a widespread research method – in two ways:
Firstly, there’s the practical application. When you get a robotic call, will you stay on the line and fill it in? For many people, the response is ‘no’ (usually something stronger). We’re used to pretty low response rates for online surveys, and seem happy enough as an industry to live with them, but the response rate can be even worse with IVR. No-one wants to be the researcher who can’t get anyone to fill their survey in – not only is it embarrassing, it can make the debrief somewhat tricky!
And even when you do have results, with such a low response rate you may well find your sample is heavily skewed. Can you really trust the people who did fill it in to be representative of the other 98%? If not, you’ve probably just wasted your time!
But for me, the stronger argument against is more of a philosophical one. In the end, although we may think of ourselves as independent researchers, every interaction we have with customers – especially in the customer satisfaction area – is on our client’s behalf. When we contact a customer, we are acting as an extension of the client’s company, and their customers don’t see us as any different from them. We have to get that contact right and represent the client positively – the research must build, not destroy, their customer relationships.
With the public reacting so negatively to IVR, can we seriously deploy it as an approach and say we’re protecting our clients’ brands?
There’s also a third reason IVR looks unlikely to ever really change the game – mobile interviewing. A lot has already been written elsewhere about the potential for mobile research, both as an interview platform and as a passive collection approach, so I won’t cover that here. But it does seem that mobile may nullify some of the arguments in favour of IVR.
Mobile will probably never be as cheap as IVR, and there are still big questions over when/whether mobile web will let us put online survey-style approaches to a telephone number (albeit only those beginning ‘07’). But what it does do is offer an alternative approach that can reach a sample for whom you only have telephone numbers (albeit mobile ones) without the need for an interviewer. And probably more crucially, it is growing and developing over time – it may not be quite there yet, but it certainly looks like a future approach. Given mobile’s increasing usage and abilities, it would seem a little strange if IVR were to suddenly claim this part of the research market!
On balance
So based on the above, it’s pretty tricky to see a major future for IVR as a research method. Yet is it the research equivalent of the dodo? Realistically, no. In reality, there are two types of IVR survey – the robotic cold-call, and the automated script you are put through to at the end of your dealings with a helpline, call centre, etc.
Hand on heart, I’m not convinced that there really is a major future for the robotic cold-call, even when prompted by the customer – such as my texting a number saying I want to give feedback. I suspect that instinctive reaction from consumers is simply too negative to make it a success. Automated calls from the health service to warn you of a health risk, or from your dentist to remind you of your appointment, have a practical benefit and may be accepted – even popular. But research is often perceived by consumers in the same way as selling – and our automated interview calls will be viewed in the same light as the prompts to claim your PPI back or the calls used by the political parties encouraging you to vote for them.
Where I can see a role for IVR research is with surveys where people are keen to participate – such as employee surveys. In these situations, IVR can offer a cost effective approach that reaches all employees evenly, including those who work remotely, travel a lot, or are never close to a computer – something that few other approaches can achieve.
I also believe there remains a role for IVR as a survey instrument for call centres. Yes, the response rate is often low and there may be a skew, but it does offer a good way to get instant feedback on how the call was handled. As long as you keep to one or two questions and don’t ask too much, you should be able to make this work.
And finally, IVR can retain a role if the research is asking the right fundamental question. Much of what we do in research is focussed on representative samples and robust, accurate findings. But what if the fundamental question at the heart of the research is about removing or reducing the bad experiences? If what you actually want is an approach that attracts the extreme feedback from those left delighted or unhappy, and offers you a chance to intervene directly, then many of the IVR approach’s weaknesses become less problematic. Yes, it will only attract the extreme views, and yes, the robotic cold-call may never work, but offering an IVR feedback channel can allow you to identify the unhappy customers, explore their problem and intervene.
So where have I netted out? Well, IVR will probably never revolutionise the research industry and/or be the number one approach. But, like much of what we do, it’s the classic case of ‘horses for courses’ and we shouldn’t dismiss it out of hand. There are severe limitations and I suspect few of us will do more than a few IVR jobs in our careers. But we need to know its strengths and weaknesses and learn when it is appropriate. If we don’t, then we’re failing in our duty to offer our clients the best solution to meet their exact needs.
Simon Wood is Head of Stakeholder Management Research at TNS UK.
The views expressed in this blog posting are the author’s own, and do not necessarily reflect the views of TNS, nor of its associated companies.
3 comments
Yes, IVR can be molded as per requirement.
As you are based in the UK, I find your perspective interesting and different from ours. In the US, our company is having great success implementing the IVR survey model. We find it works best with large-volume surveys – thousands or millions a month – and we’ve seen completion rates as high as 80%. But as in most business projects, it often comes down to $$. For the volume we handle, it just wouldn’t be possible to do live-call surveys with any kind of monetary efficiency.
Our clients like it because it’s proactive communication that builds brand loyalty. But, as you noted, it has its limitations. Although we capture the voice of the customer and translate/analyze it for our clients, we’ve found that IVR surveys still work best for simple quantitative surveys, unless the client has someone dedicated to acting on the information gathered in the recordings – and at thousands of recordings a month, that’s a lot of data! Even with our platform’s ability to analyze the data for keywords (“great”, “sucks”, “perfect”, “frustrated”, etc.), the data can still be overwhelming.
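As a rough illustration of that kind of keyword pass – a hypothetical Python sketch, not the commenter’s actual platform, and it assumes the recordings have already been transcribed to text by some speech-to-text step:

```python
# Hypothetical, purely illustrative keyword flagging of open-end transcripts.
# The keyword lists and function names are assumptions, not any real product's API.

KEYWORDS = {
    "positive": {"great", "perfect", "excellent", "friendly"},
    "negative": {"sucks", "frustrated", "terrible", "rude"},
}

def flag_transcript(transcript: str) -> dict:
    """Return which keyword groups appear in one open-end transcript."""
    words = {w.strip(".,!?\"'").lower() for w in transcript.split()}
    return {group: sorted(words & vocab) for group, vocab in KEYWORDS.items()}

# Even this crude pass helps triage which of thousands of monthly recordings
# a human should actually listen to.
sample = "The server was friendly but the wait was terrible and I left frustrated."
print(flag_transcript(sample))
# -> {'positive': ['friendly'], 'negative': ['frustrated', 'terrible']}
```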
Of course, I think all good research firms should have an IVR survey platform at their disposal. Even if their clients don’t use it, it’s another offering to build their credibility as industry experts.
Simon,
Good overview, but I tend not to agree with all of your conclusions. At least in North America, the IVR survey model is alive and well (and growing). It is true that it tends to pick out the extremes at a higher rate than their representative share, but it gives our clients the ability to measure feedback on a scale they would not otherwise be able to gather. We see response rates of anywhere between 5% on the low end and 20% on the high end of all attempted surveys (across millions of customers).
Aside from the pure research angle, the big benefit – which you do point out, albeit briefly – is that it serves as an early warning system for frustrated and dissatisfied customers. So it might come down to a philosophical question with respect to research, but in terms of tangible results, IVR surveys give our clients actionable data to improve their operations and customer experience.