
Lost amid a sea of metrics!

I’ve decided airports are good places to write blog posts. You get lots of time to sit around with nothing to do except wait, and no interruptions (except the tannoy announcing flights to places you’ve never heard of in a language you can’t understand). The other good thing about airports for my blog writing is the sheer number of customer experience surveys I get given – and the ‘inspiration’ I receive from them.

So far today, I’ve accumulated the following: a hotel satisfaction survey upon check-out (in addition to the one on the desk in my room, and the card asking me to give them a good review on TripAdvisor), two invitations to rate the service advertised at the bottom of receipts (from the coffee I bought and the inevitable trip to buy a duty-free Toblerone bar), a pop-up from the airport’s website, a survey about satisfaction with my flight out yesterday, and a random invite to participate in a panel survey from a panel I didn’t know I was part of! You’ll be pleased to know that, being a curious sort, I’ve completed them all – except the panel survey, which screened me out early on due to my not owning a dog.

Filling all these in as a fellow member of the industry, I’ve been struck by the sheer variation in what gets measured. Some surveys were long (and some too long), others were mercifully short. The topics varied from what I experienced today to my thoughts on the brand in general and every competitor I’ve ever heard of. I was even asked my preference for women’s perfume scents at one point – I have neither a preference nor the olfactory abilities to tell the difference between musk and oakmoss!

Leaving aside the inevitable detailed questions, the biggest thing I noticed was the variation in the overall metrics. Not only did the questionnaires include a mix of the usual overall satisfaction, repurchase and recommendation questions, but measures like effort, differentiation and trust all made an appearance. Even when there was similarity in the questions asked, the scales were all over the place and often mixed within the same questionnaire (5pt, 7pt, 10pt, 11pt, and even 100pt!).

I may be in need of a better social life, but I actually find this variation interesting. Not only does it allow you to play ‘guess the research agency responsible’, but you can also play ‘guess the key metric used’ – was it Net Promoter Score, Customer Effort, CFI, TRI*M, etc.? (I really do need to get out more.) The more sensible reason I find this of interest is that I regularly find myself answering the question ‘What’s the best metric?’ – both from fellow researchers and from our clients.

The problem I find with this question is that too often there simply isn’t an easy answer – or at least, there isn’t an answer that anyone is keen to explore in the necessary detail.

Taking a closer look: Net Promoter Score
For example, let’s take NPS. Its advocates (maybe I should say promoters) love how simple and quick it is. They love to tell us how levels of recommendation have been proven to correlate with business success – and there are a multitude of books you can read on the subject. Yet the NPS detractors point to the fact that it’s a nightmare for significance testing, is unstable in tracking, that it seems to fall over in regression and driver analysis, and that collapsing an 11-point scale into detractors (0–6), passives (7–8) and promoters (9–10) oversimplifies. There’s also the issue of whether it really works at the individual level, the matter of ‘mutually compensating error’, and whether it measures satisfaction, recommendation, word of mouth or some amalgamation of the lot!
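For anyone who hasn’t had to build one of these, here’s a minimal sketch (in Python, purely for illustration – the sample data is hypothetical) of the standard NPS calculation, and of the information the grouping throws away:

```python
# Minimal sketch of the standard NPS calculation: respondents rate
# likelihood to recommend on an 11-point (0-10) scale, which is then
# collapsed into three groups before the score is taken.

def nps(scores):
    """Return the Net Promoter Score (-100 to +100) for 0-10 ratings."""
    promoters = sum(1 for s in scores if s >= 9)   # ratings of 9-10
    detractors = sum(1 for s in scores if s <= 6)  # ratings of 0-6 (7-8 are passives)
    return 100 * (promoters - detractors) / len(scores)

# Two very different response distributions can yield an identical score,
# which is one reason the grouping is criticised as an oversimplification.
print(nps([10, 10, 9, 8, 7, 0, 1, 2]))  # polarised sample -> 0.0
print(nps([8, 8, 7, 7, 8, 7, 8, 7]))    # all passives     -> 0.0
```

The same collapsing is also what makes significance testing and tracking awkward: small movements within a group are invisible, while a one-point shift across a boundary moves the headline number.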

And that’s just one metric!  You can pretty much have the same debate about any of the metrics we use in customer experience (or research in general) and leave yourself and anyone who’s not fascinated by the subject utterly lost and confused.  Not the effect you want, ever.

And then there’s the Hawthorne effect – the sheer fact that by measuring something you draw attention to it and will probably raise performance.  How many ‘proven’ metrics are simply living off improvements actually driven by the fact that someone senior is now paying attention to performance?!?

Is anyone else now feeling like the research equivalent of the Ancient Mariner – metrics everywhere but not a metric to use?

So how do we get round this problem?
My gut instinct usually tells me to simply tell everyone to stop worrying and to use the ‘Happy Customer Index’.  Not for any proven reason other than I love the name!

Seriously though, I think the first thing we need to do is leave our researcher mindsets behind and put ourselves in our clients’ shoes. We need to ask ourselves what our clients actually want to achieve when they measure their customers’ experience.

My perspective is this: fundamentally, clients want a decent measure they can use to monitor the level of service provided. They need to be able to say, hand on heart, that the metric works and will react to changes in the level of service. What scale that metric is based on, or which questions it asks, is effectively a secondary consideration.

This was brought home to me earlier this week when I watched a client in a webinar explaining how they’d selected which metric to use. They hadn’t reviewed all the options, scored them on advantages and disadvantages, or brought in a team of experts. No, they’d heard from a friend who worked in research/insights elsewhere that it ‘worked’, googled it to find out a bit more, saw there were books validating the choice and a couple of case studies as evidence, then put out a brief stating it was their metric of choice. At no point did they investigate correlation coefficients or R² (I’m not sure they’d even read the books they found about their so-called ‘key’ metric).

Shocked? Don’t be. It happens all the time. The thing is, our clients want us to act as their trusted advisor. They want us to give them the best advice and give them something that works. Many of them are out of their comfort zone on this and simply wouldn’t know which to choose if you gave them all the options. Let’s be frank: how many of us truly know the relative merits of each of the metrics we provide?

So what’s the best advice?
For me, the basic success factor in any research programme is that the results get used to grow the business. It doesn’t matter how ‘sound’ the research is; if no one uses it, then it’s been a waste of both money and time. When choosing a metric we need to adhere to those same principles – use a measure that will get traction within the business and that people will use to measure performance and target improvement. If that means keeping it simple so it can be used with confidence by the marketing team, then great. If it means building a bespoke composite of 27 questions that will resonate with users, then do that too! If you want to see benchmarks, then use a metric for which benchmarks are available or can be created.

In essence, the killer performance indicator is not the metric with the highest correlation coefficient or the perfectly weighted scale; it’s the one that the individuals in your business, or your client’s business, WILL use when developing and improving their customer service.

A metric is a measurement tool. It’s a means to an end.  What’s important is that research programmes produce truly actionable, precise findings that can be taken by the end user and applied to grow their business. Advising on an appropriate metric is a part of that but should never be the centrepiece, however much we may want to retreat into our technical research comfort zones.  The focus must remain on outcomes and the impact of the research findings, not on the journey we take to get to those insights.

Simon Wood is Head of Stakeholder Management Research at TNS UK.

The views expressed in this blog posting are the author’s own, and do not necessarily reflect the views of TNS, nor of its associated companies.
