Twenty years ago I was a young pup working in a research agency (which no longer exists) for a client company (which no longer exists). We ran a customer loyalty tracking study, which fed into the business dashboard and was used to calculate bonuses. Performance was in decline. My contact at the company called me to discuss the latest report.
‘I’d like you to ‘soft weight’ the data,’ she said.
What she meant was that I should change the numbers, and take some of the sting out of the impact of this customer feedback on bonuses. To cut a long story short, I resigned the account and got hauled in front of the Board of my own agency as a result. They thrashed out an agreement over my head: the numbers stayed unweighted, at least when they left the agency. I don’t know what happened to them after that.
Another client (B2B, tech), around 10 years ago, adopted a system of ‘DIY’ customer insight: managers interviewed customers to identify opportunities for product development.
They needlessly spent more than £1 million on a development for which, it turned out, the customer insight had been ‘massaged’ to fit a particular manager’s interests. It transpired that this very expensive new product feature was relevant to only one customer.
In both cases, these businesses lost objectivity.
Objectivity is a vital – and sometimes undervalued – part of what outside insight expertise brings to a project. This is becoming more of an issue as more businesses in-source customer insight.
It can only be a positive that people in the business get out of the business and invest time in understanding customers better. But this isn’t a substitute for objective evaluation, testing and assessment.