Why do organizations struggle to show the impact of user research?
Data from our research capability scorecard shows that the biggest challenge for organizations is demonstrating the impact of research. Why is that, and if it should be addressed, how?
Research capability scorecard — benchmark data
We have designed a tool to help UX and product leaders understand their organization’s strengths and weaknesses in research capability. If you want to try it, you can do so here (it’s free): https://www.ux247.com/research-capability
Not only do you get your score, but we also provide a personalized report with a breakdown of your score and recommendations for how you can make improvements.
As the sample size grows, the data reveals where organizations struggle the most as illustrated below:
Whilst all areas show a separation from the ideal, ‘Research operating model’ is where we see the largest deviation. Before we go on and look at this further, it is worth noting that research competence is where the highest scores appear, which is very good news.
The research operating model overall score is derived from the responses to a number of questions focused on return on research investment and measurement. One of these questions asks how well the organization can measure the impact of research. Here are the population scores:
The lowest point on the chart is the second bar from the left: the impact question. What I find interesting is that UX leaders feel confident that their investment in research is delivering a return but can’t measure the impact. Having well-defined KPIs in place also scores higher than measuring impact.
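To make the roll-up concrete, here is a minimal sketch of how a category score might be derived from question responses. It assumes a simple 1-to-5 self-assessment scale averaged into a category score and compared against an ideal; the actual scorecard methodology isn’t described here, and the question names and values are illustrative only.

```python
# Hypothetical sketch of rolling question responses up into a category
# score. The real scorecard methodology is not published here; this
# simply illustrates averaging per-question scores and comparing the
# result to an ideal.

from statistics import mean

# Assumed 1-5 self-assessment responses for the 'Research operating
# model' questions (illustrative values only).
responses = {
    "confidence_in_research_roi": 4,
    "ability_to_measure_impact": 2,   # the low point discussed above
    "well_defined_kpis": 3,
}

IDEAL = 5  # the 'ideal' each category is measured against

category_score = mean(responses.values())
deviation = IDEAL - category_score

print(f"Category score: {category_score:.1f} / {IDEAL}")
print(f"Deviation from ideal: {deviation:.1f}")
```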
Why do organizations struggle to measure research impact?
If you search the internet for reasons why organizations struggle to measure the impact of research, you will find several recurring themes. I’m not going to cover each in detail here, but the key reasons cited are:
- Time — it takes too long to feel the effects of research.
- There are too many intangibles — how can you measure the impact on a person in the organization?
- Lack of ownership — a researcher doesn’t decide how the insight is used.
- Lack of measurement — there simply aren’t any appropriate measures in place.
I think these all miss the point, mainly because the thinking is flawed.
I think organizations struggle to measure the impact of research because they are trying to measure the wrong things and are unable to isolate its influence on those areas. When we think about product development and what goes into the process, there are multiple elements of which research is only one.
A product’s performance can be influenced by so many factors, for example:
- Proposition fit with user needs
- Ease of use
- Price
- Seasonal variations in demand
- Campaign influence on demand, usage, adoption, etc.
- Differentiation versus alternatives
- Ecosystem it is part of
And more…
Anecdotally, I have experience of this with contract offers from research sceptics. On various occasions I have been offered risk-sharing contracts where we (an agency) initially charge the client a lower fee for our work but take a share of the rewards over time. I am a huge believer in the value of research, so I have always said yes to these contracts, but I have never actually seen one put in place. The reason is that the impact of research could not be isolated.
There was no lack of motivation from these organizations in trying to find a way to measure impact. The team trying to isolate it often included procurement people who had a vested interest in reducing supplier costs or attaching those costs to an investment case. Even so, they were unable to isolate the impact upfront and therefore to describe it in a contract.
I think they would have equally struggled to isolate the impact of the analysts, designers, developers and user acceptance testers involved in developing products. And I’m not sure that they would even try. There are better measures, and people don’t seem so fanatical about proving impact for these functions.
Conventional wisdom about measuring research impact
The general consensus is that if you want to measure research impact you need to establish a framework. That makes sense. A measurement framework is a great way of putting in place the key performance indicators, key experience indicators, data capture devices and reporting system that will do the measurement job for you.
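As an illustration of what such a framework might capture, here is a minimal sketch. The structure (indicators with a kind, a data source, a target and a direction) is an assumption for illustration, not a recommended standard, and the example indicators and values are invented.

```python
# A minimal sketch of what a measurement framework might capture,
# assuming a simple structure of indicators with a data source and a
# target. Names and fields are illustrative, not a standard.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    kind: str              # "KPI" or "KEI" (key experience indicator)
    data_source: str       # where the measurement comes from
    target: float
    current: float
    higher_is_better: bool = True

    def on_track(self) -> bool:
        # Compare current value to target in the right direction.
        if self.higher_is_better:
            return self.current >= self.target
        return self.current <= self.target

framework = [
    Indicator("Task completion rate", "KEI", "usability tests", 0.90, 0.84),
    Indicator("Support tickets per 1k users", "KPI", "helpdesk", 12.0, 15.0,
              higher_is_better=False),
]

for ind in framework:
    status = "on track" if ind.on_track() else "needs attention"
    print(f"{ind.kind} {ind.name}: {ind.current} vs target {ind.target} ({status})")
```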
There appear to be two ways of approaching measurement. The first is the attempt to isolate the business impact, and the second is to justify the investment in the function. As I have already discussed, the first is extremely challenging and, I believe, an unrealistic goal.
I don’t see senior leadership specifically asking to measure research impact. They may well want a competitive advantage and consider user research as a tool to positively influence product development and design, but I think the role of research is far too granular for them. They will want to see RPU, subscriptions, attrition, and similar more macro measures of business performance.
My belief is that the desire to measure impact is being driven by researchers. Which is not entirely surprising, as it is still a function that is struggling with its identity. However, I find that some of the measures used are too focused on justification and a long way from truly measuring impact.
For the research team, measuring the impact of research as a way of justifying the function’s value is entirely relevant, particularly if you are trying to motivate colleagues. But measures I have seen, like receiving greater investment in the research team, don’t, in my view, have a direct correlation with research impact. The organization could be in a growth phase, or have new leadership that believes in research more than the previous leader, or there could be a myriad of other reasons why the team size is changing that can’t be related directly to impact.
I don’t like measures that are about doing the job or that are open to debate. One such measure is “research questions answered”. That is the job of a researcher, and I don’t think we would have similar measures for other roles in a product team. If it is a measure of volume, I would argue it is entirely wrong, and it reminds me of a case study about airport baggage handlers used in a KPI workshop.
They were being measured, and paid a bonus, on the time between the aircraft arriving at the gate and the first bag reaching the carousel. That sounds sensible until we find out that the baggage handlers had built a workaround: the fastest runner was given the first bag out of the hold and delivered it to the carousel as fast as their legs would carry them.
On paper, performance is good; in the baggage handlers’ bank accounts, the bonus looks good; but the overall outcome for customers is not so good.
Conclusion
For research leadership, measuring impact is no doubt important. If it influences how the function is valued within an organization, and if that is a job that needs to be done, then it is essential. But a set of ‘off-the-shelf’ measures is not the solution.
To effectively measure the impact of research we need to be targeting the right level. It is too hard and would take too long to isolate the impact of research on high-level measures, and in my view, it isn’t worth doing. We should be measuring impact to evidence the value of user research to the organization and in the product development process.
Rather than thinking about how research has impacted the subscription rate of a product, we should be considering how it has influenced the decision making of the product manager. Instead of trying to isolate the influence of research on moving the needle on NPS, we should be seeking to measure how it is helping to prioritize the product road map. Let’s measure how research impacted innovation by identifying unmet needs and product and feature opportunities, not its specific role in the performance of a newly launched product.
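To show what decision-level measurement might look like in practice, here is an illustrative sketch. The record shape, field names and example entries are all hypothetical; the point is simply that each research insight is linked to a concrete decision it informed, which is directly countable in a way that business metrics are not.

```python
# An illustrative sketch of measuring impact at the decision level
# rather than the business-metric level, as argued above. Each entry
# links a research insight to a product decision it informed.

from dataclasses import dataclass

@dataclass
class DecisionRecord:
    insight: str          # the research finding
    decision: str         # what the team decided
    decision_maker: str   # e.g. the product manager
    outcome: str          # "changed", "confirmed" or "deprioritized"

# Hypothetical log entries for illustration.
log = [
    DecisionRecord("Users abandon sign-up at step 3",
                   "Redesign sign-up flow", "PM", "changed"),
    DecisionRecord("Feature X is rarely discovered",
                   "Move X to the home screen", "PM", "changed"),
    DecisionRecord("Pricing page is clear",
                   "Keep current pricing page", "PM", "confirmed"),
]

changed = sum(1 for r in log if r.outcome == "changed")
print(f"{changed} of {len(log)} recorded decisions were changed by research")
```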
I am not suggesting that research doesn’t influence business performance. I strongly believe that it does, but I also believe it is not a good investment of time to isolate that influence. I think it is more powerful to present tangible measures that colleagues and stakeholders recognize as being directly attributable to the fine work researchers do.