I was in a conversation with a CIO earlier this week on the topic of value realization measurement, and I realized I had never really posted on this crucial topic – shame on me! I did come at measurement (somewhat obliquely) in a post last October on The Business Case as a Value Realization Tool, but I suspect some of the ideas got lost in that piece, so I’m going to tackle the issue more directly as a multi-part post.
The biggest single mistake I see with value realization measurement is that people get so intimidated by the topic's incredible complexity that they generally give up. It’s as if the business-IT “system” (I use that term in a scientific or engineering sense) is a big black box. We know how to measure (and generally do measure) the inputs to the black box – capital expense, operating costs, headcount, ratios such as IT spend as a percentage of revenues or IT spend per employee – but we are generally lost as to the outputs, or even worse, the outcomes.
I think there are three distinct aspects to the value realization measurement challenge:
- There is indeed enormous complexity involved if you allow it – the relationship between a given technology initiative and an associated business impact is often tenuous at best, especially when the technology impact is largely infrastructural in nature. What, for example, is the value of upgrading a PC? The answer, of course, depends on all sorts of things: who the user is, what they use the PC for, how they will use the upgraded PC in ways the old device could not support, and how those marginal uses will lead to additional realized value. Even if the initiative is not infrastructural, there are often timing effects – it may not yield its full value until some time in the future, perhaps years. There are also effects from interdependencies – we might get a certain amount of increased sales performance from implementing a Customer Relationship Management (CRM) system, a significant multiplier from a new sales incentive program, and another multiplier from using some sort of Web 2.0 capability to engage customers in the product design process. Accurately allocating benefits across these initiatives might be tricky, to say the least!
- There are cultural issues that often get in the way. With measurement comes accountability – real or implied! If I’m a business executive who is sponsoring, say, a $150 million initiative in an environment where value realization is not measured, I am unlikely to stand up and say, “Hey, let’s measure the realized value of this initiative and see how it matches the business case!” If I can get away without it being measured, that’s perfectly fine by me!
- The third aspect I see might be the most intimidating – IT people like precision! They want the answer. This is insidious, and it prevents most organizations from even trying to measure value realization. And that’s my key point – if you are not trying to measure value realization, you are likely to lose some potential value – what I refer to as value leakage.
Let me explain why trying to measure value is so important for actually driving value realization. I’ll go back to my black box analogy. We know what goes in but not what comes out. So I ask you, my business customer and partner, “What would you like to come out of the box if you had your wishes?” Let’s stick with the CRM example, and assume my customer/partner is Fred, the VP Sales and Marketing. Fred says, “I’d like to increase sales productivity.” “By how much, by when, and how will you measure it?” I ask Fred. “By 25%, by the end of next year, and we’d measure it by looking at the increase in sales volume per quarter per salesperson.”
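To make Fred's target concrete, here is a minimal sketch of how his metric – sales volume per quarter per salesperson – might be computed and tracked against the 25% goal. All figures, names, and numbers here are invented for illustration; they are not from any real sales organization.

```python
# Hypothetical sketch of Fred's sales-productivity metric.
# All figures are invented for illustration.

def sales_productivity(total_sales_volume: float, num_salespeople: int) -> float:
    """Quarterly sales volume per salesperson."""
    return total_sales_volume / num_salespeople

# Baseline quarter: $12M across 80 salespeople.
baseline = sales_productivity(total_sales_volume=12_000_000, num_salespeople=80)

# Fred's goal: a 25% improvement by the end of next year.
target = baseline * 1.25

# A later quarter: $13.5M across the same 80 salespeople.
current = sales_productivity(total_sales_volume=13_500_000, num_salespeople=80)

# Fraction of the improvement goal achieved so far.
progress = (current - baseline) / (target - baseline)

print(f"baseline: {baseline:,.0f} per salesperson per quarter")
print(f"target:   {target:,.0f}")
print(f"progress: {progress:.0%}")  # → 50%
```

The point is not the arithmetic – it is that once Fred names a measure, a deadline, and a target, tracking becomes a simple, repeatable calculation rather than an intimidating black box.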
“OK, Fred, now let me explain: the black box is not really a black box – it’s a system, with people, processes, policies, information, and so on. What will the black box system need to do to increase sales productivity by 25%?” “Well,” says Fred, “it will need to save the salespeople time – increase their efficiency so more of their time is spent on selling rather than on looking for information and other activities that get in the way. And it will need to improve their close ratio – make them more effective at selling.” “OK, Fred, let’s talk about exactly how the black box might achieve those goals. Let’s drill down on how it might improve the sales close ratio, to better understand the best possible ways we might do this – and how we will measure and track the system’s ability to improve it.”
You can imagine this conversation going to and fro. Fred may have a rich understanding of these issues, and all I’m doing is mining that understanding and breaking the system’s outputs down into capabilities and outcomes. On the other hand, Fred might have no idea how the CRM will contribute to performance, and in that case I’m really consulting to him, rather than mining what he knows – Fred and I are jointly figuring it out. Either way, we might determine that effectiveness will be increased by giving the salesperson capabilities that bring her a wealth of information about the customer (e.g., buying history, current market intelligence), so that she can better target her sales efforts and be more effective with the prospects she targets because she knows so much more about them. The outcomes might include a measure of customer knowledge, assessed by the sales manager by listening in on sales calls or through periodic customer surveys. Another capability might be the opportunity to collaborate with other salespeople on a specific sales issue – has anyone ever come across this competitor? What do we know about them? An outcome measure might include a measure of collaboration among salespeople, perhaps tied to some sort of reward and recognition for such collaborative behavior.
My example here is trivial (as clearly is my knowledge of the CRM space!) but hopefully you get the point. We’ve taken a big investment (CRM deployment) with a nebulous goal – increase sales productivity – and drilled down to determine the capabilities that the CRM needs to provide, the outcomes those capabilities need to deliver, and the metrics we will use to track those outcomes. Typically, in the process of doing this, we have significantly increased our understanding of the solution, probably identified additional aspects of it that might otherwise have been overlooked, dramatically increased Fred’s buy-in and commitment, and identified some success metrics we can track. By doing so, we are in a better position to learn and to improve our business case processes and value realization strategies.
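The drill-down above – from a nebulous goal to capabilities, outcomes, and metrics – can be sketched as a simple data structure. This is a hypothetical illustration: the capability and metric names paraphrase the CRM conversation with Fred, and nothing here refers to a real CRM product.

```python
# Illustrative sketch of the goal -> capability -> outcome -> metric
# drill-down from the CRM conversation. Names are invented.

from dataclasses import dataclass

@dataclass
class Capability:
    name: str     # what the system must let people do
    outcome: str  # the change we expect if the capability works
    metric: str   # how we will measure and track that outcome

goal = "Increase sales productivity by 25% by end of next year"

capabilities = [
    Capability(
        name="Surface customer buying history and market intelligence",
        outcome="Better-targeted, better-informed sales calls",
        metric="Customer-knowledge score from call reviews and surveys",
    ),
    Capability(
        name="Collaborate with other salespeople on specific sales issues",
        outcome="Shared competitor knowledge improves the close ratio",
        metric="Collaboration activity per salesperson, tied to recognition",
    ),
]

print(goal)
for c in capabilities:
    print(f"- {c.name} -> {c.outcome} [tracked by: {c.metric}]")
```

Writing the decomposition down in even this crude a form makes the gaps visible: any capability without a metric is a place where value can leak unmeasured.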
And this is why I say in the post’s title that “you can win by simply trying.” Rather than assuming that value realization tracking is too hard to bother with, I’ve worked with my key customer/partner to tease out some key measures. They won’t all be right – that’s fine; they are a starting point we can refine with experience. They won’t be precise, or perhaps even accurate – again, that’s OK. Something is better than nothing, and there was probably enormous benefit in simply taking Fred through the analytical process – doing so will likely increase value realization by itself. Alternatively, the process may have revealed that the benefits were just not there, or were too nebulous for the CRM program to go ahead without further analysis and perhaps some benchmarking.
As Lord Kelvin famously said, “When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind.”