Marc Dowd (Principal, European Client Advisory)

There is an old saying: “Weighing the pig won’t make it fatter.” The point is that no matter how many times you measure something, unless you take action based on that measurement you won’t get the outcome you desire. Measurement for its own sake is not a helpful activity. However, the choice and management of Key Performance Indicators (KPIs) in our businesses is a perennial discussion as we try to make them relevant, appropriate and comparable. Earlier in July, we hosted a Peer Connection conversation between CIOs registered for our 2020 European CIO Summit; the theme was the management of KPIs, in the IT team but also in the business more generally.

It’s almost a no-brainer to say that if you measure the wrong things, you’ll get outcomes you hadn’t planned for, or that you don’t want. But even though everyone accepts this, it remains a common problem in business, government and just about any type of organisation you care to mention. Procurement teams are our favourite bad guys: driving down supplier prices to meet a target, then agreeing service-level KPIs with those suppliers which are gamed to show that a service has been delivered even when the actual quality, and therefore the outcome, is poor. These outcomes usually surface a year or two down the line, of course, when the procurement directors have moved on with their hefty bonus…

In IT we live by our service levels. We often provide the most fundamental tools and processes to our colleagues, without which the business would grind to a halt. We set ourselves targets and agree service levels with business units (along with the cost of delivery!), and these then colour every decision we make regarding prioritisation, investment in new systems, training and support, you name it. If our metrics aren’t aligned to business outcomes, how can we expect our decisions to contribute to those outcomes?

Some of the risks and issues identified in our Peer Connection session concerned the rapid pace of technological change, the way we deploy and use technology, and how we might adjust our measurements to remain fit for purpose. This is often done with great reluctance, because when you change the way things are measured, you lose much of your ability to compare performance across time periods. Some might even say that this is sometimes done deliberately, to hide poor performance!

We also heard views about how easy, or hard, KPIs are to understand. The classic example is the “five nines” description of High Availability. 99.999% uptime means only a little over 5 minutes of downtime a year. I have seen it demanded by customers and management teams without any real understanding that the cost difference between 99.999% (about 5 minutes) and 99.995% (about 26 minutes a year) can be quite eye-watering. Many organisations may be very happy to accept 21 more minutes of downtime a year from a system if it saves them a million euros! And, of course, this doesn’t even begin to address the questions of how likely that target is to be reached, what the penalties are if it isn’t, and which minutes matter more than others for certain systems… A retailer will not thank you if their systems are available all through the year but down at Christmas.
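The arithmetic behind those figures is worth seeing spelled out. A minimal sketch (assuming a 365-day year, so 525,600 minutes; the function name is illustrative, not a standard) shows how each extra “nine” shrinks the annual downtime budget:

```python
# Downtime per year implied by an availability target.
# Assumes a 365-day year: 365 * 24 * 60 = 525,600 minutes.

MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes(availability_pct: float) -> float:
    """Minutes of permitted downtime per year for a given availability %."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for target in (99.9, 99.95, 99.99, 99.995, 99.999):
    print(f"{target}% uptime -> {downtime_minutes(target):6.1f} min/year")
```

Running this makes the jump visible at a glance: 99.995% allows roughly 26 minutes a year, while 99.999% allows just over 5 — and it is that last gap of about 21 minutes that can carry the eye-watering price tag.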

Finally, the discussion around the value of KPIs turned to whether you are measuring leading or lagging KPIs. Simply put, lagging KPIs tell you what has happened, whereas leading KPIs give you some view of what is going to happen. Taking an example from business: if you measure sales, that is a lagging KPI; if you measure the number of quotes your sales team provides, that is a leading KPI. The benefit of a leading KPI, of course, is that you can theoretically take action to affect the final outcome. In IT, we have many metrics that tell us whether we are doing a good job, such as the number of outages or serious issues with systems, or the time taken to resolve service tickets. But we can use leading indicators to help us too. For example, an increase in the number of new-starter requests for equipment might help us prepare for an influx of questions or problems common to people unfamiliar with our systems. A decrease in the usage of a particular piece of software might tell us that some shadow IT has replaced it or that a business unit has changed its processes; either way, it needs investigation.

Since the start of 2020 we have changed a lot of our working practices, and the fundamental reliance on good IT tools and processes has never been more obvious. So, we should take care that we are measuring those things that help us to make better decisions, that are aligned to our organisational outcomes and show the true value of the service an IT team provides – and be careful that we are not just weighing the pig.
