Why don't we measure Return on Investment properly?
Every business worries about and plans for Return on Investment in its projects. IT departments talk about it constantly, since we are often seen as a pure cost and have to work hard to justify our budgets. But if we are honest with ourselves, we put far more thought into it at the business-case stage than we do in operation.
Every project I have been involved with has presented a business case that claimed x, y or z amount of savings to the business if we proceed with this or that brilliant idea. I have hardly ever seen an IT project challenged a year after closure to show that it has achieved its claims, or is on track to do so. Applications and infrastructure get deployed, new laptops get bought, networks get upgraded and no one comes back to the architects and leaders involved afterwards. That’s odd, isn’t it?
I think the reasons for it fall into three basic categories:
Use of Contractors
More and more, corporates are using contractors in the IT department to fulfil their needs. There are lots of good reasons to use contractors: it gets costs off the employment column on the accounts, it makes your labour force more flexible, and it means you only pay for what you use - not for holidays, sickness and so on. I understand all of that, even if I think it smacks a little of predatory employment, and it's a short step from there to Uber-style gig-economy working. But it is common and lots of people like it, so who am I to comment?
Reflect on your career though - how many times have you seen a project given to a contract project manager and a talented group of technical contractors to deliver? All of these people do a good, or not so good, job delivering to their brief and then they disappear. The people who delivered the project are not around to reflect on it a year later. Very often, these teams don’t look to sustainability of the project for the very understandable reason that they are not getting paid for that. They are getting paid to ram something over a line on an arbitrary date.
I am not suggesting that contractors don’t take pride in their work or deliver good product. But I am saying that corporates are not very good at incentivising them to do so. Contractor incentives are too often all about time and cost with little or no reward (or penalty) for quality. And since the project team won’t be around to reflect on the Return on Investment in a year, they don’t give any thought to how it could be measured.
Accounting Practices
Another area that discourages decent analysis of RoI is the way that publicly traded companies do their accounting. I cannot count the times senior management, my own leadership, have told me that the value of x doesn't matter because it was funded entirely from last year's operational cost, or because the capital cost was baked into the budget and is therefore a fixed cost.
Just because something is a fixed cost, that doesn’t mean there is no point understanding its value. It shocks me that this is a controversial statement.
Again, this is a matter of responding to incentives. No incentive structure can reward reducing fixed costs or removing cost from past years - restating what has already been booked is fraud, so no-one ethical tries. As a result, the incentives are all about reducing cost or delivering value this year. Showing this year that what you did last year actually worked, while very nice, doesn't change the value of the business. And since we are all working toward the mid-year and annual reports, no-one is interested. We move on and forget what we did.
The question “How does that help the share price?” is a valid one, but when it excludes assessing the true value of what you have it becomes detrimental to the quality of project delivery and technology investment.
Measuring Value
Let’s talk for a bit about how to measure the value of an application we have deployed. It doesn’t matter particularly which one, but let’s say it’s a CRM. How do we measure the value of that?
Process efficiency is often talked about. There are all kinds of disciplines for measuring that efficiency, allowing us to say that executing a given process (logging a new lead, issuing a quote, etc.) takes less time, requires fewer clicks, can be done at a lower skill level, whatever it may be. But in reality we tend to measure just before the project, in a controlled environment, and perhaps just after the project, in a controlled environment. So we compare idealised transactions which never happen in the real world, apply that comparison to all transactions and claim we have hit our target.
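One way to avoid comparing idealised transactions is to measure real ones, continuously. A minimal sketch, assuming you can export per-transaction start and completion timestamps from the system's audit log (the sample data and field layout here are invented for illustration):

```python
from datetime import datetime
from statistics import mean, median

# Invented sample of real transaction logs: (started, completed) pairs.
transactions = [
    ("2024-03-01 09:00:00", "2024-03-01 09:04:30"),
    ("2024-03-01 09:10:00", "2024-03-01 09:21:00"),
    ("2024-03-01 10:05:00", "2024-03-01 10:07:45"),
]

def handling_minutes(rows):
    """Elapsed minutes for each real transaction, warts and all."""
    fmt = "%Y-%m-%d %H:%M:%S"
    return [
        (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60
        for start, end in rows
    ]

durations = handling_minutes(transactions)
print(f"mean {mean(durations):.1f} min, median {median(durations):.1f} min")
```

Comparing the same distribution before and after go-live, over the same mix of real work, gets closer to an honest efficiency claim than two controlled demos do.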
Another one we often use is reduced licence costs. This is easier to quantify - I used to pay x, now I pay y, x-y = saving. But again it’s sometimes not as clear as it might seem. For a start, many migrations these days are from perpetual licences hosted on-premise to Software as a Service. So you’ve written off an asset, which may already have been at zero to be fair, and taken on an operating cost. The big selling point with Software as a Service is the flexibility, the ability to tune your licence numbers to suit your need. But there’s often a catch - you can add licences during the term of your contract, but you can only reduce at renewal. And of course, if you sign a three year deal you get better rates than a one year deal. What this means in practice is you have a SaaS licence cost which can only grow and which will take as long to clear off the books as an asset purchase - at the end of which you own nothing. Have you factored that into your business case?
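To make that arithmetic concrete, here is a minimal sketch (every figure is invented for illustration - substitute your own) comparing the cumulative cost of keeping an already written-off perpetual licence on annual maintenance against a three-year SaaS deal whose seat count can grow mid-term but never shrink:

```python
# Hypothetical figures for illustration only.
PERPETUAL_MAINTENANCE = 20_000   # annual support on an already written-off asset
SAAS_SEAT_PRICE = 600            # per seat per year on a three-year deal
SAAS_SEATS = [100, 110, 120]     # seats added mid-term, never reduced

def cumulative_costs(years=3):
    """Return (perpetual, saas) cumulative cost lists, one entry per year."""
    perpetual, saas = [], []
    p_total = s_total = 0
    for year in range(years):
        p_total += PERPETUAL_MAINTENANCE
        s_total += SAAS_SEAT_PRICE * SAAS_SEATS[year]
        perpetual.append(p_total)
        saas.append(s_total)
    return perpetual, saas

perpetual, saas = cumulative_costs()
for year, (p, s) in enumerate(zip(perpetual, saas), start=1):
    print(f"Year {year}: perpetual £{p:,} vs SaaS £{s:,}")
```

Under these made-up numbers the subscription costs more in every year and, at the end of the term, you own nothing - which is exactly the comparison a business case should show, with real figures.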
Infrastructure savings are another loop around the licence discussion. In a SaaS environment you are still using computing infrastructure - it's just someone else's, and they are charging you for it as an operating cost, at the end of which you will own nothing.
I should say here that I am a huge advocate of Software as a Service. It has enormous benefits for businesses large and small. But the sales pitch around it can lead people to write business cases that seem better than they really are.
Functionality benefits are the real reward from SaaS: the ability to do things that were simply not possible previously. Increasingly the services all interconnect, making it possible to build workflows that run from CRM to billing to accounting with little to no coding required and with huge uptime commitments. This is all excellent, but I have never seen a convincing method of measuring the value of the new possibilities against the pre-project status quo, net of the cost of the project. I don't doubt such methods exist, and there is much academic work on the subject. My point is that businesses rarely put them into practice, because we are all too focused on short-term targets and moving on to the next thing.
Overall, I think it is fair to say that far more effort goes into the business case at the start of a project than into any reckoning at the end. There are various numbers from the industry press suggesting how many IT projects fail: 68% (ZDNet, 2009), 'more than half' (CIO Magazine, 2016), 62% (CNET, 2008). These numbers vary in size and age, but they don't paint a good picture of our ability to learn from our mistakes. Perhaps if we spent a bit more time reflecting on past projects we could get the failure rate below 50%.