by Jo Pitt, CIPFA Local Government Technical Manager and Robert Marcus, Quadnet
Someone once said 'there is something of a dark art about calculating the return on investment in IT'. If true, why is this? And why does it matter?
Although Whitehall’s last spending review didn’t really offer much for local government, there was an expectation that councils should accelerate the digital delivery of local services. And this could be read as yet another opportunity for IT to ‘splash the cash’ in what has become a largely unaccountable way, outside of the tight management controls and cost/benefit analysis that Finance usually puts on expenditure.
If current practice is anything to go by, Finance is too often shying away from examining the detail of IT expenditure, and is not creating the meaningful key performance indicators (KPIs) that the business is used to seeing in management reports. Rather, left to their own techno-babble and cosy relationships with suppliers and outsourcers, IT are being given a ‘free pass’, which too easily results in slow business systems and, as a consequence, a low-productivity workforce.
Three examples of IT failure
Many local authorities have seen well-funded IT services somehow managing to under-deliver. The result of such performance failings is either poor customer service or over-budget expenditure, caused by service heads having to throw human resources at IT problems. For example:
- staff being sent home from a London revenues and benefits team because its business system was too slow to be usable. Customer service plummeted as a result. And staff morale sank when the lost time was taken out of their holiday entitlement
- another council, also in London, had a cabinet member share the fact that staff within children’s services created their own local spreadsheets, because the official computer systems could not be relied upon even to be available, let alone speedy, for the inputting and sharing of important data. Staff productivity suffers most, as data has to be entered and then re-keyed, when a ‘do it once, do it right’ approach should be the target
- a council in south east England with 400 staff in social care, constantly facing delays caused by unresponsive applications. The head of service had to instigate weekend working to clear the resulting backlog, and the consequential overtime resulted in substantial over-spending.
It needn’t be this way. In place of a raft of IT acronyms, and proposals that evoke a complex, distant and opaque service - the ‘dark art’ perhaps - Finance should aim to gain the visibility into IT performance that it needs, backed by a measurable strategy for IT.
Measure the quality, not the width
Getting measurable value from IT expenditure is rarely about ‘width’ (thirty or forty KPIs, or contracts that run to a hundred pages or more); it should be about quality. A few relevant performance measures based around end-user experience are worth many KPIs based on variations of ‘availability’ and ‘up-time’. Indeed, what use is a KPI around server uptime - which is usually achieved 99.99% of the time - when staff are faced with systems so slow as to be unusable? This is why getting visibility into application performance for the people who actually use it is essential.
Measure it, then manage it
To address the problem of IT departmental performance being shrouded in mystery, and so complex that no-one really understands it, measurements that side-step techno-babble and improve business efficiency are what’s needed. For just as Revenue and Benefits departments are rightly measured on claim ‘turnaround time’ rather than ‘the number of cheques issued’, so IT departments could be measured on efficiency rather than the speed with which help-desk calls are answered or the number of help-desk tickets marked ‘closed’.
Here are three efficiency measures that have helped one London council achieve strong management, better budgeting and accurate measurement of its IT investments:
1. Measure the speed of the applications being used
The chart below shows a simple method for driving change over a few months, by measuring the speed of an application, and so enabling it to be managed.
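The measurement itself needn’t be complicated. As a minimal sketch - in which the repeat count and the stand-in transaction are purely illustrative assumptions, not anything prescribed by the article - the idea is simply to time a representative end-user transaction at regular intervals and track the median:

```python
import time
import statistics

def time_transaction(action, repeats=5):
    """Run a representative end-user transaction several times
    and return the median elapsed time in seconds."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        action()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Hypothetical stand-in for a real business-system transaction,
# e.g. opening a case record or running a benefits calculation.
def open_case_record():
    time.sleep(0.01)  # placeholder for the real operation

median_seconds = time_transaction(open_case_record)
print(f"Median transaction time: {median_seconds:.3f}s")
```

Recording a figure like this weekly or monthly turns ‘the system feels slow’ into a trend that can be reported alongside any other KPI.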
2. Survey the results of helpdesk calls, rather than just the number of closed tickets
Often when systems are not working or are slow, telephoning a helpdesk doesn’t help very much. But what if you were to begin measuring the helpdesk by surveying what occurs after the helpdesk call is closed? Then you might see results like this:
3. Pilot the measurement of efficiency gains in a new system, against cost of implementation
Often new applications are financed because they are seen as ‘efficiency improvers’. If that’s the case, then why not gather some evidence for a business case? Perhaps undertake a pilot study of end-user performance before an investment is made, in order to calculate what improvement might look like and at what cost it would come. Does the benefit justify the cost? And if it appears to, compare the captured data against data ‘post-investment’, to objectively understand whether the reality met the promise.
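The arithmetic behind such a business case is straightforward. In this sketch every figure - staff numbers, minutes saved, hourly cost, implementation cost - is invented purely for illustration, not drawn from the examples above:

```python
# Hypothetical pilot figures: all illustrative assumptions.
staff_count = 400            # users of the application
minutes_saved_per_day = 10   # per member of staff, measured in the pilot
working_days_per_year = 220
hourly_staff_cost = 18.0     # GBP, fully loaded

# Value of the time the faster system would release each year.
annual_hours_saved = staff_count * minutes_saved_per_day / 60 * working_days_per_year
annual_benefit = annual_hours_saved * hourly_staff_cost

implementation_cost = 150_000.0  # quoted cost of the new system

# How long before the measured benefit pays back the investment.
payback_years = implementation_cost / annual_benefit
print(f"Annual benefit: £{annual_benefit:,.0f}; payback: {payback_years:.1f} years")
```

Re-running the same calculation with post-investment measurements shows objectively whether the promised efficiency gain actually materialised.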
A relatively simple, three-pronged measurement process, then, gives Finance not only the confidence to engage with IT, but also the means to calculate the value of IT expenditure and even to hold suppliers and outsourcers independently to account.
IT a ‘dark art’?
It doesn’t have to be.