Change stickiness: Is what’s getting measured getting done?
Updated: Sep 28
It seems counter-intuitive to suggest that if you’re leading or facilitating IT-enabled change, you should put more effort into tracking its impact against new measurement systems than into measuring its results against existing business performance measures.
However, this is a conclusion drawn from research with IT leaders and business change agents into making lasting change: change that sticks. They would put more than half their effort (over 60% at higher-complexity organisations) towards setting new bars for performance, as opposed to tracking the change’s impact on common business measures such as revenue, productivity efficiencies, marketing performance and so on.
This is not to take away from keeping a clear line of sight on key measures that are simple for everyone to understand, communicate and act on as a collective. As one business leader pointed out: “Measuring progress against current metrics is a hygiene factor. But if you stay there and only measure what you already know, you risk a Kodak outcome.”
For those climbing the mountain range of digital and transformational change in organisations, this would be like putting less effort into measuring only the miles covered and the elevation gained. Instead, at each 500 metres of height gain, the set of things to measure (speed across the ground, change in temperature, fatigue and oxygen levels for each person on the team) calls for fresh attention and adaptability to conditions. Measures differ greatly between the low-elevation trek into base camp and the last push over a pass, let alone a safe descent and regrouping for the next peak. If it’s the first such trek for the group, it clearly calls for flexibility and the ability to incorporate learning at each major milestone.
Sponsors and agents of IT-enabled change regularly face decisions about justifying, tracking and realising value from investment. The right measures focus attention, drive decisions and galvanise people to get behind the change and take action on the fly, bringing the collective further and faster toward the higher goals.
As organisations adopt and accelerate towards new business models, getting the measures right has never been more important. It’s not just about checking that the measures are still relevant for the business climate, but that they are driving the anticipated changes in behaviour, and that they are supportive of and respectful to the people impacted. As one business leader reminded us: “People don’t like to be measured.”
Three themes for measuring the right things are summarised as:
Keep it simple: focus on a small number of reliable measures and supplement with story-telling
Be clear and transparent about what underlying behaviours you are trying to encourage through the metrics
Be courageous about adapting your measures of progress throughout the change.
Less is more
The right measures get attention; the wrong measures risk distraction. People at large organisations comment that “we measure too many things.” So how does the measure of outcome from one aspect of change stand out from the crowd? Does the voice at the back of the stands need to be heard by each individual player on the ground, or is it best joining the collective chant spurring on the team?
A key reason for putting more effort into new measures and progress over impact is that no change is an island. IT-enabled change programs rarely run in isolation from other projects, and the baseline for measuring their impact is changing all the time. Outlining their contribution to the broader transformation is important, but a program should not be the place where the big-picture measures for the organisation are tracked, or it risks ‘double-dipping’ on benefits or cannibalising the business cases or rationale for other initiatives.
The world of APIs is a great example of where ‘conventional’ change and program justification and tracking are challenging (and therefore hard to manage and propagate) when applying traditional business commercial measures. Many benefits are external to the organisation and beyond current measures, and the tooling, capabilities and APIs-as-products mindset call for different ways of looking at progress.
So the advice is to choose wisely. As one CIO recommended: “Whenever you introduce a new metric, you have to instrument for it.” This is a great test for any measure being used to justify or track change: not only does it need to be automated, which calls for corresponding data-centricity, but it also needs to be sustainable and responsive to further change.
A headline-grabbing measure can quickly become a blunt instrument if its behavioural consequences aren’t understood. Getting under the covers of the organisation’s culture, and of the measures that will drive change, is key.
Emotional connection for measures
We’re conditioned to justify IT-related investment choices and decisions with hard facts and evidence, and to back them up with ongoing updates in the organisation’s traditional model for change. However, as Daniel Kahneman observed, much human decision-making is instinctive and emotional while appearing logical and deliberate. It’s important that those leading IT-enabled change ask hard questions about measures. Are the foundations of the metrics that resonate with the heads and hearts of the stakeholder community solid enough to survive an organisational restructure, a change in underlying technology or an update to business measures?
Take an example like justifying investment in an API developer portal. It is ultimately expected to contribute to the wider commercial and business goals of opening up new markets, getting to new products and shortening end-to-end IT delivery timescales. However, the portal’s specific contribution to any of these is difficult to quantify and isolate from all the related changes going on in the environment, such as continuous delivery and automation, cloud enablement or training in new technical skillsets. More sensible measures of its success would show how it unifies standards, improves API reuse and embeds API security and architectural alignment: effectively governance outcomes, but ones that can be harder to justify. So how do you measure the impact on the developer experience?
One measure could be the number of architectural exemptions relating to the portal’s use, and the associated reasons. Tracking towards a downward trend in exemptions (those relating to API development outside the portal) allows focus on what additional features and capability need to be built into the portal to increase adoption. As these features offer more embedded development standards, such as external-ready security or performance monitoring, take-up becomes broader, creating a virtuous circle of API reuse and ultimately faster delivery.
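To make the idea concrete, here is a minimal sketch of what instrumenting such an exemption measure could look like. This is illustrative only (the `Exemption` record, its fields and the sample reasons are hypothetical, not from the research): it counts exemptions per month and fits a simple least-squares slope, where a negative slope indicates the downward trend the portal team is aiming for.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date


@dataclass
class Exemption:
    """A granted architectural exemption (fields are hypothetical)."""
    granted: date
    reason: str  # e.g. "API built outside the portal"


def monthly_counts(exemptions):
    """Count exemptions per calendar month, keyed like '2024-01'."""
    return Counter(e.granted.strftime("%Y-%m") for e in exemptions)


def trend_slope(counts):
    """Least-squares slope of monthly counts in chronological order.

    Negative means exemptions are trending down. Needs at least
    two months of data, otherwise the slope is undefined.
    """
    ys = [counts[m] for m in sorted(counts)]
    n = len(ys)
    if n < 2:
        raise ValueError("need at least two months of data")
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den
```

In practice the counts would be fed automatically from the architecture review tooling (the “instrument for it” point above), and the reasons bucketed to show which missing portal features drive the exemptions.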
It ain't what you do, it’s the way that you do it
Wide-scale transformation and change agendas invariably include goals around responding better to change. This is another reason why participants in our survey put more than half their effort towards new bar-setting: the process of transformation itself means progress and efforts will evolve, and new outcomes are revealed over time.
One senior technologist pointed out that if your program or change is truly transformational, then all your effort should go into setting new measures and automating for those; if not, you risk holding back your progress by measuring success only against measures you’re already familiar with.
A CIO leading change at a medium-complexity organisation pointed out, however, the realities of making change stick and avoiding being seen to hide something or to create new measures that aren’t understood: “You still need to build confidence in what you’re doing...It’s about balancing between achieving change and helping people to adjust.”
In summary, is it time to rethink some of our assumptions about where to put effort in justifying and tracking IT-enabled change? In a rapidly changing environment with unpredictable external factors, the best return may come from focusing measurement effort on the small evolutionary changes and on the pace at which the wider organisation can absorb change. The macro business performance outcomes will speak for themselves.
Claire is based in the UK helping organisations with making their IT- and API-enabled strategies happen. All opinions provided in this article are her own. Special thanks to Mehdi Medjaoui for his input and contribution. If you would like to learn more about how to set up the right measures for API success, please get in touch via LinkedIn direct message.
 Survey participants included: CIOs, experienced IT consultants, senior technologists and tech-savvy business leaders from industries such as: banking, insurance, retail, aviation, education and manufacturing; and in Australia, France, UK and Middle East.
 Higher complexity = IT function with thousands of employees and hundreds of millions of dollars / euros annual investment in tech-enabled change. Lower complexity = IT function with hundreds of employees and tens of millions of dollars / euros annual investment in tech-enabled change.
 Daniel Kahneman’s Thinking, Fast and Slow (2011)