March 2026 · 8 min read

Our legacy software is holding us back. When is it actually time to replace it?

Every organisation has systems that should have been replaced years ago. The question is rarely whether to replace them; it is when.

You probably already know the answer. The system is slow, vendor support is ending, every new integration needs a workaround, and your team has built an entire shadow infrastructure of spreadsheets and manual processes to compensate for what the system can't do.

The question isn't really "is it time?" — it's "how do I make the case, and how do I do this without it becoming a two-year disaster?"

The scale of the problem

A Stripe and Harris Poll survey found that developers spend an average of 33% of their time dealing with technical debt and legacy systems — time that isn't being spent building new capability. McKinsey research finds that technical debt accounts for 20–40% of the value of organisations' entire technology estate before depreciation — and that most organisations spend the majority of their IT budget maintaining existing systems rather than innovating. These aren't niche findings. They reflect the structural reality of most organisations that have been running technology for more than a decade.

The cost is rarely visible in the accounts. It shows up instead in slow delivery cycles, high integration costs, an inability to adopt new tools, and the accumulated weight of processes that exist only because the system can't do what the business needs.

The signs that are easy to rationalise away

"It still works"

This is the most dangerous one. Legacy systems don't usually fail dramatically — they fail slowly, in ways that become normal. The nightly batch job that occasionally needs a manual restart. The report that takes an hour to run. The new hire who needs three weeks to understand how to use it. None of these feel like crises. Together, they represent a significant drag on productivity and agility that's almost impossible to quantify because it's baked into how everyone works.

McKinsey research on digital transformation found that organisations often underestimate legacy drag by a factor of two or three — not because they're being dishonest, but because the costs are distributed across dozens of small inefficiencies that have become invisible through familiarity.

"We know its quirks"

Institutional knowledge about how to work around a system's limitations is often treated as an asset. It isn't. It's a risk. The people who hold that knowledge leave. The workarounds multiply. And every workaround is a process your organisation is running in parallel with the system it's supposed to replace.

At a legal services firm, we found over forty documented workarounds for a case management system that had been in place for twelve years. Several of them were known only to individuals who had since left the business. The system "worked" — but the true operational cost of running it was invisible in the technology budget.

"The replacement will cost more"

This is often true in year one. It's almost never true over five years, once you account for maintenance costs, licence fees for a system the vendor is winding down, developer time spent on integrations, and the opportunity cost of everything you can't do because the system can't support it.

Gartner's analysis of legacy system costs consistently finds that the total cost of ownership of an ageing system grows at roughly 15% per year as patching costs increase, integration complexity compounds, and the pool of people who can support it shrinks.
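
To see why the year-one and five-year pictures diverge, here is a minimal sketch of the compounding arithmetic. Every figure is hypothetical; substitute your own costs.

```python
# Hypothetical five-year comparison: a legacy system whose total cost
# of ownership grows ~15% a year versus a replacement with a large
# one-off migration cost and flatter running costs. All figures are
# illustrative assumptions, not benchmarks.

legacy_annual = 400_000       # year-one TCO of the legacy system (assumed)
growth = 0.15                 # ~15% annual cost growth, per the pattern above

migration_cost = 600_000      # one-off replacement programme cost (assumed)
replacement_annual = 250_000  # steady-state annual cost of the new system

legacy_total = sum(legacy_annual * (1 + growth) ** year for year in range(5))
replacement_total = migration_cost + replacement_annual * 5

print(f"Year one   - legacy: £{legacy_annual:,.0f}, "
      f"replacement: £{migration_cost + replacement_annual:,.0f}")
print(f"Five years - legacy: £{legacy_total:,.0f}, "
      f"replacement: £{replacement_total:,.0f}")
```

With these particular figures, the replacement costs roughly twice as much in year one and about £850,000 less over five years. The crossover point moves with your inputs; the shape of the curve rarely does.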

"We can't afford the disruption"

The disruption of a planned migration is finite and manageable. The disruption of an unplanned failure, a security incident, or a forced migration at vendor end-of-life is not.

The National Cyber Security Centre notes that legacy systems represent a disproportionate share of security incidents — not because they're targeted specifically, but because they're less likely to receive timely security patches and more likely to rely on deprecated protocols and configurations.

The real blocker: making the case

The technical case for replacing a legacy system is usually obvious to the people who work with it. The financial case — the one that moves budget — is harder to construct.

A credible business case usually includes:

Total cost of ownership, honestly calculated. Not just the licence fee, but: internal resource time spent on maintenance and workarounds, third-party integration costs, any specialist support for ageing technology, and the cost of manual processes that exist because the system can't automate them. The Stripe/Harris Poll data (33% of developer time on technical debt) is a useful benchmark for challenging assumptions about what the existing system actually costs; the sketch after this list shows one way to rough the number out.

Opportunity cost. What can't you do right now because this system is in the way? New business processes, AI initiatives, customer experience improvements — what's on the roadmap that's blocked?

Risk quantification. What happens if the vendor ends support? What's the exposure if the system fails? What's the security posture of a system that may not be receiving regular patches?

A realistic migration estimate. The case gets more credible when the proposed alternative is properly scoped — not just "modern cloud platform" but a specific architecture, realistic timeline, and phased approach that de-risks the transition.
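
To put numbers on the first of those items, here is a minimal sketch of an honest annual TCO calculation, as referenced above. Every figure is a placeholder; the value is in forcing each hidden cost category onto the page.

```python
# A sketch of an "honest" annual TCO for the existing system.
# All figures below are placeholder assumptions; the categories
# are the point.

developers = 6
avg_loaded_cost = 95_000   # fully loaded annual cost per developer (assumed)
tech_debt_share = 0.33     # Stripe/Harris Poll benchmark cited above

annual_costs = {
    "licence and vendor support": 120_000,
    "specialist support for ageing tech": 45_000,
    "developer time on debt and workarounds": developers * avg_loaded_cost * tech_debt_share,
    "manual processes the system can't automate": 80_000,
}

for item, cost in annual_costs.items():
    print(f"{item:<45} £{cost:,.0f}")
print(f"{'honest annual TCO':<45} £{sum(annual_costs.values()):,.0f}")
```

Most of these numbers won't exist in the accounts, which is precisely why the exercise is worth doing.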

The AI connection most people miss

There's a reason addressing legacy systems is becoming more urgent, beyond the usual arguments. AI and data capabilities — the things that will define competitive advantage over the next decade — almost universally require modern, accessible, well-structured data.

Legacy systems trap data in formats and architectures that AI tools can't easily use. IBM's research on AI adoption found that poor data quality and inaccessible data are cited as the top barriers to AI implementation — ahead of skills gaps, cost, and governance concerns. This is the less-visible cost of legacy systems: it's not just that they're slow or expensive to maintain. It's that every month they remain in place is another month your AI ambitions are blocked at the data layer.

Here's the shift worth understanding. For most organisations, genuine AI capability used to be out of reach — not because the technology didn't exist, but because building it required modern infrastructure, clean data, and skilled teams that only large enterprises could afford. Legacy systems were one of the structural barriers that kept that advantage with the big players.

Modern cloud architecture, combined with cloud co-funding programmes that cover significant portions of migration costs, has fundamentally changed that equation. A well-executed migration to a modern, AI-ready architecture opens up capabilities — predictive analytics, intelligent automation, real-time data — that weren't economically accessible before. The migration cost that used to be a multi-year commitment can now be phased, funded in part by cloud partners, and delivered in a fraction of the time.

The organisations moving now aren't just fixing an IT problem. They're removing the last structural barrier between where they are and the AI capability that will define competitive advantage in their sector.

How to approach the migration itself

Phase it. A big-bang replacement of a critical system is high risk. Most successful migrations move workloads incrementally, running old and new systems in parallel and validating data integrity at each stage before cutting over. McKinsey's analysis of large-scale digital transformations finds that phased approaches with defined checkpoints significantly outperform attempts at wholesale replacement in a single step.
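
For illustration, here is a minimal sketch of the kind of reconciliation check a parallel run depends on, assuming both systems can export comparable record sets keyed on an id field (the extract shape and the field name are assumptions to adapt to your own systems):

```python
# A minimal parallel-run reconciliation: compare record counts and
# per-record fingerprints between the legacy system and its
# replacement before cutting a workload over.

import hashlib

def record_fingerprint(record: dict) -> str:
    """Stable hash of a record's fields, independent of field order."""
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(canonical.encode()).hexdigest()

def reconcile(legacy_records: list[dict], new_records: list[dict]) -> bool:
    """Report gaps and mismatches; True only if the systems agree exactly."""
    legacy = {r["id"]: record_fingerprint(r) for r in legacy_records}
    new = {r["id"]: record_fingerprint(r) for r in new_records}

    missing = legacy.keys() - new.keys()    # in legacy, absent from new
    extra = new.keys() - legacy.keys()      # in new, absent from legacy
    mismatched = [k for k in legacy.keys() & new.keys() if legacy[k] != new[k]]

    print(f"missing from new system:  {len(missing)}")
    print(f"unexpected in new system: {len(extra)}")
    print(f"field-level mismatches:   {len(mismatched)}")
    return not (missing or extra or mismatched)
```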

Start with the data. Before migrating functionality, understand the data. What's there, how clean is it, what needs remediation? Data quality problems discovered mid-migration are the most common cause of delays and cost overruns. Gartner estimates that poor data quality costs organisations an average of $12.9 million per year, and that number compounds when data quality issues surface during a migration under time pressure.
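
A first pass at that understanding can be cheap. The sketch below profiles a flat export with pandas; the file name and the business key are placeholders for your own data:

```python
# A quick pre-migration data profile. "legacy_export.csv" and
# "customer_id" are placeholders for your own export and business key.

import pandas as pd

df = pd.read_csv("legacy_export.csv")

profile = pd.DataFrame({
    "null_rate": df.isna().mean(),     # share of missing values per column
    "distinct_values": df.nunique(),   # cardinality per column
    "dtype": df.dtypes.astype(str),    # inferred type per column
})
print(profile.sort_values("null_rate", ascending=False))

# Duplicates on the business key are a classic remediation item.
print("duplicate key rows:", df.duplicated(subset=["customer_id"]).sum())
```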

Define what success looks like before you start. A migration that goes "smoothly" is one where the definition of done was agreed upfront — data integrity validation, performance benchmarks, user acceptance criteria. Without these, the project never ends.
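
One way to keep that definition honest is to encode it as explicit, checkable criteria rather than prose, for example:

```python
# A definition of done expressed as checkable criteria rather than
# prose. Metric names and thresholds are placeholders to agree with
# stakeholders before the migration starts.

cutover_criteria = {
    "no records missing or mismatched": lambda m: m["reconciliation_failures"] == 0,
    "p95 report latency under 30s": lambda m: m["p95_report_latency_s"] <= 30,
    "all UAT scenarios signed off": lambda m: m["uat_passed"] == m["uat_total"],
}

metrics = {  # gathered from the parallel run and user acceptance testing
    "reconciliation_failures": 0,
    "p95_report_latency_s": 22,
    "uat_passed": 41,
    "uat_total": 41,
}

for name, check in cutover_criteria.items():
    print(f"{'PASS' if check(metrics) else 'FAIL'}  {name}")
```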

Plan for knowledge transfer. If you're bringing in external help, the engagement should end with your team understanding the new system deeply enough to own and extend it. A migration that leaves you dependent on an external party is a different kind of legacy problem.

When it's not time yet

Sometimes the honest answer is that the timing isn't right. If the business is in a period of significant operational change, if there's no budget certainty for the next 12 months, or if the people needed to make a migration succeed are already at capacity — forcing a major system replacement through anyway is how projects fail.

Better to define the case clearly, sequence it properly, and execute it well than to start a migration the organisation isn't ready to land.


If you're trying to build the case for a legacy replacement — or work out whether your systems are holding back your AI and data ambitions — our Cloud Migration Assessment will give you the architecture options, cost comparison, and business case framework to make a confident decision.

Talk to us about your legacy systems →


Sources

  • Stripe and Harris Poll, The Developer Coefficient, 2018. Survey of 500 CTOs and senior engineering leaders on time allocation and technical debt burden.
  • McKinsey & Company, Tech debt: Reclaiming tech equity, 2020. Research on the scale of technical debt across enterprises, based on interviews with CIOs estimating tech debt at 20–40% of total technology estate value.
  • McKinsey Global Institute, Unlocking Success in Digital Transformations, 2018. Research on digital transformation success factors and failure modes.
  • Gartner, ongoing analyses of legacy system total cost of ownership and the cost of poor data quality, cited above for the ~15% annual cost growth pattern and the $12.9 million per year data quality estimate.
  • National Cyber Security Centre (NCSC), Vulnerability Management, updated 2024. UK government guidance on managing security risk from unpatched and unsupported software.
  • IBM Institute for Business Value, Research reports, ongoing. Research on data quality, AI adoption, and the operational cost of poor data management across global organisations.

About the authors

Daren Howell

Founder, CrewCreateAI

20+ years delivering AI and data programmes for global publishers, financial services firms, travel operators, and consumer brands. I've inherited more legacy systems than I can count — and led the programmes that replaced them.

CrewMate

AI Research Agent, CrewCreateAI

CrewMate draws on published research, technology documentation, industry analysis, and publicly available case studies to help identify patterns and strengthen every post.

Want to talk through what this means for your business?

Our AI Opportunity Scan is free. In a few hours we'll give you a clear picture of where to focus and what to do first.

Book Your Free Scan