In 1995, Netscape's explosive IPO [1] signaled that the web wasn't a passing trend. It would reshape how businesses operate.
Mainframes, the backbone of enterprise computing, couldn't meet the web's always-on demands. Early adopters like American Airlines [2] and Wells Fargo [3] took a hybrid path, layering web capabilities on top of existing systems to launch digital services fast without risking core operations. Today, AI is applying the same pressure to the cloud. The lesson still stands: preserve what works, layer what's missing, design for evolution.
Traditional cloud infrastructure wasn't built for AI's demands. Most web-era cloud patterns work well for typical interactions that can tolerate hundreds of milliseconds. AI raises the bar in the moments that win or lose customers: fraud checks, recommendations, and agent assist, where near-instant responses matter.
The mismatch is expensive and getting worse. JPMorgan Chase found that AI demand was constraining data center supply chains, forcing the bank to source compute resources five to ten years in advance; the bottleneck is infrastructure, not AI itself [4].
Latency kills competitive advantage. Streaming services lose viewers when recommendations take seconds instead of milliseconds. Movoto [5], an online real estate platform, exemplifies the problem: the site crashed every spring, just when its AI recommendation engine was most critical, and latency drove customers away during peak traffic.
Data movement becomes an expensive bottleneck. Autonomous vehicles generate terabytes of sensor data hourly, but traditional cloud systems require moving all that information to central processing centers. Companies discover they spend more on shuttling data than on analysis.
Everything competes for the same resources. Retailers running AI-powered demand forecasting during peak periods find their machine learning models competing with customer-facing systems for computing power. Both suffer when sales volume is highest.
These constraints demand a response, but rebuilding from scratch is precisely the trap that destroyed companies during the web transition. Those who survived didn't abandon their mainframes; they evolved them.
There are two ways to respond. One is a multiyear rebuild that promises a clean slate but often delays value and risks stability. The other is refactoring: reshaping what you already have into a more modular, adaptable foundation, then adding AI capabilities step by step. Refactoring is faster, cheaper, and lower risk.
What does “modular” mean here? Think of your technology estate as building blocks with clear handoffs. A fraud‑scoring block. A search and recommendations block. A customer service assistant block. Each block has a defined purpose, reliable performance, and a way to upgrade without disturbing everything around it. If a new model outperforms the old one, you can swap it into the fraud block without rewriting checkout. If personalization needs to move closer to where the customer is, you can do that without moving the whole e‑commerce platform. The goal is to make improvement routine.
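The swap-without-rewriting idea can be sketched in code. This is a minimal illustration, not a prescribed implementation: the `FraudScorer` interface, both scorer classes, and the threshold values are invented for the example.

```python
from typing import Protocol


class FraudScorer(Protocol):
    """The block's contract: one clearly defined handoff."""
    def score(self, transaction: dict) -> float: ...


class RulesScorer:
    """Legacy implementation: a simple threshold rule (illustrative)."""
    def score(self, transaction: dict) -> float:
        return 0.9 if transaction["amount"] > 10_000 else 0.1


class ModelScorer:
    """Newer model-backed implementation behind the same interface.
    A stand-in calculation replaces a real model call here."""
    def score(self, transaction: dict) -> float:
        return min(1.0, transaction["amount"] / 20_000)


class Checkout:
    """Checkout depends only on the FraudScorer interface,
    never on a concrete implementation."""
    def __init__(self, scorer: FraudScorer) -> None:
        self.scorer = scorer

    def approve(self, transaction: dict) -> bool:
        # Decline anything the fraud block scores as high risk.
        return self.scorer.score(transaction) < 0.8


checkout = Checkout(RulesScorer())
print(checkout.approve({"amount": 500}))   # True

# Swap in the better model without touching checkout logic:
checkout.scorer = ModelScorer()
print(checkout.approve({"amount": 500}))   # True
```

Because `Checkout` only knows the interface, upgrading the fraud block is a one-line change, which is exactly what makes improvement routine.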

Gartner [6] reports that organizations adopting composable architectures are seeing 16% revenue gains and 22% productivity improvements. The firm predicts that by 2026, every major cloud provider will offer modular infrastructure components, essentially an app store for enterprise cloud capabilities. Beazley Insurance [7] proved the model: 95% automation in just 12 weeks, achieved by refactoring its architecture to compose an AI layer onto stable foundations rather than rebuilding from scratch.
Refactoring carries its own constraints. Payment systems must keep running while you refactor, and customer data can't disappear mid-migration. You need architects who can decompose systems without destroying them, and integration gets messy before it gets clean. Even so, the investment and risk are a fraction of a complete rebuild.
Early adopters refactoring their cloud for composability deploy AI capabilities in weeks instead of years, with failures that stay contained instead of cascading. Movoto proved the approach works: after refactoring to a modular AWS architecture, the company achieved a 200% increase in conversion rates and a 50% surge in site traffic without touching existing integrations. Refactored architectures also integrate cleanly with partner systems, clouds, and data sources, which matters as AI capabilities increasingly come from specialized providers.
Today's refactoring efforts point toward something larger emerging.
Infrastructure becomes invisible. Instead of managing servers and cloud contracts, you'll buy outcomes: fraud decisions, personalization, and predictions as services aligned directly to KPIs. Pay-per-decision pricing replaces capacity planning. Technology complexity disappears into the business model.
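What pay-per-decision consumption could look like can be sketched briefly. Everything here is hypothetical: `DecisionService`, its pricing, and the decision rule are invented for illustration and do not describe any real provider's API.

```python
class DecisionService:
    """Hypothetical outcome-as-a-service client: the buyer pays per
    fraud decision, not per server, so cost tracks usage directly."""

    PRICE_MICRODOLLARS = 2_000  # $0.002 per decision (illustrative rate)

    def __init__(self) -> None:
        self.decisions = 0

    def decide(self, transaction: dict) -> str:
        """Return an approve/deny outcome and meter the call."""
        self.decisions += 1
        return "deny" if transaction["amount"] > 10_000 else "approve"

    def invoice_microdollars(self) -> int:
        # Billing is a pure function of usage; no capacity planning.
        return self.decisions * self.PRICE_MICRODOLLARS


svc = DecisionService()
outcomes = [svc.decide({"amount": a}) for a in (120, 15_000, 480)]
print(outcomes)                    # ['approve', 'deny', 'approve']
print(svc.invoice_microdollars())  # 6000
```

The point of the sketch is the shape of the contract: the buyer sees outcomes and a metered bill, and the provider's infrastructure is invisible behind the interface.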
This shift fundamentally changes competitive dynamics. When anyone can access enterprise-grade AI capabilities on demand, scale stops being an advantage. Speed and orchestration become everything. The winners won't be those with the biggest infrastructure budgets but those who compose capabilities that meet market needs the fastest.
The lesson from the mainframe era still applies: preserve what works, evolve what's needed. But the AI transformation compresses decades into quarters. While rebuilders chase perfection, refactoring leaders are already on their third iteration, each informed by real customer feedback and market response.
Your architecture is no longer just infrastructure. It's the speed at which you can respond to opportunity. In a world where AI capabilities compound monthly, that speed differential defines who leads and who follows. The path forward is clear: refactor to compete today, not rebuild for an imagined tomorrow.
References
1. Lashinsky, A. (2015, August 9). Netscape IPO 20-year anniversary: Read Fortune's 2005 oral history of the birth of the web. Fortune. https://fortune.com/2015/08/09/remembering-netscape/
2. Wikipedia contributors. (2024, December 17). Sabre (travel reservation system). Wikipedia. https://en.wikipedia.org/wiki/Sabre_(travel_reservation_system)
3. Wells Fargo. (2023, October 26). First in online banking. Wells Fargo History. https://history.wf.com/first-in-online-banking/
4. Niiler, E. (2025, January 30). How JPMorgan Chase's infrastructure chief keeps the AI engine humming. Banking Dive. https://www.bankingdive.com/news/jpmorgan-chase-infrastructure-cio-ai-compute-strategy/738754/
5. Encora. (n.d.). Movoto: Modernizing the real estate platform by migrating core infrastructure and re-architecting systems. Encora Success Stories. https://www.encora.com/success/movoto-modernizing-the-real-estate-platform-by-migrating-core-infrastructure-and-re-architecting-systems
6. Gartner, Inc. (2023, January 17). Predicts 2023: Composable applications accelerate business innovation [Research document]. https://www.gartner.com/en/documents/4023010
7. Encora. (2024, November 19). Beazley: Transforming technology infrastructure and data capabilities for modern insurance. Encora Success Stories. https://www.encora.com/success/beazley-transforming-technology-infrastructure-and-data-capabilities-for-modern-insurance