Technology Infrastructure: Modern Architecture for Better Cities
Imagine you are the global head of treasury, managing funding for a trillion-dollar balance sheet. Your team might receive 20 different middle-office P&L reports and 20 different risk reports each day, covering dozens of markets around the world.
Just as cities run on physical infrastructure, sell-side institutions need common utilities to be cohesive. For banks, unifying data from individual neighborhoods, like lines of business or desks, depends on shared standards and normalized information. Platforms, systems, and architectures must come together to process trades, manage risk, handle settlement, and serve clients.
That vision often collides with a harsh reality. Banks are operating the equivalent of a mid-century electrical grid and aging rail lines while trying to serve a 21st-century metropolis. They have been built on on-prem, server-based, and mainframe-based technologies over the last 40 or 50 years. M&A has compounded that brittle legacy estate, bolting on acquired systems with minimal or no integration.
The scale of the challenge
Part of the value of looking at transformation through a bank-wide transformation office is seeing a scale that leaders in operational roles rarely see. Large financial institutions can have hundreds or even thousands of systems supporting their capital markets activities. That situation continues to spiral, with technology costs growing roughly four times faster than revenue, according to Accenture.[i]
On one hand, these technologies still have utility. They have staying power because they provide an edge that allows a business unit to operate and drive P&L. Even mainframes justify their survival because they do what business users need and preserve critical data going back, in many cases, to the 1980s. When something supports a trillion-dollar balance sheet, making the case for change takes a strong will.
On the other hand, there is significant functional overlap among all these systems. Each carries its own views and models for client records, transaction data, and lifecycle events across front-, middle-, and back-office platforms. The fragmentation makes seemingly easy questions hard to answer, such as producing a single enterprise view of a client across the bank.
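To make that fragmentation concrete, consider a minimal sketch of reconciling client records from two desk-level systems into one enterprise view, keyed on a shared identifier such as an LEI. All field names and sample records here are hypothetical, not a description of any particular bank's systems:

```python
from collections import defaultdict

# Hypothetical desk-level extracts: each system holds its own partial
# view of the same client, under its own field conventions.
equities_clients = [
    {"lei": "5493001KJTIIGC8Y1R12", "name": "Acme Capital LP", "equity_limit": 50_000_000},
]
fx_clients = [
    {"lei": "5493001KJTIIGC8Y1R12", "legal_name": "ACME CAPITAL LP", "settlement_ccy": "USD"},
]

def unify_clients(*sources):
    """Merge per-desk records into one enterprise record per LEI.

    Later sources only fill in fields the earlier ones lack; a field
    already present is never silently overwritten, so conflicting
    values surface under their distinct source keys for review.
    """
    unified = defaultdict(dict)
    for source in sources:
        for record in source:
            target = unified[record["lei"]]
            for key, value in record.items():
                target.setdefault(key, value)
    return dict(unified)

enterprise_view = unify_clients(equities_clients, fx_clients)
```

Even this toy version shows why the problem is hard in practice: without a shared identifier and agreed field semantics, there is nothing to merge on, which is exactly the role common standards play.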
Future readiness compounds the sense that change needs to happen, even when as-is systems seem good enough for business as usual. When sell-side firms speak with us about the future, they understand that the infrastructure that served them for the last 40 or 50 years doesn't necessarily help growth in the next 40 or 50 years. Their current model leaves them poorly prepared to support innovation in areas such as digital assets, tokenization, and AI.
It also locks up resources and slows down performance. If you're still running a vintage mainframe, you may have a team of 30-40 people consuming maintenance spend. It also keeps manual, legacy processes in place because that’s what the technology allows. Meanwhile, buy-side institutions and investors are expecting real-time capability.
"'Run the bank' and 'mandatory change' spending often represents up to 70 percent of [banks'] technology budgets. These categories include infrastructure hardware and software, IT operations, regulatory compliance, and other types of unavoidable spending, leaving only limited capacity for investments that can drive competitive differentiation." — McKinsey & Co.[ii]
Common ground and common utilities
One of the most valuable discoveries in bank-wide transformation happens when leaders look across desks and asset classes for the first time. We hear similar "aha" moments from many large-bank executives. As they look from equities to credit to rates to FX to repo desks to structuring desks, they find more commonalities than differences. Foundational concepts like defining a security or a trade, execution and settlement, or back-office operations are essentially the same.
That realization is the starting point for platform unification. Yes, each asset class has genuine intricacies, but when every desk builds its own systems for trading, risk management, compliance, settlement, and client reporting, stakeholders sometimes fixate on differences that seem bigger than they are. The first job of a transformation leader is to separate nuance from superficial variation and find a unified way to handle commonalities.
The impact of fragmented commonalities compounds over time. As institutions onboard new clients, product lines, or tradeable assets, they often lack unified systems that can handle booking, risk, compliance, and reporting seamlessly. In 2024, banks and insurers named legacy systems that impede the integration of siloed data (71%) and lackluster data quality, including incorrect and missing information (69%), among their top three concerns in handling financial data.[iii]
While achieving complete unification across every asset class and function is unrealistic, pursuing it surfaces opportunities that a desk-by-desk approach never reveals. Think about a city that builds a separate water treatment plant for every neighborhood. It wastes resources and creates maintenance nightmares. Shared utility infrastructure means common pipes, consistent pressure and quality, and reliable service, with each neighborhood free to build what makes it distinctive on top of that foundation.
That mindset is essential. But it needs a technology layer that can deliver on the promise.
Cloud as the new grid
That technology is cloud, but it requires the discipline of long-term planning. We strongly believe that a cloud-native, scalable architecture with standardized data and easy integration is the only way to enable 20-30 years of growth.
That multi-decade time horizon needs to overcome current limitations while offering room for future changes that may not have emerged yet. It’s a central principle of transformation that it never ends because markets continue to evolve. No transformation plan survives contact with the market. If banks can achieve 90% of what's in the plan today, they will face another set of challenges within a few years. You must think big even while you start small.
“Cloud use is increasing. Respondents are deploying an average of 34% of their IT and data budgets on cloud services, while 87% have increased their cloud investment strategy over the past two years to invest somewhat or much more in the cloud.” — LSEG[iv]
The straight-through paradox
Every bank wants straight-through processing (STP). Buy-side clients expect speed, but manual workflows can’t deliver it. And STP is a prerequisite for AI-enabled operations, which cannot function on top of fragmented, human-dependent processes.
However, it also creates a specific vulnerability. The more you automate a process, the faster you lose the people who understand it. Events like an AWS outage, a clearinghouse failure, or a counterparty default demand operators who can trace the full lifecycle from funding through settlement across multiple technology platforms.
When everything runs straight through, that knowledge can atrophy. STP helps banks stay competitive, but they also need institutional knowledge to survive the moments when STP fails. It’s harder to achieve that when systems are scattered across organizational boundaries.
The dynamics of STP apply to any modernized use case. The question is whether the architecture a bank builds can absorb those challenges or whether it must start over.
That is the argument for building transition layers during migration: extracting and normalizing data from existing systems while feeding it to the critical upstream use cases that cannot go dark, such as risk systems, regulatory reporting, funding, central treasury, and compliance. It is equivalent to building new subway lines while old ones carry passengers.
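One way to picture such a transition layer is as a set of adapters that extract trades from each legacy format and emit a single canonical record for downstream consumers. Everything in this sketch (the fixed-width layout, the JSON field names, the consumer list) is illustrative, assumed for the example rather than taken from any real system:

```python
from dataclasses import dataclass

@dataclass
class CanonicalTrade:
    """One normalized trade record, regardless of the source system."""
    trade_id: str
    instrument: str
    quantity: float
    price: float

def from_mainframe_row(row: str) -> CanonicalTrade:
    # Hypothetical fixed-width mainframe extract:
    # id (8 chars) | instrument (12) | quantity (10) | price (12)
    return CanonicalTrade(
        trade_id=row[0:8].strip(),
        instrument=row[8:20].strip(),
        quantity=float(row[20:30]),
        price=float(row[30:42]),
    )

def from_desk_json(msg: dict) -> CanonicalTrade:
    # Hypothetical desk system that already speaks JSON, under its own names.
    return CanonicalTrade(
        trade_id=msg["tradeRef"],
        instrument=msg["sym"],
        quantity=float(msg["qty"]),
        price=float(msg["px"]),
    )

def publish(trades, consumers):
    # Fan the normalized stream out to the consumers that cannot go
    # dark during migration: risk, regulatory reporting, funding,
    # treasury, compliance (each modeled here as a simple callable).
    for trade in trades:
        for consume in consumers:
            consume(trade)
```

The point of the pattern is that risk and reporting code only ever sees `CanonicalTrade`; legacy systems can then be retired one adapter at a time without the downstream consumers noticing.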
Future capacity
Banks that design each incremental initiative on a cloud foundation, with scalability in mind, end up in a fundamentally different position. They balance flexible architecture and integration between legacy and newer technologies. When the next wave of change arrives, whether it's a new asset class, a regulatory shift, or a capability that doesn't exist yet, they have a foundation that can accommodate it. That same foundation creates the environment where AI and agentic tools can operate.
Authored By
Ted O’Connor
Ted is a Senior Vice President focused on Business Development at Arcesium. In this role, Ted works with leading financial institutions in the capital markets to optimize data, technology, and operational needs.
[i] Accenture, 2026. https://www.accenture.com/content/dam/accenture/final/industry/banking/document/Banking-Top-Trends-FY26-Report-Final.pdf
[ii] McKinsey & Co, 2024. https://www.mckinsey.com/industries/financial-services/our-insights/unlocking-value-from-technology-in-banking-an-investor-lens
[iii] Capgemini, 2024. https://www.capgemini.com/news/press-releases/majority-of-banks-and-insurers-struggle-to-maximize-the-value-fromtheir-cloud-investments/
[iv] LSEG, 2025. https://www.lseg.com/content/dam/lseg/en_us/documents/gated/data-analytics/lseg-cloud-survey-report.pdf