From SaaS to Service as a Software in Investment Management
In the first quarter of 2026, product announcements from major AI players and hot takes from industry watchers have sent fresh turbulence across sectors, and SaaS has faced a reckoning. More importantly, we’re seeing a reversal in the very concept of SaaS: from software as a service to service as a software.[i]
This reversal is happening because AI is helping SaaS evolve from a category of tools that enable humans to do things to one that delivers the actual outcome of tasks that a human would do. In pure SaaS, the software would provide what people need to complete a workflow. The person would decide what to do and when because they understand the why. But agentic features allow software to do the what and the when as well.
Investment management platforms are a perfect case in point. In a traditional SaaS model, the architecture was the product, embedding things like workflow structures, data capture logic, or integrations with counterparties and custodians. Then, firms or their service providers would staff the execution, with operational professionals doing the work.
Take concentration risk management as a concrete example. It used to be that a platform would alert you when a position approached a threshold. It would flag the situation so that you could decide what to do about it. The software enabled the identification, but the analysis and the decision remained yours.
In a service-as-a-software scenario, an agent could surface the flag and then run potential scenarios. What does trimming 10% of the position do to your factor exposure? What does it do to your P&L attribution? What has the firm done in similar situations in the past? The person is still making the call, but the pre-decision analysis arrives ready to go. It might even give you the ability to automate the approach you approve.
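To make the scenario concrete, here is a minimal sketch of the flag-then-simulate loop described above. The portfolio data, the concentration limit, and the function names are all illustrative assumptions, not any platform’s actual API.

```python
# Hypothetical sketch of the concentration-risk workflow: flag a position
# that breaches a limit, then run a what-if trim scenario on it.
# All values and names below are invented for illustration.

CONCENTRATION_LIMIT = 0.30  # assumed limit: no position above 30% of portfolio value

def flag_concentrated(positions):
    """Return tickers whose weight exceeds the concentration limit."""
    total = sum(positions.values())
    return [t for t, v in positions.items() if v / total > CONCENTRATION_LIMIT]

def trim_scenario(positions, ticker, trim_pct):
    """What-if: trim `trim_pct` of one position and report its new weight."""
    scenario = dict(positions)
    scenario[ticker] *= (1 - trim_pct)
    new_total = sum(scenario.values())
    return scenario[ticker] / new_total

portfolio = {"AAA": 140.0, "BBB": 60.0, "CCC": 50.0, "DDD": 50.0}  # market values

flags = flag_concentrated(portfolio)            # AAA sits at 140/300 ≈ 46.7%
new_weight = trim_scenario(portfolio, "AAA", 0.10)  # weight after trimming 10%
```

A real agent would layer factor exposure, P&L attribution, and the firm’s historical responses on top of this arithmetic; the point is that the pre-decision analysis can be computed and presented before a human is asked to decide.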
If this is what’s happening (we believe it’s beginning), then both buyers and providers will have to adjust.
What buyers must learn to ask
When you buy hosted software as a tool, you typically ask a range of questions about the tool’s quality: everything from uptime and upgrade cycles to technical and user support. Depending on the complexity of the platform, questions may also focus on configurability, integration depth, and workflow, because you want to gauge the fit and impact on your operating model. But even with protracted due diligence, the questions are predictable: buying SaaS is a well-established process.
As we move towards service as a software, the diligence questions look more like evaluating a fully outsourced solution. You need to know who the resources are, what they are doing, what the service level is, and how you can oversee and govern outcomes. But instead of evaluating people with skills, you are evaluating agents and whether they can operate within defined reliability ranges. That requires understanding how the platform encodes domain-specific logic through skills and tools, and how its orchestration layer governs accountability when something falls outside expected parameters.
Given the progress of AI, asking “What’s your AI roadmap?” is standard in vendor reviews. But buyers won’t get the answers they need unless they can evaluate whether a provider can stand behind a defined outcome, document how it will be delivered, and accept accountability when it falls short. That’s a different approach from assessing vendors based on the capabilities they show you.
The gap is that most existing procurement processes aren’t yet structured to probe at that level. Service-level agreements still describe system availability rather than the delivery of results. The spotlight shifts to finding evidence of result delivery and outcome reliability at comparable firms.
Consider how a portfolio manager gets answers today. When a strategy underperforms its benchmark, finding out why — pulling attribution data, reconciling across systems, cross-referencing position changes — can take hours of manual work. In a service-as-a-software model, the platform doesn’t surface the data and wait for a human to analyze it. It delivers the explanation: what drove the underperformance, what the firm’s historical response has been in similar situations, and what options are available. The provider is accountable for the quality of that output, not just for the uptime of the system that produced it. That accountability becomes the focal point for how you evaluate vendor fit. The workflow alone is just table stakes.
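The attribution step described here can be sketched as simple arithmetic: decompose the active return (portfolio minus benchmark) into per-sector contributions and rank them. The sector names, weights, and returns below are invented for illustration; a real attribution engine would add allocation/selection splits, currency effects, and multi-period linking.

```python
# Illustrative decomposition of active return into per-sector contributions.
# Inputs are (weight, period return) pairs; all figures are made up.

def active_contributions(port, bench):
    """Per-sector contribution to active return: w_p*r_p - w_b*r_b."""
    sectors = set(port) | set(bench)
    out = {}
    for s in sectors:
        wp, rp = port.get(s, (0.0, 0.0))
        wb, rb = bench.get(s, (0.0, 0.0))
        out[s] = wp * rp - wb * rb
    return out

portfolio = {"Tech": (0.50, -0.04), "Energy": (0.20, 0.06), "Health": (0.30, 0.01)}
benchmark = {"Tech": (0.40, -0.02), "Energy": (0.30, 0.06), "Health": (0.30, 0.02)}

contrib = active_contributions(portfolio, benchmark)
worst = min(contrib, key=contrib.get)  # the sector that hurt active return most
```

The contributions sum to the total active return by construction, which is what lets an agent answer “what drove the underperformance?” directly rather than handing a human the raw data.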
What changes inside the firm
If you’re buying outcomes rather than tools, something has to change inside your firm as well. Buying outcomes opens up new decisions about where to direct your operational investment and your people.
The clearest implication is for technical resources. Post-trade operations are defined and measurable — which means they’re also the area where the efficiency gains from agentic AI are most legible and most auditable. Firms that redirect technology investment toward front-office AI — research, portfolio construction, alpha generation — are finding that the operational foundation underneath can carry more of its own weight than it used to.
When agents handle more defined, repeatable operational work, operational roles can change as well. As high-volume, rules-based processing becomes more automated, the operational professionals who understood that work become more consequential. The judgment about whether the model is right, whether an exception is truly an exception, and whether the operational model is keeping pace with portfolio strategy requires someone who has done the work. That expertise moves up.
The big-picture result here is that end-user expertise becomes more consequential when you move from users looking at data to agents recommending or carrying out specific operational actions. It’s true for most roles in your organization. We wouldn’t bet on AI taking those skills out of the equation.
The domain depth question
There’s a reason why the recent memo from Citrini Research on AI-driven software displacement created such a flurry and moved markets.[ii] It raises a fundamental question: Which software is actually at risk?
For the foreseeable future, the question of displacement only holds where AI can reproduce the software’s core function at lower cost and comparable reliability, with coding agents able to do the bulk of the work for known software design patterns.
What they can’t do is replace domain knowledge accumulated through years of operational work at investment firms where errors are expensive. SIFMA describes this as the next evolution in financial operations, where highly skilled individuals use agentic AI to deliver exponential impact.[iii]
A fund managing a diversified portfolio, reporting to institutional investors, and operating under regulatory scrutiny needs its operational platform to be consistently right, with auditable records, and with clear accountability when something goes wrong. Each time and every time. That’s a higher standard than current foundational technology capabilities can meet at scale and goes beyond what today’s discussions of agent capability acknowledge.
The domain awareness that makes this infrastructure reliable is built through operational work. Something as fundamental as a security master makes the point. Building one that works across asset classes takes cumulative understanding of how instruments are structured and how they behave over time.
Decisions that can’t wait
What it means to stand behind an agentic AI outcome, and how you measure whether it was delivered, are live questions. If these aren’t the conversations you’re having with your current operational software providers, it’s worth asking why not. Firms that get clear on those answers now are in a better position to shape the capabilities and terms in their own interests. The providers best positioned to deliver on outcome-based models are the ones that have built the operational track record, in partnership with clients, to stand behind those outcomes.
Authored By
Vera Shulgina
Vera is responsible for Arcesium's data strategy with a focus on driving value for clients through data solutions and data partner integrations.
[i] Sequoia Capital, 2026. https://sequoiacap.com/article/services-the-new-software/
[ii] Citrini Research, 2026. https://www.citriniresearch.com/p/2028gic
[iii] SIFMA, 2026. https://www.sifma.org/news/blog/ai-digital-assets-the-next-operational-frontier-for-financial-markets