The Agents Are Coming to Finance

May 22, 2025
Read Time: 8 minutes
Innovation & Tech
All Segments
Summary

AI is transforming investment management as it evolves swiftly from early adoption to advanced agentic systems. This post explores how AI agents are enhancing data quality, automating tasks, and driving operational efficiency in hedge funds and asset management. The pace of innovation is rapid, demanding strategic foresight and robust frameworks.

Artificial intelligence is here to stay in financial services for asset managers, private fund managers, and hedge funds. Yet the industry's AI maturity is still early, even though AI and machine learning (ML) have been deeply embedded in finance for years.

The commercial arrival of generative AI's large language models (LLMs) in 2023 made AI accessible to those of us outside data science and ML. Some assert we are now at a world-changing inflection point. But along with the promise LLMs have introduced, these tools have also presented plenty of unknowns.

I would contend that we are inflecting within inflection points.

This blog looks inside the minds of developers to see how they are pushing AI forward for use in investment management firms. We will also take a closer look at the state of the technology and how buy-side firms are using, and will soon be using, AI. One spoiler: the speed at which technologists are innovating will blow your hair back.

READ OUR WHITEPAPER: Age of AI

A fundamental shift in implementing AI agents

Inaccurate, inconsistent, or incomplete data can have severe consequences for hedge funds, leading to errant P&L, incorrect VaR, and compliance breaches that result in flawed investment decisions, missed opportunities, and reputational damage. AI capabilities emerging now represent a fundamental shift in how we implement agents: not only does the agent do something, it also tells you what it did, how it did it, and why.

Data quality for trading transactions is a good example of an AI agent use case, especially important work given the rigorous compliance and investor reporting demands facing fund managers.

How AI agents can automate data quality

When it comes to managing operational kickouts—data quality errors that impact trade settlements and key operations such as position and P&L reporting to the desk—AI agents can play a critical role. Agents can detect errors, reason through them using predefined training and guidelines, and formulate a plan to resolve the issues. What’s more, the agent can either recommend or autonomously take corrective actions, significantly reducing manual effort and operational risk.

The next level is interpreting those results, that is, explaining why a certain action was taken. Human users in charge of resolving these errors often spend considerable time interpreting the kickouts, figuring out exactly what went wrong, and remediating. Generative AI makes the system's output simple and actionable for a human to interpret. Beyond this, hedge funds need to stay on top of numerous follow-up actions. The human in the middle office, for example a trade support analyst, typically tracks those actions in massive Excel sheets, work that agents can largely automate, as the sketch below illustrates.
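To make the workflow concrete, here is a minimal sketch of such a kickout-resolution loop: detect, reason, plan, then recommend or act, while recording a plain-language explanation for the analyst. The kickout fields, diagnosis rules, and action log are illustrative assumptions, not a description of any production system.

```python
from dataclasses import dataclass

@dataclass
class Kickout:
    """A flagged trade record from data-quality checks (illustrative fields)."""
    trade_id: str
    issue: str              # e.g. "missing settlement date"
    raw_record: dict
    resolution: str = ""
    explanation: str = ""

def diagnose(kickout: Kickout) -> str:
    """Reason over the kickout using predefined rules and guidelines (simplified)."""
    if kickout.issue == "missing settlement date":
        return "derive settlement date from trade date plus market convention"
    if kickout.issue == "counterparty mismatch":
        return "map counterparty alias to the master record"
    return "escalate to a trade support analyst"

def resolve(kickout: Kickout, autonomous: bool = False) -> Kickout:
    """Formulate a plan, then recommend or apply the fix, and record an explanation."""
    plan = diagnose(kickout)
    kickout.resolution = plan if autonomous else f"RECOMMENDED: {plan}"
    # In a real system, a generative model would turn this structured trace into
    # a plain-language explanation for the analyst; here we format it directly.
    kickout.explanation = (
        f"Trade {kickout.trade_id}: detected '{kickout.issue}'; proposed action: {plan}."
    )
    return kickout

# The agent keeps an auditable action log, replacing the analyst's tracking spreadsheet.
log = [resolve(Kickout("T-1001", "missing settlement date", {"px": 101.2}))]
for entry in log:
    print(entry.explanation, "->", entry.resolution)
```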

For more information on resolving data quality issues and governance, read our previous article A Data-First Approach to Enhancing Data Governance.

The AI mission, should you choose to accept it

Speaking of blowing your hair back... one of the primary challenges is that the underlying technology for AI agents is itself evolving so rapidly that we have to think five moves ahead to keep pace.

Just a few months ago, our AI/ML team was talking about information retrieval and chat agents. The launch of our AI copilot for Aquata last autumn now feels like ancient history. Since then, we have quickly transitioned to actionable agentic AI, focusing on tools that can independently understand a problem, decide the appropriate action to take, and execute on it to solve the problem at hand. These include upcoming agents like a pipeline debugger that monitors data transformation pipelines and helps end users manage them.

And now, data scientists are adopting the model context protocol (MCP), a standard for connecting LLM applications to external tools and data through dedicated servers that help overcome LLM limitations. MCP servers can maintain long-term memory within the application and carry context across working sessions. Security controls within the tooling define who has access to what and allow users to audit the sequence of actions the agent took to arrive at a given outcome. Observability is key to intelligently managing agents and enabling smooth context switching.
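As a hedged illustration of those ideas, the sketch below uses the open-source MCP Python SDK (the `mcp` package) to expose one stubbed tool whose every invocation is appended to an audit trail. The server name, tool, and logging scheme are assumptions for illustration, not anyone's production implementation.

```python
# A minimal MCP-server sketch: one stubbed tool, with every call appended to an
# audit log so the sequence of agent actions can be reviewed afterward.
# Assumes the open-source MCP Python SDK ("pip install mcp"); all names are illustrative.
import json
import time

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("kickout-tools")
AUDIT_LOG = "agent_audit.jsonl"

def audit(tool: str, args: dict, result: str) -> None:
    """Append a timestamped record of the tool call for later review."""
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps({"ts": time.time(), "tool": tool,
                            "args": args, "result": result}) + "\n")

@mcp.tool()
def lookup_trade_status(trade_id: str) -> str:
    """Return the settlement status of a trade (stubbed for illustration)."""
    result = f"Trade {trade_id}: pending settlement"
    audit("lookup_trade_status", {"trade_id": trade_id}, result)
    return result

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio to an MCP-capable client
```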

But we are not finished. In a few months, we'll be delving into large concept models (LCMs), which imitate human thought processes more closely by reasoning semantically over abstract concepts rather than individual words. And a few months after that, technologists will be testing encrypted LLMs.

As AI/ML pioneer Geoffrey Hinton noted on CBS This Morning, “We’re at this very special point in history where in a short time everything might totally change.” Innovations are arriving on an almost monthly cadence.

The constant is investment; the variable is technology

In mathematical equations, you reduce the number of variables to arrive at an answer. AI technology is advancing so fast that it has become the variable in our equation for success. The constant is the domain, i.e., the business of driving risk-adjusted returns. Our team is increasing the number of constants so that we can solve the ultimate equation of AI and innovate smoothly as we advance from information retrieval agents to reflexive agents, autonomous proactive agents, and then swarm agents, which will interact with each other to execute higher-level tasks. The complexity increases at each step, as sketched below.
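One way to picture that progression is as successively richer agent interfaces, where each tier adds a capability on top of the previous one. The class and method names below are purely illustrative, not a framework specification.

```python
from typing import Protocol

class RetrievalAgent(Protocol):
    """Answers a question by fetching and summarizing existing information."""
    def answer(self, query: str) -> str: ...

class ReflexiveAgent(RetrievalAgent, Protocol):
    """Additionally critiques and revises its own draft before returning it."""
    def reflect(self, draft: str) -> str: ...

class ProactiveAgent(ReflexiveAgent, Protocol):
    """Monitors state autonomously and decides when to act without being prompted."""
    def monitor_and_act(self) -> None: ...

class SwarmAgent(ProactiveAgent, Protocol):
    """Coordinates with peer agents to decompose and execute higher-level tasks."""
    def delegate(self, task: str, peers: list["SwarmAgent"]) -> None: ...
```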

One way we can harness that technological complexity is by developing agents for more complex use cases that involve generating, executing, and refining code for domain-specific functions. All LLMs are assessed against benchmarks created for specific kinds of tasks.

LLM reasoning has a benchmark (HellaSwag). LLM planning has a benchmark. Retrieval and coding have benchmarks. For example, using the needle-in-the-haystack benchmark, developers can give an LLM or retrieval-augmented generation (RAG) system a colossal corpus of data and see whether it can retrieve a fact buried in a single line. Using AI benchmarks as a standard practice enables practitioners to compare models, assess performance, monitor progress, and pinpoint areas of weakness.
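A stripped-down version of that retrieval test might look like the sketch below. The `ask_model` callable stands in for whatever LLM or RAG endpoint is being evaluated and is an assumed interface, not a specific vendor API; the planted fact and corpus are synthetic.

```python
import random
from typing import Callable

def needle_in_haystack_passed(
    ask_model: Callable[[str, str], str],   # (context, question) -> answer; assumed interface
    needle: str = "The override code for desk 7 is AZ-42.",
    question: str = "What is the override code for desk 7?",
    expected: str = "AZ-42",
    haystack_lines: int = 5_000,
) -> bool:
    """Bury a single known fact in a large synthetic corpus and check retrieval."""
    filler = [f"Routine log entry number {i}." for i in range(haystack_lines)]
    filler.insert(random.randrange(haystack_lines), needle)   # plant the needle
    context = "\n".join(filler)
    answer = ask_model(context, question)
    return expected in answer

# Usage with a trivial stand-in "model" that simply scans the context:
naive = lambda ctx, q: next((line for line in ctx.splitlines() if "override code" in line), "")
print(needle_in_haystack_passed(naive))   # True if the planted fact was retrieved
```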

RELATED READING: ROI in the Age of AI

From concept to production: the role of fidelity

So how do we translate this domain into a productive agentic AI framework that makes sense for hedge fund managers? The constant dictates a certain level of observability, traceability, and fidelity in operation before we can push anything into an investment lifecycle product. LLMs are creative by nature. The challenge is making them do the same thing again and again, which matters enormously in finance because we don’t want divergent results or hallucinations. Chief data scientists need to choose carefully the right framework that combines LLMs with other external components to build LLM-powered applications. An experiment is just an experiment until it goes into production and you start seeing value.
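In practice, that repeatability is engineered around the model rather than assumed from it, for example by pinning sampling parameters and validating structured output before anything reaches downstream systems. The sketch below is a minimal illustration of that pattern: `call_llm` is an assumed wrapper around whichever model endpoint a firm uses, and the schema uses pydantic v2 purely as an example validator.

```python
from pydantic import BaseModel, ValidationError

class KickoutResolution(BaseModel):
    """Schema the model's output must satisfy before it is trusted downstream."""
    trade_id: str
    action: str
    confidence: float

def resolve_with_llm(call_llm, prompt: str, max_retries: int = 3) -> KickoutResolution:
    """Force repeatable, validated output: temperature 0, strict schema, bounded retries.

    `call_llm(prompt, temperature=...)` is an assumed wrapper around the model endpoint.
    """
    for _ in range(max_retries):
        raw = call_llm(prompt, temperature=0.0)                # deterministic-as-possible sampling
        try:
            return KickoutResolution.model_validate_json(raw)  # reject malformed output
        except ValidationError:
            continue                                           # retry rather than pass bad data on
    raise RuntimeError("Model failed to produce a valid, schema-conforming resolution")
```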

The rapid evolution of AI frameworks

In the business of providing modern investment lifecycle and data technology, we don’t have the luxury of beta testing on customers, as ecommerce companies might. We have a responsibility to get it right. It’s equal parts engineering, DevOps, and data science, with all three coming together to shape how AI agents work and perform.

Most agentic frameworks are still evolving. Domain-aware agents that learn through trial and error (i.e., reinforcement learning) are the ultimate state of evolution. Even frameworks like LangGraph do not yet support such agents—of course, even that could be a false statement by the time of publication.
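To give a flavor of what learning through trial and error could mean for such an agent, here is a toy epsilon-greedy sketch that gradually learns which resolution action tends to clear a given kickout type. The action set and reward signal are invented for illustration and are not tied to LangGraph or any other framework.

```python
import random
from collections import defaultdict

ACTIONS = ["rebook_trade", "remap_counterparty", "escalate_to_analyst"]

class TrialAndErrorResolver:
    """Toy epsilon-greedy learner over kickout-resolution actions (illustrative only)."""

    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.value = defaultdict(float)   # (kickout_type, action) -> running average reward
        self.count = defaultdict(int)

    def choose(self, kickout_type: str) -> str:
        if random.random() < self.epsilon:                                    # explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.value[(kickout_type, a)])      # exploit

    def learn(self, kickout_type: str, action: str, reward: float) -> None:
        key = (kickout_type, action)
        self.count[key] += 1
        self.value[key] += (reward - self.value[key]) / self.count[key]       # incremental mean

# Usage: reward 1.0 when the chosen action clears the kickout, 0.0 otherwise.
resolver = TrialAndErrorResolver()
action = resolver.choose("counterparty mismatch")
resolver.learn("counterparty mismatch", action, reward=1.0)
```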

The world is moving quite fast. Anybody in the business of making money will be moving much faster in AI agent development than the rest.

Key takeaways

1: What role do AI agents play in finance today?

AI agents are automating data quality checks, trade error resolution, and compliance tasks, helping financial firms save time and reduce risk.

2: Why is generative AI significant for investment management?

It simplifies complex data interpretation and decision-making, making operational tasks more actionable and less reliant on manual effort.

3: How fast is AI technology evolving in this space?

Innovations like model context protocols and large concept models are arriving at a near-monthly pace, requiring firms to anticipate technological shifts.

4: What challenges do LLMs face in financial applications?

LLMs are inherently creative, but finance demands consistency, traceability, and low error tolerance—driving the need for robust, production-ready frameworks.

5: What’s the ultimate goal for AI in investment management?

To create autonomous, domain-aware agents that mirror human learning and decision-making, while ensuring reliability and regulatory compliance.

Join our webinar with Hedgeweek, The Age of AI: The latest on artificial intelligence in hedge fund operations
Adurthi Ashwin Swarup, Vice President, Group Engineering Manager
