AI in Data Management Summit: Converging Perspectives
I recently sat on a panel at the AI in Data Management Summit in New York alongside leaders from JPMorgan, Mastercard, and Citi. The panel was called “The Intelligent Data Marketplace,” and the topics of conversation spilled over into the hallways and roundtables afterward. Four things stood out.
Different starting points, similar conclusions
While panelists came from very large organizations with significant AI initiatives already underway, the audience included people from much smaller firms.
- A small fund with five people tends not to have everything written down, because everyone talks to each other all the time. That makes it harder to get started, because there is little written material to hand an AI agent as requirements.
- A large bank or asset manager has great security guardrails, controls, and documentation about what’s expected in every role, but that maturity can also create resistance to change.
- Organizations in the middle, where there’s enough turnover that you have to write things down at some point, may ultimately be best positioned for agentic workflows because they already have something to build on.
The starting conditions look different depending on where you sit, but what surprised me was how much the panelists converged. People who started from what seemed like conflicting positions reached the same conclusions. They feel the same pressures, they're running significant AI initiatives, and they're landing on the same discoveries.
It’s like scientific research. Different scientists eventually arrive at the same physics because the underlying physical laws are the same. The shared discovery here was to start with small things that build toward big things, working within the constraints of operational workflows and data management.
Those constraints are absolute because you can’t afford to have production data deleted or end up with inexplicable outputs. No matter the scale or starting point, discipline and preparation determine what you get out of these tools.
Eat your vegetables
The metaphor for that discipline is that you have to eat your vegetables.
In practice, this means you have to make sure your data is in order, your data governance is good, and the data is clean. You must document what’s important to your organization, what your standards are, and how projects should work when building AI agents. If you’ve done the hard work of getting a data catalog, writing down what you actually care about, and defining how projects should run, you can give that same material to agents, and they’ll do pretty well. If you haven’t, and you just assume everyone should know better, you’re going to have trouble.
At that level, developing AI is like onboarding a cohort of interns. You can’t have a personalized training conversation with every single one of them. You give them a healthy diet of documents and peer support. You can’t just give them a username and password and expect productivity.
You also know that one of them is going to go off the rails and make mistakes at some point. So you put controls in place to prevent anyone from impacting your production environment and make sure there’s someone to catch mistakes. The preparation you do for your people is the same preparation that makes your agents effective.
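One way to picture those controls is a gate that lets an agent propose whatever it likes but routes anything that could change production through a human first. This is a minimal hypothetical sketch, not any particular product's API; the action names and the read-only list are assumptions for illustration.

```python
from dataclasses import dataclass

# Actions an agent may run unsupervised because they cannot change state.
READ_ONLY_ACTIONS = {"query", "summarize", "draft_report"}

@dataclass
class ProposedAction:
    name: str
    target: str  # e.g. "staging" or "production"

def requires_human_review(action: ProposedAction) -> bool:
    """Anything that writes to production must be caught by a person first."""
    return action.target == "production" and action.name not in READ_ONLY_ACTIONS
```

The point of the sketch is that the default is containment: the agent earns wider permissions the same way a new hire does, one reviewed action at a time.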
The adoption pattern reflects this same healthy diet. Organizations are taking a taste, trying agentic workflows on a single process, building confidence, then getting more greens on their plate. That’s the right move. If the first thing you did was connect a model to your production environment and say, “do things that seem good,” that’s a recipe for disaster.
The smarter approach starts by experimenting with a long-running workflow, building a way to evaluate the results, confirming that there are no false positives, or that any issues are small enough for a human to catch, and then expanding from there. Some organizations came in with their nutrition already sorted. Others are working through the prerequisites before they can scale.
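That evaluation step can be as simple as running the workflow against cases a human has already labeled and counting where it disagrees. The harness below is a hedged sketch under assumed names, where the workflow is any callable that flags an input and the labels come from a human reviewer.

```python
def evaluate(workflow, labeled_cases):
    """Run the workflow on known inputs and count false positives:
    cases the workflow flags that a human reviewer would not."""
    correct = 0
    false_positives = 0
    for inputs, expected_flag in labeled_cases:
        flagged = workflow(inputs)
        if flagged and not expected_flag:
            false_positives += 1
        elif flagged == expected_flag:
            correct += 1
    return {"correct": correct,
            "false_positives": false_positives,
            "total": len(labeled_cases)}
```

Only once the false-positive count is zero, or low enough that a human can absorb the review load, does the workflow earn a larger share of the plate.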
Small models for specific problems
Every time a powerful new technology arrives, the possibilities seem infinite. But the question is always the same. You still have to figure out which of those investments is going to make money, and what you want to prioritize.
The interest in small language models shows a similar pivot toward realism and focus. People at the summit were mentioning small models more often than they did in the early days of LLMs. The broader conversation used to be dominated by questions about whether the right play is OpenAI or Anthropic and how to give these general-purpose systems enough context to make decisions. Now there’s a new question gaining traction, and it’s about where small, domain-specific AI models fit.
The trend is shifting toward realizing that a general-purpose model that can do everything is more expensive than most tasks require. It’s like hiring a whole person when you only need one skill. That’s the problem that came up in a recent story about Chipotle’s ordering chatbot. Someone typed “how do I reverse a string in Python?” into the chat, and the bot answered the question, and then asked what it could get started for them.
It’s a funny story that has made the rounds on Reddit and programming blogs, but it also makes a point. A general-purpose model powering a help bot will happily field questions about Python before it takes your burrito order. You could imagine the same situation where a chatbot designed for trade matching answers sensitive questions about risk that it shouldn’t be able to answer.
But a purpose-built model doesn’t have an interface for that. It stays focused on the task for which it was trained. For well-defined operational problems with strong training data, it could deliver faster results at lower cost than routing every query through a general-purpose LLM. That requires engineering investment and someone who understands how model training works. But the model is lightning-fast, and the efficiency gains are increasingly favorable as organizations get serious about where their AI budgets are going.
From “is this magic?” to “where’s the return?”
The broader mood at the summit reflected a turning point in how the industry thinks about AI spending. Everyone started by thinking AI looks like magic. Nobody wanted to be left behind. The message across the industry was that you don’t have a choice about getting involved, because everybody wants to see you doing something.
Now the harder questions are surfacing. A lot of tokens are being burned right now, and not all of that consumption translates into real value. The more meaningful measure, and this came through in multiple sessions at the summit, is the impact on decision-making. The volume of queries processed or data pulls completed tells you very little about whether AI is improving how an organization operates.
As some early investments fail to pay off, the strategic questions sharpen. Now organizations are figuring out how to get a handle on it and move in a smart way. They’re deciding where to invest the time and money to build real AI expertise, where to be buyers of proven solutions, and where to consume what’s available off the shelf. Part of that exercise is self-knowledge, understanding where you can genuinely contribute, and where you should find someone who already sells a good solution. Some organizations are building a layer of abstraction around their vendors, so they can switch if a better option appears or move between providers to get the best cost per query for high-volume use cases.
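The vendor-abstraction idea can be sketched in a few lines: a thin router that hides which provider serves a query and picks the cheapest registered one. Provider names, prices, and the routing rule here are all illustrative assumptions, not a real library; production versions would also weigh capability and latency, not just cost.

```python
from typing import Callable, Dict, Tuple

class ModelRouter:
    """Thin abstraction over model vendors so switching is a config change."""

    def __init__(self) -> None:
        # provider name -> (cost per 1k tokens, completion function)
        self.providers: Dict[str, Tuple[float, Callable[[str], str]]] = {}

    def register(self, name: str, cost_per_1k: float,
                 complete: Callable[[str], str]) -> None:
        self.providers[name] = (cost_per_1k, complete)

    def complete(self, prompt: str) -> str:
        # Route each query to the lowest-cost provider currently registered.
        cost, fn = min(self.providers.values(), key=lambda p: p[0])
        return fn(prompt)
```

Because callers only ever see `complete()`, moving a high-volume use case to a cheaper vendor, or dropping a vendor entirely, never touches application code.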
The organizations that ate their vegetables and did the foundational work on data governance and documentation before the hype arrived are the ones converting AI spending into operational results. Everyone else is getting there, but the gap between those who prepared and those who didn’t is becoming visible.
Authored By
Matt Katz
As Arcesium's Field CTO, Matt leads Arcesium's Forward Deployed Software Engineering and Client Success teams. His work to empower clients and simplify technical challenges stems from a 25-year career in financial technology working with clients and software. Outside work, he enjoys books, bikes, and boards.