Watch our webinar to learn how financial services firms can leverage modern data management platforms to solve critical business needs.

In this conversation, our professionals explore the differences between advanced solutions, the challenges of traditional operating platforms, and the potential of new data platforms to enable scale and agility for key front-to-back use cases. Mahesh Narayan, Arcesium’s Institutional Asset Management Segment Head, moderated a discussion with Mitya Miller, Arcesium’s Head of Relationship Management and Forward Deployed Teams, and Greg Muecke, Vice President of Product Management, on today’s shifting landscape in a new era of financial technology.

Key Insights

  • Benefits financial institutions can derive from modern financial data platforms
  • How modern financial data platforms differ from traditional operating platforms
  • Use cases for institutional asset managers, asset owners, hedge funds, and private equity firms of all sizes

ON-DEMAND WEBINAR


Speakers

Mahesh Narayan · Mitya Miller · Greg Muecke


Video Transcript

[00:08] – Angele Paris
Welcome to all our attendees and panelists. My name is Angele Spiteri Paris and I am head of partner projects here at Global Fund Media. Today we bring to you a webinar detailing the developments and progress in the realm of financial data platforms. Here to moderate the discussion is Mahesh Narayan, Institutional Asset Management Segment Head at Arcesium.

[00:33] – Mahesh Narayan
Thank you, Angele. Let me quickly start by introducing two of my colleagues who will be doing most of the talking: Mitya Miller, who heads our Forward Deployment Solutions team, and Greg Muecke, who is the product head for the Arcesium Data Platform product, which is going to be the topic of today’s webinar. We will start off with a quick introduction to Arcesium before we jump into the meat of the discussion. Here we’ll discuss, firstly, the data platform business challenge for financial institutions. Second, we will move to how modern data platforms look and how they address many of these challenges. And finally, a few client use cases in production. We will have time for Q&A, so feel free to type in questions in the chat as we proceed through the webinar. But firstly, a quick introduction to Arcesium, especially for those who don’t know us. We are a fintech firm specializing in mid- to back-office solutions and data platform software and solutions. We were spun out of D.E. Shaw about seven years ago and have investments from many large strategic investors like J.P. Morgan and Blackstone Asset Management.

We are not a startup. We have well over 1,500 employees across five offices globally, and several dozen clients across asset managers, asset owners, hedge funds, and private equity firms. Our products have received many awards, especially in the data platform and data software space. To the right is a schematic of the Arcesium data platform, which makes up the backbone of all our product offerings. The platform itself is cloud native and built API first, with robust data models, connectors to market data vendors and third-party applications, various tools for data quality management, and a whole set of capabilities that power the full mid- to back-office investment management workflows. We will speak to a number of these as we go along.

Before we jump into the discussion, let’s do a quick poll and get a pulse of the audience here. Please go ahead and vote. The question being asked is: what challenges do today’s legacy financial data platforms pose for your business operations, data, and technology teams? There are a number of choices here. Is it the lack of domain awareness, as in asset class support, workflows, and so on? Is it limited data management tools, as in the ability to ingest data and apply data quality rules?

Is it the lack of flexibility and the lack of self-service capabilities, which lead to a higher TCO, or total cost of ownership? Or is it the inability to future-proof any decisions you’re making for your front-, mid-, and back-office data and technology architectures? Or is it scalability? Is it something else? Or is it all of the above?

So let’s pause for 30 seconds or so to gather your responses.

Angele, I don’t know if you’re starting to see people vote and the results coming in. My sense is there’s going to be a lot of interest across all of these; all of these challenges are critical, and this is exactly what we hear from our clients and prospects. A lot of people are pointing to domain awareness; a lot of people are...

[04:24] – Mitya Miller
A lot of people are saying “all of the above”. So yes.

[04:28] – Mahesh Narayan
Exactly. Great. So 44% think it’s all of the above, and then domain awareness comes after that, followed by flexibility and self-service tools and TCO, then data management tools and scalability. Okay, awesome. Well, let’s move on; I think that’s a great segue to this section: the data platform business challenge.

Mitya, we are seeing a lot of interest across asset managers, hedge funds, asset owners, and private equity firms, and we do these calls literally on a daily basis. There’s a lot of interest in modern financial data platforms. But before we get into that, tell us more about the challenges that traditional operational platforms pose.

[05:15] – Mitya Miller
Sure. And thanks everyone for joining. It’s good to speak to you today.

So look, traditional operational systems and enterprise data management, as they have existed in the industry for a number of years, worked generally well for firms and their respective businesses. Whether it is investment books of record, accounting systems, investor allocation systems, security masters, or reference data management systems, they all do their job. But they are nevertheless limited in how much one can do with the very valuable data that is contained or generated in those systems. So they produce data, but they are not general-purpose tools.

So if you look at, for example, an operational or workflow application, let’s say an IBOR or an investor allocation system: its job by definition is to get a certain set of figures or facts correct for the business, or to get a certain report out. Any kind of data interoperability or integration with other systems is at best an afterthought, whether it’s a vendor solution or something built in house. So they are designed to be constrained on purpose in what data they can organize and manage. They typically contain lots of data processing logic, but it is fit for purpose and constrained in how you can extend or maintain it.

They are not really equipped or built to stitch data together across multiple data sets, and they address only a basic level of data governance and data lineage concerns. They do what we would typically see as authorization and permissions, but they don’t cover the entire spectrum of data governance. Or, as another example, consider a very powerful on-premises data warehouse that an asset manager built. It served the firm really well, but it presents two challenges. It presents a challenge with agility: who can maintain it? Only IT. And it presents a challenge technologically: because of the way the technology stack was picked quite some time ago, and where it runs, you don’t get the scalability or an easy ability to move those workloads to the cloud.

[07:38] – Mahesh Narayan
Understood. And we hear a lot about these when we speak with our clients and prospects on a daily basis. How do these limitations manifest themselves in the asset manager’s world?

[07:51] – Mitya Miller
Yeah. So from the business perspective, we discussed how those traditional systems are very good, but they are rigid; there is an agility problem, they’re not agile. So what we see a lot is the challenge of supporting new asset classes and new data sets, which is a very important topic: data is the fuel for the business these days. New workflows, launching new investment products. That’s one business impact. The other impact, which is very serious, is that they all work in silos.

So data harmonization and aggregation across them becomes a challenge, and it’s an expensive process to deal with. There are some unresolved industry challenges in general, things we particularly focus on to try to help our customers. For example, we often hear about public markets data and private markets strategies, data, or funds that are just not unified by anything. There is no commonality, and it’s hard to reason about the data for an asset manager that has both. And then some of the topics I brought up earlier drive the cost-of-ownership issue: we view this as higher development cost, there are still software upgrades that have to happen, development and maintenance take longer and cost more money, and so on.

And then there are data governance and data quality. These may be buzzwords, but it is nevertheless the case that when a firm does not have good data quality and data governance frameworks, it leads to errors in data reporting, workflows, communication, and data integration links, and that generates business impact. It either slows the business down or it ultimately affects investment returns. So those are the four business impacts.

[09:45] – Mahesh Narayan
That makes a lot of sense. Moving on: the modern data platforms, or the modern financial data platforms being designed today, how are they addressing some of these challenges?

[10:00] – Mitya Miller
Yeah, so in contrast, the modern data platforms bring a set of general-purpose capabilities that address all of these shortcomings, or optimize or improve upon them, and that brings direct business impact to an asset manager. First of all, there is the velocity of your data projects, whether you are an investment manager looking to get access to a new data set, to do something with it for investment research or for reporting to your clients, and the ability to expose those capabilities to more than just technology teams.

A data platform, as the name implies, is a platform, so you can by definition build on it. It’s not fixed, it’s not constrained, it’s not rigid, so you can deploy those systems more flexibly and integrate with the rest of your application and software ecosystem. And then the thing we are very focused on is empowering business users to actually construct and manage their own workloads. So not just technology teams: democratize access to the data, give business users the power to construct something in terms of reporting, analytics, getting the data in or out. We also see that modern data platforms help you with the total cost of ownership.

Yes, it is not going to be as expensive as a bunch of legacy platforms with glue code strung together to make them all work. Future-proofing is another element: you think of how you will meet the challenges of the next data set or the next asset class, crypto or otherwise, and how you would react to the business changing on a dime, and so on.

And one other very modern aspect is that they typically, not all, but typically, have a usage-based or consumption-based commercial model, which means that you, as a customer or user of those technological tools, pay for what you use rather than a large fixed subscription. So you can scale up your usage starting with one data set or one particular use case and then expand from there, without incurring big-bang projects or high upfront subscription fees.

[12:32] – Mahesh Narayan
The commercial bit comes up all the time, and that’s a big improvement over the legacy platforms. Right, let’s do a deep dive into what these platforms look like. Greg, what we’re looking at here is the architecture of Arcesium’s financial data platform, but in some sense this is how most modern financial data platforms should look. Can you walk us through this, and then we can pick some of these areas and do a deeper dive?

[13:07] – Greg Muecke
Yeah, absolutely. Thanks, Mahesh. To start out, one of the things to highlight, if you just flip back to the prior slide, is that there is a handful of technology and business trends that together are creating an opportunity for the implementation of a modern data platform. Just some examples of these are cloud-native offerings and developments in compute, from MPP engines like Snowflake, to the separation of compute and storage with Trino-related technologies, to some of the work creating ACID transactions in data lake storage formats, as well as advanced analytical capabilities across AI and machine learning.

So the modern data platform is really about a cohesive integration of what is a rapidly evolving set of technologies in this landscape. If we look, at a high level, at the main components or layers of the modern data platform in the way the Arcesium system is architected, there is first an ingress layer at the bottom of the diagram. In the middle is the processing layer, which highlights the different storage technologies and the querying and compute capabilities. And then at the very top is the consumption layer, whether that’s data getting egressed out from the data platform to another system, consumed by BI tools or advanced analytical tools, or used by other operational products that a system might integrate with.

At a high level, these three layers have component parts that we’ll drill into over the course of the next few minutes. And then alongside all of this is a foundation of governance and observability. It’s not only critical to have all the capabilities I was just discussing around ingress, storage, transformation, and a way to consume or use that data; our customers are also looking for a central way to manage governance and to create visibility into a number of different things, including usage of the system, especially in light of the fact that many systems are usage-based from a commercial perspective, as well as a clear understanding of what data I have, who’s permissioned to use it, and what the characteristics of that data are.

[15:45] – Mahesh Narayan
Fantastic. What strikes me about this chart is that traditional operational platforms are not designed like this, right? They just don’t have these layers and capabilities built out like this, and that’s what differentiates them.

[15:58] – Greg Muecke
Yeah, absolutely. A couple of other things that differentiate a modern data platform from what you might term a more legacy platform are this cohesive integration of these technologies; the support for structured and unstructured data, and for both batch and streaming ingress and egress; and the support for multiple types of analytical tools all operating from the same set of data. We have this blurring of the lines between data lake and data warehouse, which creates the ability to avoid the data synchronization issue where your operational systems are operating on one set of data and your analytical systems on a different set. This architecture enables a customer to leverage the same set of data for both.

Taking that a step further, consider how a generic modern data platform compares to something more specific to the investment domain. We think there is a lot of value in two areas specifically: one is the data modeling, and the other is the connectivity to counterparties, administrators, and service providers. So when we look at ways we can bring a lot of value to our customers, we focus a lot on those two parts.

[17:29] – Mahesh Narayan
Understood. So let’s do a deep dive on some of these layers of the stack. Let’s start with data pipelines. In fact, this is my favorite topic. It is so hard to build and monitor these pipelines. How do these platforms help build pipelines today?

[17:46] – Greg Muecke
So I’d say it starts with two core themes that Mitya alluded to. One is self-serviceability and the other is the low-code nature of these systems. Customers expect the ability to do things themselves; that is the direction we view the market going, and self-serviceability is a core attribute of a modern financial data platform. In addition, a closely related but separate topic is having low-code capabilities. In order to truly extend access throughout an organization, as much as we can, subject to the right authorization controls, these capabilities need to be built so that users familiar with writing data transformations in Python have a self-serviceable way to do that, while users who might not have that skill set are also able, in effect, to perform the same types of functions. Those are two of the main ways we view modern financial data platforms evolving in this space and creating useful data pipelines.
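To make those two routes concrete, here is a minimal sketch, in Python, of how a single transformation might be offered both ways. The function and configuration names are hypothetical, for illustration only, and are not Arcesium’s actual interface:

```python
# Hypothetical illustration: a platform might accept the same
# transformation either as Python code or as declarative configuration.
import pandas as pd

# Route 1: a technical user writes the transformation directly in Python.
def enrich_trades(trades: pd.DataFrame, fx_rates: pd.DataFrame) -> pd.DataFrame:
    """Join trades to FX rates and compute base-currency notional."""
    out = trades.merge(fx_rates, on="currency", how="left")
    out["notional_base"] = out["quantity"] * out["price"] * out["fx_rate"]
    return out

# Route 2: a business user expresses the same intent declaratively;
# a low-code pipeline engine would translate this into the join above.
PIPELINE_CONFIG = {
    "source": "trades",
    "join": {"dataset": "fx_rates", "on": "currency", "how": "left"},
    "derive": {"notional_base": "quantity * price * fx_rate"},
    "target": "trades_enriched",
}
```

Either route produces the same pipeline; the point is that the skill set of the user determines the entry point, not the outcome.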

[19:03] – Mahesh Narayan
Perfect. And that itself is a huge value add. Let’s go to the next bit, which is data monitoring. It’s becoming so critical as more data sources keep getting added and ingested. So what do these platforms offer here?

[19:18] – Greg Muecke
Absolutely. I touched on this briefly when I discussed ingress and the way these systems bring data into a platform. One thing to highlight is the support for different types of data, whether structured, semi-structured, or unstructured, but the starting point is ensuring that you have connectivity to different counterparties and service providers. In financial markets, such a core part of the day-to-day, hour-to-hour, minute-to-minute operating model is knowing whether the data I am getting from each source, my trading counterparties, my prime brokers, my fund administrators, is in line with my expectations, and giving users a way to both track that and take action on it.

And so one of the core ingredients we see in a modern data platform is this connectivity: both the tools to do it and the actual connections to these sources. That’s something we think is very important and drives a lot of value for our customers.
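As a rough illustration of the kind of tracking Greg describes, here is a small, hypothetical Python sketch of an ingress monitor that compares the feeds a firm expects from each counterparty against what has actually arrived; all source and dataset names are invented:

```python
# Hypothetical sketch of ingress monitoring: expected counterparty
# feeds versus recorded arrivals, flagging anything past its deadline.
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class FeedExpectation:
    source: str          # e.g. a prime broker or fund administrator
    dataset: str         # e.g. "positions", "trades", "nav"
    deadline: time       # time of day by which the feed should land

def check_deliveries(expectations, arrivals, now: datetime):
    """Return expectations past deadline with no arrival recorded."""
    late = []
    for exp in expectations:
        arrived = (exp.source, exp.dataset) in arrivals
        if now.time() > exp.deadline and not arrived:
            late.append(exp)  # candidates for an alert or dashboard row
    return late

# Example usage with made-up sources:
expected = [FeedExpectation("prime_broker_a", "positions", time(6, 30)),
            FeedExpectation("fund_admin_b", "nav", time(7, 0))]
arrived = {("prime_broker_a", "positions")}
for feed in check_deliveries(expected, arrived, datetime(2023, 5, 1, 8, 0)):
    print(f"LATE: {feed.dataset} from {feed.source}")
```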

[20:29] – Mahesh Narayan
Understood. So it offers a lot of visibility, and all the data is exposed. Great. Let’s move quickly to data quality rules. This always comes up, especially in light of how data platforms need to be used for curation, transformation, and mastering of data. This is another area where modern platforms offer more value. So help us understand what’s changing here.

[20:54] – Greg Muecke
Absolutely. There has been a lot of work recently around creating standards for what is often termed data observability, and there are five pillars of data observability. There’s freshness: how recent is this information? There’s data quality: is this data of the quality that I expect? Other aspects include understanding the volume of your data and having visibility into the schema. And then lastly there’s lineage: where is the data coming from and where is it going? Data quality is one of these five core components of data observability. What we’re looking at here is a self-service screen where users can define a data quality rule themselves. In line with the themes of self-serviceability and low code, having the ability to define data quality rules across a number of different dimensions in a very flexible way is, in our view, a core aspect of a modern data platform.

[22:12] – Mahesh Narayan
Fantastic. So, in some sense, self-service declarative rules that you can use to build your rule library?

[22:22] – Greg Muecke
To make it more concrete, two examples: if you wanted to define data-type rules, where you’re checking not business logic per se but something about the schema of the data you’re receiving and the expectations you have, these systems have the ability to do that. But they can also support business logic, whether that’s within a given record in a data set, across aggregates within a data set, or even across data sets. So this gives you the capability to define rules, and as we flip to the next slide, we’ll get into how you create visibility into those rules as they execute.
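A minimal sketch of what such declarative rules could look like, assuming a simple home-grown rule registry rather than any particular vendor’s screen; it shows one schema-level check alongside record-level and aggregate-level business logic:

```python
# Hypothetical sketch of declarative data quality rules (the names and
# rule schema are illustrative, not a specific vendor's API).
import pandas as pd

RULES = [
    # Schema-level: about the shape of the data, not business logic.
    {"name": "price_is_numeric",  "kind": "schema",
     "check": lambda df: pd.api.types.is_numeric_dtype(df["price"])},
    # Record-level business logic within a data set.
    {"name": "quantity_positive", "kind": "record",
     "check": lambda df: (df["quantity"] > 0).all()},
    # Aggregate-level business logic across the data set.
    {"name": "total_mv_nonzero",  "kind": "aggregate",
     "check": lambda df: (df["quantity"] * df["price"]).sum() != 0},
]

def run_rules(df: pd.DataFrame, rules=RULES):
    """Evaluate each rule and return pass/fail results for a dashboard."""
    return {r["name"]: bool(r["check"](df)) for r in rules}

holdings = pd.DataFrame({"quantity": [100, 250], "price": [10.5, 42.0]})
print(run_rules(holdings))  # {'price_is_numeric': True, ...}
```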

Looking back at the journey we’ve taken over the last few slides, we’ve talked about how to establish connectivity to data sources and how to track the ingress of that data as it comes into the system. We’ve defined rules for monitoring data quality, but then you need systems in place to observe data quality and take remediating action as and when issues come up. So the ability to have dashboards and create visibility into that is a core part of how we think the data lifecycle should be structured.

[23:48] – Mahesh Narayan
Fantastic. Let’s quickly move on to data models, catalogs, and lineage. We could spend an hour on this alone; it’s a big topic, and always interesting for clients and prospects. So how do these platforms help here? What’s new?

[24:05] – Greg Muecke
Absolutely. One of the big concerns that arises with the advent of data lakes is the data lake turning into a data swamp. If you don’t have the right visibility and governance, how do you avoid being buried in a sea of information with no way to navigate through it? It can become quite cumbersome. One way to manage that is to provide clear visibility into what data exists, where it came from, and where it is going, and the solution is having data catalog and data lineage capabilities. There are a number of companies operating in this space creating some interesting technologies. The main idea is that within a modern data platform you need a clear definition of what every single data set is: you need to understand its attributes and its intended purpose. Lineage is also important for diagnosis and root-cause analysis when, during the day, a transformation pipeline breaks, so you can understand the root cause, but also for understanding potential downstream impact. If I make a change to this column, or change the way it is calculated, what is the downstream impact? Data lineage provides that visibility.

Some parts of data lineage are worth calling out. You can have data-set-level visibility: what are the upstream data sources? You can also have visibility at a column level: how is this column calculated, and what function calculates it? And then you can get into record-level lineage: for a given record, where exactly did it come from, and which job processed it? That way I can quickly get to the heart of any issues I might have. These tools are all about creating visibility into the data you have, making data exploration self-serviceable, and supporting root-cause analysis for diagnosing issues.
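To illustrate the idea, here is a hedged sketch of a lineage graph that can answer the upstream question at any of those granularities; the node naming conventions and job names are made up for the example:

```python
# Hypothetical sketch: a lineage graph covering the three granularities
# described above (data set, column, record).
from collections import defaultdict

class LineageGraph:
    def __init__(self):
        # node -> set of (upstream node, producing job). A node can be a
        # data set ("trades"), a column ("trades.price"), or a record
        # ("trades#row-42"), depending on how finely edges are recorded.
        self.upstream = defaultdict(set)

    def record_edge(self, downstream: str, upstream: str, job: str):
        self.upstream[downstream].add((upstream, job))

    def trace(self, node: str, depth: int = 0):
        """Walk upstream from a node to print its full provenance."""
        for parent, job in sorted(self.upstream.get(node, ())):
            print("  " * depth + f"{node} <- {parent} (job: {job})")
            self.trace(parent, depth + 1)

g = LineageGraph()
g.record_edge("pnl.notional_base", "trades.price",   "fx_enrichment")
g.record_edge("pnl.notional_base", "fx_rates.rate",  "fx_enrichment")
g.record_edge("trades.price",      "broker_feed.px", "trade_ingest")
g.trace("pnl.notional_base")  # column-level provenance, two hops deep
```

Reversing the same edges gives the downstream-impact view Greg mentions: change a column, then walk the graph the other way to see what it feeds.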

[26:24] – Mahesh Narayan
Fantastic. And what we’re seeing here on the screen is how we implement this in our platform as we go up our stack. One last example, and we hear this all the time: how does your platform help blend and stitch data? This is critical. Legacy platforms have this challenge: you may be able to bring your data in, but you may not be able to stitch it, because you don’t have a semantic layer to produce some other view of the data. So help us understand this bit.

[26:58] – Greg Muecke
Yeah, absolutely. I mentioned early on the importance of an investment lifecycle data model, along with a lot of the tooling we’ve talked about so far. What’s very powerful is that once you have all these tools in place, it becomes very easy and efficient to create, in a self-service way, different views or slices and dices of the data that you have. As an example, if you have your data model in tables with clear relationships between them, the tools we’ve implemented enable users to create new visualizations on their own, to group and aggregate data sets by different dimensions, and then to publish those for their own and their colleagues’ use. That’s the starting point, and it extends a step further: here we’re looking at holdings grouped by different dimensions, whether by sector or location. All the data exists, and the relationships in that data exist, so a user can slice and dice in whatever ways they want. Taking it another step further, if a user has some alternative data set they want to bring in, the ingress, transformation, and mapping capabilities that modern data platforms have allow them to do that mapping quickly.

So it’s all rooted in the power of having a well-defined data model.
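As a rough illustration of that point, here is a small pandas sketch of both steps, slicing holdings by a dimension and stitching in an alternative data set via a shared identifier; all table and column names are invented:

```python
# Hypothetical sketch of slice-and-dice plus stitching on a shared
# data model; identifiers and figures are illustrative only.
import pandas as pd

holdings = pd.DataFrame({
    "security_id":  ["AAPL", "XOM", "JPM"],
    "sector":       ["Tech", "Energy", "Financials"],
    "region":       ["US", "US", "US"],
    "market_value": [1_200_000, 800_000, 950_000],
})

# Slice and dice: group holdings by any dimension in the data model.
by_sector = holdings.groupby("sector")["market_value"].sum()

# Stitching: map an alternative data set onto the same identifiers,
# which a well-defined data model reduces to a simple join.
alt_data = pd.DataFrame({
    "security_id": ["AAPL", "XOM"],
    "esg_score":   [71, 38],
})
blended = holdings.merge(alt_data, on="security_id", how="left")

print(by_sector)
print(blended)
```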

[28:43] – Mahesh Narayan
Fantastic. Thanks, Greg, for the deep dive into each of these buckets. Mitya, I’m going to come back to you as we return to the original view of how modern financial data platforms look, and this is the Arcesium platform. What I see are two Arcesium operational products highlighted here, one on the data source side and one on the output side. So what does that mean? What’s the benefit of that?

[29:13] – Mitya Miller
Yeah, indeed. The point being that a data platform without data, well, it’s not very useful, is it? So by design, our view is that the modern data platform for an investment manager has to make the output of operational systems fully integrated into the broader data landscape of the firm. This is something we focus on, and it’s an important element of what we are building. It needs the capability to integrate with operational platforms natively, at the flip of a switch, without engineering and development work, and, very importantly, bi-directionally: the operational system is a data source, but also a destination where data from the data platform can flow back.

In our case, we are fortunate that we run the same data models across all the platform modules we provide to our clients, so it’s natively integrated, but it’s not limited to just Arcesium’s stack. In general, a data platform needs to integrate natively with a number of data sources and operational platforms. Even when those platforms hide their data behind file-based interfaces or APIs that are more for business-user reporting than for native data integration, you nevertheless need to cross that barrier and integrate with them natively.

[30:42] – Mahesh Narayan
Got it. That makes total sense. So let’s talk about typical use cases that a platform like this could address.

[30:50] – Mitya Miller
Yeah. Given everything we’ve talked about so far, plus integration with operational products, these platforms can cover the entire spectrum of front-to-back use cases for investment management businesses: trade processing; aggregating and harmonizing data around the investment book of record; ESG and investor reporting at all levels; reporting to management; integration with external reporting; getting all the data inputs for NAV oversight across your admins and service providers such as custodians; but also the front office being able to access new data sets for investment research and cross-reference them against the same reference data, so that something you are researching does indeed exist in your tradable portfolio or universe, and back and forth. There is really no limit for an investment manager on what use cases you can power through such platforms and what you can accomplish.

[31:57] – Mahesh Narayan
All right, that makes perfect sense. One last bit before we go into some live production examples. We talk about this all the time with clients and prospects: the build-versus-buy decision. What’s your thinking on how clients should approach the decision-making here?

[32:23] – Mitya Miller
The classical build-versus-buy question really implies a tremendous amount of work here, so let’s go through it a little bit. There are two dimensions to this. Number one, a firm can pick up a bunch of modern data stack tools; you can imagine a diagram similar to this one, where a bunch of vendors are combined across all of these individual boxes. If you take the best-of-breed approach, you will have the technological challenge of making sure this all works together, is integrated, and provides a unified user experience. This is neither easy nor cheap; as someone who has tried to make this work for a decade plus, this is not easy. The second dimension is that whatever technical tools you pick, and they are very powerful, we live in a golden era of the modern data stack, so there are lots of very good tools and I don’t mean to be negative about them, for an investment management business you need to embed them with knowledge about your business, what we call domain awareness and others call subject-matter expertise.

That implies constructing data models and the data quality rules we talked about, building connectivity to your data sources, whether internal to the firm or external, and building workflows: how exception-based processing will work on the platform, how reporting workflows will work, how sign-offs will work, and so on. Because of these two dimensions, these projects typically take a long time, often multi-year, and have a higher risk of failure. What we hear from our partners and the broader industry as we talk about these data challenges is that every asset manager, and lots of private markets shops we talk to, is either in the process of building a data platform or considering starting one. The way I like to put it is: look, everyone is building a data platform, and you shouldn’t be doing this alone, because fundamentally the industry describes the terms and definitions of what we all work on in a similar way. There are enough synergies here that a modern data platform embedded with financial services and investment lifecycle knowledge really gives you the ability to bootstrap your process.

We also did some analysis on total cost of ownership, and in our view it favors the buy: going with a vendor that has thought through all these problems, solved them for a number of clients, and battle-tested the result. That covers not just the underlying cloud and infrastructure costs, but also your engineering cost, the data engineering and data science talent, which is the biggest portion of the cost when considering building something like this.

[35:33] – Mahesh Narayan
So buy, and build on top, using the integration mechanisms of the platform.

[35:38] – Mitya Miller
Yes, a data platform implies you can build. This isn’t your accounting system of an older era; you can build on top of these platforms.

[35:52] – Mahesh Narayan
Fascinating. Okay, so lastly, let’s spend some time on production client use cases. Mitya, there’s a bunch here, right? We have four examples covering banks, asset managers, hedge funds, and private equity firms. Where do you want to start?

[36:11] – Mitya Miller
So yes, we have been working with our customers and partners for a number of years, and these examples really do show the breadth of the use cases our platform can address. There is nevertheless a common element: it all runs on the unified data platform, on the same technology tools Greg spoke about. Hopefully this illustrates that data platforms, including the one Arcesium has built, can go both wide and deep.

The first example is a major global asset manager that really had a data harmonization issue: lots of accounting systems across multiple asset classes, with no unification of the data across them, and high volumes as well. We integrated 60-plus data sets across fund accounting, valuations, and investor reporting. We dealt with the high-volume instruments, helped them support new asset classes, and got them to higher data quality by applying data quality rules not at each source, which is where you get duplication of work, but on the unified and harmonized data, deduplicating the work and giving them the ability to run their governance and quality framework there.

One other important element of this use case is that, because of the high volume and high complexity, it isn’t a simple case of “I have constructed my Snowflake data warehouse and now I’m golden.” Nevertheless, for a given data source we were able to achieve real velocity: a quick four-to-eight-week build for the new things they needed to bring onto the platform. The second example is also an asset management one, where they needed to harmonize trade flows from the managers they allocate business to. They needed a data platform that consolidates the data from multiple OMSs and produces P&L, tax lot, and accounting reporting on top. The focus here was on governance and on scalability. By governance I mean consolidating the data while giving portfolio managers access to only what they’re supposed to see, and at the same time moving this to the cloud: it doesn’t matter where each OMS sits, on-prem or elsewhere, but consolidating the data into the cloud gives them scalability.

The third example is a large hedge fund that needed to upgrade and modernize their enterprise data management platform, and they did it with Arcesium. A bunch of things here are a common thread throughout data platforms: the ability to bring bi-temporality to the data, to make it highly flexible, to onboard new market data connectivity more easily via self-service, and to improve data quality. Again, the same common elements across many data management use cases are addressed in the data platform, simply because the tools are there to facilitate that.

And then lastly, a private equity, private markets example: this customer of ours had a data aggregation and harmonization problem. Multiple admins, as is typical for private markets, internal books of record, a deal flow system, a portfolio monitoring system: all of that data needed to be harmonized. Using the data platform’s ability to build domain-aware pipelines and business logic into the reports, we were able to give them look-through and asset-level IRR and performance reporting at a level that their admins and their investor partnership accounting system cannot deliver natively. That’s because we can ingest all of the data at the lowest level and build up various aggregated and drill-down reports using our data platform.

So those are the four examples that hopefully show just how broad the applicability of the technological concepts Greg covered is, and how, when constructed together thoughtfully, they really give our customers a powerful tool.

[40:38] – Mahesh Narayan
Very versatile: a broad set of use cases, a broad set of customer segments. And like you said, that’s because of the way these financial data platforms have been built. Perfect. Do we have time for questions? Maybe we can take a couple.

[41:03] – Angele Paris
We have a few minutes for questions.

[41:09] – Mahesh Narayan
I’m seeing one question and I’ll pose it to Greg here. Are you leveraging standard and open-source technologies as part of your stack?

[41:20] – Greg Muecke
Yeah, absolutely. Certainly, where there’s the opportunity to do so, we take a close look at open-source technologies, and in our case we do have some that are part of our implementation.

[41:35] – Mahesh Narayan
Got it. And then one question on the governance tools: can you share more about data governance? Mitya or Greg, either of you can speak to the data governance aspects. Greg touched on it briefly, but if there’s more to share.

[41:53] – Mitya Miller
Data governance brings a set, a portfolio, of tools you need to make sure that data is secured in terms of who can see what, and in accordance with the license agreements with the underlying data vendors or service providers. It also covers all the aspects of lineage, what happened with the data, and observability as to whether the data conforms to quality expectations, and so on. At Arcesium we view this as a set of technological tools that power all of this, but it needs to be crafted into a well-designed solution that allows customers to increase their data quality out of the box. Again, the whole point is time to market, so you don’t have to build this from scratch for every single use case.

[42:53] – Mahesh Narayan
Understood. Just one last question on the usage-based commercials that you mentioned, Mitya: are the commercials fully usage-based? I can pick that up, and please feel free to add. The data platform itself, you’re saying, is fully usage-based, and any operational platforms will have their own commercials as needed, right?

[43:23] – Mitya Miller
Yes. As I alluded to earlier, we are one of the platforms that uses a consumption-based model. It’s a great alignment tool between customer needs and Arcesium as a platform business. We provide a lot of transparency into how usage is structured and a lot of tooling to optimize it. At the same time, this is something the broader industry is still getting used to. It’s worth acknowledging that traditional buy-side vendors have structured their commercials differently, but this is nevertheless the future, at least for these types of tools.

[44:08] – Mahesh Narayan
Understood. There are a few more questions in the chat; we’ll get back to you directly. I think we’re right at time. This was great. Thank you, Mitya, and thank you, Greg, for covering so much. We could keep going, and I’m sure there’s a lot more we could show, demo, and speak to. For everybody watching the webinar live or on demand, feel free to reach out directly to either of us or to your Arcesium sales contact, or visit our website if you want more information. Thank you very much for joining.

[44:45] – Angele Paris
Thank you, everyone. That’s all the time we have today. Thank you to all the attendees, and to the panelists for their insight. If you have any additional questions, or your question was not addressed due to time, please go to the Arcesium website and follow up, and an email will be sent to you with a response. Thank you very much.

[45:13] – Mitya Miller
Thank you all.

[45:14] – Mahesh Narayan
Thank you, everyone. Bye.
