Data demands in the capital markets have increased across firms' ecosystems. From senior management, central treasury, and market surveillance to the teams supporting regulatory reporting, compliance, and central risk, organizations face an influx of data and the challenge of managing it all.

In the capital markets, any given data point regularly moves through a complex network of operational systems, extract-transform-load (ETL) processes, control and authorization checks, and reporting tools. This intricate flow often makes it difficult for banks to troubleshoot data quality, identify the origin of their data, and understand whether and how the data changed along the way. As a result, many struggle to identify, diagnose, and fix data exceptions.

Data lineage provides the foundation for effective data management, enabling businesses to make informed decisions and drive innovation. However, the complexity of legacy systems often obscures the thread of data provenance, leading to challenges in traceability and accountability. As data becomes increasingly valuable in the capital markets and banks contend with demands for greater transparency into their business, it's time to take a closer look at the importance of data lineage.

What is data lineage?  

Data lineage is the process that lets a user track, access, and visualize the metadata of information sources, the business logic behind how and why data is transformed, and the timing associated with a specific data product. In simple terms, data lineage tracks the flow of data from its origin, through what changed, to where it's going.

A data management system typically tracks information at three levels:

  • Data product or dataset level
  • Record or row level
  • Column or data element level

Data lineage is like a guide that maps out the intricate web of data sources, transformations, and relationships. An in-depth understanding of data lineage is especially important during data migrations, because lineage helps ensure a smooth transition as information moves to a new environment. By tracing the origins and evolution of their data sources, organizations maintain data integrity and uncover hidden insights and opportunities for optimization.
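To make those three levels concrete, here is a minimal, hypothetical sketch of how lineage metadata might be represented in code; the class and field names are illustrative assumptions, not the API of any particular product:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Hypothetical, simplified lineage records -- names are illustrative only.

@dataclass
class ColumnLineage:
    column: str                 # data element, e.g. "net_exposure"
    source_columns: List[str]   # upstream columns it was derived from
    transformation: str         # business logic applied, e.g. "position_value - hedge_value"

@dataclass
class RecordLineage:
    record_id: str              # row-level key, e.g. a trade or position id
    source_system: str          # operational system the record originated in
    loaded_at: datetime         # when the record entered the pipeline

@dataclass
class DatasetLineage:
    dataset: str                                         # data product, e.g. "daily_risk_report"
    inputs: List[str]                                     # upstream datasets it was built from
    etl_steps: List[str] = field(default_factory=list)    # ordered ETL/ELT processes applied
```

In practice, a lineage tool populates records like these automatically as data moves through pipelines, rather than relying on engineers to maintain them by hand.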

Use cases where data lineage is critical 

  • Regulatory reporting
    The complexity of preparing data for regulatory reporting often means that firms are stitching together information from multiple systems. A lineage capability lets teams trace the origin of a specific dataset, including the source system, timestamp, and each ETL process it passed through. As banks respond to new and ongoing regulations, such as the Basel III endgame, they must maintain accurate and complete financial transaction records. Data lineage provides a clear view of an institution's data flow, helping ensure financial data is accurate, complete, and auditable.
  • Optimizing processes
    Responding to customer requests or inquiries about reported figures requires a deep understanding of your data's journey. Historically, specialized teams deployed asset class-specific accounting systems or sub-ledgers to aggregate system-wide data. As data is aggregated across these systems, data lineage tools can streamline the labor-intensive work of ingesting data, harmonizing it, and responding to specific client questions about the data's history.

The challenges of data provenance in legacy systems 

Legacy systems — characterized by their age, fragmentation, and outdated architectures — present multiple challenges that prevent banks from understanding and preserving the provenance of their data. These challenges stem from several inherent shortcomings:

  • Insufficient information
    Legacy processing systems simply don't track the right level of data lineage. At best, banks maintain an internal audit trail of who changed a given transaction record. Even when a bank has the foresight to record this information, it is often trapped deep in the system's internal reports and hard to extract.
  • Aged systems
    Banks historically relied on internally built, server-based solutions to manage their data. Tracing data origins or mapping the logic behind a given field is often a matter of a software engineer debugging the source code of the business application. This labor-intensive practice is only repeatable for so many data points. As an added challenge, banks often don't capture the metadata embedded in that source code — the detail that would enable them to keep tracking a piece of information as it moves to upstream and downstream uses.
  • Ad-hoc reporting
    The banking industry runs on a tremendous amount of business logic that is applied on the fly as a given report is generated. Because these systems were not purpose-built to follow the flow of data, the resulting information is commonly not traceable. Banks often find they cannot replicate the logic and output of a report generated in their system a few months ago, never mind over a longer time horizon.

What can your firm do to improve data lineage?  

To preserve the accuracy and completeness of your data, start by evaluating your systems, prioritizing data cataloguing, and regularly auditing your data processes. The resulting documentation should include detailed metadata for each data element: its origin, any transformations applied, and its downstream uses.

Metadata provides information on the quality and reliability of any given dataset, ultimately saving time and effort. Tools that let users swiftly access relevant information on their own, without assistance from data teams, will also be critical as data volumes continue to grow. Modern tools track most of this metadata automatically, meaning a data engineer doesn't need to code it explicitly. What's more, self-service tools provide managers with lineage information they can interact with programmatically.
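As an illustration of that kind of self-service, programmatic access, here is a small, hypothetical sketch; the catalogue structure, element names, and values are assumptions for illustration, not output from any real system:

```python
# Hypothetical self-service lookup against a lineage catalogue.
# The catalogue structure, element names, and values are illustrative assumptions.

catalogue = {
    "regulatory_report.total_exposure": {
        "origin": "positions_db.position_value",
        "transformations": ["currency normalization", "aggregation by legal entity"],
        "downstream_uses": ["capital calculation", "daily risk dashboard"],
    }
}

def describe(element: str) -> None:
    """Print the origin, transformations, and downstream uses of a data element."""
    meta = catalogue.get(element)
    if meta is None:
        print(f"No lineage recorded for {element}")
        return
    print(f"{element} originates from {meta['origin']}")
    print("Transformations applied: " + ", ".join(meta["transformations"]))
    print("Used downstream in: " + ", ".join(meta["downstream_uses"]))

describe("regulatory_report.total_exposure")
```

A lookup like this lets a business user answer "where did this number come from?" without filing a request with the data team.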

How modern data platform architectures help  

Modern data platforms play a pivotal role in addressing the challenges banks face when evaluating their legacy systems. By embedding lineage capture into the construction of data pipelines, data products, and reports, businesses ensure a more transparent and traceable data journey.

Rather than re-engineer systems, assess how a modern approach to data ingestion and transformation can help you ingest and normalize data. Robust ETL/ELT capabilities will enable you to structure data for multiple use cases and maintain lineage as an auditable trail of what transpired and why.
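One common pattern, sketched below with assumed names and data, is for each transformation step to record a lineage entry as it runs, so the output dataset carries its own audit trail:

```python
from datetime import datetime, timezone

# Hypothetical pattern (illustrative names and data): each transformation step
# appends a lineage entry as it runs, so the output carries its own audit trail.

def run_step(name, business_logic, transform, rows, lineage):
    """Apply one transformation to `rows` and record what happened and when."""
    output = [transform(row) for row in rows]
    lineage.append({
        "step": name,
        "business_logic": business_logic,
        "input_rows": len(rows),
        "output_rows": len(output),
        "executed_at": datetime.now(timezone.utc).isoformat(),
    })
    return output

lineage = []
trades = [{"qty": 100, "price": 101.5}, {"qty": -50, "price": 99.0}]
valued = run_step(
    "value_trades",
    "notional = qty * price",
    lambda t: {**t, "notional": t["qty"] * t["price"]},
    trades,
    lineage,
)
# `lineage` now holds an ordered, auditable record of each ETL step applied.
```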

These are the exact use cases and industry demands we had in mind as we designed the capabilities of Arcesium's data platform, Aquata™. Purpose-built for the investments industry, Aquata is engineered to ingest, validate, harmonize, and distribute investment lifecycle data. We built the platform to elevate collaboration, control, and trust in data across all levels of your organization.

As demands for transparency grow in the capital markets, learn how technology and data management are influencing banks' responses to industry demands.

Author:  
Dmitry (Mitya) Miller 
Mitya is a Senior Vice President and General Manager of Aquata, Arcesium's comprehensive self-service data platform purpose-built for the investment management industry.

 
