
Datameer Blog

Riding the Waves of Financial Services Compliance on the Big Data Lake

By Jason Demby on July 29, 2015

In today’s financial services industry, the pendulum continues to swing further in the direction of lower risk and higher regulation, making it harder than ever to increase profit margins. Dodd-Frank has pushed banks out of higher-risk, higher-revenue lines of business, to the point that they are required to draft their own “living wills” (resolution plans). Meanwhile, competitive pressures are challenging banks to provide innovative experiences for their consumer clients, while compliance requires adherence to the latest alphabet soup of regulations coming down the pipeline: BCBS 239/RDARR, MiFID II, Volcker, and others.

Each new industry regulation and associated deadline creates a wave on the data lake. Consequently, IT must ensure continuous improvement of big data architectures to meet risk reporting, data aggregation, and data governance requirements.

Not surprisingly, compliance efforts and their supporting IT architectures and business processes are consuming the bulk of financial services firms’ annual operational budgets. Fortunately, the supporting big data architectures, such as Hadoop and its associated analytics tools, are also a major source of technology innovation. Leveraging the components of these architectures not only for compliance but also for customer and fraud analytics, for example, allows banks to create leverage and spending efficiency across their business lines.

As such, it is critical to select a future-proof big data architecture that provides the flexibility to meet constantly changing requirements and enables robust data integration, transformation, and governance for big data discovery.

Waves on the Horizon
Take, for example, all of the different major classifications of data (reference data, transactional data, operational data, security data, etc.). Each is managed by a different team and has widely different characteristics of size, shape, and frequency of change. Every new compliance requirement may demand that a new set of analytics and reporting be performed on specific subsets or supersets of this data. BCBS 239 alone asks that bank risk reports “include, but not be limited to, the following information: capital adequacy, regulatory capital, capital and liquidity ratio projections, credit risk, market risk, operational risk, liquidity risk, stress testing results, inter- and intra-risk concentrations, and funding positions and plans.” Needless to say, attempting compliance across such vastly different data sets and requirements has historically been an incredibly painful and manual process.

Choosing the Right Surfboard
These data complexities are no issue whatsoever in Datameer. Our architecture enables flexible data integration and analysis within your Hadoop data lake, while providing enterprise-level governance via a folder structure and permissions hierarchy that allows for teams to access the specific sets of data that they need to analyze.

More specifically, Datameer lets you ingest both structured and unstructured data through 70+ native data connectors (to servers, databases, web services, distributed file systems) and you can connect to custom data sources using the SDK. Specific date/time partitioning, scheduling, and retention policies can be applied. Do you only want to analyze options transactions from the last 6 months and import that data every week? No problem!
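The date/time partitioning and retention policy described above can be sketched in a few lines. This is a generic, language-agnostic illustration of the logic, not Datameer’s actual configuration syntax: the weekly cadence, the ~6-month window, and the `YYYY-MM-DD` key format are all illustrative assumptions.

```python
from datetime import datetime, timedelta

def partition_keys(days_back: int, window_days: int = 7):
    """Yield date-partition keys covering a retention window.

    Illustrative sketch only: key format and weekly cadence are
    assumptions, not Datameer's configuration syntax.
    """
    now = datetime.utcnow()
    cutoff = now - timedelta(days=days_back)
    current = cutoff
    while current <= now:
        yield current.strftime("%Y-%m-%d")
        current += timedelta(days=window_days)

# "Only options transactions from the last 6 months, imported weekly":
# one partition key per weekly load across a ~26-week window.
keys = list(partition_keys(days_back=182))
```

Each key would drive one scheduled import job, and partitions older than the cutoff would simply age out under the retention policy.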

Datameer’s data transformation architecture uses a familiar spreadsheet UI and supports multi-source, multi-view, and multi-step data pipelines. Analyses can easily be completed by one group and then passed on to other groups who rely on the data as a component of downstream analysis, all while maintaining a single golden source of the data. For example, credit risk metrics related to a specific set of products could be analyzed by one team in one workbook and then used downstream by a team responsible for consolidated credit risk metrics across the entire set of bank products. Once defined, this entire data analysis pipeline can be automated via job scheduling and workload management tailored to each specific data set. Complete data lineage can also be viewed within the tool or extracted via the REST API for easy reporting and auditing of the full pipeline of data ingestions, transformations, and calculations.
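For auditing, the lineage extracted over REST can be flattened into source-to-target edges that trace the full pipeline. The endpoint path and JSON shape below are hypothetical stand-ins, not Datameer’s documented API; consult the product documentation for the real resource names.

```python
import json

# Hypothetical endpoint path, for illustration only.
LINEAGE_ENDPOINT = "/rest/lineage/workbook/{workbook_id}"

def flatten_lineage(payload: str):
    """Return (source, target) edges from a lineage document.

    The {"steps": [{"input": ..., "output": ...}]} shape is an
    assumed example schema, not Datameer's actual response format.
    """
    doc = json.loads(payload)
    return [(step["input"], step["output"]) for step in doc["steps"]]

# Example response: an ingestion feeding one team's credit-risk
# workbook, which in turn feeds a consolidated downstream workbook.
sample = json.dumps({
    "steps": [
        {"input": "hdfs://trades/options", "output": "wb:credit_risk"},
        {"input": "wb:credit_risk", "output": "wb:consolidated_risk"},
    ]
})
edges = flatten_lineage(sample)
```

An auditor could walk these edges end to end to verify that every reported metric traces back to a governed source.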

Datameer’s architectural balance of flexibility and robust governance is the key to successfully riding the constant waves of financial services compliance requirements. Learn more about our future-proof big data architecture, our work in financial services, or sign up to attend a live demo today!



Jason Demby
