A US bank renowned for its commitment to excellence and unparalleled financial solutions.
THE CHALLENGE / REQUIREMENT
The client faced a unique challenge in their data warehousing and application modernization journey. They needed to migrate the ETL jobs and data stores of data processing applications across lines of business (LOBs), a task complicated by sparse documentation and intricate business logic spread across applications.
THE SOLUTION
Mphasis executed a detailed reverse engineering phase for each data processing rewrite project, covering reconciliation jobs, the CCM engine, the treatment engine, SAS jobs, and data lake and data warehouse (DW) jobs.
Despite the complex business logic and sparse documentation, we successfully rewrote in Spark, and thoroughly tested, a variety of data processing jobs originally developed in Ab Initio and Informatica.
We leveraged Apache Spark as the data processing framework and Java as the implementation language, deploying the Java Spark jobs on Amazon EMR on AWS as well as on Spark clusters running on Kubernetes in the client's private cloud.
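As a sketch of this deployment model, a rewritten Java Spark job could be submitted to a cluster with spark-submit; all jar, class, bucket, and path names below are hypothetical placeholders, not the client's actual artifacts:

```shell
# Submit a rewritten ETL job to a cluster (hypothetical names/paths).
# --master yarn targets an EMR cluster; a k8s://... master URL would
# target the Kubernetes-based private cloud deployment instead.
spark-submit \
  --class com.example.etl.ReconciliationJob \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.executor.memory=4g \
  s3://example-bucket/jobs/reconciliation-etl.jar \
  --input s3://example-bucket/raw/ \
  --output s3://example-bucket/curated/
```

Because the same jar runs unchanged under YARN on EMR or under Kubernetes, only the submission parameters differ between the AWS and private cloud environments.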
Technologies used: AWS Cloud | S3 | EMR | AKS | Kubernetes Private Cloud | Terraform | Jenkins

Through this project, we:
Future-proofed the data processing layers of multiple critical applications by migrating them to Spark on AWS and private cloud.
Achieved significant license cost savings by decommissioning Informatica and Ab Initio and implementing the ETLs in Java Spark on AWS Cloud. This helped the client reduce operational costs and reallocate resources to areas that drive business growth.
Reduced compute costs by implementing jobs as step jobs on serverless EMR, which optimized workflows and enhanced performance while cutting spend.
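On serverless EMR, each job can be started as an on-demand job run, so compute is billed only while the job executes. A hypothetical invocation via the AWS CLI (the application ID, IAM role, and S3 paths are placeholders):

```shell
# Start an on-demand Spark job run on EMR Serverless (placeholder values).
aws emr-serverless start-job-run \
  --application-id <emr-serverless-app-id> \
  --execution-role-arn arn:aws:iam::123456789012:role/EtlJobRole \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://example-bucket/jobs/reconciliation-etl.jar",
      "sparkSubmitParameters": "--class com.example.etl.ReconciliationJob"
    }
  }'
```

With this model there is no always-on cluster to maintain; capacity scales up for each run and back down when the job finishes.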
Lowered the maintenance and management overhead of data processing jobs across LOBs by implementing ETL jobs on a common data processing framework, streamlining workflows and reducing complexity so the client could focus on their core business.