The client’s legacy system was handling three billion transactions a day, with over 10 petabytes of stored data, plus 4,500 reports and dashboards for 27 business units. But the system was showing its limitations, and many of the 1,200+ technical users had to work around them.
The complex architecture had multiple failure points that required numerous manual interventions, limiting average system availability to 85%. Most data was held in older formats, with multiple copies, expensive license commitments and data security challenges. Slow, expensive and cumbersome technology resulted in low end-user satisfaction and slow business reporting, with next-day reporting at best.
Rearchitecting the system significantly reduced complexity: the Databricks Lakehouse solution, with a unified Delta Lake on AWS and data virtualization, enabled 9,000+ ingestion jobs to be consolidated into a single one, and data availability increased to 99%. There were significant performance gains, including a 10x increase in datamart processing performance and an 80% reduction in both new data ingestion time and production incidents.
There is now role-based access control, with private accounts for finance and HR. Costs have been cut through reduced data duplication, legacy license commitments have been eliminated, the burden of ongoing support has been lessened and development costs have been reduced.
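The consolidation of 9,000+ ingestion jobs into a single job is typically achieved with a metadata-driven pattern: one parameterised job reads a control table that describes every source, so onboarding a new source means adding a row rather than building a new job. The following is a minimal PySpark sketch of that pattern, assuming a Databricks runtime with Delta Lake; the control table ops.ingestion_control and its columns (source_format, source_path, target_table) are hypothetical names for illustration, not the client’s actual schema.

```python
# Minimal sketch of a metadata-driven ingestion loop.
# Assumes a Databricks/PySpark runtime with Delta Lake; the control table
# and its column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# One control table describes every source, replacing thousands of per-source jobs.
sources = spark.table("ops.ingestion_control").collect()

for src in sources:
    df = (
        spark.read
        .format(src["source_format"])   # e.g. "parquet", "csv", "json"
        .load(src["source_path"])
    )
    (
        df.write
        .format("delta")
        .mode("append")                 # incremental loads would use a merge/upsert instead
        .saveAsTable(src["target_table"])
    )
```

In practice each source would carry extra options (schemas, credentials, incremental watermarks), but the shape stays the same: the job is driven entirely by metadata rather than per-source code.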
End-to-end delivery, from analysis and design to delivery and support.
We’ve pioneered, designed and developed the data quality, logging and alerting frameworks, bringing lessons learned from previous architecture projects.
We’ve developed key architectural features, including a CI/CD pipeline, a Terraform-based framework deployment process, and the data quality, logging and alerting frameworks.
We’ve helped analyze the incoming data for PII, configured the ingestion job to handle data ingestion for 10,000+ tables, and validated the data for completeness (a minimal sketch of such a check appears after this list).
We’ve supported the migration and validation of 2,800+ jobs and 3,600+ datamart tables across 27 business units.
Our data integrity team has monitored the health of ingestion jobs and datamarts.
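For the completeness validation referenced above, a typical check compares row counts between a source table and its ingested or migrated copy, logs the result, and raises an alert on a mismatch. The sketch below is a minimal, hypothetical PySpark example of such a check rather than the actual framework; the table names and the logging-based alert are illustrative assumptions.

```python
# Minimal sketch of a row-count completeness check with basic logging/alerting.
# Assumes a Databricks/PySpark runtime; all table names are hypothetical.
import logging

from pyspark.sql import SparkSession

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("data_quality")

spark = SparkSession.builder.getOrCreate()


def check_completeness(source_table: str, target_table: str, tolerance: float = 0.0) -> bool:
    """Compare row counts between a source table and its ingested/migrated copy."""
    source_count = spark.table(source_table).count()
    target_count = spark.table(target_table).count()
    missing = source_count - target_count
    ok = missing <= source_count * tolerance
    log.info(
        "%s -> %s: source=%d target=%d missing=%d ok=%s",
        source_table, target_table, source_count, target_count, missing, ok,
    )
    if not ok:
        # Stand-in alert: a real framework would page on-call or open a ticket here.
        log.error("Completeness check failed for %s", target_table)
    return ok


# Illustrative usage for one migrated datamart table (names are examples only).
check_completeness("legacy.sales_transactions", "datamart.sales_transactions")
```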
Defining a vision for data use and how to get there.
Helping you make sense of your numbers and complex data.
Ingesting, moving and processing data in the most effective way.
Enabling you to tell meaningful stories through data exploration (including automation, dashboard design and graph modelling).
Providing an effective blueprint for your data environment, using technical design to support business strategy, covering design and deployment, and delivering modern approaches like Fabric.
We offer strategy, architecture and hands-on engineering of secure, scalable cloud landing zones and hosting platforms.
We unlock developer productivity with Infrastructure-as-Code templates and deployment pipelines with frictionless controls.
We implement preventative and detective controls for highly regulated enterprises, to ensure the security of their users, data and apps in the cloud.
We facilitate the adoption of cloud native architectures and modern infrastructure management practices for application ecosystems.
We work with customers to build agile operating models that transcend traditional silos, allowing them to extract the full value from their public cloud investments.