Delta Lake Migration & Auto-Scaling ETL Platform
Key Expertise
Experience: 12+ years
Timezone: CET (UTC +1)
Overview
End-to-end ownership of the data-exchange (DX) ETL platform on Databricks for a Tier-1 US telecom and media operator, supporting large-scale ingestion, transformation, and analytics workloads. The project encompassed migrating storage to Delta Lake for ACID guarantees, building auto-scaling compute infrastructure for volatile workloads, and automating operational tooling to cut manual overhead across the data engineering team.
Achievements
Improved query performance by approximately 3x via the Delta Lake migration, cut compute costs by ~25% through Lambda-driven auto-scaling, and eliminated significant manual overhead by automating CloudWatch dashboard and alarm provisioning across 10+ services.
Responsibilities
- Built and maintained 15+ production ETL pipelines on Spark/Databricks, processing large-scale daily ingestion and transformation workloads for downstream analytics consumers (a minimal pipeline sketch follows this list).
- Led the migration of pipeline storage to Delta Lake, introducing ACID compliance and time-travel semantics that boosted query performance roughly 3x and eliminated entire classes of correctness bugs (illustrated in the Delta sketch below).
- Designed auto-scaling infrastructure using AWS Lambda to dynamically provision Databricks resources based on workload signals, reducing compute spend by ~25% while sustaining throughput during peak loads (see the scaler sketch below).
- Developed Databricks notebooks that automated provisioning of AWS CloudWatch dashboards and alarms across 10+ services, replacing manual configuration and accelerating incident response (see the provisioning sketch below).
- Enhanced Concourse CI jobs and reporting pipelines, improving release cadence and giving the team continuous visibility into data quality and pipeline health metrics.
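A minimal sketch of what one such pipeline looks like. The S3 paths, column names, and schema are hypothetical placeholders, not values from the actual project:

```python
# Minimal batch ETL sketch: read raw CSV, clean, write partitioned Delta.
# All paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dx-etl-example").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .csv("s3://example-bucket/dx/raw/events/"))   # hypothetical source path

cleaned = (raw
           .withColumn("event_ts", F.to_timestamp("event_ts"))
           .withColumn("event_date", F.to_date("event_ts"))
           .dropDuplicates(["event_id"])             # hypothetical key column
           .filter(F.col("event_ts").isNotNull()))

(cleaned.write
 .format("delta")
 .mode("append")
 .partitionBy("event_date")
 .save("s3://example-bucket/dx/delta/events/"))      # hypothetical sink path
```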
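The two Delta features called out above, in-place conversion of existing Parquet data and time-travel reads, can be illustrated as follows; the table path and partition column are assumptions for the example:

```python
# Delta migration sketch: convert a Parquet directory to Delta in place,
# then read an earlier table version via time travel. Paths are placeholders.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# One-time, in-place conversion of an existing partitioned Parquet dataset.
DeltaTable.convertToDelta(
    spark,
    "parquet.`s3://example-bucket/dx/parquet/events/`",
    "event_date DATE",   # partition schema of the source data (assumed)
)

# Time travel: read the table as of an earlier version for audits or rollback.
v0 = (spark.read
      .format("delta")
      .option("versionAsOf", 0)
      .load("s3://example-bucket/dx/parquet/events/"))
```

Time-travel reads make audits and rollbacks a one-line operation, which is the mechanism behind the correctness-bug claim above.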
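A hedged sketch of the Lambda-driven scaler. It assumes the function receives a workload signal in its event payload and resizes an existing cluster through the Databricks Clusters REST API; the scaling policy, thresholds, and environment variables are illustrative, not the project's actual values:

```python
# Lambda handler sketch: map a workload signal to a worker count and resize
# a Databricks cluster. Policy and bounds below are illustrative assumptions.
import json
import os
import urllib.request

DATABRICKS_HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
DATABRICKS_TOKEN = os.environ["DATABRICKS_TOKEN"]  # would come from Secrets Manager in practice
CLUSTER_ID = os.environ["CLUSTER_ID"]

def handler(event, context):
    # Assume the triggering event carries a backlog/queue-depth signal.
    backlog = event.get("backlog", 0)

    # Illustrative policy: scale linearly with backlog within fixed bounds.
    num_workers = max(2, min(20, backlog // 100))

    req = urllib.request.Request(
        f"{DATABRICKS_HOST}/api/2.0/clusters/resize",
        data=json.dumps({"cluster_id": CLUSTER_ID,
                         "num_workers": num_workers}).encode(),
        headers={"Authorization": f"Bearer {DATABRICKS_TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return {"status": resp.status, "num_workers": num_workers}
```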
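The dashboard and alarm automation can be approximated with boto3 calls like the ones below; the service names, metric namespace, and thresholds are placeholders:

```python
# Provisioning sketch: one alarm and one dashboard per service via boto3.
# Service names, namespace, metric, and thresholds are placeholders.
import json
import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")
services = ["dx-ingest", "dx-transform"]  # hypothetical service names

for svc in services:
    # Alarm on a shared latency metric for each service (illustrative).
    cw.put_metric_alarm(
        AlarmName=f"{svc}-p99-latency",
        Namespace="DX/ETL",
        MetricName="LatencyP99",
        Dimensions=[{"Name": "Service", "Value": svc}],
        Statistic="Maximum",
        Period=300,
        EvaluationPeriods=3,
        Threshold=5000,
        ComparisonOperator="GreaterThanThreshold",
    )

    # Dashboard rendering the same metric for the same service.
    body = {"widgets": [{
        "type": "metric", "x": 0, "y": 0, "width": 12, "height": 6,
        "properties": {
            "metrics": [["DX/ETL", "LatencyP99", "Service", svc]],
            "region": "us-east-1",
            "title": f"{svc} p99 latency",
        },
    }]}
    cw.put_dashboard(DashboardName=f"{svc}-overview",
                     DashboardBody=json.dumps(body))
```

Running this kind of loop from a scheduled Databricks notebook is what replaced per-service manual console configuration.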
Technologies Used
Spark, Databricks, Delta Lake, AWS Lambda, AWS CloudWatch, Concourse CI
This project was delivered by
Vitalii P.
More Projects by Vitalii P.
Centralized Data Platform & Configuration-Driven Framework
Senior Big Data Engineer
Co-architected a configuration-driven framework that abstracts heterogeneous data sources (HDFS, S3, Kafka, and Iceberg) behind a single declarative interface for a next-generation centralized data platform. The framework standardizes how dozens of teams build, deploy, and operate Spark pipelines, replacing fragmented per-team implementations with a consistent foundation that enforces best practices and shortens time-to-production.
Spark Pipeline Migration from YARN to Kubernetes
Senior Big Data Engineer
Modernization of mission-critical content-moderation data infrastructure for one of the world’s largest technology companies, migrating legacy Spark-on-YARN pipelines to a cloud-native Spark-on-Kubernetes platform. The initiative enables elastic scaling, reduces operational overhead, and aligns the data stack with the broader enterprise shift toward containerized infrastructure across thousands of services.