Experience
8+ years
Timezone
CET (UTC +1)
Overview
The project involved building and maintaining an enterprise-scale data platform for a global apparel and footwear company. The platform processed shopping and transactional data to deliver curated datasets for analytics, reporting, and business decision-making. It combined Spark-based batch processing, lightweight Lambda workflows, Redshift analytical transformations, and unified orchestration. In later phases, the platform was migrated from AWS-based pipelines to Azure Databricks as part of the company’s cloud modernization strategy.
Achievements
- Improved AWS Glue job execution time by 40% through Spark performance optimization, transformation refactoring, and better resource utilization.
- Designed and implemented the target architecture for migrating data processing workloads from AWS to Azure Databricks.
- Delivered reliable shopping data pipelines within strict SLAs and created reusable PySpark and Pandas-based libraries that reduced repetitive development effort.
- Helped improve reporting consistency by exposing trusted curated datasets through Amazon Redshift.
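As an illustration of the reusable-library pattern mentioned above, a minimal Pandas-based helper might standardize the recurring "keep the latest record per business key" curation step. This is a sketch only; the function name, columns, and sample data are hypothetical, not taken from the actual internal library:

```python
import pandas as pd

def latest_per_key(df: pd.DataFrame, keys: list, order_col: str) -> pd.DataFrame:
    """Deduplicate a raw extract, keeping the most recent row per business key.

    A recurring curation step: sort by the ordering column (e.g. an
    updated-at timestamp), then keep the last row within each key group.
    """
    return (
        df.sort_values(order_col)
          .drop_duplicates(subset=keys, keep="last")
          .reset_index(drop=True)
    )

# Hypothetical shopping-transaction extract with a duplicated order_id.
raw = pd.DataFrame({
    "order_id": [1, 1, 2],
    "status": ["created", "shipped", "created"],
    "updated_at": ["2024-01-01", "2024-01-03", "2024-01-02"],
})
curated = latest_per_key(raw, keys=["order_id"], order_col="updated_at")
# order 1 keeps its latest status ("shipped"); order 2 keeps "created"
```

Packaging small helpers like this into an internal library is one way repetitive per-pipeline deduplication code gets eliminated.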
Responsibilities
- Designed and maintained production pipelines for shopping and transactional data, ensuring timely delivery of curated datasets within strict SLA requirements.
- Optimized AWS Glue Spark jobs, reducing execution time by 40% through Spark tuning, transformation refactoring, and improved resource usage.
- Designed and implemented the migration architecture for moving data processing workloads from AWS-based pipelines to Azure Databricks.
- Built and optimized PySpark jobs on AWS Glue and Databricks, improving pipeline throughput, reliability, and maintainability.
- Developed lightweight Pandas-based AWS Lambda jobs for small-scale processing workflows and operational automation.
- Created reusable PySpark and Pandas-based internal libraries to standardize recurring data engineering patterns.
- Implemented analytical transformations and stored procedures in Amazon Redshift to support reporting and downstream business use cases.
- Orchestrated workflows using AWS Step Functions and Databricks Workflows, and automated infrastructure provisioning with Terraform, Jenkins, and Azure DevOps.
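The lightweight Pandas-based Lambda workflows described above can be sketched as a small transform wrapped in a handler. The event schema, column names, and aggregation logic here are assumptions for illustration, and the S3 I/O a real deployment would use (via boto3) is stubbed out so the core transform stays locally testable:

```python
import io
import pandas as pd

def aggregate_daily_sales(csv_bytes: bytes) -> pd.DataFrame:
    """Core transform: roll raw transaction rows up to revenue per day.

    Kept free of AWS dependencies so it can be unit-tested locally.
    """
    df = pd.read_csv(io.BytesIO(csv_bytes), parse_dates=["order_date"])
    return (
        df.groupby(df["order_date"].dt.date)["amount"]
          .sum()
          .reset_index(name="revenue")
    )

def handler(event: dict, context=None) -> dict:
    """Hypothetical Lambda entry point for a small-scale processing workflow.

    In production the payload would be fetched from S3; here the raw CSV
    is taken straight from the event to keep the sketch self-contained.
    """
    result = aggregate_daily_sales(event["body"].encode())
    return {"rows": len(result), "total_revenue": float(result["revenue"].sum())}

sample = "order_date,amount\n2024-01-01,10.0\n2024-01-01,5.5\n2024-01-02,7.0\n"
out = handler({"body": sample})
# → {"rows": 2, "total_revenue": 22.5}
```

Separating the pure transform from the handler is what makes this pattern practical at Lambda scale: the transform can be reused across jobs and tested without any AWS infrastructure.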
Technologies Used
Python, PySpark, Pandas, AWS Glue, AWS Lambda, Amazon Redshift, AWS Step Functions, Azure Databricks, Databricks Workflows, Terraform, Jenkins, Azure DevOps
This project was delivered by
Yaroslav K
More Projects by Yaroslav K
Telecom BI Platform Migration to Hadoop
Big Data Engineer
The project involved migrating a legacy Oracle-based BI platform to a unified Hadoop-based solution for a major telecom company. The system supported ingestion and processing of TAP/RAP files containing telecom charging and tax data, while maintaining compatibility with existing Oracle ETL pipelines. The key challenge was improving scalability and processing efficiency while ensuring a smooth transition to a distributed data processing architecture.
Identity Verification Data Platform Modernization
Senior Big Data Engineer
The project involved modernizing a large-scale data processing platform used for identity validation, fraud detection, and analytical reporting. The system ingested data from external service providers and transformed it into reliable metrics for BI dashboards. A key part of the initiative was migrating the platform from Delta Lake to Apache Iceberg while preserving performance, stability, and cost efficiency. To reduce migration risk, a temporary dual-stack architecture was introduced, allowing Delta and Iceberg pipelines to run in parallel during the transition.