Experience
8+ years
Timezone
CET (UTC +1)
Skills
AI / ML
Languages
Databases
Infrastructure
Frameworks
Integrations & Protocols
Overview
The project involved migrating a legacy Oracle-based BI platform to a unified Hadoop-based solution for a major telecom company. The system ingested and processed TAP/RAP files containing telecom charging and tax data while remaining compatible with existing Oracle ETL pipelines. The key challenge was to improve scalability and processing efficiency while ensuring a smooth transition to a distributed data processing architecture.
Achievements
Contributed to the replacement of a legacy Oracle BI platform with a Hadoop-based solution built around Apache Spark. Delivered end-to-end batch processing pipelines for telecom data and helped improve the scalability and performance of business-critical reporting workflows.
Responsibilities
- Contributed to replacing legacy Oracle BI processing with a unified Hadoop-based solution built on Apache Spark.
- Implemented ingestion pipelines for TAP/RAP telecom charging and tax files using Spark batch processing.
- Ensured interoperability with existing Oracle ETL pipelines to support a smooth migration path.
- Delivered end-to-end data processing pipelines for business-critical telecom reporting.
- Optimized pipelines to improve processing efficiency and scalability of reporting workloads.
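The record-level logic behind such an ingestion stage can be sketched in miniature. This is a minimal, pure-Python illustration only: real TAP/RAP files are ASN.1-encoded, so assume records have already been decoded into pipe-delimited rows, and the field layout here is hypothetical (in the actual project this aggregation ran as a Spark batch job).

```python
from dataclasses import dataclass
from decimal import Decimal


@dataclass
class ChargeRecord:
    record_type: str  # "TAP" (accepted usage) or "RAP" (rejects/returns)
    imsi: str
    charge: Decimal
    tax: Decimal


def parse_row(line: str) -> ChargeRecord:
    """Parse one pre-decoded, pipe-delimited charging row."""
    record_type, imsi, charge, tax = line.strip().split("|")
    return ChargeRecord(record_type, imsi, Decimal(charge), Decimal(tax))


def total_charges(lines):
    """Aggregate total charge + tax per IMSI, skipping RAP records."""
    totals = {}
    for line in lines:
        rec = parse_row(line)
        if rec.record_type != "TAP":
            continue  # RAP rows carry rejected usage, handled in a separate flow
        totals[rec.imsi] = totals.get(rec.imsi, Decimal(0)) + rec.charge + rec.tax
    return totals
```

In the distributed version, `parse_row` becomes the map step and the per-IMSI sum becomes a `groupBy`/aggregate over the same fields; the sketch only shows the per-record semantics.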
Technologies Used
This project was delivered by
Yaroslav K
More Projects by Yaroslav K
Enterprise Retail Data Platform
Big Data Engineer
The project involved building and maintaining an enterprise-scale data platform for a global apparel and footwear company. The platform processed shopping and transactional data to deliver curated datasets for analytics, reporting, and business decision-making. It combined Spark-based batch processing, lightweight Lambda workflows, Redshift analytical transformations, and unified orchestration. In later phases, the platform was migrated from AWS-based pipelines to Azure Databricks as part of the company’s cloud modernization strategy.
Identity Verification Data Platform Modernization
Senior Big Data Engineer
The project involved modernizing a large-scale data processing platform used for identity validation, fraud detection, and analytical reporting. The system ingested data from external service providers and transformed it into reliable metrics for BI dashboards. A key part of the initiative was migrating the platform from Delta Lake to Apache Iceberg while preserving performance, stability, and cost efficiency. To reduce migration risk, a temporary dual-stack architecture was introduced, allowing Delta and Iceberg pipelines to run in parallel during the transition.
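A dual-stack migration like this typically hinges on automated parity checks between the two table formats before traffic is cut over. The sketch below is an assumption about how such a check could look, not the project's actual implementation: it assumes each pipeline exposes per-partition statistics (row count plus a content checksum), and all function and field names are hypothetical.

```python
def compare_partitions(delta_stats, iceberg_stats):
    """Compare per-partition (row_count, checksum) pairs from the two stacks.

    Both arguments map a partition key (e.g. an ingest date) to a
    (row_count, checksum) tuple. Returns the sorted list of partition
    keys whose stats diverge, including partitions present in only one
    stack, so the cutover can be held back until the pipelines agree.
    """
    mismatches = []
    for partition in sorted(set(delta_stats) | set(iceberg_stats)):
        if delta_stats.get(partition) != iceberg_stats.get(partition):
            mismatches.append(partition)
    return mismatches
```

An empty result means the Iceberg pipeline matches Delta for every partition and can be promoted; any mismatch keeps reads pinned to the Delta side while the divergence is investigated.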