Identity Verification Data Platform Modernization
Experience
8+ years
Timezone
CET (UTC +1)
Overview
The project involved modernizing a large-scale data processing platform used for identity validation, fraud detection, and analytical reporting. The system ingested data from external service providers and transformed it into reliable metrics for BI dashboards. A key part of the initiative was migrating the platform from Delta Lake to Apache Iceberg while preserving performance, stability, and cost efficiency. To reduce migration risk, a temporary dual-stack architecture was introduced, allowing Delta and Iceberg pipelines to run in parallel during the transition.
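The production dual-writer described below was implemented in Scala on Spark; the following is a format-agnostic Python sketch of the pattern only (function and sink names are hypothetical). The key design point it illustrates: during the parallel-run phase, a failure in one sink is recorded rather than raised, so the legacy Delta path and the new Iceberg path cannot block each other.

```python
from typing import Callable, Dict, List

Row = Dict[str, object]
Writer = Callable[[List[Row]], None]  # e.g. a Delta or Iceberg table append


def dual_write(batch: List[Row], writers: Dict[str, Writer]) -> Dict[str, bool]:
    """Write the same batch to every configured sink, tracking per-sink success.

    Errors are caught and reported per sink instead of raised, so one
    format's outage does not interrupt writes to the other.
    """
    status: Dict[str, bool] = {}
    for name, writer in writers.items():
        try:
            writer(batch)
            status[name] = True
        except Exception:
            status[name] = False
    return status
```

Usage with in-memory stand-in sinks: `dual_write(batch, {"delta": delta_buffer.extend, "iceberg": iceberg_buffer.extend})` returns `{"delta": True, "iceberg": True}` on success, and the returned status map can drive alerting on partial failures.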
Achievements
Led the migration from Delta Lake to Apache Iceberg while maintaining schema compatibility, partitioning consistency, and metadata reliability. Improved effective cluster resource utilization by 60% while keeping the solution cost-efficient. Tuned Spark execution and Iceberg configurations to achieve comparable or better performance than the legacy implementation.
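The kind of Spark and Iceberg tuning mentioned above typically centers on catalog wiring, adaptive execution, and table write properties. The settings below use real Spark and Iceberg property names, but the catalog name, warehouse path, and chosen values are illustrative assumptions, not the project's actual configuration:

```python
# Hypothetical Spark session settings for an Iceberg catalog backed by
# AWS Glue and S3. Property keys are standard Spark/Iceberg options;
# the catalog name "lake" and the bucket are placeholders.
ICEBERG_SPARK_CONF = {
    "spark.sql.extensions":
        "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions",
    "spark.sql.catalog.lake": "org.apache.iceberg.spark.SparkCatalog",
    "spark.sql.catalog.lake.catalog-impl": "org.apache.iceberg.aws.glue.GlueCatalog",
    "spark.sql.catalog.lake.io-impl": "org.apache.iceberg.aws.s3.S3FileIO",
    "spark.sql.catalog.lake.warehouse": "s3://example-bucket/warehouse",
    # Let AQE coalesce shuffle partitions instead of hand-tuning counts.
    "spark.sql.adaptive.enabled": "true",
}

# Table-level Iceberg properties that commonly drive write performance.
ICEBERG_TABLE_PROPS = {
    "write.target-file-size-bytes": str(512 * 1024 * 1024),  # ~512 MB data files
    "write.distribution-mode": "hash",  # cluster rows by partition before writing
    "commit.retry.num-retries": "10",   # tolerate concurrent-commit conflicts
}
```

Larger target file sizes and a hash write-distribution mode reduce small-file proliferation, which is one of the usual levers for matching or beating a tuned Delta baseline.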
Responsibilities
- Designed and implemented PySpark-based pipelines for generating aggregated identity verification metrics for BI dashboards.
- Built initial prototype pipelines using Athena and EventBridge before migrating the logic to production-grade Spark jobs.
- Implemented a Scala-based dual-writer architecture to support parallel Delta Lake and Apache Iceberg writes during the migration phase.
- Led the Delta Lake to Apache Iceberg migration, ensuring schema compatibility, partition strategy alignment, and metadata consistency.
- Tuned Spark configurations and execution logic to improve Iceberg pipeline efficiency.
- Integrated pipelines into Airflow DAGs and managed infrastructure changes using Terraform.
- Worked across mixed Python and Scala codebases, including maintenance and extension of legacy Scala modules.
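A dual-stack migration only reduces risk if the two pipelines provably produce the same numbers. A minimal parity check over per-metric aggregates might look like the sketch below; the function name and metric shape are hypothetical, and in practice each input dict would be populated from Spark queries against the Delta and Iceberg tables respectively:

```python
import math
from typing import Dict, List, Tuple


def parity_report(
    delta_metrics: Dict[str, float],
    iceberg_metrics: Dict[str, float],
    rel_tol: float = 1e-9,
) -> Tuple[bool, List[Tuple[str, str]]]:
    """Compare aggregates from the Delta and Iceberg pipelines.

    Returns (all_match, mismatches), where each mismatch is a
    (metric_name, reason) pair: "missing" if a metric exists on only
    one side, "value" if the numbers diverge beyond the tolerance.
    """
    mismatches: List[Tuple[str, str]] = []
    for key in sorted(set(delta_metrics) | set(iceberg_metrics)):
        if key not in delta_metrics or key not in iceberg_metrics:
            mismatches.append((key, "missing"))
        elif not math.isclose(delta_metrics[key], iceberg_metrics[key],
                              rel_tol=rel_tol):
            mismatches.append((key, "value"))
    return (not mismatches, mismatches)
```

Run nightly against both sides of the dual-writer, a report like this gives an objective cutover criterion: the Delta pipeline is only decommissioned once the mismatch list stays empty over a full reporting cycle.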
This project was delivered by
Yaroslav K
More Projects by Yaroslav K
Telecom BI Platform Migration to Hadoop
Big Data Engineer
The project involved migrating a legacy Oracle-based BI platform to a unified Hadoop-based solution for a major telecom company. The system supported ingestion and processing of TAP/RAP files containing telecom charging and tax data, while maintaining compatibility with existing Oracle ETL pipelines. The key challenge was improving scalability and processing efficiency while ensuring a smooth transition to a distributed data processing architecture.
Enterprise Retail Data Platform
Big Data Engineer
The project involved building and maintaining an enterprise-scale data platform for a global apparel and footwear company. The platform processed shopping and transactional data to deliver curated datasets for analytics, reporting, and business decision-making. It combined Spark-based batch processing, lightweight Lambda workflows, Redshift analytical transformations, and unified orchestration. In later phases, the platform was migrated from AWS-based pipelines to Azure Databricks as part of the company’s cloud modernization strategy.