Lead Data Engineer required to join my Client's Technology Team and contribute to their Group Architecture Board in building the next generation of their eCommerce data platform. You will play a pivotal role in designing and maintaining a modern, cloud-based data architecture. Working closely with product, technology, and wider business stakeholders, you will develop scalable data pipelines, improve reliability, and enable smarter, data-driven insights and decisions.
You will have a direct voice in how we design, build, and evolve our platform and the technologies we adopt!
We are looking for a pragmatic, innovative, and self-motivated problem solver who delivers results through collaboration and technical excellence. Your key responsibilities will include:
- Data Architecture Specialist: Design and develop robust data models, ETL/ELT processes, and data marts that support complex analytics and operational use cases.
- Pipeline Optimisation: Maintain and optimise data pipelines and warehouse environments to ensure top-tier performance, reliability, and scalability.
- Cross-Functional Collaboration: Partner with product, technology, and wider business teams to translate business and technical requirements into high-quality, robust data solutions.
- Data Quality Ownership: Perform deep data analysis to validate assumptions, identify issues, and own data quality, consistency, and accuracy end-to-end.
- Best Practices Champion: Promote best practices for data engineering, governance, and platform reliability within a modern cloud-based architecture.
You will need skills and experience in the following areas:
- Cloud Data Architecture: Strong knowledge of modern, cloud-based data architecture and tooling (e.g. S3, Glue, Redshift, Athena, Lake Formation, Iceberg/Delta).
- AWS Platform Build: Demonstrable experience designing and building modern data platforms in AWS.
- ETL/Orchestration Expertise: Expertise in ETL/ELT design and data orchestration, specifically with Apache Airflow.
- SQL Mastery: Strong SQL skills with significant experience in query tuning and performance optimisation.
- Programming Proficiency: Proficient in Python and Bash (for data processing, scripting, and automation).
- Data Streaming: Familiarity with streaming data technologies (e.g. Kinesis, Kafka, or similar).
- DevOps Principles: Experience with version control (Git) and CI/CD for data pipelines.
- BI/Visualisation: Familiarity with BI and visualisation tools such as Tableau, QuickSight, or similar.
- Data Governance Tools: Experience with data cataloguing, metadata management, and data lineage tools.
- Contextual Understanding: A passion for data quality, governance, and the business context behind the data.
This role is London-based and operates on a hybrid working model, requiring 2 to 3 days per week in the office. The compensation package offers a salary of up to £100k plus a performance bonus and a comprehensive benefits package.