Permanent | Hybrid Working | South East England
We are partnering with a fast-paced, customer-focused organisation undertaking a significant data transformation programme. With major investment in a modern lakehouse platform and access to rich, high-volume datasets, this is an opportunity to help reshape how data is stored, analysed and leveraged across the business.
As a Senior Platform Data Engineer, you will provide technical leadership within the Data Engineering team, working closely with Data Scientists, Machine Learning Engineers and business stakeholders to deliver scalable, high-value data solutions.
Key Responsibilities
- Design and build robust, scalable data pipelines to serve analytics and data science communities
- Provide hands-on expertise across modern data engineering technologies (e.g. Databricks, Spark, Python, SQL, Scala)
- Collaborate with data scientists and ML engineers to develop and deploy machine learning models addressing key business challenges
- Coach and mentor engineers (including contractors), raising development standards and engineering maturity
- Partner with Business Analysts to translate requirements into measurable business outcomes
- Oversee code quality and project deliverables across releases
- Establish and maintain documentation and data catalogues for delivered data products
- Contribute to platform evolution and continuous improvement of data engineering practices
Core Skills & Experience
Technical Capability
- Significant experience designing and delivering data solutions on cloud-based, distributed big data platforms
- Strong hands-on software engineering experience in Python, with knowledge of modern development practices (TDD, CI/CD, automated deployment pipelines)
- Experience with automated testing of data transformation pipelines (e.g. pytest, dbt unit tests)
- Strong SQL skills, including performance tuning and debugging
- Experience in data warehouse optimisation (schema evolution, indexing, partitioning)
- Infrastructure as Code experience (e.g. Terraform or CloudFormation)
- Strong knowledge of distributed data processing frameworks such as Apache Spark (or alternatives like Flink, Hadoop or Beam)
- Familiarity with lakehouse architecture principles and modern data platforms
- Experience implementing data quality, lineage and observability frameworks
- Understanding of data governance, security and privacy principles, including handling of sensitive data within regulated environments
Engineering & Collaboration
- Comfortable working independently while aligning with overarching data strategy
- Strong communication skills across technical and non-technical audiences
- Structured, analytical problem-solver with a focus on outcomes and innovation
- Passionate about modern data technologies and continuous learning
Desirable Experience
- Experience building real-time or event-driven data pipelines in commercial production environments (e.g. Kafka, Spark Streaming, Beam)
- Understanding of unbounded data processing and streaming design patterns
- Familiarity with machine learning workflows and model lifecycle considerations
- Experience supporting analytical visualisation tools such as Tableau or Power BI
- Experience working within airline, ecommerce, retail or similarly data-rich industries
- Experience within AWS cloud environments
- Experience building transformation frameworks using dbt