Apache Airflow Jobs
Trending Apache Airflow jobs
Global IT Data Engineer Senior Specialist
Boston Consulting Group
London
Remote or hybrid
Senior
Private salary
RECENTLY POSTED

Who We Are

Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation-inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact.

To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures—and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.

What You’ll Do

Design, build, test, and maintain data pipelines; manage the data platform; develop integrations from diverse data sources, including on-premises systems and external APIs; and support and troubleshoot production processes. Contribute to scalable pipeline design, resolve data discrepancies, and ensure SLAs are met while continuously improving data models, code efficiency, and data quality. Adhere to best practices in data integrity, testing, security, and documentation, while continuously expanding technical expertise and staying current with evolving tools and platforms.

YOU’RE GOOD AT

Developing and maintaining medium-high complexity data pipelines and applications within large-scale data platforms.

Applying structured problem-solving skills to analyze data issues and identify root causes.

Managing multiple tasks and priorities in a fast-paced, Agile environment.

Communicating clearly with both technical and non-technical stakeholders.

Working collaboratively in a matrixed organization with diverse teams and varying technical expertise.

Demonstrating intellectual curiosity and a willingness to learn new technologies and methodologies.

Being a proactive team player with a positive attitude and strong ownership mindset.

What You’ll Bring

  • Bachelor’s Degree in Computer Science, Engineering, or related field (or equivalent practical experience).
  • 4-6 years of relevant experience in Data Engineering.
  • Expertise in SQL, especially within cloud-based data warehouses such as Snowflake, and in building API-integrated data pipelines using Python.
  • Hands-on experience with AWS technologies such as Lambda, Glue, S3, and CloudFormation.
  • Familiarity with cloud data warehouse platforms such as Snowflake and with developing data pipelines in dbt (Data Build Tool); experience with other ETL/ELT tools, languages, and frameworks is a plus.
  • Working knowledge of version control tools, CI/CD processes, and deployments using tools such as GitHub, GitHub Actions, and Terraform.
  • Experience working with semi-structured and unstructured data formats such as JSON and XML.
  • Familiarity with Data orchestration tools such as Airflow.
  • Experience working in Agile/Scrum environments is preferred.
  • Experience within GenAI space is a plus.

Who You’ll Work With

You will be a member of the Data Hub Squad, a team focused on ingesting, transforming, streamlining, and exposing high-quality data to support the Marketing function in making data-driven decisions.

You will collaborate closely with the Chapter Lead, Product Owner, other data engineers, analysts, and other teams within the Marketing Portfolio.

Additional info

This position will involve daily collaboration with the Product Owner, Chapter Lead, and other engineers and analysts throughout the Agile process. The successful candidate will demonstrate:

  • Strong analytical abilities and creative problem solving
  • Ability to work independently with general direction and flexibility in a fast-paced environment
  • Good organization and excellent communication skills across cultures
  • Integrity and a positive attitude, especially while handling stressful situations
  • Ability to work with project stakeholders to understand business requirements and implement data solutions for diverse problems

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity / expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws.

BCG is an E-Verify Employer.

AWS Data Engineer
Bis Henderson
Multiple locations
Hybrid
Mid - Senior
£80,000
RECENTLY POSTED

Location: Leicestershire, hybrid

Salary: circa £70,000 - £80,000

Summary:

We are looking for a hands-on Data Engineer to lead the build of a modern AWS-based data platform, taking ownership from core infrastructure through to curated, business-ready datasets.

Key Responsibilities:

Design, build, and operate a scalable AWS data platform, including storage, compute, security, and monitoring

Develop robust, idempotent ETL pipelines across diverse data sources (APIs, databases, files, and event streams)

Implement medallion architecture (bronze, silver, gold) to transform raw data into high-quality, analytics-ready datasets

Establish infrastructure-as-code and CI/CD practices to ensure reproducibility and continuous improvement

Own data modelling, master data management, and aggregation layers to support reporting, analytics, and ML use cases
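The medallion (bronze/silver/gold) pattern named in the responsibilities above can be sketched in a few lines. This is a minimal, hypothetical Python illustration; the layer functions, field names, and sample records are invented for the example and are not part of the role:

```python
# Minimal medallion-architecture sketch: raw (bronze) records are cleaned
# into a silver layer, then aggregated into a gold, analytics-ready view.
# All function, field, and record names here are illustrative assumptions.
from collections import defaultdict

def to_silver(bronze_rows):
    """Clean raw rows: drop records missing required keys, normalise types."""
    silver = []
    for row in bronze_rows:
        if row.get("order_id") is None or row.get("amount") is None:
            continue  # a real pipeline would quarantine or log these
        silver.append({"order_id": str(row["order_id"]),
                       "region": (row.get("region") or "UNKNOWN").upper(),
                       "amount": float(row["amount"])})
    return silver

def to_gold(silver_rows):
    """Aggregate cleaned rows into a business-ready summary per region."""
    totals = defaultdict(float)
    for row in silver_rows:
        totals[row["region"]] += row["amount"]
    return dict(totals)

bronze = [
    {"order_id": 1, "region": "uk", "amount": "10.5"},
    {"order_id": None, "region": "uk", "amount": "3.0"},  # dropped in silver
    {"order_id": 2, "region": "de", "amount": 4.5},
]
gold = to_gold(to_silver(bronze))
print(gold)  # {'UK': 10.5, 'DE': 4.5}
```

The same shape scales up directly: in practice each layer would be a table in the lake or warehouse rather than an in-memory list, with idempotent jobs writing between them.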

Skills and Experience:

Proven experience building and running production data platforms on AWS end-to-end, ideally in a multi-site business environment

Strong proficiency in Python and SQL, with hands-on experience in Spark/PySpark and modern table formats (e.g. Delta Lake, Iceberg, Hudi)

Expertise in AWS data services (e.g. S3, Glue, Redshift, Lambda, Step Functions, Kinesis) and infrastructure-as-code (Terraform or CDK)

Experience with workflow orchestration tools such as Airflow or similar

Ability to design scalable architectures, make pragmatic trade-offs, and communicate effectively with both technical and non-technical stakeholders

Processing Your Data

Bis Henderson Recruitment is a leading provider of recruitment, interim management and consultancy services to the supply chain and logistics industry. Should you respond to this advertisement we may store your CV and contact details and will process this data for recruitment purposes only. Should we process your data, then we will always tell you that we are doing so.

Please visit our website to read our Privacy Policy in full, in this Policy you will find information about our compliance with the UK General Data Protection Regulations.

All applicants must have an unrestricted right to work in the UK as our client will not support visa sponsorship for this role.


Data Engineer
Lynx Recruitment Limited
London
Remote or hybrid
Junior - Mid
£85,000
RECENTLY POSTED

Lynx Recruitment is supporting a leading organisation within financial services searching for a Data Engineer to design and build scalable, cloud-first data platforms that drive innovation and data-led decision-making.

You will work across the full data lifecycle, engineering secure, high-performance pipelines and modern data platforms that enable analytics, AI, and enterprise insights.

Key Responsibilities

  • Build and optimise ETL/ELT pipelines
  • Design and manage data warehouses, lakes, and cloud-native platforms
  • Implement monitoring, governance, security, and regulatory compliance frameworks
  • Deliver high-quality datasets to analytics and business teams
  • Support AI and advanced analytics initiatives

Requirements

  • Degree in IT, Computer Science, Software Engineering, or related field (minimum 2:1 classification)
  • Strong SQL & Python
  • Hands-on ETL/ELT experience (e.g., Matillion, Talend, Fivetran, ADF)
  • Experience with Snowflake (AWS/Azure/GCP beneficial)
  • Strong data modelling knowledge
  • Understanding of data governance and security best practices
  • Ability to translate business needs into scalable solutions

Desirable
Airflow, dbt, Kafka, CI/CD, BI tools, governance platforms, financial services experience.

Interested in building modern, enterprise-scale data platforms in a highly regulated environment? Apply now!

Data Engineer (SC Cleared)
scrumconnect ltd
Multiple locations
Hybrid
Mid - Senior
£450/day - £475/day
RECENTLY POSTED

Apache Spark Python AWS Cloud Data Pipelines

A hands-on data engineering role within a large-scale cloud data programme, responsible for building, maintaining, and troubleshooting data pipelines using Apache Spark, PySpark, Apache Airflow, and a broad suite of AWS services. You will apply strong analytical and engineering skills to deliver trusted, well-governed data assets in a modern, cloud-native environment.

About Scrumconnect

Scrumconnect is a leading UK technology consultancy delivering digital transformation across public and private sectors, contributing to over 20% of the UK's major citizen-facing public services. We specialise in cloud engineering, data platforms, and agile delivery, helping clients build scalable, secure, and user-centred digital solutions that create real impact.

Active SC clearance is a mandatory, non-negotiable requirement. Candidates must hold current, in-date Security Check (SC) clearance at the time of application. Sponsorship is not available. Applications without active SC clearance will not be considered.

Working arrangement: This role is hybrid. Candidates must be willing and able to travel to the Newcastle office three days per week. Remaining days may be worked remotely from anywhere in the UK.

About the role

You will work as a Data Engineer on a complex, cloud-based data programme designing, building, and maintaining data pipelines that process large volumes of data across a modern AWS-native stack. Using Apache Spark and PySpark for distributed data processing, Apache Airflow for orchestration, and a range of AWS services for storage, compute, and analytics, you will help deliver reliable, well-governed data assets to downstream users.

You will apply strong data analysis skills to identify root causes of data issues, work with dimensional data models and slowly changing dimensions, and implement infrastructure as code using Terraform. Familiarity with DWP engineering best practices and the ability to translate customer expectations into applied technical functionality are key to success in this role.

Key responsibilities

Data pipeline development

Build and maintain scalable data pipelines using Apache Spark and PySpark, processing and transforming large datasets across distributed cloud infrastructure.

Workflow orchestration

Configure and manage Apache Airflow DAGs for task orchestration, ensuring reliable scheduling, monitoring, and execution of data processing workflows.

Root cause analysis

Perform data analysis to identify and resolve root causes of pipeline failures and data quality issues including reviewing EMR output logs and CloudWatch metrics.

Data modelling

Apply understanding of dimensional data models and slowly changing dimensions (SCD) to design and maintain well-structured, analytically trusted data assets.
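To make the SCD reference concrete, here is a minimal, hypothetical sketch of the Type 2 pattern in plain Python: when a tracked attribute changes, the current row is closed out and a new versioned row is opened, preserving history. The table layout, keys, and dates are illustrative assumptions, not part of the programme:

```python
# Minimal SCD Type 2 sketch: each dimension row carries validity dates and
# a current flag; a changed attribute closes the old version and opens a
# new one. Column names and sample values are illustrative assumptions.

def apply_scd2(dimension, updates, as_of):
    """dimension: list of dicts with keys key, value, valid_from, valid_to, current."""
    for upd in updates:
        live = next((r for r in dimension
                     if r["key"] == upd["key"] and r["current"]), None)
        if live and live["value"] == upd["value"]:
            continue  # unchanged: nothing to version
        if live:
            live["valid_to"] = as_of  # close the old version
            live["current"] = False
        dimension.append({"key": upd["key"], "value": upd["value"],
                          "valid_from": as_of, "valid_to": None,
                          "current": True})
    return dimension

dim = [{"key": "C1", "value": "London", "valid_from": "2023-01-01",
        "valid_to": None, "current": True}]
apply_scd2(dim, [{"key": "C1", "value": "Leeds"}], as_of="2024-06-01")
# dim now holds two rows: the closed London version and a current Leeds one
```

In a Spark/warehouse setting the same logic is usually expressed as a MERGE over the dimension table rather than an in-memory loop, but the versioning rules are identical.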

Infrastructure as code

Provision and manage cloud infrastructure using Terraform. Containerise solutions using Docker and manage deployments through GitLab CI/CD pipelines and release tagging.

Security & encryption

Apply understanding of both server-side and client-side encryption patterns within AWS. Work within IAM policies and data governance standards appropriate to a regulated government environment.

Technical skills required

Languages & analytics

  • Python: primary language for pipeline development and data processing
  • SQL: used for querying, transformation, and validation across data stores
  • PySpark: for distributed data processing using Apache Spark on AWS EMR
  • Familiarity with basic data structures for constructing robust, scalable solutions

Data processing & orchestration

  • Apache Spark: understanding of distributed data processing architecture and execution
  • Apache Airflow: configuring DAGs and managing task orchestration at scale
  • Jupyter Notebooks: for exploratory data analysis and pipeline prototyping
  • Understanding of dimensional data models and slowly changing dimensions (SCD Types 1, 2, 3)
  • Data analysis skills to identify the root cause of issues within pipelines and data assets

AWS services

  • Amazon EMR: running Spark workloads and reviewing output logs
  • Amazon Athena: ad hoc querying of data in S3
  • Amazon Textract and Comprehend: familiarity with AI/ML document extraction and NLP services
  • AWS S3, IAM, CloudWatch, EC2, ECR: core platform services used day-to-day
  • AWS console proficiency: navigating, configuring, and monitoring services
  • Understanding of server-side and client-side encryption within AWS

Infrastructure, DevOps & delivery

  • Terraform: Infrastructure as Code for provisioning and managing AWS environments
  • Docker: containerisation of data engineering solutions
  • GitLab: source code management, CI/CD pipeline configuration, release tagging, and component versioning
  • Familiarity with DWP engineering best practices
  • Ability to translate customer expectations into applied, functional technical solutions

Technology stack at a glance

Python, PySpark, SQL, Apache Spark, Apache Airflow, Jupyter Notebooks, dimensional modelling/SCD, AWS EMR, Amazon Athena, AWS S3, AWS IAM, AWS CloudWatch, AWS EC2/ECR, Amazon Textract, Amazon Comprehend, Terraform, Docker, GitLab CI/CD, GitLab Tags

Contract Senior Software Engineer (Java or Python)
SoCode Limited
London
In office
Senior
£650/day - £700/day
RECENTLY POSTED

Contract Senior Software Engineer (Java or Python), Inside IR35
Contract length: 6 months (with potential to extend)
Location: London
Working Environment: On-site

You will be joining a private equity firm as a senior software engineer, to work across the following responsibilities:

  • Develop and support P&L, accounting, and returns-calculation applications across trading books
  • Build and extend our Client Reporting Framework
  • Add instrument and asset class coverage in our Trade Repository system
  • Manage data exchange with third-party vendors via SFTP and AWS S3
  • Use AI coding assistants (Claude, Cursor, GitHub Copilot) to compress delivery timelines while maintaining full code ownership and quality accountability
  • Collaborate with quant researchers and traders to translate complex financial requirements into auditable, production-grade code
  • Provide production support, working with Platform and SRE teams as needed

Key Requirements:
Technical:

  • 8+ years commercial experience in Python or Java across the full development lifecycle
  • 5+ years in financial services (buy-side strongly preferred)
  • Solid relational database skills; MS SQL Server a strong plus
  • Snowflake experience required
  • Familiarity with AWS (S3, Lambda, EC2, Glue or similar)
  • Proficiency with Git, CI/CD pipelines, and observability tooling (e.g. Datadog)

AI & Tooling:

  • Proven, hands-on experience shipping production code using AI coding assistants such as Claude, Cursor, or GitHub Copilot
  • Demonstrable examples where AI tooling reduced delivery time by 2x or more on a meaningful task
  • Strong prompt engineering skills and the ability to critically evaluate AI-generated code for correctness, security, and financial accuracy

Desirable, but not essential:

  • Experience with Apache Airflow or similar workflow schedulers
  • REST or GraphQL API design experience
  • Knowledge of fixed income, FX, or derivatives products and associated P&L/risk methodologies

HVAC Engineer
The Search Consultant
Stourbridge
In office
Mid - Senior
£38,000 - £45,000
RECENTLY POSTED

Air Conditioning/HVAC Engineer
Offices in Hagley, with nationwide travel
Salary: £38K-£45K depending upon experience

We are seeking a skilled and experienced Air Conditioning/HVAC Engineer to join our existing team of Engineers, servicing commercial sites across the UK as part of our successful Facilities Management division. The role will suit an experienced Engineer with a strong background in air conditioning work. Salary offers will vary based on experience, ability to undertake planned servicing, reactive, and installation work, and the qualifications held by the successful applicant. Additional technical experience of working within a clean room environment, or multi-skilled qualifications, would be of huge interest.

Benefits:

  • Company vehicle and fuel card
  • Enrolment into company pension
  • Overtime paid
  • Travel paid door to door

Responsibilities:

  • Carry out planned preventative maintenance (PPM), service, and reactive maintenance works for commercial and retail clients
  • Install HVAC systems and ductwork for commercial and retail clients
  • Install or service controls including thermostats, actuators, and smart energy systems
  • Commission new equipment, complete system testing, and verify operational performance
  • Ensure all work complies with F-Gas Regulations, UK Building Regulations, and HSE guidelines
  • Follow all workplace safety rules and maintain safe working practices in live commercial environments
  • Complete service reports, RAMS (Risk Assessment & Method Statements), commissioning sheets, and asset logs
  • Work independently and as part of a team, demonstrating a proactive and problem-solving approach
  • Diagnose faults and carry out repairs on refrigeration circuits, air conditioning units, ventilation systems, pumps, valves, and controls
  • Maintain accurate records of work completed via portal CAFM systems, and produce F-Gas certification
  • Communicate findings and recommendations through our CAFM system; estimate remedial works and provide detailed information for quotes
  • Use diagnostic tools, gauges, multimeters, airflow meters, and BMS interfaces
  • Adjust, balance, and optimise HVAC systems
  • Provide out-of-hours emergency callout cover as part of the wider engineering team rota
  • Maintain accurate records for refrigerant handling and leak testing
  • Respond to emergency call-outs when required

Candidate Specification:

  • F-Gas qualification is essential
  • City & Guilds Level 2 or 3 in Air Conditioning & Refrigeration or equivalent
  • NVQ Level 2/3 in Mechanical Engineering, Heating & Ventilation, or Building Services
  • At least 3 years of demonstrable experience in commercial F-Gas work, preferably in a PPM/Facilities Management environment, is essential
  • Strong knowledge with the ability to troubleshoot issues effectively
  • A full, clean driving licence is essential for this position
  • IPAF & PASMA certification is advantageous
  • Computer/PDA literate, able to complete and submit electronic reports accurately and effectively

Data Engineer (Snowflake, DBT)
Harnham - Data & Analytics Recruitment
London
Hybrid
Mid - Senior
£500/day - £550/day
RECENTLY POSTED

DATA ENGINEER

6-MONTH CONTRACT

LONDON (1 day in office)

£500 - £550 per day (Outside IR35)

This position as a Snowflake Engineer offers the opportunity to join a leading travel company based near West London, currently undergoing a major cloud data transformation.

THE COMPANY

This established Pharma brand is known for its innovation and commitment to delivering seamless customer experiences through data. With a focus on modernising its data platforms, the company is investing in Snowflake and cloud-native tooling to better understand customer journeys, improve operations, and fuel growth. Contractors and perm hires we’ve placed here consistently highlight the collaborative culture, exciting technical challenges, and strong support from leadership.

THE ROLE

As a Data Engineer, you will play a crucial role in a data transformation program, focusing on optimising data pipelines and enabling cloud-driven insights. You will also be responsible for post-project documentation, ensuring clear communication with non-technical stakeholders.

Your key responsibilities will include:

  • Designing and implementing a Snowflake-based data warehouse, ensuring scalability and efficiency.
  • Developing and optimising ELT pipelines, leveraging Snowflake best practices for data ingestion and transformation.
  • Enhancing performance and query optimisation, ensuring cost-effective and high-performing workloads.
  • Working with dbt and Airflow to transform raw data into structured datasets for analytical consumption.
  • Creating clear documentation to communicate technical processes to non-technical teams.

KEY SKILLS AND REQUIREMENTS

To succeed in this role, you should have:

  • Strong commercial experience with Snowflake and its ecosystem.
  • Proficiency in SQL for data modelling and transformation within Snowflake.
  • Experience with dbt to develop scalable data pipelines.
  • Knowledge of ELT processes and best practices for cloud-based data architecture.
  • Hands-on experience with performance tuning and query optimisation in Snowflake.
  • Familiarity with cloud platforms (AWS/GCP/Azure) and their integration with Snowflake.

HOW TO APPLY

Please register your interest by sending your CV via the apply link on this page.

Senior Data Engineer
Harnham - Data & Analytics Recruitment
London
Fully remote
Senior
£100,000
RECENTLY POSTED

SENIOR DATA ENGINEER

£100,000

REMOTE (LONDON)

An opportunity to join a fast-growing, consumer-focused digital platform that is redefining how modern users interact with data-led products. This is a senior, high-impact role within a scaling data function, offering ownership, influence, and the chance to work on systems operating at significant scale.

THE COMPANY

A well-funded, UK-based digital entertainment business operating within a regulated environment. Founded in London and backed by recent investment, the company has grown quickly by focusing on strong product design, performance, and an engineering-first culture.

The business values scalable solutions over quick fixes, with a collaborative and technically ambitious environment. The role sits within a growing data function that plays a critical part in supporting continued customer growth and increasingly real-time analytics use cases.

THE ROLE

As a Senior Data Engineer, you will be responsible for building and scaling core data infrastructure, working closely with senior data and engineering stakeholders. You’ll be hands-on across multiple initiatives while helping shape platform direction and technical standards.

Key responsibilities include:

  • Designing, building, and maintaining scalable, production-grade data pipelines
  • Scaling the data platform to support increasing data volumes and user growth
  • Developing and owning robust data models used across the business
  • Helping build a full staging and testing environment that closely mirrors production
  • Improving reliability, monitoring, and observability across data systems
  • Supporting a move towards lower-latency analytics from an engineering perspective

This role offers genuine architectural input and is well suited to engineers who enjoy complex technical challenges in a scale-up setting.

YOUR SKILLS AND EXPERIENCE

Required:

  • 5+ years’ experience in data engineering or software engineering roles
  • Strong Python and SQL capability
  • Experience building data platforms in AWS
  • Excellent data modelling fundamentals (non-negotiable)
  • Experience working in smaller companies or growing environments
  • Engineering-first mindset, ideally with a backend or software engineering background

Desirable:

  • Experience with Snowflake
  • Orchestration tools such as Dagster or Airflow
  • dbt for data transformation and modelling
  • Apache Iceberg or modern table formats
  • Background in high-growth, consumer-facing products

SALARY AND BENEFITS

  • Base salary up to £100,000
  • Competitive benefits package
  • Fully remote working within the UK
  • Optional London office access for collaboration
  • Opportunity to work on a modern, evolving data stack

HOW TO APPLY

Please register your interest by sending your CV to Harry Lack via the Apply link. All applications will be handled confidentially.

Data Engineer
Liquidline
Ipswich
In office
Mid - Senior
£50,000 - £60,000
RECENTLY POSTED

Liquidline is the fastest-growing commercial coffee solutions provider in the UK and Ireland—not that we’re bragging! Our customers are companies that take pride in offering quality refreshments to their employees and clients. Our success is built on outstanding customer service, hard work, and a strong team culture. We believe in delivering WOW experiences to both our customers and our valued employees.

We are proud to be Great Place to Work certified, a testament to our dedication to fostering a culture of support, growth and development, as well as promoting well-being, and winning together. With our core company values—passion, thoughtfulness, responsiveness, innovation, and smart working—at the very heart of our business, we are committed to cultivating an environment that inspires excellence.

We’re looking for a Data Engineer to play a pivotal role in transforming Liquidline’s data landscape. You’ll take ownership of the technical foundations of our data platform – from ingestion and infrastructure to deployment – helping us move from legacy, on-premise systems to a modern, cloud-first data architecture. Working closely with the BI & Data Lead and an Analytics Engineer, you’ll be the primary architect of our new data platform, supporting our acquisition strategy and turning fragmented legacy data into a competitive, AI-ready asset. If you enjoy building things from the ground up, modernising complex systems, and shaping how data is used across a business, this role offers real ownership and influence.

The Role - Data Engineer

  • Designing a greenfield data platform – building and shaping our modern data stack (e.g. BigQuery, Airbyte), moving beyond standalone SQL Server environments.
  • Leading integrations – supporting our NetSuite migration and creating scalable data frameworks for future acquisitions.
  • Modernising legacy systems – refactoring complex, business-critical T-SQL into modular, high-performance cloud workflows.
  • Raising engineering standards – introducing Git, CI/CD pipelines, automated testing and best-practice engineering processes.
  • Preparing data for AI and ML – designing resilient architectures that ensure high-quality, accessible data for future AI initiatives.

What You Will Need In The Role Of Data Engineer

  • 3–5+ years’ experience in Data Engineering, ideally involving cloud warehouse builds or migrations
  • Expert-level SQL/T-SQL with proven experience refactoring and optimising complex logic
  • Hands-on experience with cloud data warehouses (e.g. BigQuery, Snowflake), ingestion tools (Airflow, Airbyte), and analytics tooling such as dbt and Power BI
  • Strong working knowledge of Git/version control, CI/CD pipelines, and Python for automation
  • Solid understanding of star schemas and Kimball methodologies
  • Experience supporting advanced Power BI models (DAX and Power Query)
  • Exposure to ML/AI concepts (e.g. scikit-learn, TensorFlow) or an interest in preparing data for LLMs, RAG architectures or predictive models
  • Experience with Docker and/or Terraform (desirable)
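The star-schema/Kimball requirement above can be illustrated with a tiny, hypothetical example using Python's standard-library sqlite3 module: one fact table keyed to one dimension table, queried with the classic fact-join-dimension aggregation pattern. All table and column names are invented for the illustration:

```python
# Tiny star-schema sketch: a fact table of sales joined to a product
# dimension and rolled up by category. Names are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales  (product_key INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'coffee'), (2, 'machines');
    INSERT INTO fact_sales  VALUES (1, 5.0), (1, 7.5), (2, 900.0);
""")
rows = conn.execute("""
    SELECT d.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product d USING (product_key)
    GROUP BY d.category
    ORDER BY d.category
""").fetchall()
print(rows)  # [('coffee', 12.5), ('machines', 900.0)]
```

The same shape carries over directly to BigQuery or Snowflake: narrow fact tables holding measures and surrogate keys, wide dimension tables holding descriptive attributes.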

What You Will Learn & What Liquidline Can Offer You

Being a part of Liquidline is more than just a job – it’s a chance to grow, develop and thrive! We are deeply invested in the success of our team, and our comprehensive benefits package is designed to support and reward our employees. The package includes, but is not limited to:

  • 25 Days Annual Leave + Bank Holidays: Extra time off to rest and recharge.
  • Long Service Annual Leave Entitlement & Buy/Sell Scheme: More flexibility for your time off needs.
  • Candidate Referral Program: Help grow our team and earn rewards.
  • Company Bonus Scheme: Celebrate success with us.
  • Enhanced Sick Pay: Supporting you when you need it most.
  • Enhanced Parental Leave: Comprehensive support for your family journey.
  • Salary Sacrifice Pension Scheme: Save for your future with ease.
  • Life Assurance & Income Protection (UK Only): Peace of mind for you.
  • HSF Health Plan: Access affordable healthcare.
  • YuLife Wellbeing Platform: One-stop shop for wellbeing, rewards and support.
  • Employee Assistance Programme: Mental health support, virtual GP services and more.
  • Lunch on Liquidline & Bi-Annual Conferences: Enjoy lunch on Liquidline, and bi-annual company conferences.

Liquidline is a fast-growing, family owned business that has expanded from 92 to over 300 employees since 2020. With ambitious plans for the next five years, there’s never been a better time to join us! Our dynamic and innovative environment offers endless opportunities for personal and professional growth.

We are proud to be an Equal Opportunities Employer, treating everyone with fairness, respect and appreciation. At Liquidline, we embrace diversity and value the unique experiences and perspectives of every individual. Together, we are always Winning Together!

HVAC Design Engineer
TalentTech Recruitment
Wednesbury
In office
Junior - Mid
£35,000 - £45,000
RECENTLY POSTED

Commercial & Industrial HVAC - West Midlands Office Based

Wednesbury, Walsall, Dudley, Tipton, West Bromwich, Oldbury

£35,000 - £45,000 basic salary + Progression, Training + Benefits

  • Do you have experience in the design, estimation, and quotation of commercial HVAC, AHU, and ASHP?
  • Looking for an organisation that’s a little more relaxed?
  • Interested in an environment that focuses on training and progression opportunities with succession planning and longer-term career focus?

This may be the ideal opportunity for you. The client is looking for a mechanically biased design engineer with some proven experience in HVAC design, estimation, and calculations.

Your Role as a HVAC Design Engineer:

  • Based daily from the Wednesbury site (potential for hybrid further along the line).
  • The role is predominantly design focused.
    • Utilising AutoCAD to design functioning and costed HVAC systems.
    • Conduct calculations for airflow volumes, utilising floor plans.
  • As the role is fluid, you’ll be assisting with the estimation side too.
    • Formulating quotes, updating margins and spreadsheets, compiling basic Bills of Materials (BoMs).
  • Liaise with M&E contractors and consultants.
  • Working on 1 - 6 projects at a time; dealing with the full cycle from enquiry to accepted quotation.
  • Mon - Fri position, 08:30 - 16:30.

Ideal Background for the HVAC Design Engineer Position:

  • Design experience with AutoCAD is needed.
    • Prior knowledge of CADvent would be very beneficial.
  • Proven existing design and estimation experience with AHU, HVAC, and/or ASHP systems.
  • Have existing office-based design experience.
  • Able to assist with the estimation side of the role.
  • You’ll be professional and have excellent communication skills.
  • Strong working knowledge of Excel.
  • Able to commute daily to Wednesbury.
  • Full right to work in the UK as sponsorship cannot be provided.

The Company recruiting for the HVAC Design Engineer:

  • This renowned manufacturer is looking to grow and strengthen their projects design and estimation team.
  • Leader with over 100 years’ experience in the commercial and industrial HVAC space.
  • They can offer genuine employee career development and extensive on-going training.

The Package for an HVAC Design Engineer:

  • £35,000 - £45,000 depending on experience.
  • Training, support, and progression to project management.
  • Pension & benefits.
  • 25 days holiday plus bank holidays.

Please apply for this job online if you are interested and feel you fit the above criteria.

Dave is the main point of contact for the role.

Machine Learning Engineer
Sanderson
London
Fully remote
Mid - Senior
£700/day - £750/day
RECENTLY POSTED
+2

£700-750/day overall assignment rate to umbrella

Fully remote

6 month initial

Apply today to join a forward-thinking, tech-driven FTSE 100 organisation using data science and AI to enhance customer experience, optimise supply chains and drive sustainable growth. With 40% of sales from sustainable products, this is a company that combines scale, innovation and purpose.

As a Machine Learning Engineer, you’ll help maintain the stability and performance of core data and ML systems across Europe. This technical engineering role focuses on reliability, optimisation and critical fixes, ideal if you excel at investigating and debugging complex data flows and ML issues in live production environments.

We’re looking for individuals with:

  • Experience: Proven background as a Machine Learning Engineer.
  • Technical Skills: Strong in SQL and Python (Pandas, Scikit-learn, Jupyter, Matplotlib).
  • Data transformation & manipulation: experience with Airflow, DBT and Kubeflow
  • Cloud: Experience with GCP and Vertex AI (developing ML services).
  • Expertise: Solid understanding of computer science fundamentals and time-series forecasting.
  • Machine Learning: Strong grasp of ML and deep learning algorithms (e.g. Logistic Regression, Random Forest, XGBoost, BERT, LSTM, NLP, Transfer Learning).

Reasonable Adjustments:

Respect and equality are core values to us. We are proud of the diverse and inclusive community we have built, and we welcome applications from people of all backgrounds and perspectives. Our success is driven by our people, united by the spirit of partnership to deliver the best resourcing solutions for our clients.

If you need any help or adjustments during the recruitment process for any reason, please let us know when you apply or talk to the recruiters directly so we can support you.

Senior Data Engineer
Robert Half
London
Hybrid
Senior
£400/day - £500/day
RECENTLY POSTED

Robert Half have partnered with a London-based pharmaceutical manufacturing organisation who are looking to engage a Senior Data Engineer to play a key role in scaling and maturing their data function.

This is an initial 12-month contract, playing a key role in stabilising and improving a currently fragmented data architecture while introducing best-practice engineering and delivery standards. The role requires 1 day per week onsite in London.

Responsibilities:

  • Lead the design and implementation of a clear, structured data engineering framework
  • Build, enhance, and maintain data pipelines within Snowflake
  • Develop transformations and data models using DBT
  • Orchestrate workflows using Apache Airflow
  • Write clean, modular Python code for data engineering solutions
  • Improve pipeline reliability, performance, and observability
  • Introduce and embed CI/CD, testing, and deployment best practices (Azure DevOps)
  • Define and implement clear data domains and schemas
  • Address ongoing ETL failures and long-running jobs
  • Work closely with technical and non-technical stakeholders in an Agile delivery environment
  • Contribute to roadmap definition and future platform evolution (including potential Databricks adoption)
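The responsibilities above follow the familiar extract, load, transform, validate sequence that Airflow schedules as a DAG of tasks. As a minimal stdlib sketch of that dependency ordering (task names are hypothetical; in a real deployment each would be an Airflow operator):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline steps; each task maps to the set of tasks it depends on.
# Airflow models the same structure as operators wired into a DAG.
pipeline = {
    "extract_sources": set(),                  # pull from source systems / APIs
    "load_to_snowflake": {"extract_sources"},  # land raw data in the warehouse
    "dbt_transform": {"load_to_snowflake"},    # build models with dbt
    "data_quality_checks": {"dbt_transform"},  # validate before publishing
}

# static_order() yields tasks so that every dependency runs first.
order = list(TopologicalSorter(pipeline).static_order())
print(order)  # extract_sources runs first, data_quality_checks last
```

Because this graph is a simple chain, the order is fully determined; Airflow adds scheduling, retries, and observability on top of exactly this ordering guarantee.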

Skills:

  • Strong hands-on experience as a Data Engineer at Senior or Lead level
  • Extensive experience with Snowflake as a core data platform
  • Advanced SQL with strong business logic and modelling capability
  • Experience with DBT, Airflow, and Python
  • Experience building modular, scalable solutions within Snowflake
  • CI/CD pipeline experience, ideally using Azure DevOps
  • Exposure to Docker and modern software engineering practices
  • Strong understanding of software delivery lifecycles and engineering best practices
  • Background in software engineering (rather than purely analytics-focused roles)
  • Experience working in Agile / Scrum delivery environments
  • Databricks experience or exposure desirable

Contract:

  • Initial 12-month contract
  • London-based, 1 day per week onsite
  • Competitive day rate

Robert Half Ltd acts as an employment business for temporary positions and an employment agency for permanent positions. Robert Half is committed to diversity, equity and inclusion. Suitable candidates with equivalent qualifications and more or less experience can apply. Rates of pay and salary ranges are dependent upon your experience, qualifications and training. If you wish to apply, please read our Privacy Notice describing how we may process, disclose and store your personal data:

Senior Data Engineer
Harnham - Data & Analytics Recruitment
Multiple locations
Fully remote
Senior
£80,000 - £100,000
RECENTLY POSTED

£80,000 - £100,000

Remote (UK-Based)

An opportunity for a highly technical Senior Data Engineer looking to join a fast-growing online gambling and sports betting platform that focuses on its first-in-class mobile app.

THE COMPANY

This future-ready online gambling and sports betting platform offers customers the best mobile app experience in the industry.

THE ROLE

As a Senior Data Engineer, you will sit within a high calibre data engineering team, owning and evolving core ETL pipelines that underpin the entire data platform. The focus is on building reliable, well-modelled data assets that can scale with the business.

Specifically, you can expect to be involved in the following:

  • ETL pipelines for data ingestion and data modelling
  • Using AWS to store and process data, with Snowflake as the data warehouse
  • Use of Python and SQL
  • Data orchestration and use of Terraform

SKILLS AND EXPERIENCE

The successful Senior Data Engineer will have the following skills and experience:

  • Strong Python and SQL skills
  • AWS or GCP (NOT Azure)
  • Orchestration, ideally with Airflow or Dagster but open to others
  • Data warehousing - Snowflake, Redshift, BigQuery (NOT Databricks)
  • Terraform

BENEFITS

The successful Senior Data Engineer will receive the following benefits:

  • Salary between £80,000 - £100,000 - depending on experience

HOW TO APPLY

Please register your interest by sending your resume to Majid Latif via the Apply link on this page.

Founding Data Engineer
Edison Hill Search
London
Hybrid
Senior - Leader
£75,000 - £100,000
RECENTLY POSTED
+2

We're working with a high-growth Series A startup operating at the intersection of eCommerce and Fintech, building a product that is redefining how people shop online.

Their proposition removes friction from the buying experience, allowing customers to try before they buy without upfront payment, bringing a more natural, in-store experience into the home.

The business is scaling quickly, with strong commercial traction and increasing complexity behind the scenes.

And that complexity is now centred around data.

Why this role exists

Data sits at the heart of the business, spanning customer behaviour, payments, returns, and partner performance.

As the company has grown, the volume and importance of that data has outpaced the underlying infrastructure. Multiple sources, evolving definitions, and increasing reliance from across the business have created the need for a more robust, scalable foundation.

They are now looking to hire a Founding Data Engineer to take ownership of that foundation.

This is a pivotal hire: someone who can design, build, and define how data is structured, trusted, and used across the company.

The opportunity

This is not a role focused purely on pipelines or reporting.

You will own the data environment end-to-end, shaping the architecture, defining standards, and enabling the wider business to make better decisions through reliable, well-structured data.

You'll work closely with both technical and non-technical stakeholders, helping translate real-world business questions into clean, usable data models.

The company has also introduced an AI-assisted querying layer to make data accessible across the organisation. A key part of your role will be ensuring the outputs from that layer are accurate, well-defined, and trustworthy.

What they're looking for

They're interested in individuals who have taken ownership of data infrastructure in a production environment and are comfortable designing for scale.

Strong SQL and experience with modern data tooling (e.g. orchestration, warehousing, ETL/ELT) are expected.

Beyond that, the key differentiator is mindset.

They are looking for someone who:

  • Thinks beyond implementation and understands the commercial impact of data
  • Is naturally curious and engaged in how data is used across a business
  • Is comfortable working in an environment where not everything is defined
  • Takes ownership and is motivated by building things properly

Environment

You'll be joining a business at a stage where:

  • The product is established and scaling
  • The data challenges are real and increasingly complex
  • The foundations are still being defined

This offers a balance of ownership and stability: the opportunity to shape something meaningful, without the uncertainty of a true greenfield environment.

Tech (for context)

A modern, cloud-based data stack including a mix of structured and unstructured data sources, orchestration tooling, and distributed storage.

RDS Postgres, MongoDB, AWS Athena, Parquet, AWS Glue, Airflow, Python, Docker, S3, GraphQL, REST. You won’t know all of it, you’ll be strong in your core area and curious about the rest.

Nice to have: Python proficiency, CI/CD for data workflows, graph database experience (Neo4j), startup or early-stage background.

Depth in your core area is more important than experience across every tool.

Package

  • Competitive salary + equity
  • Hybrid working (London-based)
  • Strong exposure to leadership and decision-making
  • Opportunity to play a foundational role in a scaling business

Process

The process is designed to assess both technical capability and how you think about problems:

  1. Initial application + 3 short competency questions
  2. Introductory conversation
  3. Technical discussion
  4. In-person working session based on a real-world scenario
  5. Offer

Why this process matters

The role requires more than technical delivery. The team is specifically looking for individuals who show curiosity, initiative, and a genuine interest in how data drives business decisions, not just how it is built.

Interested?

If you're looking for a role where you can build, own, and genuinely influence, this is worth a conversation.

Apply or get in touch for a confidential discussion.

EHS Partners Limited, Edison Hill Search & Edison Hill Scale are operating and advertising as an Employment Agency for permanent positions and as an Employment Business for interim / contract / temporary positions. EHS Partners Limited are an Equal Opportunities employer and we encourage applicants from all backgrounds. Please apply below at your earliest convenience.

Platform engineer
Beat My Salary
Reading
In office
Mid - Senior
Private salary
RECENTLY POSTED
+13

Location: Reading

No visa sponsorship

Eligibility: ILR/Citizen/Dependent/Settled

Domain: Telecom

Job summary:

  • Experience working in large-scale, mission-critical environments in the telecom domain.
  • Implement service mesh architectures using Istio for traffic management, security, and observability.
  • Lead incident response, root cause analysis, and continuous improvement activities.

Core application skills as a platform engineer:

  • OpenShift, Kubernetes, Prometheus, Grafana, RabbitMQ, Redis, MongoDB, PostgreSQL, NGINX and F5 load balancers, VMware vSphere, HashiCorp Vault, Keycloak, ArgoCD, Helm Charts, SonarQube, Snyk.
  • Implement object storage platforms using S3-compatible storage and MinIO.
  • Setting up Apache Airflow, Cisco NSO, Itential at platform level.
  • Develop, test and maintain backup and disaster recovery strategies.
  • DevOps, CI/CD development and DevSecOps tooling.

Foundational Skills:

  • Linux, Networking (TCP/IP, DNS, load balancing), Python, Ansible, Bash, GitLab, REST APIs, microservices architecture.

Lead Test Engineer (SC Cleared)
scrumconnect ltd
Newcastle upon Tyne
Fully remote
Senior
£55,000
RECENTLY POSTED
+8

About Scrumconnect

Scrumconnect Consulting is a UK-based digital transformation consultancy delivering agile, secure technology solutions for public and private sector clients. This is a fully remote role based in India, supporting our UK operations and client base. You will work closely with our UK leadership team and must be comfortable operating within UK regulatory and legal frameworks.

About the Role

We are looking for a Test Engineer to join a large-scale cloud data engineering programme, operating across a modern AWS-native technology stack including Apache Airflow, Amazon Athena, AWS Glue, S3, EMR, and DynamoDB.

You will own testing across automated pipelines, data workflows, and cloud infrastructure - identifying risks, championing test frameworks, and coaching colleagues in quality engineering practices.

This is a hands-on, technically deep role. You will write code, build and extend test frameworks, and take accountability for test quality across releases, while actively mentoring junior team members.

Key Responsibilities

Automated Testing

  • Design, implement, and maintain automated test suites across data pipelines, cloud services, and applications
  • Automate data and application testing tasks and build test coverage using existing or new infrastructure

Risk Identification & Reporting

  • Identify and raise awareness of risks arising from automated test results
  • Analyse and report on test activities, results, issues, and risks
  • Translate technical findings into clear, prioritised communication

Framework Development

  • Identify, evaluate, and implement new test frameworks
  • Enhance existing frameworks to improve testing confidence and coverage

Pipeline & Data Testing

  • Validate data pipelines using Apache Airflow, AWS Glue, Athena, and EMR
  • Ensure data integrity, transformation accuracy, and performance under load
  • Analyse data in multiple formats to validate new functionality
  • Perform production data analysis to identify root causes of issues
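The pipeline-testing duties above boil down to automated checks on transformed output: row counts, key completeness, and uniqueness. A minimal sketch of that kind of check in plain Python (rows modelled as dicts; the `order_id` key and function name are hypothetical, and real suites would run this against Athena or Glue output):

```python
# Minimal data-quality check of the kind run after a pipeline stage completes.
def validate_output(source_rows, target_rows, key="order_id"):
    """Return a list of human-readable issues; an empty list means pass."""
    issues = []
    if len(source_rows) != len(target_rows):
        issues.append(f"row count mismatch: {len(source_rows)} vs {len(target_rows)}")
    null_keys = sum(1 for r in target_rows if r.get(key) is None)
    if null_keys:
        issues.append(f"{null_keys} rows with null {key}")
    dupes = len(target_rows) - len({r.get(key) for r in target_rows})
    if dupes:
        issues.append(f"{dupes} duplicate {key} values")
    return issues

src = [{"order_id": 1}, {"order_id": 2}]
ok = validate_output(src, [{"order_id": 1}, {"order_id": 2}])       # clean output
bad = validate_output(src, [{"order_id": 1}, {"order_id": None},
                            {"order_id": 1}])                        # three defects
```

Wiring checks like these into a GitLab CI/CD stage is what gives the fast feedback loops the role describes.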

Mentoring & Coaching

  • Mentor and advise team members on testing best practices
  • Contribute to a quality-first engineering culture

CI/CD & DevOps

  • Integrate and manage automated tests within GitLab CI/CD pipelines
  • Ensure automated execution and fast feedback loops

Technology Stack

Apache Airflow, Amazon Athena, AWS S3, AWS Glue, AWS EMR, AWS EC2, AWS ECR, AWS DynamoDB, AWS CloudWatch, AWS IAM, Python, Java, SQL, Bash, GitLab CI/CD, Jupyter Notebooks, Apache Spark, Terraform, Docker

Key Skills

  • Strong proficiency in Python, Java, SQL, and scripting (e.g., Bash)
  • Experience with Apache Airflow for orchestration and log analysis
  • Hands-on experience with AWS services (S3, Glue, Athena, EMR, CloudWatch, IAM, DynamoDB, EC2, ECR)
  • Experience using GitLab and GitLab CI/CD pipelines
  • Ability to analyse data using Jupyter Notebooks and Amazon Athena
  • Understanding of Apache Spark and EMR (basic to intermediate)
  • Experience with infrastructure as code (Terraform)
  • Familiarity with Docker and containerization
  • Knowledge of server-side and client-side encryption
  • Understanding of dimensional data models and slowly changing dimensions
  • Experience in data creation/generation for testing
  • Strong analytical skills for validating data and identifying issues
  • Ability to understand how customer requirements translate into functional solutions

Skills & Experience Required

  • Proven experience as a Test Engineer or QA Engineer in cloud/data environments
  • Strong Python skills for automation, data validation, and test development
  • Experience testing data pipelines (data quality, transformation, performance)
  • Experience designing or extending automated test frameworks
  • Strong problem-solving and root cause analysis skills
  • Ability to clearly communicate risks to technical and non-technical stakeholders
  • Experience mentoring or coaching junior engineers
  • SFIA Level 4 capability - works autonomously and influences others
  • Active SC clearance (mandatory)

Diversity and Inclusion

At Scrumconnect Consulting, we believe that diversity drives innovation. We are committed to creating an inclusive environment where every individual is respected, valued, and supported. We welcome applications from candidates of all backgrounds and experiences.

Full Stack Software Engineer
83zero Ltd
London
Hybrid
Mid - Senior
£100,000 - £130,000
RECENTLY POSTED
+6

Full Stack Software Engineer (Data-Focused)

Location: London - Hybrid - 1-2 days per week

Salary: (Apply online only)k + Bonus

Type: Permanent

Sponsorship: Not Available

Role Overview:

We are seeking a Senior or Staff-level Full Stack Software Engineer with a strong background in data to join our Data Science & Engineering team. This is a greenfield initiative that seeks to consolidate several lines of business under a single modern architecture. In this role, you will empower the business with the technology and tools needed to leverage data throughout the organization.

This Includes:

  • Deploying open-source BI tools and platforms
  • Setting up streaming data collection and validation services
  • Implementing third-party analytic solutions
  • Assisting with data-centric migration tasks

Key Responsibilities:

  • Partner with Data Engineers to deploy and maintain internal data tools and platforms
  • Implement and maintain integrations with third-party data platforms and APIs
  • Work with DevOps and Platform Engineers to support:
    • Infrastructure as Code using Terraform
    • Containerisation and deployment via Kubernetes (AKS)
    • CI/CD automation via ArgoCD
    • Monitoring via OTLP and Sumo Logic
  • Contribute to architectural decisions, with a strong emphasis on data security
  • Assist migration and data engineering teams in moving legacy products to a new technology stack
  • Support documentation, testing, and ongoing maintenance of production systems

Required Qualifications:

  • 7+ years of experience in software engineering roles, with a track record of building solutions end-to-end
  • Experience deploying applications in cloud environments, preferably Microsoft Azure
  • Strong experience in REST API design and OpenAPI standards
  • Extensive experience working with databases, data platforms, and analytical tools
  • Ability to research, propose, and implement solutions with minimal supervision
  • Experience building internal-facing CRUD applications
  • Familiarity with front-end development and ability to make basic modifications to open-source UI components
  • Experience enabling data scientists and engineers with Python-based development workflows and tooling

Technology Stack:

  • Azure Cloud Services
  • C# (.NET 10)
  • Python
  • React.js and modern JavaScript (ES6+)
  • Snowflake (Data Warehouse)
  • Apache Airflow
  • Terraform
  • dbt
  • SQL Server

Data Engineer
Intelligent Steps
London
Remote or hybrid
Mid - Senior
£500/day - £600/day
+4

Overview

We are looking for a skilled Data Engineer with strong experience in Snowflake to join our growing data team. You will be responsible for designing, building, and maintaining scalable data pipelines and architectures that support analytics, reporting, and data-driven decision-making across the organization.

  • 6-month initial contract (outside IR35)
  • Remote with occasional travel into London (1 day per month)

Key Responsibilities

  • Design, develop, and maintain robust data pipelines and ETL/ELT processes
  • Build and optimize data models within Snowflake for performance and scalability
  • Ingest data from various sources (APIs, databases, streaming platforms, etc.)
  • Ensure data quality, integrity, and governance across systems
  • Collaborate with data analysts, scientists, and business stakeholders to deliver data solutions
  • Monitor and troubleshoot data workflows and pipeline performance
  • Implement best practices for data security, privacy, and compliance
  • Document data architecture, processes, and workflows

Required Skills & Experience

  • Proven experience as a Data Engineer or in a similar role
  • Strong hands-on experience with the Snowflake data platform
  • Proficiency in SQL and data modeling techniques
  • Experience with ETL/ELT tools (e.g., dbt, Apache Airflow, Talend, Informatica)
  • Experience with cloud platforms (AWS, Azure, or GCP)
  • Familiarity with data warehousing concepts and best practices
  • Understanding of data governance and data quality principles

Preferred Qualifications

  • Experience with modern data stack tools (e.g., dbt, Fivetran, Kafka)
  • Knowledge of CI/CD pipelines and DevOps practices
  • Experience working with large-scale or real-time data processing systems
  • Familiarity with BI tools (e.g., Power BI, Tableau, Looker)
  • Snowflake certification is a plus

Software Developer (SFIA 4) SC / Newcastle - CEA
Peregrine
Multiple locations
Hybrid
Mid - Senior
£59,000 (Negotiable)
+6

Software Developer | Permanent | Hybrid (willing to travel to Newcastle) | Python | Apache | Data | SC cleared

Hybrid work arrangement with office attendance up to 60%. Location is flexible: London, Leeds, Newcastle, Sheffield, Blackpool, Manchester and Birmingham. You must be willing to travel to Newcastle when required.

The Role:

We are looking for Software Developers with strong Python and Apache Spark data processing experience to lead the design, development, and operation of data-driven applications and pipelines in a collaborative Dev and DevOps environment. The role focuses on writing and improving application code while working closely with DevOps engineers to support automated deployments, infrastructure management, system monitoring, reliability, and scaling. The post holder collaborates with Product Owners, Business Analysts, and users to translate business needs into robust technical solutions, operates and improves production services, analyses data and system issues to identify root causes, and champions engineering best practices. They also provide technical leadership through coaching and mentoring junior colleagues and contributing to continuous improvement across development, delivery, and operational processes.

Responsibilities:

Engineers will contribute to research, development and delivery across:

  • Design, build, and operation of data ingest and publishing pipelines
  • Workflow orchestration and task scheduling using managed services
  • Collaboration with Product Owners, Business Analysts, and users to shape technical solutions
  • Production support, monitoring, and continuous improvement of system resilience, stability, and performance
  • Data analysis and investigation to identify root causes of defects and operational issues
  • DevOps collaboration, including supporting automated deployments, infrastructure management, monitoring, and scaling
  • Coaching and mentoring junior engineers and promoting engineering best practices

Skills & Experience:

  • Understanding of data processing using Apache Spark
  • Use of Python, SQL, and familiarity with PySpark
  • Experience using Apache Airflow for task orchestration
  • Understanding of EMR and reviewing output logs
  • Use of Jupyter notebooks and/or Amazon Athena to query and validate data
  • Data analysis to identify root cause of issues
  • Understanding of dimensional data models and slowly changing dimensions/historic data capture
  • Use of the AWS console and services such as, but not limited to: CloudWatch, IAM, S3, Glue, ECR, EC2, EMR, DynamoDB, LakeFormation
  • Familiarity with Amazon Textract and Comprehend
  • Understanding of both server-side and client-side encryption
  • Use of GitLab for source code management and CI/CD pipelines
  • Use of GitLab tags for component versioning in shared repositories
  • Understanding of Docker and containerization of solutions
  • IaC using Terraform
  • Understanding of how customer expectations translate into applied functionality
  • Familiarity with, and implementation of, engineering best practices
  • Use of GitLab for release tagging and deployments
  • Familiarity with basic data structures for constructing a solution
  • Active BPSS or SC clearance, or eligibility for clearance

Desirable skills:

  • Experience supporting AI or data-driven platforms
  • Knowledge of cyber security or fraud prevention domains
  • Experience working within government or critical national infrastructure environments

Find out more: or check out our LinkedIn page
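Several roles on this page ask for an understanding of slowly changing dimensions and historic data capture. A minimal Type 2 sketch in plain Python (field names are hypothetical; real pipelines do the same bookkeeping in SQL or Spark): on a change, the current dimension row is closed out and a new current row is opened, preserving history.

```python
from datetime import date

# Hypothetical Type 2 slowly-changing-dimension upsert over in-memory rows.
def scd2_apply(dim, key, new_attrs, today):
    """Close the current row for `key` if its attributes changed, then open a new one."""
    current = next((r for r in dim if r["key"] == key and r["is_current"]), None)
    if current and current["attrs"] == new_attrs:
        return dim  # no change: history stays as-is
    if current:
        current["is_current"] = False   # close out the old version
        current["valid_to"] = today
    dim.append({"key": key, "attrs": new_attrs,
                "valid_from": today, "valid_to": None, "is_current": True})
    return dim

# A customer moves city: the old row is preserved, a new current row is added.
dim = [{"key": "C1", "attrs": {"city": "Leeds"},
        "valid_from": date(2023, 1, 1), "valid_to": None, "is_current": True}]
scd2_apply(dim, "C1", {"city": "London"}, date(2024, 6, 1))
```

Queries "as of" any date then filter on the `valid_from`/`valid_to` window, which is exactly the historic-capture behaviour these adverts describe.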

Junior Data Engineer
Hays Specialist Recruitment Ltd
London
Hybrid
Junior
£310/day
+3

Junior Data Engineer - Public Sector

Contract: Initial 7 months (extension possible)
Rate: £310 per day, Inside IR35
Location: Remote with travel to Waterloo (2-3 days per month)
Security Clearance: SC-eligible (5 years UK residency required)

I am working with a key consultancy delivering a major UK public sector programme, looking for a Junior Data Engineer / Scientist to join a mixed delivery team building and operating secure, reliable data platforms that support critical public services. This role is designed for someone early in their data career who wants to develop strong engineering fundamentals in a real production environment.

The role: a junior, generalist data engineering position

This is an engineering-led role, not a specialist or senior position. The team is ideally looking for a generalist in their first few years within data engineering or data science, who is building breadth across data platforms, pipelines and operations. You'll focus on:

  • Designing, building and maintaining data pipelines
  • Supporting the operation of data lakes and data warehouses
  • Implementing and improving ETL / ELT processes
  • Using Python and SQL to transform, validate and move data
  • Working with analysts and developers to turn data requirements into technical solutions
  • Monitoring data quality, documenting data models and lineage, and resolving issues
  • Automating data workflows and operational tasks
  • Participating in Agile delivery, sprint work and collaboration
  • Supporting incidents and helping improve platform reliability over time
  • Working within public sector data governance, security and privacy standards

This role offers exposure to how data platforms are built, operated and supported in a regulated environment, forming the foundations of a long-term data engineering career.

What this role is not

To avoid misalignment, it's important to be clear about what this role is not focused on:

  • ❌ Not a Data Analyst role
  • ❌ Not a Power BI / dashboard developer role
  • ❌ Not an insight, reporting or MI position
  • ❌ Not a modelling, ML or research-focused role
  • ❌ Not an LLM, AI or advanced data science role

While you may work alongside analysts and data scientists, this role does not centre on:

  • Building dashboards
  • Producing insights or reports
  • Statistical modelling
  • Predictive or machine learning solutions

The emphasis is on data engineering foundations and platform delivery.

Ideal candidate profile

This role is best suited to someone who:

  • Is in their first few years of a data engineering or data science career
  • Wants to build core engineering skills rather than specialise immediately
  • Has hands-on experience with SQL and Python
  • Understands basic data modelling and ETL concepts
  • Is comfortable learning through delivery in a production environment
  • Is interested in how data platforms work end-to-end, including operations and support
  • Is keen to grow within public sector data platforms

Who this role is unlikely to suit

This role is unlikely to be appropriate for candidates who:

  • Are very senior data engineers or architects
  • Have primarily worked in advanced ML, AI, or research-focused roles
  • Are specialised Power BI, reporting or MI developers
  • Are looking for a role centred on analysis, insights or modelling
  • Are seeking leadership, ownership of platform strategy, or advanced optimisation work

Applications that demonstrate significant seniority or deep specialisation rather than junior-to-mid generalist experience may not be progressed.

Required skills and experience

Your CV should clearly demonstrate:

  • A degree in a technical discipline (Computer Science, Data Science, Mathematics or similar)
  • Hands-on experience with SQL
  • Experience using Python, Java or Bash
  • Understanding of ETL processes and data modelling fundamentals
  • Experience with version control (e.g. Git)
  • Comfort working in Agile / DevOps environments
  • Awareness of data security and privacy
  • Eligibility for UK SC clearance

Nice to have (but not essential)

  • Exposure to AWS, Azure or GCP
  • Familiarity with tools such as Airflow, dbt, Spark
  • Awareness of CI/CD pipelines or containerisation
  • Experience in public sector or regulated environments

Important note for applicants

This role is deliberately positioned as a junior, generalist data engineering role. Please ensure your CV clearly demonstrates hands-on data engineering fundamentals, rather than senior leadership, advanced AI/ML work, or analytics-only experience.

Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this job you accept the T&C's, Privacy Policy and Disclaimers which can be found at hays.co.uk

Data Engineer (Spark/ Kubernetes) (Financial Services)
Hays Technology
London
Remote or hybrid
Mid - Senior
£625/day - £745/day
+1

Your new company

Working for a renowned financial services organisation.

Your new role

We are seeking a Data Engineer to support the replacement of a legacy ETL tool with a modern Apache Spark based data platform. This is a hands-on engineering role focused on building and supporting performant Spark jobs, with a strong emphasis on performance optimisation, reliability, and scalability. Working in containerised environments using Kubernetes is a key element, as is experience across Python, Scala, and Java. The role sits within a small Agile delivery team of four engineers (two onshore and two in Shenzhen), working closely with a Senior Data Engineer. You will be responsible for development work, sprint delivery, demos, documentation, and stakeholder engagement. This position suits a mid to senior level engineer with strong Spark development experience rather than design, infrastructure, or management responsibilities.

What you'll need to succeed

  • Strong hands-on experience with Apache Spark: writing and tuning Spark jobs / PySpark development experience
  • Experience with Airflow and SQL
  • Strong experience working with containerised environments using Kubernetes
  • Experience programming in Python or Scala
  • Experience with an Ops way of working, not pure development only: you know how to deploy solutions
  • Experience with OpenShift would be highly desirable
  • Experience working in an Agile way (Scrum, sprints, demos)
  • Financial services or professional services background required

What you'll get in return

Flexible working options available.

What you need to do now

If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now. Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this job you accept the T&C's, Privacy Policy and Disclaimers which can be found at (url removed)
