Who We Are
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation: inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact.
To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures—and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.
What You’ll Do
Design, build, test, and maintain data pipelines; manage the data platform; develop integrations from diverse data sources, including on-premises systems and external APIs; and support and troubleshoot production processes. Contribute to scalable pipeline design, resolve data discrepancies, and ensure SLAs are met while continuously improving data models, code efficiency, and data quality. Adhere to best practices in data integrity, testing, security, and documentation, and stay current with evolving tools and platforms.
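For context, a minimal sketch of the kind of external-API ingestion described above, in Python; the endpoint, field names, and landing path are hypothetical placeholders rather than BCG's actual stack:

```python
# A minimal external-API ingestion sketch; endpoint, fields, and output
# path are hypothetical placeholders, not the actual sources.
import json

import requests

response = requests.get(
    "https://api.example.com/v1/campaigns",  # hypothetical source API
    params={"updated_since": "2024-01-01"},
    timeout=30,
)
response.raise_for_status()
records = response.json()

# Light validation before landing the data: drop records missing a key field.
clean = [r for r in records if r.get("campaign_id") is not None]

with open("landing/campaigns.json", "w") as f:
    json.dump(clean, f)
print(f"landed {len(clean)} of {len(records)} records")
```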
You're Good At
Developing and maintaining medium-high complexity data pipelines and applications within large-scale data platforms.
Applying structured problem-solving skills to analyze data issues and identify root causes.
Managing multiple tasks and priorities in a fast-paced, Agile environment.
Communicating clearly with both technical and non-technical stakeholders.
Working collaboratively in a matrixed organization with diverse teams and varying technical expertise.
Demonstrating intellectual curiosity and a willingness to learn new technologies and methodologies.
Being a proactive team player with a positive attitude and strong ownership mindset.
What You’ll Bring
Who You’ll Work With
You will be a member of the Data Hub Squad, a team focused on ingesting, transforming, streamlining, and exposing high-quality data to support the Marketing function in making data-driven decisions.
You will collaborate closely with the Chapter Lead, Product Owner, other data engineers, analysts, and other teams within the Marketing Portfolio.
Additional info
This position will involve daily collaboration with the Product Owner, Chapter Lead, and other engineers and analysts throughout the Agile process.
Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity / expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws.
BCG is an E-Verify Employer.
Location: Leicestershire, hybrid
Salary: circa £70,000 - £80,000
Summary:
We are looking for a hands-on Data Engineer to lead the build of a modern AWS-based data platform, taking ownership from core infrastructure through to curated, business-ready datasets.
Key Responsibilities:
Design, build, and operate a scalable AWS data platform, including storage, compute, security, and monitoring
Develop robust, idempotent ETL pipelines across diverse data sources (APIs, databases, files, and event streams)
Implement medallion architecture (bronze, silver, gold) to transform raw data into high-quality, analytics-ready datasets (see the sketch after this list)
Establish infrastructure-as-code and CI/CD practices to ensure reproducibility and continuous improvement
Own data modelling, master data management, and aggregation layers to support reporting, analytics, and ML use cases
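For illustration, a minimal, idempotent bronze-to-silver sketch in PySpark; the paths, schema, and run_date parameter are assumptions for the example, not this client's actual platform:

```python
# A minimal, idempotent bronze->silver sketch in PySpark. Paths, schema, and
# the run_date parameter are hypothetical assumptions, not the client's stack.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_sketch").getOrCreate()
run_date = "2024-01-01"  # in practice, injected by the orchestrator

# Bronze: land the raw source data as-is for the run date.
raw = spark.read.json(f"s3://lake/landing/orders/{run_date}/")
raw.write.mode("overwrite").parquet(f"s3://lake/bronze/orders/run_date={run_date}/")

# Silver: deduplicate, enforce types, and filter out unusable rows.
# Overwriting the same dated partition makes reruns idempotent.
bronze = spark.read.parquet(f"s3://lake/bronze/orders/run_date={run_date}/")
silver = (
    bronze.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("order_id").isNotNull())
)
silver.write.mode("overwrite").parquet(f"s3://lake/silver/orders/run_date={run_date}/")
```

Overwriting per-date partitions rather than appending is one common way to make reruns safe, which is what keeps SLAs recoverable after a failed run.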
Skills and Experience:
Proven experience building and running production data platforms on AWS end-to-end, ideally in a multi-site business environment
Strong proficiency in Python and SQL, with hands-on experience in Spark/PySpark and modern table formats (e.g. Delta Lake, Iceberg, Hudi)
Expertise in AWS data services (e.g. S3, Glue, Redshift, Lambda, Step Functions, Kinesis) and infrastructure-as-code (Terraform or CDK)
Experience with workflow orchestration tools such as Airflow or similar
Ability to design scalable architectures, make pragmatic trade-offs, and communicate effectively with both technical and non-technical stakeholders
Processing Your Data
Bis Henderson Recruitment is a leading provider of recruitment, interim management and consultancy services to the supply chain and logistics industry. Should you respond to this advertisement we may store your CV and contact details and will process this data for recruitment purposes only. Should we process your data, then we will always tell you that we are doing so.
Please visit our website to read our Privacy Policy in full; in this Policy you will find information about our compliance with the UK General Data Protection Regulation.
All applicants must have an unrestricted right to work in the UK as our client will not support visa sponsorship for this role.
Lynx Recruitment is supporting a leading organisation within financial services searching for a Data Engineer to design and build scalable, cloud-first data platforms that drive innovation and data-led decision-making.
You will work across the full data lifecycle, engineering secure, high-performance pipelines and modern data platforms that enable analytics, AI, and enterprise insights.
Key Responsibilities
Requirements
Desirable
Airflow, dbt, Kafka, CI/CD, BI tools, governance platforms, financial services experience.
Interested in building modern, enterprise-scale data platforms in a highly regulated environment? Apply now!
Apache Spark | Python | AWS | Cloud Data Pipelines
A hands-on data engineering role within a large-scale cloud data programme, responsible for building, maintaining, and troubleshooting data pipelines using Apache Spark, PySpark, Apache Airflow, and a broad suite of AWS services. You will apply strong analytical and engineering skills to deliver trusted, well-governed data assets in a modern, cloud-native environment.
About Scrumconnect
Scrumconnect is a leading UK technology consultancy delivering digital transformation across public and private sectors, contributing to over 20% of the UK's major citizen-facing public services. We specialise in cloud engineering, data platforms, and agile delivery, helping clients build scalable, secure, and user-centred digital solutions that create real impact.
Active SC clearance is a mandatory, non-negotiable requirement. Candidates must hold current, in-date Security Check (SC) clearance at the time of application. Sponsorship is not available. Applications without active SC clearance will not be considered.
Working arrangement: This role is hybrid. Candidates must be willing and able to travel to the Newcastle office three days per week. Remaining days may be worked remotely from anywhere in the UK.
About the role
You will work as a Data Engineer on a complex, cloud-based data programme designing, building, and maintaining data pipelines that process large volumes of data across a modern AWS-native stack. Using Apache Spark and PySpark for distributed data processing, Apache Airflow for orchestration, and a range of AWS services for storage, compute, and analytics, you will help deliver reliable, well-governed data assets to downstream users.
You will apply strong data analysis skills to identify root causes of data issues, work with dimensional data models and slowly changing dimensions, and implement infrastructure as code using Terraform. Familiarity with DWP engineering best practices and the ability to translate customer expectations into applied technical functionality are key to success in this role.
Key responsibilities
Data pipeline development
Build and maintain scalable data pipelines using Apache Spark and PySpark, processing and transforming large datasets across distributed cloud infrastructure.
Workflow orchestration
Configure and manage Apache Airflow DAGs for task orchestration, ensuring reliable scheduling, monitoring, and execution of data processing workflows.
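For illustration, a minimal Airflow DAG sketch; it assumes Airflow 2.4+ and uses a hypothetical dag_id with placeholder callables:

```python
# A minimal Airflow DAG sketch, assuming Airflow 2.4+; the dag_id and
# callables are hypothetical placeholders, not the programme's real workflows.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder for the real extract step.
    print("pull from source systems")


def transform():
    # Placeholder for the real transform/load step.
    print("clean, validate, and load")


with DAG(
    dag_id="marketing_ingest",        # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+ spelling of the schedule arg
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task    # extract must succeed before transform runs
```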
Root cause analysis
Perform data analysis to identify and resolve root causes of pipeline failures and data quality issues, including reviewing EMR output logs and CloudWatch metrics.
Data modelling
Apply understanding of dimensional data models and slowly changing dimensions (SCD) to design and maintain well-structured, analytically trusted data assets.
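For illustration, a minimal SCD Type 2 sketch in PySpark; the table layout (customer_id, email, is_current, start_date, end_date) is a hypothetical example, and brand-new keys are omitted for brevity:

```python
# A minimal SCD Type 2 sketch: expire changed current rows, append new
# versions. Paths and columns are hypothetical; new keys omitted for brevity.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2_sketch").getOrCreate()

dim = spark.read.parquet("s3://lake/dim_customer")  # existing dimension
inc = spark.read.parquet("s3://lake/stg_customer")  # incoming daily snapshot

# Business keys whose tracked attribute changed since their current version.
current = dim.filter("is_current")
changed_keys = (
    current.alias("d")
    .join(inc.alias("i"), "customer_id")
    .filter(F.col("d.email") != F.col("i.email"))
    .select("customer_id")
)

# 1) Expire the superseded current rows.
expired = (
    current.join(changed_keys, "customer_id", "left_semi")
    .withColumn("is_current", F.lit(False))
    .withColumn("end_date", F.current_date())
)

# 2) Append fresh rows for the changed keys.
fresh = (
    inc.join(changed_keys, "customer_id", "left_semi")
    .withColumn("is_current", F.lit(True))
    .withColumn("start_date", F.current_date())
    .withColumn("end_date", F.lit(None).cast("date"))
)

untouched = dim.join(changed_keys, "customer_id", "left_anti")
untouched.unionByName(expired).unionByName(fresh).write.mode("overwrite").parquet(
    "s3://lake/dim_customer_next"  # write to a new location, then swap
)
```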
Infrastructure as code
Provision and manage cloud infrastructure using Terraform. Containerise solutions using Docker and manage deployments through GitLab CI/CD pipelines and release tagging.
Security & encryption
Apply understanding of both server-side and client-side encryption patterns within AWS. Work within IAM policies and data governance standards appropriate to a regulated government environment.
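For illustration, a minimal server-side encryption sketch using boto3; the bucket name and KMS alias are hypothetical:

```python
# A minimal SSE-KMS upload sketch with boto3; bucket and key alias are
# hypothetical placeholders, not the programme's real resources.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-governed-bucket",            # hypothetical bucket
    Key="bronze/events/2024-01-01.json",
    Body=b'{"event": "example"}',
    ServerSideEncryption="aws:kms",         # S3 encrypts the object at rest
    SSEKMSKeyId="alias/data-platform-key",  # hypothetical KMS key alias
)
# Client-side encryption would instead encrypt Body before upload, e.g. with
# the AWS Encryption SDK, so S3 only ever receives ciphertext.
```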
Technical skills required
Languages & analytics
Data processing & orchestration
AWS services
Infrastructure, DevOps & delivery
Technology stack at a glance
Python, PySpark, SQL, Apache Spark, Apache Airflow, Jupyter Notebooks, Dimensional modelling/SCD, AWS EMR, Amazon Athena, AWS S3, AWS IAM, AWS CloudWatch, AWS EC2/ECR, Amazon Textract, Amazon Comprehend, Terraform, Docker, GitLab CI/CD, GitLab Tags
Contract Senior Software Engineer (Java or Python) Inside IR35
Contract length: 6 months (with potential to extend)
Location: London*
Working Environment: On-site*
You will be joining a private equity firm as a senior software engineer, to work across the following responsibilities:
Key Requirements:
Technical:
AI & Tooling:
Desirable, but not essential:
Air Conditioning/HVAC Engineer
Offices in Hagley and nationwide travel
Salary: £38K-£45K depending upon experience
We are seeking a skilled and experienced Air Conditioning/HVAC Engineer to join our existing team of Engineers, servicing commercial sites across the UK as part of our successful Facilities Management division. The role will suit an experienced Engineer with a strong background in air conditioning work. Salary offers will vary based on experience, the ability to undertake planned servicing, reactive and installation work, and the qualifications held by the successful applicant. Additional technical experience of working within a clean-room environment, or multi-skilled qualifications, would be of huge interest.
Benefits:
Company vehicle and fuel card
Enrolled into company pension
Overtime paid
Travel paid door to door
Responsibilities:
Carry out planned preventative maintenance (PPM), service, and reactive maintenance works for commercial and retail clients
Install HVAC systems and ductwork for commercial and retail clients
Install or service controls including thermostats, actuators, and smart energy systems
Commission new equipment, complete system testing, and verify operational performance
Ensure all work complies with F-Gas Regulations, UK Building Regulations, and HSE guidelines
Follow all workplace safety rules and maintain safe working practices in live commercial environments
Complete service reports, RAMS (Risk Assessment & Method Statements), commissioning sheets, and asset logs
Work independently and as part of a team, demonstrating a proactive and problem-solving approach
Diagnose faults and carry out repairs on refrigeration circuits, air conditioning units, ventilation systems, pumps, valves, and controls
Maintain accurate records of work completed via portal CAFM systems, and produce F-Gas certification
Communicate findings and recommendations through our CAFM system, estimate remedial works, and provide detailed information for quotes
Use diagnostic tools, gauges, multimeters, airflow meters, and BMS interfaces
Adjust, balance, and optimise HVAC systems
Provide out-of-hours emergency callout cover as part of the wider engineering team rota
Maintain accurate records for refrigerant handling and leak testing
Respond to emergency call-outs when required
Candidate Specification:
F-Gas qualification is essential
City & Guilds Level 2 or 3 in Air Conditioning & Refrigeration or equivalent
NVQ Level 2/3 in Mechanical Engineering, Heating & Ventilation, or Building Services
At least 3 years of demonstrable experience in commercial F-Gas work, preferably in a PPM/Facilities Management environment, is essential
Strong knowledge with the ability to troubleshoot issues effectively
A full, clean driving licence is essential for this position
IPAF & PASMA certification is advantageous
Computer/PDA literate, able to complete and submit electronic reports accurately and effectively
DATA ENGINEER
6-MONTH CONTRACT
LONDON (1 day in office)
£500 - £550 per day (Outside IR35)
This position as a Snowflake Engineer offers the opportunity to join a leading travel company based near West London, currently undergoing a major cloud data transformation.
THE COMPANY
This established Pharma brand is known for its innovation and commitment to delivering seamless customer experiences through data. With a focus on modernising its data platforms, the company is investing in Snowflake and cloud-native tooling to better understand customer journeys, improve operations, and fuel growth. Contractors and perm hires we’ve placed here consistently highlight the collaborative culture, exciting technical challenges, and strong support from leadership.
THE ROLE
As a Data Engineer, you will play a crucial role in a data transformation program, focusing on optimising data pipelines and enabling cloud-driven insights. You will also be responsible for post-project documentation, ensuring clear communication with non-technical stakeholders.
Your key responsibilities will include:
KEY SKILLS AND REQUIREMENTS
To succeed in this role, you should have:
HOW TO APPLY
Please register your interest by sending your CV via the apply link on this page.
SENIOR DATA ENGINEER
£100,000
REMOTE (LONDON)
An opportunity to join a fast-growing, consumer-focused digital platform that is redefining how modern users interact with data-led products. This is a senior, high-impact role within a scaling data function, offering ownership, influence, and the chance to work on systems operating at significant scale.
THE COMPANY
A well-funded, UK-based digital entertainment business operating within a regulated environment. Founded in London and backed by recent investment, the company has grown quickly by focusing on strong product design, performance, and an engineering-first culture.
The business values scalable solutions over quick fixes, with a collaborative and technically ambitious environment. The role sits within a growing data function that plays a critical part in supporting continued customer growth and increasingly real-time analytics use cases.
THE ROLE
As a Senior Data Engineer, you will be responsible for building and scaling core data infrastructure, working closely with senior data and engineering stakeholders. You’ll be hands-on across multiple initiatives while helping shape platform direction and technical standards.
Key responsibilities include:
This role offers genuine architectural input and is well suited to engineers who enjoy complex technical challenges in a scale-up setting.
YOUR SKILLS AND EXPERIENCE
Required:
Desirable:
SALARY AND BENEFITS
HOW TO APPLY
Please register your interest by sending your CV to Harry Lack via the Apply link. All applications will be handled confidentially.
Liquidline is the fastest-growing commercial coffee solutions provider in the UK and Ireland—not that we’re bragging! Our customers are companies that take pride in offering quality refreshments to their employees and clients. Our success is built on outstanding customer service, hard work, and a strong team culture. We believe in delivering WOW experiences to both our customers and our valued employees.
We are proud to be Great Place to Work certified, a testament to our dedication to fostering a culture of support, growth and development, as well as promoting well-being, and winning together. With our core company values—passion, thoughtfulness, responsiveness, innovation, and smart working—at the very heart of our business, we are committed to cultivating an environment that inspires excellence.
We’re looking for a Data Engineer to play a pivotal role in transforming Liquidline’s data landscape. You’ll take ownership of the technical foundations of our data platform – from ingestion and infrastructure to deployment – helping us move from legacy, on-premise systems to a modern, cloud-first data architecture. Working closely with the BI & Data Lead and an Analytics Engineer, you’ll be the primary architect of our new data platform, supporting our acquisition strategy and turning fragmented legacy data into a competitive, AI-ready asset. If you enjoy building things from the ground up, modernising complex systems, and shaping how data is used across a business, this role offers real ownership and influence.
The Role - Data Engineer
What You Will Need In The Role Of Data Engineer
What You Will Learn & What Liquidline Can Offer You
Being a part of Liquidline is more than just a job – it’s a chance to grow, develop and thrive! We are deeply invested in the success of our team, and our comprehensive benefits package is designed to support and reward our employees. The package includes, but is not limited to:
Liquidline is a fast-growing, family owned business that has expanded from 92 to over 300 employees since 2020. With ambitious plans for the next five years, there’s never been a better time to join us! Our dynamic and innovative environment offers endless opportunities for personal and professional growth.
We are proud to be an Equal Opportunities Employer, treating everyone with fairness, respect and appreciation. At Liquidline, we embrace diversity and value the unique experiences and perspectives of every individual. Together, we are always Winning Together!
Commercial & Industrial HVAC - West Midlands Office Based
Wednesbury, Walsall, Dudley, Tipton, West Bromwich, Oldbury
£35,000 - £45,000 basic salary + Progression, Training + Benefits
This may be the ideal opportunity for you. The client is looking for a mechanically biased design engineer with some proven experience in HVAC design, estimation, and calculations.
Your Role as a HVAC Design Engineer:
Ideal Background for the HVAC Design Engineer Position:
The Company recruiting for the HVAC Design Engineer:
The Package for a HVAC Design Engineer:
Please apply for this job online if you are interested and feel you fit the above criteria.
Dave is the main point of contact for the role.
INDENG
£700-750/day overall assignment rate to umbrella
Fully remote
6 month initial
Apply today to join a forward-thinking, tech-driven FTSE 100 organisation using data science and AI to enhance customer experience, optimise supply chains and drive sustainable growth. With 40% of sales from sustainable products, this is a company that combines scale, innovation and purpose.
As a Machine Learning Engineer, you’ll help maintain the stability and performance of core data and ML systems across Europe. This technical engineering role focuses on reliability, optimisation and critical fixes, ideal if you excel at investigating and debugging complex data flows and ML issues in live production environments.
We’re looking for individuals with:
Reasonable Adjustments:
Respect and equality are core values to us. We are proud of the diverse and inclusive community we have built, and we welcome applications from people of all backgrounds and perspectives. Our success is driven by our people, united by the spirit of partnership to deliver the best resourcing solutions for our clients.
If you need any help or adjustments during the recruitment process for any reason, please let us know when you apply or talk to the recruiters directly so we can support you.
Robert Half have partnered with a London-based pharmaceutical manufacturing organisation who are looking to engage a Senior Data Engineer to play a key role in scaling and maturing their data function.
This is an initial 12-month contract, playing a key role in stabilising and improving a currently fragmented data architecture while introducing best-practice engineering and delivery standards. The role requires 1 day per week onsite in London.
Responsibilities:
Skills:
Contract:
Robert Half Ltd acts as an employment business for temporary positions and an employment agency for permanent positions. Robert Half is committed to diversity, equity and inclusion. Suitable candidates with equivalent qualifications and more or less experience can apply. Rates of pay and salary ranges are dependent upon your experience, qualifications and training. If you wish to apply, please read our Privacy Notice describing how we may process, disclose and store your personal data:
£80,000 - £100,000
Remote (UK-Based)
An opportunity for a highly technical Senior Data Engineer looking to join a fast-growing online gambling and sports betting platform that focuses on its first-in-class mobile app.
THE COMPANY
This future-ready online gambling and sports betting platform offers customers the best mobile app experience in the industry.
THE ROLE
As a Senior Data Engineer, you will sit within a high calibre data engineering team, owning and evolving core ETL pipelines that underpin the entire data platform. The focus is on building reliable, well-modelled data assets that can scale with the business.
Specifically, you can expect to be involved in the following:
SKILLS AND EXPERIENCE
The successful Senior Data Engineer will have the following skills and experience:
BENEFITS
The successful Senior Data Engineer will receive the following benefits:
HOW TO APPLY
Please register your interest by sending your resume to Majid Latif via the Apply link on this page.
We're working with a high-growth Series A startup operating at the intersection of eCommerce and Fintech, building a product that is redefining how people shop online.
Their proposition removes friction from the buying experience, allowing customers to try before they buy without upfront payment, bringing a more natural, in-store experience into the home.
The business is scaling quickly, with strong commercial traction and increasing complexity behind the scenes.
And that complexity is now centred around data.
Why this role exists
Data sits at the heart of the business, spanning customer behaviour, payments, returns, and partner performance.
As the company has grown, the volume and importance of that data has outpaced the underlying infrastructure. Multiple sources, evolving definitions, and increasing reliance from across the business have created the need for a more robust, scalable foundation.
They are now looking to hire a Founding Data Engineer to take ownership of that foundation.
This is a pivotal hire: someone who can design, build, and define how data is structured, trusted, and used across the company.
The opportunity
This is not a role focused purely on pipelines or reporting.
You will own the data environment end-to-end, shaping the architecture, defining standards, and enabling the wider business to make better decisions through reliable, well-structured data.
You'll work closely with both technical and non-technical stakeholders, helping translate real-world business questions into clean, usable data models.
The company has also introduced an AI-assisted querying layer to make data accessible across the organisation. A key part of your role will be ensuring the outputs from that layer are accurate, well-defined, and trustworthy.
What they're looking for
They're interested in individuals who have taken ownership of data infrastructure in a production environment and are comfortable designing for scale.
Strong SQL and experience with modern data tooling (e.g. orchestration, warehousing, ETL/ELT) are expected.
Beyond that, the key differentiator is mindset.
They are looking for someone who:
Environment
You'll be joining a business at a stage where:
This offers a balance of ownership and stability: the opportunity to shape something meaningful without the uncertainty of a true greenfield environment.
Tech (for context)
A modern, cloud-based data stack including a mix of structured and unstructured data sources, orchestration tooling, and distributed storage.
RDS Postgres, MongoDB, AWS Athena, Parquet, AWS Glue, Airflow, Python, Docker, S3, GraphQL, REST. You won’t know all of it, you’ll be strong in your core area and curious about the rest.
Nice to have: Python proficiency, CI/CD for data workflows, graph database experience (Neo4j), startup or early-stage background.
Depth in your core area is more important than experience across every tool.
Package
Process
The process is designed to assess both technical capability and how you think about problems:
Why this process matters
The role requires more than technical delivery. The team is specifically looking for individuals who show curiosity, initiative, and a genuine interest in how data drives business decisions, not just how it is built.
Interested?
If you're looking for a role where you can build, own, and genuinely influence, this is worth a conversation.
Apply or get in touch for a confidential discussion.
EHS Partners Limited, Edison Hill Search & Edison Hill Scale are operating and advertising as an Employment Agency for permanent positions and as an Employment Business for interim / contract / temporary positions. EHS Partners Limited are an Equal Opportunities employer and we encourage applicants from all backgrounds. Please apply below at your earliest convenience.
Location: Reading
No visa sponsorship
Eligibility: ILR/Citizen/Dependent/Settled
Domain: Telecom
Job summary:
Core application skills as a platform engineer:
Foundational Skills:
About Scrumconnect
Scrumconnect Consulting is a UK-based digital transformation consultancy delivering agile, secure technology solutions for public and private sector clients. This is a fully remote role based in India, supporting our UK operations and client base. You will work closely with our UK leadership team and must be comfortable operating within UK regulatory and legal frameworks.
About the Role
We are looking for a Test Engineer to join a large-scale cloud data engineering programme, operating across a modern AWS-native technology stack including Apache Airflow, Amazon Athena, AWS Glue, S3, EMR, and DynamoDB.
You will own testing across automated pipelines, data workflows, and cloud infrastructure - identifying risks, championing test frameworks, and coaching colleagues in quality engineering practices.
This is a hands-on, technically deep role. You will write code, build and extend test frameworks, and take accountability for test quality across releases, while actively mentoring junior team members.
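For illustration, a minimal pytest-style data-quality test of the kind this role involves; the validate_batch helper and its rules are hypothetical, not an existing Scrumconnect framework:

```python
# A minimal data-quality test sketch runnable with pytest; the helper and
# its rules are hypothetical examples, not a real framework.
def validate_batch(rows):
    """Return human-readable violations for a batch of records."""
    violations = []
    for i, row in enumerate(rows):
        if row.get("id") is None:
            violations.append(f"row {i}: missing id")
        if row.get("amount", 0) < 0:
            violations.append(f"row {i}: negative amount")
    return violations


def test_clean_batch_passes():
    rows = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 0.0}]
    assert validate_batch(rows) == []


def test_bad_batch_reports_each_violation():
    rows = [{"id": None, "amount": -5.0}]
    assert len(validate_batch(rows)) == 2
```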
Key Responsibilities
Automated Testing
Risk Identification & Reporting
Framework Development
Pipeline & Data Testing
Mentoring & Coaching
CI/CD & DevOps
Technology Stack
Apache Airflow, Amazon Athena, AWS S3, AWS Glue, AWS EMR, AWS EC2, AWS ECR, AWS DynamoDB, AWS CloudWatch, AWS IAM, Python, Java, SQL, Bash, GitLab CI/CD, Jupyter Notebooks, Apache Spark, Terraform, Docker
Key Skills
Skills & Experience Required
Diversity and Inclusion
At Scrumconnect Consulting, we believe that diversity drives innovation. We are committed to creating an inclusive environment where every individual is respected, valued, and supported. We welcome applications from candidates of all backgrounds and experiences.
Full Stack Software Engineer (Data-Focused)
Location: London - Hybrid - 1-2 days per week
Salary: (Apply online only)k + Bonus
Type: Permanent
Sponsorship: Not Available
Role Overview:
We are seeking a Senior or Staff-Level Fullstack Software Engineer with a strong background in data to join our Data Science & Engineering team. This is a greenfield initiative that seeks to consolidate several lines of business under a single modern architecture. In this role, you will empower the business with the technology and tools needed to leverage data throughout the organization.
This Includes:
Key Responsibilities:
Required Qualifications:
Technology Stack:
Overview
We are looking for a skilled Data Engineer with strong experience in Snowflake to join our growing data team. You will be responsible for designing, building, and maintaining scalable data pipelines and architectures that support analytics, reporting, and data-driven decision-making across the organization.
6 months initial contract (Outside IR35)
Remote with occasional travel into London (1 day per month)
Key Responsibilities
Design, develop, and maintain robust data pipelines and ETL/ELT processes
Build and optimize data models within Snowflake for performance and scalability (see the sketch below)
Ingest data from various sources (APIs, databases, streaming platforms, etc.)
Ensure data quality, integrity, and governance across systems
Collaborate with data analysts, scientists, and business stakeholders to deliver data solutions
Monitor and troubleshoot data workflows and pipeline performance
Implement best practices for data security, privacy, and compliance
Document data architecture, processes, and workflows
Required Skills & Experience
Proven experience as a Data Engineer or in a similar role
Strong hands-on experience with the Snowflake data platform
Proficiency in SQL and data modeling techniques
Experience with ETL/ELT tools (e.g., dbt, Apache Airflow, Talend, Informatica)
Experience with cloud platforms (AWS, Azure, or GCP)
Familiarity with data warehousing concepts and best practices
Understanding of data governance and data quality principles
Preferred Qualifications
Experience with modern data stack tools (e.g., dbt, Fivetran, Kafka)
Knowledge of CI/CD pipelines and DevOps practices
Experience working with large-scale or real-time data processing systems
Familiarity with BI tools (e.g., Power BI, Tableau, Looker)
Snowflake certification is a plus
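The sketch referenced above: a minimal idempotent Snowflake load step in Python; the connection details and table names are hypothetical placeholders:

```python
# A minimal Snowflake MERGE sketch via snowflake-connector-python; all
# account details and table names are hypothetical placeholders.
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account",      # hypothetical; supply real values via secrets
    user="etl_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # Idempotent ELT step: MERGE staged rows into the modelled table so
    # reruns neither duplicate nor drop records.
    cur.execute("""
        MERGE INTO dim_customer AS tgt
        USING stg_customer AS src
          ON tgt.customer_id = src.customer_id
        WHEN MATCHED THEN UPDATE SET tgt.email = src.email
        WHEN NOT MATCHED THEN INSERT (customer_id, email)
          VALUES (src.customer_id, src.email)
    """)
finally:
    conn.close()
```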
Software Developer
Permanent | Hybrid (willing to travel to Newcastle) | Python | Apache | Data | SC cleared
Hybrid work arrangement. Office attendance is up to 60%. Location is flexible: London, Leeds, Newcastle, Sheffield, Blackpool, Manchester and Birmingham. You must be willing to travel to Newcastle when required.
The Role:
We are looking for Software Developers with strong Python and Apache Spark data processing skills, responsible for leading the design, development, and operation of data-driven applications and pipelines in a collaborative Dev and DevOps environment. The role focuses on writing and improving application code while working closely with DevOps engineers to support automated deployments, infrastructure management, system monitoring, reliability, and scaling. The post holder collaborates with Product Owners, Business Analysts, and users to translate business needs into robust technical solutions, operates and improves production services, analyses data and system issues to identify root causes, and champions engineering best practices. They also provide technical leadership through coaching and mentoring junior colleagues and contributing to continuous improvement across development, delivery, and operational processes.
Responsibilities:
Engineers will contribute to research, development and delivery across:
Design, build, and operation of data ingest and publishing pipelines
Workflow orchestration and task scheduling using managed services
Collaboration with Product Owners, Business Analysts, and users to shape technical solutions
Production support, monitoring, and continuous improvement of system resilience, stability, and performance
Data analysis and investigation to identify root causes of defects and operational issues
DevOps collaboration, including supporting automated deployments, infrastructure management, monitoring, and scaling
Coaching and mentoring junior engineers and promoting engineering best practices
Skills & Experience:
Understanding of data processing using Apache Spark
Use of Python, SQL, and familiarity with PySpark
Experience using Apache Airflow for task orchestration
Understanding of EMR and reviewing output logs
Use of Jupyter notebooks and/or Amazon Athena to query and validate data
Data analysis to identify root causes of issues
Understanding of dimensional data models and slowly changing dimensions/historic data capture
Use of the AWS console and services such as, but not limited to: CloudWatch, IAM, S3, Glue, ECR, EC2, EMR, DynamoDB, Lake Formation
Familiarity with Amazon Textract and Comprehend
Understanding of both server-side and client-side encryption
Use of GitLab for source code management and CI/CD pipelines
Use of GitLab Tags for component versioning in shared repositories
Understanding of Docker and containerisation of solutions
IaC using Terraform
Experience of understanding how customer expectations translate into applied functionality
Familiarity with, and implementation of, engineering best practices
Use of GitLab for release tagging and deployments
Familiarity with basic data structures for constructing a solution
Active BPSS or SC clearance, or eligibility for clearance
Desirable skills:
Experience supporting AI or data-driven platforms
Knowledge of cyber security or fraud prevention domains
Experience working within government or critical national infrastructure environments
Find out more: or check out our LinkedIn page
Junior Data Engineer - Public Sector
Contract: Initial 7 months (extension possible)
Rate: £310 per day, Inside IR35
Location: Remote with travel to Waterloo (2-3 days per month)
Security Clearance: SC-eligible (5 years UK residency required)
I am working with a key consultancy delivering a major UK public sector programme, looking for a Junior Data Engineer / Scientist to join a mixed delivery team building and operating secure, reliable data platforms that support critical public services. This role is designed for someone early in their data career who wants to develop strong engineering fundamentals in a real production environment.
The role - a junior, generalist data engineering position
This is an engineering-led role, not a specialist or senior position. The team is ideally looking for a generalist in their first few years within data engineering or data science, who is building breadth across data platforms, pipelines and operations. You'll focus on:
Designing, building and maintaining data pipelines
Supporting the operation of data lakes and data warehouses
Implementing and improving ETL / ELT processes
Using Python and SQL to transform, validate and move data
Working with analysts and developers to turn data requirements into technical solutions
Monitoring data quality, documenting data models and lineage, and resolving issues
Automating data workflows and operational tasks
Participating in Agile delivery, sprint work and collaboration
Supporting incidents and helping improve platform reliability over time
Working within public sector data governance, security and privacy standards
This role offers exposure to how data platforms are built, operated and supported in a regulated environment - forming the foundations of a long-term data engineering career.
What this role is not
To avoid misalignment, it's important to be clear about what this role is not focused on:
Not a Data Analyst role
Not a Power BI / dashboard developer role
Not an insight, reporting or MI position
Not a modelling, ML or research-focused role
Not an LLM, AI or advanced data science role
While you may work alongside analysts and data scientists, this role does not centre on:
Building dashboards
Producing insights or reports
Statistical modelling
Predictive or machine learning solutions
The emphasis is on data engineering foundations and platform delivery.
Ideal candidate profile
This role is best suited to someone who:
Is in their first few years of a data engineering or data science career
Wants to build core engineering skills rather than specialise immediately
Has hands-on experience with SQL and Python
Understands basic data modelling and ETL concepts
Is comfortable learning through delivery in a production environment
Is interested in how data platforms work end-to-end, including operations and support
Is keen to grow within public sector data platforms
Who this role is unlikely to suit
This role is unlikely to be appropriate for candidates who:
Are very senior data engineers or architects
Have primarily worked in advanced ML, AI, or research-focused roles
Are specialised Power BI, reporting or MI developers
Are looking for a role centred on analysis, insights or modelling
Are seeking leadership, ownership of platform strategy, or advanced optimisation work
Applications that demonstrate significant seniority or deep specialisation rather than junior-to-mid generalist experience may not be progressed.
Required skills and experience
Your CV should clearly demonstrate:
A degree in a technical discipline (Computer Science, Data Science, Mathematics or similar)
Hands-on experience with SQL
Experience using Python, Java or Bash
Understanding of ETL processes and data modelling fundamentals
Experience with version control (e.g. Git)
Comfort working in Agile / DevOps environments
Awareness of data security and privacy
Eligibility for UK SC clearance
Nice to have (but not essential)
Exposure to AWS, Azure or GCP
Familiarity with tools such as Airflow, dbt, Spark
Awareness of CI/CD pipelines or containerisation
Experience in public sector or regulated environments
Important note for applicants
This role is deliberately positioned as a junior, generalist data engineering role. Please ensure your CV clearly demonstrates hands-on data engineering fundamentals, rather than senior leadership, advanced AI/ML work, or analytics-only experience.
Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this job you accept the T&C's, Privacy Policy and Disclaimers which can be found at hays.co.uk
Your new company
Working for a renowned financial services organisation.
Your new role
We are seeking a Data Engineer to support the replacement of a legacy ETL tool with a modern Apache Spark based data platform. This is a hands-on engineering role focused on building and supporting Spark jobs, with a strong emphasis on performance optimisation, reliability, and scalability. Working in containerised environments using Kubernetes is a key element, as is experience across Python, Scala, and Java. The role sits within a small Agile delivery team of four engineers (two onshore and two in Shenzhen), working closely with a Senior Data Engineer. You will be responsible for development work, sprint delivery, demos, documentation, and stakeholder engagement. This position suits a mid to senior level engineer with strong Spark development experience rather than design, infrastructure, or management responsibilities.
What you’ll need to succeed
Strong hands-on experience with Apache Spark - writing and tuning Spark jobs / PySpark development experience (a brief tuning sketch follows this list).
Experience with Airflow and SQL, and strong experience working in containerised environments using Kubernetes.
Experience with programming in Python or Scala
Experience with an Ops way of working, not pure development only - you know how to deploy solutions.
Experience with OpenShift would be highly desirable!
Experience working in an Agile way of working (Scrum, sprints, demos)
A financial services or professional services background is required.
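The tuning sketch referenced above: a minimal PySpark job showing shuffle-partition sizing, a broadcast join, and selective caching; the paths, column names, and numbers are hypothetical:

```python
# A minimal Spark tuning sketch; dataset, paths, and numbers are
# hypothetical examples of the techniques, not this client's jobs.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = (
    SparkSession.builder.appName("tuning_sketch")
    # Size shuffle partitions to the cluster rather than the default 200.
    .config("spark.sql.shuffle.partitions", "400")
    .getOrCreate()
)

txns = spark.read.parquet("s3://lake/transactions/")  # large fact table
fx = spark.read.parquet("s3://lake/fx_rates/")        # small lookup table

# Broadcast the small side so the large table is joined without a shuffle.
enriched = txns.join(broadcast(fx), ["currency", "trade_date"])

# Cache only because the intermediate feeds more than one action.
enriched.cache()
daily = enriched.groupBy("trade_date").agg(F.sum("amount_gbp").alias("total"))
daily.write.mode("overwrite").parquet("s3://lake/agg/daily_totals/")
print(enriched.count())  # second action reusing the cached data
enriched.unpersist()
```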
What you’ll get in return
Flexible working options available.
What you need to do now
If you’re interested in this role, click ‘apply now’ to forward an up-to-date copy of your CV, or call us now. Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this job you accept the T&C’s, Privacy Policy and Disclaimers which can be found at (url removed)