Make yourself visible and let companies apply to you.
Roles

PySpark Jobs

Overview

Looking for top PySpark jobs? Explore the latest PySpark developer positions on Haystack, your go-to IT job board for data engineering roles. Find exciting opportunities to work with big data, Apache Spark, and advanced analytics today!
Filters applied
PySpark
Search
Salary
Location
Remote preference
Role type
Seniority
Tech stack
Sectors
Contract type
Company size
Visa sponsorship
Senior Data Engineer
British Gas
Leicester
Hybrid
Senior
Private salary
RECENTLY POSTED
fabric
aws
python
hadoop
scala
pyspark
Description
Join us, be part of more. We're so much more than an energy company. We're a family of brands revolutionising how we power the planet. We're energisers: one team of 21,000 colleagues energising a greener, fairer future by creating an energy system that doesn't rely on fossil fuels, while living our powerful commitment to igniting positive change in our communities. Here, you can find more purpose, more passion, and more potential. That's why working here is #MoreThanACareer. We do energy differently - we do it all. We make it, store it, move it, sell it, and mend it.

R0069779 - Senior Data Engineer
Full time or part time
Leicester/Windsor

About your team:
At British Gas Energy, our ambition is to be Britain's favourite energy supplier. We've been powering the UK's homes and businesses for over 200 years - but supplying energy is just part of what we do. We're making the UK greener and more energy efficient, getting closer to Net Zero. By using clever tech like thermostats, heat pumps, solar panels and EV chargers, we're making it cheaper and easier for our customers to reduce their carbon footprint.

Are you passionate about Data & AI and eager to make a significant impact? We are growing our Data & Analytics department to drive value and innovation within our business. Through the development of Data & AI products, we aim to enhance decision-making, improve performance, and make a valuable difference to our business and our customers.

Why Join Us?
Innovative Environment: Stay ahead in the Data & AI field through use of cutting-edge technologies and creative thinking
Collaborative Culture: Work with talented professionals in a supportive environment where best practices are shared and continuous improvement is encouraged
Career Growth: We will invest in our team’s development through continuous learning opportunities and career advancement programs
Impactful Work: Directly contribute to our mission to drive business growth and operational efficiency
Personal Development: We will provide an environment for you to learn and develop, with access to resources and support to help you grow both professionally and personally
About your role:
Join us as a Senior Data Engineering Leader and shape the future of British Gas Business's data-driven success! Step into a pivotal role within our Data Engineering function, where you'll lead transformative data engineering and data science projects to drive growth, create efficiencies, and revolutionise BGB's decision-making capabilities. As a senior member of the team, you'll take charge of designing, building, and maintaining scalable data pipelines and data models that empower Data Analysts, Management Information, and Data Science initiatives. Alongside advancing our data integrity and availability, you'll also inspire and develop Associate Data Engineers, while mentoring peers to elevate the entire team's expertise.

Key aspects of this role are:
Data Pipeline Development: Build and maintain robust Extract, Transform and Load data pipelines, ensuring seamless integration of large datasets into BGB’s Data Estate
Data Quality: Implement data quality audits and validation processes to maintain data accuracy
Data Product Engineering: Collaborate with Analysts and Scientists to create data products for advanced analytics and machine learning
Data Architecture: Design and refine data architecture to meet organisational needs
Optimisation: Enhance data extraction and storage efficiency for cost and performance gains
Technical Support: Troubleshoot data-related issues with a hands-on approach
Documentation: Establish and maintain thorough documentation of processes and best practices
Innovation: Stay at the forefront of emerging technologies to propel our data engineering capabilities forward
Leadership: Grow, develop, and retain top talent while fostering a culture of excellence and ensuring succession planning
Mentorship: Share your Data Engineering expertise with colleagues across departments, building cross-functional knowledge and collaboration
Here’s what we’re looking for:
Extensive expertise in data engineering, with a proven track record of designing and implementing scalable data pipelines and data models. Strong skills in data modelling and data warehousing underpin this expertise
Proficient in cloud services and cloud-based data engineering tools, such as AWS, Azure, Microsoft Fabric and Databricks, as well as big data technologies like Hadoop and Spark
Skilled in programming languages, including Python, PySpark and Scala, with extensive experience developing robust ETL pipelines and ensuring scalable deployments
Experienced in mentoring and developing less experienced Data Engineers, guiding them to grow their technical skills and capabilities
Capable of delivering and leading complex data engineering projects, ensuring high-quality outputs and timely completion
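The data-quality audits mentioned in the role are often rule-driven scans over incoming records before they land in the data estate. The sketch below is illustrative only, written in plain Python rather than PySpark, with invented field names; it shows the shape of a null-and-duplicate audit a pipeline step might run:

```python
# Illustrative data-quality audit. Field names ("account_id", "usage_kwh")
# are hypothetical, not taken from the job description.
def audit(records, required_fields, key_field):
    """Count missing required fields and duplicate keys in a batch."""
    issues = {"missing": 0, "duplicates": 0}
    seen_keys = set()
    for rec in records:
        # Flag records with absent or empty required fields.
        if any(rec.get(f) in (None, "") for f in required_fields):
            issues["missing"] += 1
        # Flag repeated business keys (e.g. an account or meter ID).
        key = rec.get(key_field)
        if key in seen_keys:
            issues["duplicates"] += 1
        seen_keys.add(key)
    return issues

batch = [
    {"account_id": "A1", "usage_kwh": 12.5},
    {"account_id": "A2", "usage_kwh": None},   # missing reading
    {"account_id": "A1", "usage_kwh": 13.1},   # duplicate key
]
print(audit(batch, ["usage_kwh"], "account_id"))  # {'missing': 1, 'duplicates': 1}
```

In a production pipeline the same checks would typically be expressed as DataFrame operations and the counts published as metrics, so failed thresholds can block a load.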
Why should you apply?
We're not a perfect place - but we're a people place. Our priority is supporting all of the different realities our people face. Life is about so much more than work. We get it. That's why we've designed our total rewards to give you the flexibility to choose what you need, when you need it, making sure that you and your family are supported not only financially, but physically and emotionally too. Visit the link below to discover why we're a great place to work and what being part of more means for you.

https://www.morethanacareer.energy/britishgas

If you're full of energy, fired up about sustainability, and ready to craft not only a better tomorrow but a better you, then come and find your purpose in a team where your voice matters, your growth is non-negotiable, and your ambitions are our priority.

Help us, help you. We would love for you to share any information about yourself throughout our recruitment process so that we can better understand you and help shape your journey.
MLOps Tech Lead
Stackstudio Digital Ltd.
London
Hybrid
Senior
£500/day - £525/day
RECENTLY POSTED
processing-js
aws
mongodb
mysql
tensorflow
git
+13
Job Details
Role / Job Title: MLOps Tech Lead
Work Location: London, UK
Office Requirement (Hybrid): 2 days per week

The Role
As a Tech Lead, you will play a critical role in designing, building, and maintaining data pipelines and infrastructure that enable the development and deployment of machine learning models and drive engineering excellence. You will collaborate closely with data scientists, lead ML engineers, and software engineers to ensure data is clean, accessible, and optimised for large-scale processing and analysis.
Your Responsibilities
Data Pipeline Development: Lead the technical direction of projects and ensure the use of Sainsbury's best practices to the highest quality.
Data Integration: Lead and provide expertise on integrating data from various sources, ensuring data consistency, integrity, and quality across the entire data lifecycle.
Infrastructure Management: Guide junior and mid-level Data Engineers on best practices when building and managing data infrastructure, including data lakes, warehouses, and distributed processing systems (e.g., PySpark, Hadoop).
Data Preparation: Collaborate with data scientists to prepare and transform raw data into formats suitable for machine learning, including feature engineering and data augmentation.
Automation: Implement automation tools and frameworks (CI/CD) to streamline the deployment and monitoring of machine learning models in production.
Performance Optimisation: Optimise data processing workflows and storage solutions to improve performance and reduce costs.
Collaboration: Work closely with cross-functional teams, including data science, engineering, and product management, to deliver data solutions that meet business needs.
Mentorship: Mentor junior and mid-level data engineers, providing technical guidance on best practices and emerging technologies in data engineering and machine learning, and helping to enhance their skills and career growth.
Knowledge Sharing and Empowerment: Promote a culture of knowledge sharing within the engineering teams by organising regular technical workshops, brown-bag sessions, and code reviews.
Innovation and Continuous Improvement: Foster a collaborative and inclusive team environment that encourages continuous learning and improvement.
Your Profile
Essential Skills / Knowledge / Experience
Knowledge of machine learning frameworks (e.g., PySpark, PyTorch) and model deployment tools (e.g., MLflow, TensorFlow Serving).
Strong experience with data processing frameworks (e.g., Apache Spark, Flink).
Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
Hands-on experience with cloud platforms (e.g., AWS, GCP, Azure) and their data services (e.g., Snowflake, S3, BigQuery, Redshift).
Experience with containerisation and orchestration tools (e.g., Docker, Kubernetes).
Familiarity with version control systems (e.g., Git) and CI/CD pipelines.

Desirable Skills / Knowledge / Experience
Certifications: AWS Certified Big Data - Specialty, Google Professional Data Engineer, or equivalent.
Soft Skills:
Excellent problem-solving and analytical skills.
Strong communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
Ability to work independently and in a team-oriented, collaborative environment.

Leadership and Communication
Strong leadership skills with the ability to inspire and guide a team.
Lead scrum ceremonies as and when needed (stand-up, planning, and grooming sessions).
Excellent verbal and written communication skills, with the ability to articulate complex technical concepts.
Create a safe and inclusive environment where all team members feel that their input is valued and are never dissuaded from speaking up or asking questions.

Collaborative Attitude
Strong team player with a collaborative approach to working with cross-functional teams within the Media Agency.
Open to feedback and willing to provide constructive criticism to others.
Be available for the team, responding within a reasonable time frame and, where that is not possible, clearly signposting alternative contacts who can help.
Build a community across the Media Agency.
Contribute to a positive and inclusive atmosphere within the team.
Knowledge Sharing and Empowerment
Commitment to fostering a learning culture within the team and ensuring knowledge transfer across all levels.
Support and mentor C3 and C4 engineers by providing them with opportunities to lead initiatives and contribute to the technical roadmap.
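The data-preparation responsibility above mentions feature engineering. As a hedged illustration of what that step involves (plain Python rather than a Spark or ML framework, with invented column values), two of the most common transforms look like this:

```python
# Illustrative feature-engineering helpers. Column contents are invented,
# not taken from the job description.
def min_max_scale(values):
    """Scale a numeric column into the [0, 1] range."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1  # avoid division by zero on constant columns
    return [(v - lo) / span for v in values]

def one_hot(values):
    """Encode a categorical column as 0/1 indicator vectors."""
    categories = sorted(set(values))
    return [[int(v == c) for c in categories] for v in values]

basket_sizes = [10, 20, 30]
channels = ["online", "store", "online"]
print(min_max_scale(basket_sizes))  # [0.0, 0.5, 1.0]
print(one_hot(channels))            # [[1, 0], [0, 1], [1, 0]]
```

In practice these transforms are usually expressed with library primitives (e.g. scikit-learn or Spark ML pipelines) so they can be fitted on training data and replayed identically at inference time.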
AI Technical Architect
Purview Consultancy Services Ltd
London
Hybrid
Senior - Leader
Private salary
RECENTLY POSTED
processing-js
python
pyspark
Job Title: AI Technical Architect
Location: London, UK (2 days a week in the office, hybrid)
Job type: 6-month contract with possible extension
Active SC clearance required - Inside IR35

15 years of IT experience, currently hands-on, with a minimum of 5 years of experience on Azure, having technically guided, managed and governed a team.

Primary skills:
Strong communication skills and experience in managing various stakeholder relationships to gain consensus on complex technical solutions
Experience in architecting, designing and implementing solutions on-premises, in the cloud and using hybrid models
Hands-on experience in deploying a variety of generative models
In-depth experience in fine-tuning and customising pre-trained AI models, with a good understanding of various patterns and practices in AI, data engineering and large-scale data processing
Hands-on with Prompt Engineering, Azure OpenAI, Form Recognizer, Cognitive Search and Vector Databases
Develop and deliver upskilling sessions to the customer
Python and PySpark
This role requires a candidate who already holds active Security Check (SC) clearance in accordance with UK Government standards.
Secondary Skills:
MLOps and LLMOps
Certifications Must Have:
Microsoft Certified: Azure Developer Associate (AZ-204)
Certifications Good to Have:
Microsoft Certified: Azure Solutions Architect Expert (AZ-303/AZ-304)
Microsoft Certified: Azure AI Engineer Associate (AI-102)
Microsoft Certified: Azure Data Scientist Associate (DP-100)
Databricks Professional Certificate in Large Language Models
Soft Skills:
Good customer connect; able to prepare solution presentations
Positive attitude and excellent communication skills to manage customer calls
Excellent problem-solving skills and good communication
Data Engineer
Coventry Building Society
Coventry
Hybrid
Mid
£50,000
RECENTLY POSTED
python
sql
pyspark
About the role
Working in our Data and Analytics Delivery department, the Data Engineer will join the group on a 12-month fixed-term contract to focus on the migration and integration of data into our new ecosystem.

The Data Engineer will be designing, developing and testing quality data engineering solutions and will look to challenge and improve our processes, tools and approach. The person in post will undertake review and assurance activity, providing other team members with guidance on design, build and test activity.

Adhering to standard-driven code development, the Data Engineer will deliver solutions that meet business needs in a timely manner and will take responsibility for the testing of their solution, including the analysis of requirements, design of test cases and scripts, preparation of test data, and creation and execution of tests to ensure effective and accurate deliverables.

We operate on a team-led hybrid approach with at least 1 day a week in the Coventry or Manchester office.

Our benefits include:
28 days holiday a year plus bank holidays and a holiday buy/sell scheme
Annual discretionary bonus scheme
Personal pension with matched contributions
Life assurance (6 times annual salary)
We reserve the right to close this advert early if we receive a high volume of suitable applications.

About you
You'll either have a Data Engineering related qualification and/or extensive Data Development experience in a commercial or Agile environment. To be successful in this role it's essential that you will:
Have experience of AWS, Python, SQL, Git and PySpark
Desirable experience needed will be:
SSIS or SAS experience
Quality Assurance and Test Automation experience
Experience of Database technologies
Experience in Financial Services organisation
About us
In 2025, Coventry Building Society purchased The Co-operative Bank. Bringing together our purpose-led building society with the UK's original ethical bank was the start of an exciting journey. Trusted by over four million people, we're a mutually owned business free from shareholders, and with our combined experience of almost 300 years, our ethics and dedication will continue to guide us. Together, we have shared values and an ethical approach towards our members, customers and colleagues. We're officially recognised as a Great Place to Work, and our benefits go beyond basic pay, with a discretionary bonus scheme, a culture of reward and recognition and comprehensive support for wellbeing. We're serious about equality - of race, age, faith, disability, and sexual orientation - and we celebrate diversity. By working together, we know you'll build more than just a career with us.

Flexibility and why it matters
We understand the need for flexibility, so wherever possible, we'll consider alternative working patterns. Have a chat with us before you apply to see what the possibilities are for this role.

Proud to be a Disability Confident Committed Employer
We're proud to offer an interview or assessment to every disabled applicant who meets the minimum criteria for our vacancies. As part of the application process, disabled applicants can opt in to the Disability Confident Interview Scheme. If there are ever occasions where it is not practicable to interview all candidates that meet the essential criteria, such as when we receive a high number of applications, we commit to interviewing disabled candidates who best meet the minimum essential and desirable criteria.
Senior Data Engineer - (ML and AI Platform)
Datatech
London
Hybrid
Senior
£65,000 - £80,000
RECENTLY POSTED
aws
python
sql
pyspark
snowflake
Senior Data Engineer (ML and AI Platform)
Location: London with hybrid working, Monday to Wednesday in the office
Salary: £65,000 to £80,000 depending on experience
Reference: J13026

We are partnering with an AI-first SaaS business that turns complex first-party data into trusted, decision-ready insight at scale. You will join a collaborative data and engineering team building a modern, cloud-agnostic data and AI platform. This role is well suited to an experienced data engineer who enjoys working thoughtfully with real-world data, contributing to reliable production systems, and developing clear and well-structured Python and SQL.

Why join:
Supportive and inclusive culture where people are encouraged to contribute and be heard
Clear progression with space to develop your skills at a sustainable pace
An environment where collaboration, learning, and thoughtful engineering are genuinely valued

What you will be doing:
Contributing to the design and delivery of cloud-based data and machine learning pipelines
Working with Python, PySpark and SQL to build clear and maintainable data transformations
Helping shape scalable data models that support analytics, machine learning, and product features
Collaborating closely with Product, Engineering, and Data Science teams to deliver meaningful production outcomes

What we are looking for:
Experience using Python for data transformation, ideally alongside PySpark
Confidence working with SQL and production data models
Experience working with at least one modern cloud data platform such as GCP, AWS, Azure, Snowflake, or Databricks
Experience contributing to data pipelines that run reliably in production environments
A collaborative mindset with clear and thoughtful communication

Right to work in the UK is required. Sponsorship is not available now or in the future. Apply to learn more and see if this could be the next step for you. If you have a friend or colleague who may be interested, referrals are welcome.
For each successful placement, you will be eligible for our general gift or voucher scheme. Datatech is one of the UK’s leading recruitment agencies specialising in analytics and is the host of the critically acclaimed Women in Data event. For more information, visit (url removed)
Senior Manager - Palantir Foundry Decision Intelligence Practice
Staffworx Limited
London
Hybrid
Senior
Private salary
RECENTLY POSTED
python
typescript
java
sql
pyspark
Decision Intelligence - Palantir Foundry, Lead Consultant, Senior Manager

We are looking for a Senior Manager with deep Palantir Foundry expertise to lead the design and delivery of production-grade data and AI solutions. You will shape end-to-end architectures, lead multidisciplinary teams and work directly with senior client stakeholders to turn complex data, AI and process challenges into scalable Foundry applications.

Key responsibilities
Act as lead architect for Foundry, owning solution design from ingestion and pipelines through Ontology, applications and AI use cases.
Translate business problems into Foundry use cases, technical designs and deliverable roadmaps.
Design and oversee data pipelines, Ontology models, security and governance patterns and application workflows in Foundry.
Guide teams of data engineers, software engineers and data scientists to deliver robust, secure and maintainable Foundry solutions.
Integrate Foundry with wider enterprise platforms, cloud environments and downstream analytics tools.
Build trusted relationships with senior stakeholders, shaping new opportunities and ensuring value realisation from the platform.
Skills and experience
Significant hands-on experience delivering Palantir Foundry solutions in complex client environments.
Deep Foundry technical expertise across the full stack: Pipeline Builder, Ontology, Workshop, OSDK, Code Repositories, Actions and AIP or agentic capabilities, able to build production-grade applications not just prototypes.
Strong proficiency in at least one relevant programming language such as Python or PySpark, Java, TypeScript or SQL.
Solid understanding of data engineering, data modelling, security and governance in enterprise settings.
Experience with software engineering best practices including Git-based development, testing and CI or CD.
Excellent communication and stakeholder management skills, with the ability to influence and align diverse technical and business audiences.
Proven leadership in building, coaching and motivating technical teams.
Sector experience in Financial Services, Government, Healthcare, Energy or Manufacturing is desirable.
Eligibility for, or current possession of, government security clearance is an advantage.
What you will receive
You will join a specialist Foundry community, working on high-impact programmes with strong support for ongoing learning and certification. A competitive package typically includes flexible and hybrid working, health and wellbeing benefits, professional development support and paid volunteering or community days.
Senior Data Engineer
Tenth Revolution Group
Multiple locations
Fully remote
Senior
£60,000 - £65,000
RECENTLY POSTED
fabric
python
sql
pyspark
About the Role
We are looking for a Senior Data Engineer to join a leading Microsoft partner that is modernising data platforms and delivering innovative analytics solutions for organisations across the UK. You will work closely with clients to understand their business challenges before designing tailored solutions that improve efficiency, drive self-service reporting and support long-term scalability. This is a hands-on role where you will support clients from a variety of different sectors. You will also be able to supplement this hands-on experience with the opportunity to gain Microsoft-focused certifications and accreditations.

Responsibilities
Build and manage data pipelines using Azure Synapse, Data Factory, Databricks or Microsoft Fabric
Design, implement and maintain data lakes, data warehouses and ETL/ELT processes
Develop scalable data models for reporting in Power BI
Work closely with stakeholders to understand business needs and advise on solutions that best fit the individual needs of the business

Skills and Experience
Hands-on experience with Azure services such as Synapse, Data Factory or Databricks
Strong SQL skills
Proficiency in Python and/or PySpark
Experience with Power BI and data modelling

What is on offer
Salary up to £65,000
Fully remote working from anywhere in the UK
Performance-related bonus scheme
Pension scheme and private healthcare options

This is just a brief overview of the role. For the full details, simply apply with your CV and we'll be in touch to discuss it further. Tenth Revolution Group are the go-to recruiter for Data & AI roles in the UK, offering more opportunities nationwide than any other recruitment agency. We are proud sponsors of SQLBits, Power Platform World Tour and the London Fabric User Group.
Lead Business Intelligence Analyst
Persimmon Homes
York
Hybrid
Senior
Private salary
RECENTLY POSTED
git
python
pandas
dynamics-crm
dimensions
pyspark
+1
Job Title: Lead Business Intelligence Analyst
Location: York, YO19

Looking for a career where your ambition meets real opportunity? Join Persimmon Homes as a Lead Business Intelligence Analyst and step into a role where your success is celebrated, your growth supported, and your work truly matters.

Why Persimmon Homes?
We're one of the UK's largest and most established housebuilders - FTSE 100 listed, with 29 regional offices and thousands of quality homes built every year. At Persimmon, we don't just build homes - we build careers. When you join us as a Lead Business Intelligence Analyst, you'll benefit from:
Competitive salary
Company Car/ Car Allowance
5* housebuilder - Be part of a company that consistently delivers quality homes and outstanding customer satisfaction
Life Cover & Contributory Pension
Bonus
Employee Benefits Platform – giving you access to high-street discounts, wellbeing support, and more
Committed to diversity, inclusion, and empowering your development
What is the role?
Persimmon Plc is recruiting for a Lead Business Intelligence Analyst to strengthen our Data Team as we continue to expand our data journey. Reporting to the Data Team Manager, the Lead BI Analyst will take ownership of the reporting and data insight function, managing a small team of BI Analysts (currently three, with potential for growth). You will combine hands-on BI development with leadership responsibilities, ensuring the delivery of accurate, well-designed, and insightful reporting across the organisation. This role can be based in Birmingham, York, or Manchester, with a flexible hybrid working option to work remotely up to three days a week.

What you'll do as a Lead Business Intelligence Analyst
Line manage and develop a team of BI Analysts, including resource allocation, performance management, and PDPs.
Oversee and review all reporting outputs, ensuring a high standard of accuracy, design, and consistency across the team.
Maintain ownership of the reporting lifecycle — from data extraction and modelling to final delivery and presentation.
Lead on BI best practices, driving continuous improvement in reporting standards, data quality, and visual design.
Collaborate and coordinate with cross-functional teams, stakeholders, and vendors to ensure the effective functioning of the enterprise data infrastructure.
Translate business requirements into technical specifications, including data streams, integrations, transformations, data models, dashboards, and reports.
Support the development and maintenance of the enterprise data architecture framework, standards, and principles.
Document key processes, maintain the data dictionary, and ensure governance and consistency.
Provide support and mentorship to analysts, fostering a culture of excellence, curiosity, and innovation.
Occasional travel to other offices or project sites (typically less than 10%).
What experience do I need?
Proven experience at Lead or Senior BI Analyst level.
Experience line managing or mentoring BI teams (ideally 3+ analysts).
Strong attention to detail and a passion for delivering high-quality, well-formatted, and visually engaging reports.
Advanced proficiency with Power BI, including data modelling and DAX.
Solid understanding of relational databases and data warehouse concepts (dimensions, facts, star schemas) and associated technologies (e.g., Databricks).
Experience with Python (including Pandas / PySpark).
Familiarity with software development and source control, in particular Git and CI/CD practices.
Excellent communication and stakeholder management skills — able to engage both technical and non-technical audiences.
Strong organisational and time management skills.
Experience within the housing or construction industry is a significant advantage.
Knowledge of ERP (COINS, Microsoft Dynamics, SAP, Oracle) and CRM systems.
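The warehouse concepts this role asks for (dimensions, facts, star schemas) reduce to joining a fact table to dimension tables by surrogate key and aggregating up a dimension attribute. A hedged, plain-Python sketch with invented table contents:

```python
# Illustrative star-schema aggregation. Table contents and column names
# are invented for the example.
dim_site = {
    1: {"site_name": "York North", "region": "Yorkshire"},
    2: {"site_name": "Leeds East", "region": "Yorkshire"},
}
fact_sales = [
    {"site_key": 1, "units": 3},
    {"site_key": 2, "units": 5},
    {"site_key": 1, "units": 2},
]

def units_by_region(facts, dim):
    """Join facts to a dimension by surrogate key and sum a measure per region."""
    totals = {}
    for row in facts:
        region = dim[row["site_key"]]["region"]
        totals[region] = totals.get(region, 0) + row["units"]
    return totals

print(units_by_region(fact_sales, dim_site))  # {'Yorkshire': 10}
```

In a BI tool the same join and roll-up is what a Power BI data model or a DAX measure over a star schema computes; laying tables out this way is what makes those measures fast and unambiguous.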
AWS Support Engineer / Data Engineer Telecom Domain
Stackstudio Digital Ltd.
Ipswich
In office
Mid - Senior
£70,000
RECENTLY POSTED
aws
processing-js
python
pyspark
aws-glue
Job Title: AWS Support Engineer / Data Engineer
Location: Ipswich (onsite)
Job Type: Permanent

Job Summary: AWS Support Engineer / Data Engineer, Telecom Domain (JD)
Key Skills & Expertise
AWS Core Services: S3, Redshift, Glue, Athena, Lake Formation, IAM
Data Engineering / ETL:
Building and optimizing ETL pipelines
Data ingestion, transformation & orchestration using AWS Glue (PySpark/Python)
Working with structured/semi-structured telecom datasets (CDRs, network logs, subscriber data)
Data Lake Technologies:
Expertise in Apache Iceberg table format
Schema evolution, partitioning, compaction & metadata management
Query performance tuning with Athena & Redshift Spectrum
Redshift Expertise:
Data modeling, distribution styles, sort keys
Workload management (WLM)
Performance optimization & troubleshooting
Python:
Automation scripts
Data processing workflows
Monitoring, debugging, validation scripts
AWS Support / Operations:
Troubleshooting ETL failures, performance bottlenecks, pipeline issues
Monitoring cloud workloads (CloudWatch, CloudTrail)
Handling incidents, root-cause analysis (RCA), patching & releases
Cost optimization and resource usage tracking
Telecom Domain:
Experience with OSS/BSS systems
Understanding of CDR processing, network KPIs, subscriber analytics
Data quality checks for telecom data pipelines
Roles & Responsibilities
Provide L2/L3 support for AWS-based data platforms in the telecom domain.
Maintain and enhance ETL pipelines built on Glue + Iceberg + Athena + Redshift.
Monitor production jobs, fix failures, optimize queries, and ensure SLA adherence.
Develop automation for operational workflows using Python.
Collaborate with data architects, business teams, and network teams for data requirements.
Implement best practices for data governance, security, and cost management.
Support migrations from legacy systems to AWS-native data lakes or Redshift.
Ideal Candidate Profile
3-10+ years of experience in AWS Data Engineering / Support Engineering.
Strong telecom domain understanding.
Hands-on with Iceberg, Athena, Glue (PySpark), Python, Redshift, S3, ETL frameworks.
Strong troubleshooting mindset and ability to work in 24x7 or rotational support environments (if required).
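The partitioning called out above (for both the Iceberg tables and the S3 data lake) usually means laying files out under Hive-style key=value prefixes so engines such as Athena can prune the partitions a query does not need. A minimal sketch of that layout; the bucket, table, and partition column names are invented for the example:

```python
from datetime import date

# Illustrative Hive-style partition layout for a CDR data lake.
# Bucket and column names ("telco-lake", "dt", "network") are hypothetical.
def partition_prefix(bucket, table, event_date, network="4g"):
    """Build an S3 prefix like s3://bucket/table/dt=YYYY-MM-DD/network=4g/."""
    return (f"s3://{bucket}/{table}/"
            f"dt={event_date.isoformat()}/network={network}/")

print(partition_prefix("telco-lake", "cdr", date(2024, 5, 1)))
# s3://telco-lake/cdr/dt=2024-05-01/network=4g/
```

A query filtered on `dt` and `network` then only scans the matching prefixes; compaction and metadata management (as in Iceberg) keep the file counts under those prefixes from degrading query performance.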
Tech Lead / Lead Data Engineer - Outside IR35 - SC + NPPV3 Cleared
SR2
London
Hybrid
Senior
£500/day - £550/day
RECENTLY POSTED
aws
terraform
github
python
amazon-s3
sql
+3
Tech Lead / Lead Data Engineer (AWS Data Platform)
Rate: £500 - £550 p/d outside IR35
Length: 1st April to end of November (initially)
Location: London (hybrid - typically 1 day per week on-site, remaining remote)
Security Clearance: SC Clearance essential + NPPV3

Overview
We're looking for a hands-on Tech Lead to lead a small team delivering secure, scalable data solutions within a highly regulated environment. You'll take technical ownership across an AWS-based data platform using S3, Glue, and Redshift, working closely with delivery leadership, architecture stakeholders, and product teams to deliver incremental value. This role suits someone who can balance technical leadership, hands-on engineering, and stakeholder-facing communication, while maintaining strong standards around security, quality, and operational resilience.

Key Responsibilities
Lead and mentor a small engineering team across data engineering, analytics engineering, and DevOps.
Own the technical design of data ingestion, transformation, storage, and access patterns.
Drive engineering standards including code quality, testing, CI/CD, Infrastructure as Code, and security-by-design.
Translate high-level requirements into solution increments, technical designs, and well-scoped delivery tickets.
Deliver and optimise data modelling approaches (e.g., star/snowflake schemas) and performance tuning practices.
Build reliable and cost-effective ETL/ELT pipelines, including orchestration and event-driven patterns where appropriate.
Partner with security stakeholders to ensure compliance, including IAM least privilege, encryption, auditability, and secure access controls.
Implement and maintain CI/CD pipelines for data workflows and platform components.
Ensure strong monitoring and operational discipline using cloud-native tooling and engineering best practice.
Communicate technical decisions, trade-offs, risks, and delivery progress to senior stakeholders.
Promote a culture of learning, quality, and continuous improvement.

Required Skills & Experience
Proven experience as a Tech Lead / Lead Data Engineer delivering AWS-based data platforms.
Strong hands-on AWS experience, including:
Amazon S3 (data lake patterns, partitioning, lifecycle policies, cost optimisation)
AWS Glue (Jobs, Crawlers, PySpark, Glue Data Catalog, orchestration)
Amazon Redshift (performance tuning, sort/dist keys, Spectrum, WLM)
Strong development skills across:
Python (including PySpark)
SQL (DDL/DML, analytical queries, data performance considerations)
Experience with Infrastructure as Code (Terraform or CloudFormation).
CI/CD experience using tools such as GitHub Actions, Azure DevOps, CodePipeline, CodeBuild, etc.
Strong understanding of security & governance in regulated environments: IAM, KMS encryption, Secrets Manager/SSM, audit logging.
Delivery capability across Agile (Scrum/Kanban) environments with strong backlog refinement discipline.
Confident stakeholder management with the ability to explain technical choices and gain consensus.
AWS Support Engineer / Data Engineer Telecom Domain
Stackstudio Digital Ltd.
Ipswich
In office
Mid - Senior
£70,000
RECENTLY POSTED
aws
processing-js
python
pyspark
aws-glue
Job Title: AWS Support Engineer / Data Engineer
Location: Ipswich (onsite)
Job Type: Permanent
Job Summary: AWS Support Engineer / Data Engineer Telecom Domain (JD)

Key Skills & Expertise
AWS Core Services: S3, Redshift, Glue, Athena, Lake Formation, IAM
Data Engineering / ETL:
Building and optimizing ETL pipelines
Data ingestion, transformation & orchestration using AWS Glue (PySpark/Python)
Working with structured/semi-structured telecom datasets (CDRs, network logs, subscriber data)
Data Lake Technologies:
Expertise in Apache Iceberg table format
Schema evolution, partitioning, compaction & metadata management
Query performance tuning with Athena & Redshift Spectrum
Redshift Expertise:
Data modeling, distribution styles, sort keys
Workload management (WLM)
Performance optimization & troubleshooting
Python:
Automation scripts
Data processing workflows
Monitoring, debugging, validation scripts
AWS Support / Operations:
Troubleshooting ETL failures, performance bottlenecks, pipeline issues
Monitoring cloud workloads (CloudWatch, CloudTrail)
Handling incidents, root-cause analysis (RCA), patching & releases
Cost optimization and resource usage tracking
Telecom Domain:
Experience with OSS/BSS systems
Understanding of CDR processing, network KPIs, subscriber analytics
Data quality checks for telecom data pipelines

Roles & Responsibilities
Provide L2/L3 support for AWS-based data platforms in the telecom domain.
Maintain and enhance ETL pipelines built on Glue + Iceberg + Athena + Redshift.
Monitor production jobs, fix failures, optimize queries, and ensure SLA adherence.
Develop automation for operational workflows using Python.
Collaborate with data architects, business teams, and network teams for data requirements.
Implement best practices for data governance, security, and cost management.
Support migrations from legacy systems to AWS-native data lakes or Redshift.

Ideal Candidate Profile
3–10+ years of experience in AWS Data Engineering / Support Engineering.
Strong telecom domain understanding.
Hands-on with Iceberg, Athena, Glue (PySpark), Python, Redshift, S3, ETL frameworks.
Strong troubleshooting mindset and ability to work in 24/7 or rotational support environments (if required).
Senior Data Engineer/ PowerBI
Head Resourcing
Glasgow
Hybrid
Senior
£60,000 - £80,000
RECENTLY POSTED
powerbi
processing-js
fabric
unity-3d
git
python
+4
Lead Data Engineer - Azure & Databricks Lakehouse
Glasgow (3/4 days onsite) | Exclusive Role with a Leading UK Consumer Business

A rapidly scaling UK consumer brand is undertaking a major data modernisation programme, moving away from legacy systems, manual Excel reporting and fragmented data sources into a fully automated Azure Enterprise Landing Zone + Databricks Lakehouse. They are building a modern data platform from the ground up using Lakeflow Declarative Pipelines, Unity Catalog, and Azure Data Factory, and this role sits right at the heart of that transformation. This is a rare opportunity to join early, influence architecture, and help define engineering standards, pipelines, curated layers and best practices that will support Operations, Finance, Sales, Logistics and Customer Care. If you want to build a best-in-class Lakehouse from scratch, this is the one.

What You’ll Be Doing
Lakehouse Engineering (Azure + Databricks)
Engineer scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark, and Spark SQL across a full Medallion Architecture (Bronze → Silver → Gold).
Implement ingestion patterns for files, APIs, SaaS platforms (e.g. subscription billing), SQL sources, SharePoint and SFTP using ADF + metadata-driven frameworks.
Apply Lakeflow expectations for data quality, schema validation and operational reliability.
Curated Data Layers & Modelling
Build clean, conformed Silver/Gold models aligned to enterprise business domains (customers, subscriptions, deliveries, finance, credit, logistics, operations).
Deliver star schemas, harmonisation logic, SCDs and business marts to power high-performance Power BI datasets.
Apply governance, lineage and fine-grained permissions via Unity Catalog.
Orchestration & Observability
Design and optimise orchestration using Lakeflow Workflows and Azure Data Factory.
Implement monitoring, alerting, SLAs/SLIs, runbooks and cost-optimisation across the platform.
DevOps & Platform Engineering
Build CI/CD pipelines in Azure DevOps for notebooks, Lakeflow pipelines, SQL models and ADF artefacts.
Ensure secure, enterprise-grade platform operation across Dev → Prod, using private endpoints, managed identities and Key Vault.
Contribute to platform standards, design patterns, code reviews and future roadmap.
Collaboration & Delivery
Work closely with BI/Analytics teams to deliver curated datasets powering dashboards across the organisation.
Influence architecture decisions and uplift engineering maturity within a growing data function.

Tech Stack You’ll Work With
Databricks: Lakeflow Declarative Pipelines, Workflows, Unity Catalog, SQL Warehouses
Azure: ADLS Gen2, Data Factory, Key Vault, vNets & Private Endpoints
Languages: PySpark, Spark SQL, Python, Git
DevOps: Azure DevOps Repos, Pipelines, CI/CD
Analytics: Power BI, Fabric

What We’re Looking For
Experience
5-8+ years of Data Engineering with 2-3+ years delivering production workloads on Azure + Databricks.
Strong PySpark/Spark SQL and distributed data processing expertise.
Proven Medallion/Lakehouse delivery experience using Delta Lake.
Solid dimensional modelling (Kimball) including surrogate keys, SCD types 1/2, and merge strategies.
Operational experience: SLAs, observability, idempotent pipelines, reprocessing, backfills.
Mindset
Strong grounding in secure Azure Landing Zone patterns.
Comfort with Git, CI/CD, automated deployments and modern engineering standards.
Clear communicator who can translate technical decisions into business outcomes.
Nice to Have
Databricks Certified Data Engineer Associate
Streaming ingestion experience (Auto Loader, structured streaming, watermarking)
Subscription/entitlement modelling experience
Advanced Unity Catalog security (RLS, ABAC, PII governance)
Terraform/Bicep for IaC
Fabric Semantic Model / Direct Lake optimisation
Data Engineer Manager
Young's Employment Services Ltd
Brent
Hybrid
Senior - Leader
£90,000
RECENTLY POSTED
fabric
aws
kafka
python
java
apache-spark
+4
Hybrid - London with 2/3 days WFH
Circa £85,000 - £95,000 + Attractive Bonus & Benefits

Hands-On Data Engineer Manager required for this exciting newly created position with a prestigious and rapidly expanding business in West London. It would suit someone with official management experience, or potentially a Lead / Senior Engineer looking to take on more managerial responsibility. The Data Engineer Manager will play a pivotal role at the heart of our client’s data & analytics operation. Having implemented a new MS Fabric based Data platform, the need now is to scale up and meet the demand to deliver data-driven insights and strategies right across the business globally. There’ll be a hands-on element to the role as you’ll be troubleshooting, reviewing code, steering the team through deployments and acting as the escalation point for data engineering. Our client can offer an excellent career development opportunity and a vibrant, creative and collaborative work environment. This is a hybrid role based in Central / West London with the flexibility to work from home 2 or 3 days per week.

Key Responsibilities include:
Define and take ownership of the roadmap for the ongoing development and enhancement of the Data Platform.
Design, implement, and oversee scalable data pipelines and ETL/ELT processes within MS Fabric, leveraging expertise in Azure Data Factory, Databricks, and other Azure services.
Advocate for engineering best practices and ensure long-term sustainability of systems.
Integrate principles of data quality, observability, and governance throughout all processes.
Participate in recruiting, mentoring, and developing a high-performing data organization.
Demonstrate pragmatic leadership by aligning multiple product workstreams to achieve a unified, robust, and trustworthy data platform that supports production services such as dashboards, new product launches, analytics, and data science initiatives.
Develop and maintain comprehensive data models, data lakes, and data warehouses (e.g., utilizing Azure Synapse).
Collaborate with data analysts, Analytics Engineers, and various stakeholders to fulfil business requirements.

Key Experience, Skills and Knowledge:
Experience leading data or platform teams in a production environment as a Senior Data Engineer, Tech Lead, Data Engineering Manager etc.
Proven success with modern data infrastructure: distributed systems, batch and streaming pipelines.
Hands-on knowledge of tools such as Apache Spark, Kafka, Databricks, DBT or similar.
Experience building, defining, and owning data models, data lakes, and data warehouses.
Programming proficiency in the likes of Python, PySpark, SQL, Scala or Java.
Experience operating in a cloud-native environment such as Azure, AWS, GCP etc (Fabric experience would be beneficial but is not essential).
Excellent stakeholder management and communication skills.
A strategic mindset, with a practical approach to delivery and prioritisation.
Exposure to data science concepts and techniques is highly desirable.
Strong problem-solving skills and attention to detail.

Salary is dependent on experience and expected to be in the region of £85,000 - £95,000 + an attractive bonus scheme and benefits package. For further information, please send your CV to Wayne Young at Young’s Employment Services Ltd. YES are operating as both a Recruitment Agency and Recruitment Business.
Senior Azure Data Engineer
Youngs Employment Services
London
Hybrid
Senior
£70,000 - £80,000
RECENTLY POSTED
processing-js
r
python
azure-databricks
delta-lake
sql
+1
Hybrid - Work From Home and West London
Circa £70,000 - £80,000 + Range of benefits

A well-known and prestigious business is looking to add a Senior Azure Data Engineer to their data team. This is an exciting opportunity for a Data Engineer that’s not just technical, but also enjoys directly engaging and collaborating with stakeholders from across business functions. Having nearly completed the process of migrating data from their existing on-prem databases to an Azure Cloud based platform, the Senior Data Engineer will play a key role in helping make best use of the data by gathering and agreeing requirements with the business to build data solutions that align accordingly. Working with diverse data sets from multiple systems and overseeing their integration and optimisation will require hands-on development, management and optimisation of data pipelines using tools in the Azure Cloud. Our client has expanded rapidly in recent years; they’re an iconic business with a special work environment that’s manifested a strong and positive culture amongst the whole workforce.

This is a hybrid role where the postholder can work from home 2 or 3 days per week; the other days will be based onsite in West London, just a few minutes' walk from a Central Line tube station.

The key responsibilities for the post include:
* Develop, construct, test and maintain data architectures within large scale data processing systems.
* Develop and manage data pipelines using Azure Data Factory, Delta Lake and Spark.
* Utilise Azure Cloud architecture knowledge to design and implement scalable data solutions.
* Utilise Spark, SQL, Python, R, and other data frameworks to manipulate data and gain a thorough understanding of the dataset’s characteristics.
* Interact with API systems to query and retrieve data for analysis.
* Collaborate with business users / stakeholders to gather and agree requirements.

To be considered for the post you’ll need at least 5 years' experience, ideally with 1 or 2 years at a senior / lead level. You’ll need to be goal driven and able to take ownership of work tasks without the need for constant supervision. You’ll be engaging with multiple business areas, so the ability to communicate effectively to understand requirements and build trusted relationships is a must.

It’s likely you’ll have most, if not all, of the following:
* Experience as a Senior Data Engineer or similar
* Strong knowledge of Azure Cloud architecture and Azure Databricks, DevOps and CI/CD.
* Experience with PySpark, Python, SQL and other data engineering development tools.
* Experience with metadata driven pipelines and SQL serverless data warehouses.
* Knowledge of querying API systems.
* Experience building and optimising ETL pipelines using Databricks.
* Strong problem-solving skills and attention to detail.
* Understanding of data governance and data quality principles.
* A degree in computer science, engineering, or equivalent experience.

Salary will be dependent on experience and likely to be in the region of £70,000 - £80,000, although the client may consider higher for an outstanding candidate. Our client can also provide a vibrant, rewarding, and diverse work environment that supports career development. Candidates must be authorised to work in the UK and not require sponsoring either now or in the future. For further information, please send your CV to Wayne Young at Young’s Employment Services Ltd. Young’s Employment Services acts in the capacity of both an Employment Agent and Employment Business.
Pyspark Engineer (AWS Glue) STEVENAGE/Hybrid £80k
Akkodis
Stevenage
Hybrid
Mid - Senior
£70,000 - £80,000
RECENTLY POSTED
aws
pyspark
aws-glue
python
talend
sql
+1
Pyspark Engineer (Data Engineering, AWS Glue)
SC Cleared OR Eligible
Stevenage (Hybrid) 2-3 days onsite
Up to £80,000
High-impact programme - Revolutionary platform

I am looking for a Pyspark expert to take the reins on a range of highly ambitious data migration projects supporting truly high-impact programmes across the UK. This is a unique opportunity to work on cutting-edge cloud, software, and infrastructure projects that shape the future of technology in both public and private sectors. You’ll be part of a collaborative team delivering scalable, next-generation digital ecosystems.

What you’ll be doing
As a Developer within our Centre of Excellence, you will play a critical role in delivering complex data migration and data engineering projects for our clients. This position focuses on the planning, execution, and optimisation of data migrations from legacy platforms to modern cloud-based environments, ensuring accuracy, consistency, security, and continuity throughout the process.

Key Responsibilities
Analyse existing data structures and understand business and technical requirements for migration initiatives.
Design and deliver robust data migration strategies and ETL solutions.
Develop automated data extraction, transformation, and loading (ETL) processes using industry-standard tools and scripts.
Work closely with stakeholders to ensure seamless migration and minimal business disruption.
Plan, coordinate, and execute data migration projects within defined timelines.
Ensure the highest standards of data quality, integrity, and security.
Troubleshoot and resolve data-related issues promptly.
Collaborate with wider engineering and architecture teams to ensure migrations align with organisational and regulatory standards.
Relevant exposure:
Strong hands-on experience with ETL processes and tools (Talend, Informatica, Matillion, Pentaho, MuleSoft, Boomi) or Scripting using Python, PySpark, and SQL.
Proficient-level SQL skills for complex query development, performance tuning, indexing, and data transformation across on-premise databases and AWS cloud environments.
Solid understanding of data warehousing and modelling techniques (Star Schema, Snowflake Schema).
Familiarity with security frameworks such as GDPR, HIPAA, ISO 27001, NIST, SOX, and PII, as well as AWS security features including IAM, KMS, and RBAC.
Ability to identify and resolve data quality issues across migration projects.
Strong track record of delivering end-to-end data migration projects and working effectively with both technical and non-technical stakeholders.

Salary up to £80,000 plus wider benefits - contact me today for further insight (see below).

Modis International Ltd acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers in the UK. Modis Europe Ltd provide a variety of international solutions that connect clients to the best talent in the world. For all positions based in Switzerland, Modis Europe Ltd works with its licensed Swiss partner Accurity GmbH to ensure that candidate applications are handled in accordance with Swiss law. Both Modis International Ltd and Modis Europe Ltd are Equal Opportunities Employers. By applying for this role your details will be submitted to Modis International Ltd and/or Modis Europe Ltd. Our Candidate Privacy Information Statement, which explains how we will use your information, is available on the Modis website.
MLOps Tech Lead
Stackstudio Digital Ltd.
London
Hybrid
Senior
£500/day - £525/day
processing-js
aws
mongodb
mysql
tensorflow
git
+13
Job Details
Role / Job Title: MLOps Tech Lead
Work Location: London, UK
Office Requirement (Hybrid): 2 days per week

The Role
As a Tech Lead, you will play a critical role in designing, building, and maintaining data pipelines and infrastructure that enable the development and deployment of machine learning models and drive engineering excellence. You will collaborate closely with data scientists, lead ML engineers, and software engineers to ensure data is clean, accessible, and optimised for large-scale processing and analysis.

Your Responsibilities
Data Pipeline Development: Lead the technical direction of projects and ensure the use of Sainsbury’s best practices to the best quality.
Data Integration: Lead and provide expertise on integrating data from various sources, ensuring data consistency, integrity, and quality across the entire data lifecycle.
Infrastructure Management: Provide guidance for junior and mid-level Data Engineers on the best practices when building and managing data infrastructure, including data lakes, warehouses, and distributed processing systems (e.g., PySpark, Hadoop).
Data Preparation: Collaborate with data scientists to prepare and transform raw data into formats suitable for machine learning, including feature engineering and data augmentation.
Automation: Implement automation tools and frameworks (CI/CD) to streamline the deployment and monitoring of machine learning models in production.
Performance Optimisation: Optimise data processing workflows and storage solutions to improve performance and reduce costs.
Collaboration: Work closely with cross-functional teams, including data science, engineering, and product management, to deliver data solutions that meet business needs.
Mentorship: Mentor junior and mid-level data engineers, providing technical guidance on best practices and emerging technologies in data engineering and machine learning, and helping to enhance their skills and career growth.
Knowledge Sharing and Empowerment: Promote a culture of knowledge sharing within the engineering teams by organising regular technical workshops, brown bag sessions, and code reviews.
Innovation and Continuous Improvement: Foster a collaborative and inclusive team environment that encourages continuous learning and improvement.

Your Profile
Essential Skills / Knowledge / Experience
Knowledge of machine learning frameworks (e.g., PySpark, PyTorch) and model deployment tools (e.g., MLflow, TensorFlow Serving).
Strong experience with data processing frameworks (e.g., Apache Spark, Flink).
Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
Hands-on experience with cloud platforms (e.g., AWS, GCP, Azure) and their data services (e.g., Snowflake, S3, BigQuery, Redshift).
Experience with containerisation and orchestration tools (e.g., Docker, Kubernetes).
Familiarity with version control systems (e.g., Git) and CI/CD pipelines.
Desirable Skills / Knowledge / Experience
Certifications: AWS Certified Big Data Specialty, Google Professional Data Engineer, or equivalent.
Soft Skills:
Excellent problem-solving and analytical skills.
Strong communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
Ability to work independently and in a team-oriented, collaborative environment.
Leadership and Communication
Strong leadership skills with the ability to inspire and guide the team.
Lead scrum ceremonies as and when needed (stand-up, planning, and grooming sessions).
Excellent verbal and written communication skills, with the ability to articulate complex technical concepts.
Creating a safe and inclusive environment where all team members feel that their input is valued and are never dissuaded from speaking up or asking questions.
Collaborative Attitude
Strong team player with a collaborative approach to working with cross-functional teams within the Media Agency.
Open to feedback and willing to provide constructive criticism to others.
Be available for the team, responding within a reasonable time frame and, if not possible, clearly signposting alternative contacts who can guide.
Building a community across the Media Agency.
Contribute to a positive and inclusive atmosphere within the team.
Knowledge Sharing and Empowerment
Commitment to fostering a learning culture within the team and ensuring knowledge transfer across all levels.
Support and mentor C3 and C4 engineers by providing them opportunities to lead initiatives and contribute to the technical roadmap.
Senior Data Engineer
Opus Recruitment Solutions
Milton Keynes
Hybrid
Senior
£450/day - £525/day
python
sql
pyspark
dbt
Outside IR35 | £500–£525 per day | Milton Keynes | Hybrid Working | 6‑Month Initial Term

We are currently looking for an experienced Senior Data Engineer to support a major data modernisation programme. You’ll be instrumental in reshaping and enhancing data pipelines as the business moves towards a Databricks Lakehouse setup. The work centres on creating scalable, high‑quality data flows that underpin analytics, reporting, and strategic insight across the organisation. This contract is outside IR35, requires one on‑site day each week, and offers an immediate start with strong extension prospects.

What You’ll Be Doing
Developing, refining, and maintaining robust ELT/ETL data pipelines
Supporting the migration of data assets into a Databricks Lakehouse framework
Ensuring data is accurate, reliable, and optimised for analytical consumption
Partnering with stakeholders to deliver well‑engineered, business‑aligned solutions
Monitoring production systems and resolving performance or reliability issues

What They’re Looking For
7+ years of Data Engineering experience, ideally within cloud‑native environments
Strong background in building and optimising large‑scale data pipelines
Practical expertise with Databricks and Azure services
Confident communicator with strong problem‑solving ability

Core Technologies
Databricks
DBT
Python
PySpark
SQL
Azure

If you are interested in this role then please apply via this platform or email me a copy of your most up to date CV to (url removed) and I will be in touch.

Outside IR35 | £500–£525 per day | Milton Keynes | Hybrid Working | 6‑Month Initial Term
Senior Data Engineer
WRK DIGITAL LTD
Leeds
Hybrid
Senior
£65,000
processing-js
github
python
azure-databricks
sql
pyspark
*Senior Data Engineer*
Location: Leeds, West Yorkshire (Hybrid, 2 days per week)
Employment Type: Permanent
Status: Actively Hiring
Salary: £55,000-£65,000 + Excellent Benefits

WRK digital is excited to be partnering exclusively with a high-profile, UK-leading organisation, currently shortlisting for a Senior Data Engineer on a permanent basis. This role sits at the heart of a cloud-first data strategy, helping to build scalable, secure, and high-quality data platforms that support critical, real-world decision-making. As a Senior Data Engineer, you’ll design and deliver robust data pipelines using Azure, Databricks, Python, SQL, and Spark, working closely with data scientists, analysts, and stakeholders in an agile environment. You’ll also play a key role in mentoring others, shaping best practice, and contributing to the evolution of modern data architecture.

Key Experience
Extensive experience with Azure services including Azure Databricks, Azure Data Lake Storage, and Azure Data Factory.
Advanced proficiency in SQL, Python, and Spark (PySpark), with a strong focus on performance optimization and distributed processing.
Proven experience in CI/CD practices using industry-standard tools (e.g., GitHub Actions, Azure DevOps).
Strong understanding of data architecture principles and cloud-native design patterns.

We’re currently shortlisting this month, with interviews planned for early January. Unfortunately this role does not offer sponsorship at this time.
Data Engineer
Teksystems
Sheffield
Hybrid
Mid - Senior
£450/day - £500/day
python
splunk
hadoop
pyspark
Description
Our Tier 1 banking client are seeking an experienced Data Engineer for a long-term contracting position.
Objective: Automate ingestion, correlation, and reporting of in-house datasets; eliminate manual Excel/macros processes.

Skills
Python
Data
Hadoop
Splunk
Pyspark
Automation

Please note this will require someone onsite 3 days a week in Sheffield.

Job Title: Data Engineer
Location: Sheffield, UK
Rate/Salary: 450.00 - 500.00 GBP Daily
Job Type: Contract

Trading as TEKsystems. Allegis Group Limited, Maxis 2, Western Road, Bracknell, RG12 1RT, United Kingdom. No. (phone number removed). Allegis Group Limited operates as an Employment Business and Employment Agency as set out in the Conduct of Employment Agencies and Employment Businesses Regulations 2003. TEKsystems is a company within the Allegis Group network of companies (collectively referred to as “Allegis Group”). Aerotek, Aston Carter, EASi, Talentis Solutions, TEKsystems, Stamford Consultants and The Stamford Group are Allegis Group brands. If you apply, your personal data will be processed as described in the Allegis Group Online Privacy Notice available at (url removed). To access our Online Privacy Notice, which explains what information we may collect, use, share, and store about you, and describes your rights and choices about this, please go to (url removed). We are part of a global network of companies and as a result, the personal data you provide will be shared within Allegis Group and transferred and processed outside the UK, Switzerland and European Economic Area subject to the protections described in the Allegis Group Online Privacy Notice. We store personal data in the UK, EEA, Switzerland and the USA. If you would like to exercise your privacy rights, please visit the “Contacting Us” section of our Online Privacy Notice at (url removed)/en-gb/privacy-notices for details on how to contact us. To protect your privacy and security, we may take steps to verify your identity, such as a password and user ID if there is an account associated with your request, or identifying information such as your address or date of birth, before proceeding with your request. If you are resident in the UK, EEA or Switzerland, we will process any access request you make in accordance with our commitments under the UK Data Protection Act, EU-U.S. Privacy Shield or the Swiss-U.S. Privacy Shield.
Senior Data Engineer, SQL, RDBMS, AWS, Python, Mainly Remote
Carrington Recruitment Solutions
London
Fully remote
Senior
£85,000 - £95,000
aws
python
sql
celery
rabbitmq
pyspark
Senior Data Engineer, SQL, RDBMS, Python, Celery, RabbitMQ, AWS, Part Central London, Mainly Remote

Senior Data Engineer (SQL, RDBMS, Python, AWS) required to work for a fast growing and exciting business based in Central London. However, this role is mainly remote. We need an experienced Data Developer who is a good people person, working with client-facing teams outside of Technology, and also mentoring more junior members of the team across Europe. As the company is fast growing, there will be an opportunity to move upwards at certain points throughout your journey. Read on for more details…

Responsibilities
* Collaborate with product managers and business stakeholders to understand complex business requirements to translate business needs into well-designed and maintainable solutions
* Ensure data quality and reliability by implementing robust data quality checks, monitoring, and alerting to ensure the accuracy and timeliness of all data pipelines
* Create data governance policies and develop data models and schemas optimized for analytical workloads
* Influence the direction for key infrastructure and framework choices for data pipelining and data management
* Manage complex initiatives by setting project priorities, deadlines, and deliverables
* Collaborate effectively with distributed team members across multiple time zones, including offshore development teams

Skills required:
* Proven track record building scalable data pipelines (batch and streaming) in production
* Expert Python, PySpark, Celery and RabbitMQ skills; deep experience with AWS data stack (Glue, OpenSearch, RDS)
* Expert skills within SQL with experience in both transactional RDBMS systems and distributed systems
* Hands-on with Lakehouse technologies (Apache Iceberg, S3 Tables, StarRocks)
* Strong grasp of data governance, schema design, and quality frameworks
* Comfortable leading infrastructure decisions and collaborating across distributed teams

This is a fantastic opportunity and salary is dependent upon experience. Apply now for more details.
Azure Data Engineer - £500 - Hybrid
Tenth Revolution Group
Newcastle upon Tyne
Hybrid
Mid - Senior
£450/day - £550/day
processing-js
fabric
terraform
github
git
kafka
+7
Azure Data Engineer - £500PD - Hybrid

We are seeking an Azure Data Engineer with strong experience in Databricks to design, build, and optimize scalable data pipelines and analytics solutions on the Azure cloud platform. The ideal candidate will have hands-on expertise across Azure data services, data modeling, ETL/ELT development, and collaborative engineering practices.

Key Responsibilities
* Design, develop, and maintain scalable data pipelines using Azure Databricks (Python, PySpark, SQL).
* Build and optimize ETL/ELT workflows that ingest data from various on-prem and cloud-based sources.
* Work with Azure services including Azure Data Lake Storage, Azure Data Factory, Azure Synapse Analytics, Azure SQL, and Event Hub.
* Implement data quality validation, monitoring, metadata management, and governance processes.
* Collaborate closely with data architects, analysts, and business stakeholders to understand data requirements.
* Optimize Databricks clusters, jobs, and runtimes for performance and cost efficiency.
* Develop CI/CD workflows for data pipelines using tools such as Azure DevOps or GitHub Actions.
* Ensure security best practices for data access, data masking, and role-based access control.
* Produce technical documentation and contribute to data engineering standards and best practices.

Required Skills and Experience
* Proven experience as a Data Engineer working with Azure cloud services.
* Strong proficiency in Databricks, including PySpark, Spark SQL, notebooks, Delta Lake, and job orchestration.
* Strong SQL and data modeling skills (e.g., dimensional modeling, data vault).
* Experience with Azure Data Factory or other orchestration tools.
* Understanding of data lakehouse architecture and distributed computing principles.
* Experience with CI/CD pipelines and version control (Git).
* Knowledge of REST APIs, JSON, and event-driven data processing.
* Solid understanding of data governance, data lineage, and security controls.
* Ability to solve complex technical problems and communicate solutions clearly.

Preferred Qualifications
* Industry certifications (e.g., Databricks Data Engineer Associate/Professional, Azure Data Engineer Associate).
* Experience with Azure Synapse SQL or serverless SQL pools.
* Familiarity with streaming technologies (e.g., Spark Structured Streaming, Kafka, Event Hub).
* Experience with infrastructure-as-code (Terraform or Bicep).
* Background in BI or analytics engineering (Power BI, dbt) is a plus.

To apply for this role please submit your CV or contact Dillon Blackburn on (phone number removed) or at (url removed). Tenth Revolution Group are the go-to recruiter for Data & AI roles in the UK, offering more opportunities across the country than any other recruitment agency. We’re the proud sponsor and supporter of SQLBits, Power Platform World Tour, and the London Fabric User Group. We are the global leaders in Data & AI recruitment.

Frequently asked questions

What types of PySpark jobs are listed on Haystack?
Haystack features a wide range of PySpark job listings, including roles such as Data Engineer, Big Data Developer, Data Scientist, and Analytics Engineer working with Apache Spark and PySpark across a variety of industries.
Do I need to have experience with Apache Spark to apply for PySpark jobs?
While specific requirements vary by employer, most PySpark job listings require a strong understanding of Apache Spark fundamentals, including working with RDDs, DataFrames, and Spark SQL through PySpark.
Can I find remote PySpark job opportunities on Haystack?
Yes, Haystack includes many remote and hybrid PySpark job listings, allowing you to work from anywhere while leveraging your PySpark skills.
How can I improve my chances of getting hired for a PySpark job through Haystack?
To increase your chances, make sure your resume highlights relevant experience with PySpark and big data technologies, tailor your application to each job description, and consider obtaining certifications in Apache Spark or related technologies.
Are there entry-level PySpark positions available on Haystack?
Yes, Haystack lists entry-level PySpark roles for candidates new to the field or transitioning from other data technologies. These listings typically require foundational knowledge of Python and Spark along with eagerness to learn.