Make yourself visible and let companies apply to you.
Roles
Data Engineer Jobs
Overview
Looking for top Data Engineer jobs? Explore the latest data engineering opportunities on Haystack, your go-to IT job board. Whether you're skilled in ETL, data pipelines, or big data technologies, find the perfect role to advance your career today. Start your search for Data Engineer positions now!
Azure AI Engineer
Sanderson Recruitment
Luton
Hybrid
Mid - Senior
£400/day - £450/day
RECENTLY POSTED

£450/day outside IR35

London-based - twice a week on-site

6 month initial contract

We are working with a financial services client looking to build a new AI/ML function. They are seeking an MLOps Engineer to design, develop and implement AI and ML applications.

Will require working cross-functionally to conceptualise, design, test and deploy AI projects.

Azure AI/ML Engineer, key responsibilities:

  • Build, develop and deploy AI applications using Python
  • Design and develop AI services
  • Set up and develop data ingestion pipelines and components
  • Develop search-related components using Azure AI Search
  • Develop and deploy AI/ML models
  • Build and maintain scalable, high-performance AI apps on Azure
  • Create and maintain a secure AI platform
  • Collaborate cross-functionally to understand requirements

Azure AI Engineer, Azure ML, Python

Reasonable Adjustments:

Respect and equality are core values to us. We are proud of the diverse and inclusive community we have built, and we welcome applications from people of all backgrounds and perspectives. Our success is driven by our people, united by the spirit of partnership to deliver the best resourcing solutions for our clients.

If you need any help or adjustments during the recruitment process for any reason, please let us know when you apply or talk to the recruiters directly so we can support you.

OpenShift Telemetry Engineer
Avance Consulting
London
Remote or hybrid
Mid - Senior
Private salary
RECENTLY POSTED

The Role

We are seeking a skilled OpenShift Telemetry Engineer to join our team. In this role, you will be responsible for implementing, managing, and optimizing the observability stack within a Red Hat OpenShift Container Platform environment to ensure system health, performance, and security.

You will act as a bridge between application monitoring and infrastructure observability, leveraging modern telemetry and data streaming tools.

Key Responsibilities

  • Design, implement, and maintain data pipelines to ingest and process OpenShift telemetry data (metrics, logs, and traces) at scale.
  • Stream OpenShift telemetry through Kafka (producers, topics, schemas) and build resilient consumer services for transformation and enrichment.
  • Engineer data models and routing mechanisms for multi-tenant observability while ensuring data lineage, quality, and SLA adherence across streaming layers.
  • Integrate processed telemetry into Splunk for dashboards, visualization, alerting, and analytics to achieve Observability Level 4 (proactive insights).
  • Implement schema management, governance, and versioning using Avro or Protobuf for telemetry events.
  • Build automated validation, replay, and backfill mechanisms to ensure data reliability and recovery.
  • Instrument services with OpenTelemetry, standardizing tracing, metrics, and structured logging across platforms.
  • Utilize LLM-based capabilities to enhance observability (e.g., query assistance, anomaly summarization, runbook generation).
  • Collaborate with Platform, SRE, and Application teams to integrate telemetry, alerts, and SLOs.
  • Ensure security, compliance, and best practices for telemetry data pipelines and observability platforms.
  • Document data flows, schemas, dashboards, and operational runbooks.

Required Skills & Experience

  • Hands-on experience building streaming data pipelines with Kafka (producers/consumers, schema registry, Kafka Connect, KSQL, Kafka Streams).
  • Strong experience with OpenShift / Kubernetes telemetry, including OpenTelemetry and Prometheus.
  • Experience integrating telemetry into Splunk (HEC, Universal Forwarder, source types, CIM) and building dashboards and alerts.
  • Strong data engineering skills using Python (or similar languages) for ETL/ELT, enrichment, and validation.
  • Experience with event schemas (Avro, Protobuf, JSON) and schema compatibility strategies.
  • Familiarity with observability frameworks and maturity models, driving toward Level 4 observability (proactive monitoring and automated insights).
  • Understanding of hybrid cloud and multi-cluster telemetry architectures.

Preferred Skills:

  • Security and compliance practices for data pipelines, including:

    • Secret management
    • RBAC
    • Encryption in transit and at rest
  • Strong problem-solving and analytical skills.

  • Ability to work effectively in cross-functional teams.

  • Excellent communication and documentation skills.

Data Engineer
Peregrine
London
In office
Senior
Private salary
RECENTLY POSTED

We are Data Services. Our mission is to unlock the value of data by delivering high-quality, reliable, and secure data services that are accessible, understandable, and actionable. We continuously evolve our offerings, leveraging modern cloud-based technologies and fostering strong partnerships to help our colleagues in the Bank navigate the complexities of a data-driven world and achieve their strategic objectives.

Active SC Clearance

Job Description:

The world of data in Central Banking is evolving rapidly. With the rise of detailed data collection in financial regulation and the swift advancements in cloud-native data technologies, the demand for visionary data engineers is growing. We're seeking a senior Data Engineer to join our Data Engineering team and play a pivotal role in shaping the Bank's strategic cloud-first data platform.

As a senior member of the team, you will play a key role in designing and delivering robust, scalable data solutions that support the Bank's core responsibilities around monetary policy, financial stability, and regulatory supervision. You'll contribute to technical design decisions, mentor engineers, and collaborate across teams to ensure our data infrastructure continues to evolve and meet future demands.

Role Responsibilities

  • Lead the design, development, and deployment of scalable, secure, and cost-effective distributed data solutions using Azure services (e.g., Azure Databricks, Azure Data Lake Storage, Azure Data Factory).
  • Architect and implement advanced data pipelines using Databricks, Delta Lake, Python and Spark, ensuring performance, reliability, and maintainability across cloud and on-prem environments.
  • Champion data quality, governance, and observability, ensuring data is accurate, timely, and fit-for-purpose for analytics, BI, and operational use cases.
  • Drive the modernization of legacy systems, leading the migration of data infrastructure to Azure with minimal disruption and long-term scalability.
  • Act as a technical authority on Azure-native data engineering, guiding best practices and setting standards across the team.
  • Mentor and coach junior and mid-level engineers, fostering a culture of continuous learning, innovation, and technical excellence.
  • Collaborate with architects, analysts, and stakeholders to align data engineering efforts with strategic business goals and enterprise data strategy.
  • Evaluate and introduce emerging technologies, tools, and methodologies to enhance the Bank's data capabilities.
  • Own the end-to-end delivery of complex data solutions, from requirements gathering to production deployment and support.
  • Contribute to the development of reusable frameworks, templates, and patterns to accelerate delivery and ensure consistency across projects.
Minimum Criteria
  • Extensive experience with Azure services including Azure Databricks, Azure Data Lake Storage, and Azure Data Factory.
  • Advanced proficiency in SQL, Python, and Spark (PySpark), with a strong focus on performance optimization and distributed processing.
  • Proven experience in CI/CD practices using industry-standard tools (e.g., GitHub Actions, Azure DevOps).
  • Strong understanding of data architecture principles and cloud-native design patterns.
Essential Criteria
  • Demonstrated ability to lead technical delivery, mentor engineering teams and collaborate with stakeholders to ensure alignment between data solutions and business strategy.
  • Proficiency in Linux/Unix environments and shell scripting.
  • Deep understanding of source control, testing strategies, and agile development practices.
  • Self-motivated with a strategic mindset and a passion for driving innovation in data engineering.
Desirable Criteria
  • Experience delivering data pipelines on Hortonworks/Cloudera on-prem and leading cloud migration initiatives.
  • Familiarity with Apache Airflow
  • Familiarity with data modelling and metadata management
  • Experience influencing enterprise data strategy and contributing to architectural governance.

Senior Data Engineer
HAYS
Abingdon
Hybrid
Senior
£65,000
RECENTLY POSTED

Your new company

An established and fast-growing technology organisation is on a mission to transform digital connectivity across the UK. With a focus on building and operating high-speed fibre networks, the business is committed to delivering world-class broadband services to communities and supporting a data-driven future. You’ll be joining a forward-thinking environment that values innovation, collaboration, and continuous improvement.

Your new role

As a Senior Data Engineer, you will play a pivotal role in shaping and enhancing the organisation’s enterprise data platform. Working within a specialist Data Analytics & AI team, you’ll be responsible for designing, building, and maintaining scalable data pipelines and models within Snowflake to support analytics, reporting, and data-led decision-making across the business.

You will translate data architecture strategies into high-quality technical solutions, optimise performance and cost, and ensure the data platform is reliable, secure, and well-structured. This includes developing ELT/ETL pipelines using tools such as dbt and Argo Workflows, implementing data quality and governance practices, and leveraging advanced Snowflake features to drive automation and efficiency.

Collaboration is key: you’ll work closely with analysts, data consumers, and business stakeholders, enabling them through well-designed data models and providing technical support where needed. You’ll also contribute to monitoring, CI/CD processes, and ongoing improvements to engineering standards across the team.

What you’ll need to succeed

  • Proven experience delivering cloud-based data engineering solutions, ideally centred around Snowflake
  • Strong skills in SQL, Python, and dbt for data modelling and transformation
  • Experience with Snowflake RBAC and performance optimisation
  • Familiarity with ingestion/replication tools such as Airbyte, Fivetran, Hevo, or similar
  • Understanding of cloud technologies (AWS preferred)
  • Knowledge of data modelling, governance principles, and best-practice engineering standards
  • Experience supporting BI/reporting tools such as Power BI
  • Solid grounding in version-controlled development and CI/CD practices (Git)

Desirable:

  • Exposure to enterprise systems like Salesforce, BSS/OSS, telephony, or call-centre data
  • Experience in data platform migrations, data validation, and quality assurance
  • Background in enabling business teams through training, documentation, or adoption support
  • Familiarity with Terraform or Infrastructure-as-Code
  • A mindset for continuous learning and staying up to date with modern data stack tooling

What you need to do now

If you’re interested in this role, click ‘apply now’ to forward an up-to-date copy of your CV, or call us now.
If this job isn’t quite right for you, but you are looking for a new position, please contact us for a confidential discussion about your career.

Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this job you accept the T&C’s, Privacy Policy and Disclaimers which can be found at hays.co.uk

Data Engineer AWS Python Kafka
Client Server
London
Hybrid
Mid - Senior
£85,000
RECENTLY POSTED

Data Engineer (AWS Python Kafka) London / WFH to £85k

Are you a tech savvy Data Engineer with AWS expertise combined with client facing skills?

You could be joining a global technology consultancy with a range of banking, financial services and insurance clients in a senior, hands-on Data Engineer role.

As a Data Engineer you will design and build end-to-end real-time data pipelines using AWS native tools, Kafka and modern data architectures, applying AWS Well-Architected Principles to ensure scalability, security and resilience. You’ll collaborate directly with clients to analyse requirements, define solutions and deliver production grade systems, leading the development of robust, well tested and fault tolerant data engineering solutions.

Location / WFH:

There’s a hybrid work from home model with two days a week in the City of London office (or at client site in London).

About you:

  • You are an experienced Data Engineer within financial services or consulting environments
  • You have expertise with AWS including Lake Formation and transformation layers
  • You have strong Python coding skills
  • You have experience with real-time data streaming using Kafka
  • You’re collaborative and pragmatic with excellent communication and stakeholder management skills
  • You’re comfortable taking ownership of projects and working end-to-end
  • You have a good knowledge of Distributed Systems and DevOps tooling
  • Ideally you will also have Databricks experience

What’s in it for you:

As a Data Engineer you will earn a highly competitive package:

  • Salary to £85k
  • Bonus c.15%
  • Pension (up to 7% employer contribution), Life Assurance, Income Protection
  • Private medical care for you and your family, including mental health
  • Travel Insurance
  • Charitable giving
  • Gym membership for you and your family
  • Flexible holiday scheme

Apply now to find out more about this Data Engineer (AWS Python Kafka) opportunity.

At Client Server we believe in a diverse workplace that allows people to play to their strengths and continually learn. We’re an equal opportunities employer whose people come from all walks of life and will never discriminate based on race, colour, religion, sex, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. The clients we work with share our values.

Head of Data
Cathcart Technology
Glasgow
Hybrid
Leader
Private salary
RECENTLY POSTED

Head of Data required to lead and evolve enterprise-wide data platforms within a global organisation in Glasgow. This is a senior role responsible for building scalable platforms, maintaining high standards of data governance and quality, leading a high-performing team, and shaping the organisation’s long-term data strategy.

The Organisation

This is a large, global organisation where data underpins critical business functions. Over the past five years, the organisation has been developing its data capabilities and recently launched its first enterprise data platform. The next phase is to replicate these platforms across multiple business domains while enhancing governance, reliability, and value from the data estate.

The firm continues to invest in cloud-based platforms, analytics, and AI, with a focus on secure, scalable, and high-quality data solutions. Senior technology leaders are trusted to guide strategy as well as deliver results operationally.

The Role

You will take end-to-end ownership of the enterprise data platforms, ensuring they are robust, reliable, and scalable, while driving improvements in data quality and governance. You’ll manage a team of data engineers and act as the central coordinator across data architects, governance, and reporting teams, ensuring alignment and successful delivery across the business.

In addition, you will contribute to developing and executing the firm’s data strategy, supporting innovation and longer-term ambitions including AI and advanced analytics initiatives.

What You’ll Be Doing

  • Leading the design, implementation, and optimisation of enterprise data platforms
  • Ensuring data governance, master data management, and quality standards are embedded across all platforms
  • Managing, mentoring, and developing a team of data engineers, while coordinating cross-functional teams of architects, governance, and reporting specialists
  • Building and maintaining scalable, reliable, and reusable data pipelines across multiple sources
  • Collaborating with senior stakeholders to translate business priorities into actionable data initiatives
  • Driving the adoption of cloud services, analytics, and AI to enhance the data estate
  • Managing vendors and third-party partners to ensure delivery, performance, and value

What They’re Looking For

  • Proven experience leading enterprise data platform initiatives
  • Strong technical expertise in data engineering, data management, and cloud platforms (e.g., Azure, MS Fabric, Databricks)
  • Track record of delivering complex, high-value data solutions with strong governance and quality controls
  • Experienced people leader capable of managing teams and coordinating cross-functional stakeholders
  • Skilled at shaping and delivering data strategy, with a vision for AI and advanced analytics
  • Excellent stakeholder management and communication skills at senior and executive levels
  • Understanding of business functions such as finance, HR, compliance, and operational processes

The Offer

A competitive salary and benefits package is on offer, alongside hybrid working (typically 2-3 days per week in their city centre office).

This is a senior, high-profile leadership role with the opportunity to shape the enterprise data landscape, build a high-performing team, and drive strategic innovation across the organisation.

If this sounds of interest, please apply or reach out to Murray Simpson.

Cathcart Technology is acting as an Employment Agency in relation to this vacancy.

Research Technician
UNIVERSITY OF STIRLING
Stirling
Hybrid
Junior - Mid
£31,236 - £37,694
RECENTLY POSTED
TECH-AGNOSTIC ROLE

The Post

The post holder will be responsible for managing the technical development of a web-based battery of measures of curiosity for children. The post holder will work with the project team to implement specified tasks and surveys into a unified web-based platform for data collection in schools, public spaces, and the lab, and to develop the data architecture for the battery and produce the pipelines for wrangling and cleaning the data ready for analysis.

The finalised battery will be made available open source for other researchers and educators and resulting data should be shareable. A key part of the role will be documentation of the measures and the resulting data in line with Open Science practices, including preparation of preregistration documents. The post holder will develop training materials for the end users of the battery including researchers and educators. Additionally, the post holder may contribute to research activities such as data collection and data analysis.

The post holder will be an integral member of the dynamic research team at Stirling and Lancaster Universities and contribute to a line of studies assessing curiosity and its effects in primary school age children. The post holder will have opportunities to develop new skills, collaborations, and research ideas within the role.

The University of Stirling has an Agile Working Framework that will enable the post holder to work flexibly to deliver the project objectives.

Description of Duties

  • Lead the technical development of an open source, web-based battery of curiosity measures including the data architecture, and secure access and sharing, for the Curiosity Battery
  • Develop data cleaning, wrangling, and preliminary analysis pipelines for the resulting data
  • Collaborate with members of the research team to write methodological and protocol publications
  • Harmonise measures and data with existing developmental research platforms
  • Develop training materials and train research staff and students on use of the Curiosity Battery as a research tool
  • Develop training materials for educators on the use of the Curiosity Battery for remote data collection in schools

Essential Criteria

  • Educated to degree level or equivalent in relevant discipline (e.g., psychology, education, data science, or web programming)
  • Proven experience developing web-based data collection tools
  • Experience developing data cleaning and/or analysis pipelines
  • General knowledge and understanding of Open Science practices

Desirable Criteria

  • Postgraduate research qualification to Masters or PhD level
  • Experience of working with children in research settings
  • Experience harmonising data across projects
  • Experience in training staff and students in use of data collection tools
  • Experience developing documentation for research tools
  • Knowledge of curiosity research

Additional Information

Part time (40% FTE)

Fixed term for 12 months

Grade 6: £31,236 - £37,694 p.a. pro-rata

The closing date for applications is midnight on Sunday 05 April 2026.

Interviews are expected to take place on the week commencing Monday 20 April 2026.

There is an expectation that work will be undertaken in the UK.

This role will require membership of the PVG scheme. An offer of employment will be subject to a satisfactory outcome of this process.

This role is not eligible for sponsorship. Applicants must have an existing right to work in the UK.

The University of Stirling recognises that a diverse workforce benefits and enriches the work, learning and research experiences of the entire campus and greater community. We are committed to removing barriers and welcome applications from those who would contribute to further diversification of our staff and ensure that equality, diversity and inclusion is woven into the substance of the role. We strongly encourage applications from people from diverse backgrounds including gender, identity, race, age, class, and ethnicity.

For a full description of duties and essential/desirable criteria please click the apply button, which will take you directly to the University Website.

ML&AI Engineer
Vallum Associates Limited
Sheffield
Hybrid
Mid - Senior
£500/day
RECENTLY POSTED
TECH-AGNOSTIC ROLE

We are currently looking for an experienced ML & AI Engineer to join a major technology program delivering advanced AI-driven solutions within the banking sector. The role involves working on innovative AI initiatives, building scalable infrastructure, and developing intelligent systems that power agent-based workflows and conversational AI platforms.

You will collaborate with cross-functional teams to design and implement next-generation AI capabilities and help drive the evolution of AI-powered products.

Program Scope

  • Develop and provision infrastructure that supports agentic AI workflows across both Azure and Google Cloud Platform (GCP) environments.
  • Provide data science expertise to support the design of agent-based solutions, including Coach AI and future AI Assistant capabilities.
  • Create integration patterns for AI agents to interact with banking systems and perform actions on behalf of customers.
  • Contribute to the development of new AI products within the Conversational Banking Lab.

Key Initiatives Include

Agent Summarisation
Develop advanced capabilities to summarise complex and nuanced customer conversations.

App Search Evolution
Transform existing vector search functionality into a fully generative AI-driven search experience, creating a single unified interface for users.

Evaluation Methods
Build automated evaluation frameworks to test and validate both deterministic and generative AI conversations at scale.

Required Skills & Experience

Must Have

  • Strong Python development skills, with 2+ years of experience building production-grade applications using Large Language Models (LLMs).

  • Solid understanding of software engineering principles, including:

    • Microservices architecture
    • CI/CD pipelines
    • Event-driven architecture
  • Hands-on experience with AI engineering practices, including:

    • RAG (Retrieval-Augmented Generation) pipelines
    • Prompt engineering
    • LLMOps
    • Runtime monitoring and evaluation of AI systems
  • Experience with Vertex AI
  • Experience in data engineering, including building scalable data pipelines using Python and Spark.

  • Strong knowledge of GCP-native services, including:

    • BigQuery (BQ)
    • Spanner
    • Dataflow
    • Firestore

Nice to Have

  • Experience with Agentic AI frameworks, such as:

    • LangGraph
    • ADK
    • CrewAI
    • Multi-agent architectures
  • Experience building deployable AI solutions (production environments rather than notebook-only solutions).

  • Knowledge of data ontologies and graph-based data models.

  • Exposure to Agile or Scrum development methodologies.

ML & AI Engineer
DCV Technologies
London
Remote or hybrid
Mid - Senior
£450/day - £650/day
RECENTLY POSTED

ML & AI Engineer – Python, LLM, RAG, GCP

We are seeking a Machine Learning / AI Engineer to help build and deploy production-grade generative AI and LLM systems powering next-generation conversational digital experiences. This role focuses on designing and engineering LLM applications, RAG pipelines and scalable AI infrastructure using Python and Google Cloud (GCP). You will work on cutting-edge agentic AI and conversational AI platforms, building services that support intelligent assistants and automated customer interactions.

Key Responsibilities

  • Build and deploy LLM-based applications and generative AI solutions
  • Develop RAG (Retrieval Augmented Generation) pipelines
  • Engineer scalable microservices-based AI platforms
  • Design and maintain data pipelines using Python and Spark
  • Implement CI/CD pipelines for machine learning and AI systems
  • Work with GCP services including Vertex AI, BigQuery, Dataflow, Spanner and Firestore
  • Contribute to prompt engineering, LLM evaluation and monitoring
  • Build integrations between AI agents and enterprise systems

Required Skills

  • Strong Python development experience
  • Experience building LLM or generative AI applications
  • Knowledge of RAG pipelines and prompt engineering
  • Experience with microservices architecture and CI/CD
  • Hands-on experience with Google Cloud Platform (GCP)
  • Familiarity with Spark or large-scale data pipelines

Nice to Have

  • Experience with LangChain, LangGraph or agentic AI frameworks
  • Knowledge of multi-agent architectures or AI orchestration
  • Experience building production AI platforms

This is an opportunity to work on large-scale AI engineering challenges, delivering production AI systems and intelligent digital assistants using modern LLM, generative AI and cloud-native technologies.

Snowflake BI Developer - Contract - £250 per day
Randstad Technologies Recruitment
London
Hybrid
Junior - Mid
£200/day - £250/day
RECENTLY POSTED

Snowflake BI Developer - Contract - £250 per day

I’m contacting you to highlight a contract opportunity I’m currently recruiting for. My London based client is looking for a Snowflake BI Developer immediately available to start.

As a Snowflake BI Developer you will have experience driving reporting across organisations utilising Snowflake to generate these reports.

Location: Hybrid - Central London
Length: 6 months with strong view to extend
Day Rate: £250 per day
IR35 Status: Inside of IR35

Required experience will include:

  • Experience understanding Snowflake Data Models.
  • Exposure to an Agile/Scrum environment.
  • Developing reports through Snowflake.
  • Power BI Report Developing.
  • Strong SQL Skills.
  • Strong knowledge of building reports for analytics.

Desirables:

  • Experience within Finance.

If you are interested in this Snowflake BI Developer role please apply with your most recent CV. Alternatively email me on Jordan co . uk. There are multiple roles available so feel free to recommend a friend or previous colleague.

Randstad Technologies is acting as an Employment Business in relation to this vacancy.

Data Engineer
SF Recruitment
Shropshire
Hybrid
Mid - Senior
£55,000 - £70,000
RECENTLY POSTED

Location: Hybrid - Shropshire or Sussex
Salary: Competitive salary plus benefits

We are currently supporting a leading technology business that delivers large-scale data solutions across complex and highly secure environments. Due to ongoing project growth, they are seeking to appoint a Data Engineer to join their expanding data engineering team.

This role will focus on designing and delivering robust data integration solutions, building scalable pipelines, and collaborating closely with client stakeholders to support data-driven decision-making across critical systems.

The Role

As a Data Engineer, you will be responsible for building and maintaining data pipelines and integration solutions within enterprise environments. The role covers the full delivery lifecycle, from gathering requirements through to deployment and operational support.

Key Responsibilities:

  • Design and implement robust data integration solutions (batch and near real-time)
  • Build and maintain scalable data pipelines for ingestion, transformation, and curation
  • Work with large and complex datasets across enterprise platforms
  • Collaborate with product teams and client stakeholders to translate requirements into technical solutions
  • Support live systems, troubleshoot issues, and ensure service continuity
  • Work within Agile delivery teams alongside engineers, analysts, and business stakeholders
  • Contribute to best practices and continuous improvement across the data engineering capability

Experience Required

We are looking for engineers with strong fundamentals in data engineering and proven experience delivering solutions within complex environments.

Essential Skills and Experience:

  • Strong SQL and data modelling skills
  • Experience with ETL/ELT tools such as Informatica, Talend, Pentaho, AWS Glue, or similar
  • Familiarity with data platforms such as Oracle, Cloudera, or enterprise data warehouses
  • Proficiency in programming or scripting languages such as Python or Bash
  • Designing and maintaining data pipelines and integration processes
  • Experience working within Agile delivery environments

Desirable Experience:

  • Experience with cloud platforms such as AWS
  • Familiarity with job scheduling or orchestration tools (e.g. Airflow or similar)
  • Knowledge of reporting and visualisation tools such as Power BI, Business Objects, or Pentaho
  • Experience with CI/CD and version control tooling
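The pipeline responsibilities listed for this role (ingestion, transformation, curation) can be sketched, purely for illustration, as a minimal batch ETL in standard-library Python; the file contents, table name, and cleaning rules below are invented assumptions, not details from the vacancy:

```python
# Minimal batch ETL sketch: ingest CSV rows, transform, load into SQLite.
# All names (the raw CSV, the "curated_orders" table) are illustrative.
import csv
import io
import sqlite3

RAW_CSV = """order_id,amount,region
1,10.50,UK
2,,US
3,7.25,uk
"""

def extract(text):
    """Ingest: parse raw CSV text into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows missing an amount, normalise region codes."""
    out = []
    for r in rows:
        if not r["amount"]:
            continue  # simple data-quality rule: skip incomplete rows
        out.append((int(r["order_id"]), float(r["amount"]), r["region"].upper()))
    return out

def load(rows, conn):
    """Curate: write the cleaned rows to a queryable table."""
    conn.execute("CREATE TABLE curated_orders (order_id INT, amount REAL, region TEXT)")
    conn.executemany("INSERT INTO curated_orders VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM curated_orders").fetchone()
print(total)  # 2 rows survive the data-quality rule
```

A production pipeline would swap the in-memory SQLite target for a warehouse and run under a scheduler such as Airflow, but the extract/transform/load separation stays the same.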

Data Systems Administrator
Lucy Walker Recruitment Ltd
Yorkshire
In office
Graduate - Junior
£25,000 - £27,000
RECENTLY POSTED

We are recruiting for a fast-growing, dynamic business who work across an impressive B2B client portfolio.

We are looking for a Digital Systems Administrator to support core business systems, websites, and data processes.

This role focuses on accuracy, continuous learning, and providing dependable day-to-day support to users and the wider Digital team. It is ideal for someone early in their systems or data career who wants to build technical capability within a commercially focused business.

If you are a recent graduate with strong systems knowledge and an interest in advanced Excel or coding, and you are looking for a new career opportunity, please send us your CV today.

Key Responsibilities

  • Provide day-to-day user support across SAP, websites, product configuration tools, and internal systems
  • Prepare, update, and maintain system data with a high degree of accuracy
  • Troubleshoot basic to intermediate issues and escalate where appropriate
  • Complete routine housekeeping tasks to ensure systems and data remain clean and up to date
  • Deliver assigned tasks within system and website projects, following clear instructions
  • Produce reports using Excel and SQL, ensuring consistency and clarity
  • Identify recurring issues or inefficiencies and suggest improvements

Skills & Experience:

  • Good understanding of business processes such as sales, production, stock, and delivery flows
  • Strong Excel skills, with developing knowledge of VBA and/or SQL
  • Growing understanding of ERP systems (e.g., SAP) and website platforms
  • Clear communication skills and confidence supporting internal stakeholders
  • Strong attention to detail and a methodical approach to tasks
  • Eagerness to learn and develop technical capability

What Success Looks Like

  • High accuracy and completeness in data-related tasks
  • Responsive, reliable user support
  • Reduction in basic issue escalations over time
  • Clear, consistent reporting standards
  • Delivery of project tasks within agreed timelines
  • Demonstrated growth in independence across systems and tools

We are unable to respond to all applications. If you have been shortlisted, we will contact you within 5 days of your application.

Lead Data Engineer
Searchability NS&D
Gloucester
Hybrid
Senior
£65,000 - £85,000
RECENTLY POSTED
  • Gloucester location - hybrid working when possible
  • Must hold active Enhanced DV Clearance (West)
  • Competitive Salary DOE - 6% bonus, 25 days holiday, clearance bonus
  • Experience in Data Pipelines, ETL processing, Data Integration, Apache, SQL/NoSQL, Team Leadership

Who Are We?

Our client is a trusted and growing supplier to the National Security sector, delivering mission-critical solutions that help keep the nation safe, secure, and prosperous. You’ll work with cutting-edge technologies, including AI/Data Science, Cyber, Cloud, DevOps/SRE, and Platform Engineering. They have long-term contracts secured across the latest customer framework and are set for significant growth.

What will the Lead Data Engineer be Doing?

You will develop mission-critical data solutions for National Security clients, working with cutting-edge technologies such as AI/DS, Cyber, Cloud, DevOps/SRE, and Platform Engineering. You’ll collaborate directly with customers across National Security, Defence, and Intelligence to solve complex, high-stakes challenges. The role involves designing and implementing sophisticated data pipelines to connect operational systems with analytics and business intelligence platforms.

Responsibilities include:

  • Design, build, and maintain data pipelines, including ingestion, orchestration, and enrichment
  • Develop data-streaming and ETL solutions (e.g. NiFi)
  • Model databases and integrate data from diverse sources
  • Ensure data quality, consistency, and security
  • Monitor and optimise system performance
  • Write clean, secure, reusable, test-driven code
  • Apply systems integration expertise within agile teams
  • Decompose user needs into epics and stories
  • Promote reuse of data flows and best practices across teams
  • Champion data engineering standards across government

The Lead Data Engineer Should Have:

  • Active eDV clearance (West)
  • Willingness to work full-time on-site in Gloucester when required.

Required experience in the following:

  • Apache Kafka
  • Apache NiFi
  • SQL and NoSQL databases (e.g. MongoDB)
  • ETL processing languages such as Groovy, Python or Java
  • Understand and interpret technical and business stakeholder needs
  • Manage expectations through clear, proactive communication
  • Lead and support challenging conversations with teams and senior stakeholders

To be Considered:

Please either apply by clicking online or emailing me directly to . For further information please call me on / - I can make myself available outside of normal working hours to suit from 7am until 10pm. If unavailable, please leave a message and either myself or one of my colleagues will respond. By applying for this role, you give express consent for us to process & submit (subject to required skills) your application to our client in conjunction with this vacancy only. Also feel free to follow me on or connect with me on LinkedIn, just search Henry Clay-Davies (searchability). I look forward to hearing from you.

KEY SKILLS:

DATA ENGINEER / DATA ENGINEERING / DEFENCE / NATIONAL SECURITY / DATA STRATEGY / DATA PIPELINES / DATA GOVERNANCE / SQL / NOSQL / APACHE / NIFI / KAFKA / ETL / GLOUCESTER / DV / SECURITY CLEARED / DV CLEARANCE

ML&AI Engineer
Vallum Associates Limited
Sheffield
Remote or hybrid
Mid - Senior
£500/day
RECENTLY POSTED

We are currently looking for an experienced ML & AI Engineer to join a major technology program delivering advanced AI-driven solutions within the banking sector. The role involves working on innovative AI initiatives, building scalable infrastructure, and developing intelligent systems that power agent-based workflows and conversational AI platforms. You will collaborate with cross-functional teams to design and implement next-generation AI capabilities and help drive the evolution of AI-powered products.

Program Scope

  • Develop and provision infrastructure that supports agentic AI workflows across both Azure and Google Cloud Platform (GCP) environments.
  • Provide data science expertise to support the design of agent-based solutions, including Coach AI and future AI Assistant capabilities.
  • Create integration patterns for AI agents to interact with banking systems and perform actions on behalf of customers.
  • Contribute to the development of new AI products within the Conversational Banking Lab.

Key Initiatives Include

  • Agent Summarisation: develop advanced capabilities to summarise complex and nuanced customer conversations.
  • App Search Evolution: transform existing vector search functionality into a fully generative AI-driven search experience, creating a single unified interface for users.
  • Evaluation Methods: build automated evaluation frameworks to test and validate both deterministic and generative AI conversations at scale.

Required Skills & Experience

Must Have:

  • Strong Python development skills, with 2+ years of experience building production-grade applications using Large Language Models (LLMs).
  • Solid understanding of software engineering principles, including microservices architecture, CI/CD pipelines, and event-driven architecture.
  • Hands-on experience with AI engineering practices, including RAG (Retrieval-Augmented Generation) pipelines, prompt engineering, LLMOps, and runtime monitoring and evaluation of AI systems.
  • Experience with Vertex AI.
  • Experience in data engineering, including building scalable data pipelines using Python and Spark.
  • Strong knowledge of GCP-native services, including BigQuery (BQ), Spanner, Dataflow, and Firestore.

Nice to Have:

  • Experience with agentic AI frameworks such as LangGraph, ADK, CrewAI, and multi-agent architectures.
  • Experience building deployable AI solutions (production environments rather than notebook-only solutions).
  • Knowledge of data ontologies and graph-based data models.
  • Exposure to Agile or Scrum development methodologies.

TPBN1_UKTJ
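Since the role highlights RAG (Retrieval-Augmented Generation) pipelines, here is a deliberately simplified sketch of the retrieve-and-augment step; the documents, query, and keyword-overlap scoring are invented for illustration (real systems use embedding-based vector search, e.g. on Vertex AI):

```python
# Toy retrieval step for a RAG pipeline: score documents by keyword overlap
# with the query, then build an augmented prompt for the LLM. The banking
# snippets below are invented examples, not real content.
DOCS = [
    "Customers can freeze a lost card from the mobile app settings page.",
    "Standing orders are processed at 6am on the scheduled day.",
    "Contact the fraud team immediately if you notice an unknown payment.",
]

def retrieve(query, docs, k=1):
    """Rank docs by how many query words they share; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Augment the user query with retrieved context before generation."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("how do I freeze a lost card", DOCS)
```

The augmented prompt would then be sent to an LLM; swapping the toy scorer for vector similarity changes retrieval quality, not the shape of the pipeline.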

Machine Learning Engineer
Sanderson Recruitment
London
Fully remote
Mid - Senior
£700/day - £750/day
RECENTLY POSTED

Machine Learning/Data Engineer

£700-750/day overall assignment rate to umbrella

Fully remote

3-6 month initial

Apply today to join a forward-thinking, tech-driven FTSE 100 organisation using data science and AI to enhance customer experience, optimise supply chains and drive sustainable growth. With 40% of sales from sustainable products, this is a company that combines scale, innovation and purpose.

As a Machine Learning Engineer, you’ll help maintain the stability and performance of core data and ML systems across Europe. This technical engineering role focuses on reliability, optimisation and critical fixes, and is ideal if you excel at investigating and debugging complex data flows and ML issues in live production environments.

We’re looking for individuals with:

Experience: Proven background as a Machine Learning Engineer.

Technical Skills: Strong in SQL and Python (Pandas, Scikit-learn, Jupyter, Matplotlib).

Data transformation & manipulation: experience with Airflow, DBT and Kubeflow.

Cloud: Experience with GCP and Vertex AI (developing ML services).

Expertise: Solid understanding of computer science fundamentals and time-series forecasting.

Machine Learning: Strong grasp of ML and deep learning algorithms (e.g. Logistic Regression, Random Forest, XGBoost, BERT, LSTM, NLP, Transfer Learning).

Reasonable Adjustments:

Respect and equality are core values to us. We are proud of the diverse and inclusive community we have built, and we welcome applications from people of all backgrounds and perspectives. Our success is driven by our people, united by the spirit of partnership to deliver the best resourcing solutions for our clients.

If you need any help or adjustments during the recruitment process for any reason , please let us know when you apply or talk to the recruiters directly so we can support you.

TPBN1_UKTJ

Senior Software Engineer
OTA Recruitment Limited
London
Remote or hybrid
Senior
£100,000
RECENTLY POSTED

Senior Software Engineer

Salary: £75k-£110k (plus attractive bonus on top)
Location: London or Leeds (relaxed about hybrid or remote working, if preferred)

My client is a specialist provider of sports pricing and trading technology, developing advanced simulation-based models and risk tools that underpin the performance of major sports brands.

Role Overview

We're looking for a Senior Software Engineer to join the Modelling & Data Engineering group at a rapidly expanding sports-technology business. This is a hands-on role in a fast-moving environment where you'll help shape new modelling tools, improve existing systems, and contribute to the technical foundations that support the company's growth.

What you'll be working on:

  • Developing high-quality, maintainable software using .NET technologies.
  • Taking ownership of greenfield initiatives, designing and building internal tools that support the company's modelling capabilities.
  • Helping to gather, process and structure the data that powers the modelling pipeline.
  • Introducing new technologies, improving architectural patterns and reducing technical debt to enhance performance and maintainability.
  • Collaborating closely with colleagues across the Modelling & Data Engineering function to manage the full lifecycle of internal tooling.

Qualifications

Essential:

  • A degree in a STEM discipline (Computer Science preferred), or equivalent demonstrable programming ability.
  • Certifications or training aligned with the company's core tech stack (e.g., .NET, AWS).
  • Strong programming fundamentals, including data structures, performance-focused development, design patterns and SOLID principles.
  • Commercial experience working with .NET (ideally .NET 5+).

  • Good SQL knowledge and at least one year working with relational databases.
  • Experience with distributed streaming platforms such as Kafka.
  • Familiarity with in-memory storage solutions like Redis.
  • Hands-on experience with AWS services such as S3, Athena, ECS, CloudFormation, Lambda and CloudWatch.
  • Confident using Git in a multi-developer environment.
  • Background in systems integration, including APIs, networking and data migration.
  • A commitment to producing clean, well-documented, reproducible systems.
  • Strong communication, organisation and time-management skills, with the ability to work independently or as part of a team.
  • Analytical mindset and strong problem-solving ability.

Desirable:

  • Interest in US sports (NFL, NBA, MLB, NHL, NCAAB, NCAAF), Cricket, Tennis or Football.
  • Experience collaborating with Data Scientists or Data Engineers.
  • Comfort with mathematical concepts such as probability, statistics and matrix operations.

Additional notes

While experience with C# is preferred, candidates with a strong Java background and relevant industry exposure, or a clear personal interest in betting, gaming or US sports, will also be considered.

TPBN1_UKTJ

Contract Machine Learning Engineer (GCP) 6-Months £600
Method Resourcing
London
Remote or hybrid
Mid - Senior
£500/day - £600/day
RECENTLY POSTED

Contract Machine Learning Engineer (LLM & GCP)

6-Month Contract | Outside IR35 | £600 per day

We are seeking an experienced Machine Learning Engineer to support the design and build of production-ready ML models on Google Cloud Platform (GCP). This is a hands-on delivery role, focused on turning models into scalable, reliable production systems that solve real business problems.

The contract will run for at least 6 months, will be Outside IR35 at £600 per day, and we are looking to start the project at the beginning of March. This role suits a delivery-focused ML Engineer who enjoys taking models from concept through to production, rather than staying purely in research or experimentation.

Key Responsibilities

Design, build, and productionise machine learning models using GCP-native services

Translate business problems into deployable ML solutions

Develop and maintain end-to-end ML pipelines (training, testing, deployment, monitoring)

Work with data scientists and engineers to operationalise models at scale

Implement best practices for model performance, versioning, and lifecycle management

Ensure solutions are secure, scalable, and cost-efficient within GCP

Required Experience

Strong hands-on experience building and deploying ML models on Google Cloud Platform

Experience with services such as Vertex AI, BigQuery, Cloud Storage, and Cloud Functions / Cloud Run

Solid Python experience for ML and data engineering workloads

Experience productionising models (not just experimentation or notebooks)

Understanding of MLOps concepts: CI/CD, monitoring, retraining, and model governance

Ability to work independently in a contract environment and deliver at pace

Nice to Have

Experience with real-time or near-real-time ML use cases

Exposure to data pipelines and orchestration tools

Prior work in regulated or large-scale enterprise environments

Contract Details

Duration : 6 months

Rate : £500 per day

IR35 : Outside IR35

Start : March 2026

To learn more about this opportunity, please send your CV to Method Resourcing for consideration.

RSG Plc is acting as an Employment Business in relation to this vacancy.

TPBN1_UKTJ

Machine Learning Engineer
Anson McCade
London
Fully remote
Mid - Senior
£75,000
RECENTLY POSTED

Core Duties

  • Design and develop machine learning models for traditional ML use cases (forecasting, classification, anomaly detection) and GenAI/LLM applications
  • Lead experimentation cycles: define hypotheses, design experiments, evaluate results, and iterate rapidly while adhering to governance requirements
  • Transition validated experiments into production-ready solutions, working closely with other engineers on deployment and monitoring
  • Build and optimise ML pipelines using AWS services and experiment tracking tools
  • Develop and integrate LLM-powered solutions for tracing, evaluation, and production monitoring
  • Implement robust experiment tracking, model versioning, and reproducibility practices with full audit trails
  • Design feature engineering approaches and contribute to feature store development
  • Support production models through monitoring, performance analysis, and continuous improvement
  • Apply responsible AI practices, including model explainability and fairness assessment
  • Present experiment findings and production outcomes to stakeholders, articulating operational and strategic value
  • Mentor junior colleagues and share learnings across the team

About You

You will have experience in many of the following:

  • Hands-on experience developing and deploying ML models in Python using frameworks such as scikit-learn, XGBoost, PyTorch, or TensorFlow
  • Strong experience with AWS ML services (SageMaker, Lambda, S3) in production environments
  • Strong experiment design skills: hypothesis formulation, A/B testing methodology, and statistical evaluation
  • Proven track record transitioning models from experimentation to production with appropriate governance and quality controls
  • Experience with experiment tracking and MLOps tooling (MLflow, Weights & Biases, Data Version Control)
  • Experience developing LLM/GenAI applications, including prompt engineering and RAG architectures

It would be great if you also had experience in some of these, but if not we'll help you with them:

  • Experience with advanced LLM techniques: agents, tool use, and agentic workflows
  • Experience with vector databases (Pinecone, Weaviate, pgvector) for RAG applications
  • Experience with feature stores (Feast, AWS Feature Store)
  • Experience with containerisation (Docker) and orchestration (Kubernetes, ECS)
  • Familiarity with Infrastructure as Code (Terraform, CloudFormation)
  • Experience with data processing frameworks (Spark, Dask) for large-scale workloads
  • Understanding of data governance and compliance frameworks

TPBN1_UKTJ
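The experiment-design skills this ad calls for (A/B testing methodology, statistical evaluation) come down to comparisons like the following hypothetical two-proportion z-test, sketched with only the standard library; the conversion counts are invented:

```python
# Hypothetical A/B evaluation: two-proportion z-test on invented counts.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic for H0: the two variants convert at the same rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant B converted 240/2000 users vs A's 200/2000 (invented numbers).
z = two_proportion_z(200, 2000, 240, 2000)
significant = abs(z) > 1.96  # two-sided test at the 5% level
```

In practice the evaluation would be logged alongside the experiment in tracking tooling such as MLflow, so the decision is reproducible and auditable.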

AI Engineer
DXC
London
Remote or hybrid
Junior - Mid
Private salary
RECENTLY POSTED
TECH-AGNOSTIC ROLE

Job Description: AI Engineer

Location: Erskine, Newcastle, Farnborough or London. Candidates are required to be eligible for clearance.

DXC Technology (DXC: NYSE) is the world's leading independent, end-to-end IT services company, helping clients harness the power of innovation to thrive on change. Created by the merger of CSC and the Enterprise Services business of Hewlett Packard Enterprise, DXC Technology serves nearly 6,000 private and public sector clients across 70 countries. The company's technology independence, global talent, and extensive partner network combine to deliver powerful next-generation IT services and solutions. DXC Technology is recognized among the best corporate citizens globally.

We're looking for a talented and forward-thinking AI Engineer to join our innovative team. This is a unique opportunity to work on cutting-edge AI technologies and contribute to transformative projects across multiple domains. In this role, you'll help design and build modern data pipelines that bring together information from a variety of systems, making data accessible, trustworthy, and ready for intelligent analytics and AI solutions. You will collaborate closely with teammates across disciplines and have the opportunity to learn from experienced engineers and leaders.

Key responsibilities include:

  • Supporting and contributing to data engineering projects, helping ensure delivery within scope and timelines.
  • Working alongside supportive team members to develop and maintain data pipelines and infrastructure.
  • Partnering with cross-functional teams to understand data needs and shape solutions.
  • Contributing to data quality, governance, and security initiatives.
  • Learning directly from specialists in AI and data engineering.
  • Helping to continuously improve and optimise data processes.
  • Staying current with emerging tools, trends, and technologies.
  • Contributing to a collaborative, inclusive, and growth-focused team culture.

What you'll bring:

  • A Bachelor's degree in a relevant field or equivalent experience.
  • Experience with modern data engineering tools and technologies.
  • A growth mindset and passion for continuous learning.
  • Understanding and hands-on experience with Transformer models and LLMs (e.g., GPT, LLaMA, Mistral, Claude).
  • Skills in fine-tuning, prompt engineering, and building RAG pipelines.
  • Familiarity with agent frameworks (LangChain, LlamaIndex, CrewAI, AutoGen).
  • Knowledge of reinforcement learning methods or tools (Q-learning, policy gradients, RLlib).

Why Join Us?

  • Work on AI solutions that make a meaningful impact across industries.
  • Be part of a supportive, collaborative, and forward-thinking team.
  • Access mentoring, continuous learning, and career-development opportunities.
  • Enjoy flexible working arrangements designed to support work-life balance.
  • Join a company committed to inclusion, wellbeing, and empowering your success.

TPBN1_UKTJ

Database Developer
Bowerford Associates
Devon
Hybrid
Mid - Senior
£45,000 - £50,000
RECENTLY POSTED

I am searching for a talented Database Developer / SQL Developer to join our client on a full-time and permanent basis.

The role requires you to be in the office two days per week, so to be considered you will need to live within a commutable distance of Exeter or be in a position to relocate to the area.

As a Database Developer / SQL Developer you will design, develop and test high-quality database applications that support both internal systems and external business solutions.

You will be responsible for improving processes, solving complex problems whilst working collaboratively and supporting other team members.

Working in an agile environment, you will follow SCRUM and SOLID principles - you will take part in testing, including TDD, design reviews, code walkthroughs and inspections, and you will contribute to continuous improvement.

You will work with internal and external customers to capture requirements, and you will be communicating technical concepts to non-technical audiences.

You will be supporting a 24/7 production environment ensuring systems continue to meet the needs of new and existing platforms.

Additionally, you will:

  • Mentor colleagues in database design and coding.

  • Utilise AI agentic tools within software development to enhance productivity and efficiency

About You

To be a success in this role you will need experience in the following key areas: -

  • Relational database analysis and design experience
  • Experience with performance tuning
  • Data experience, including warehousing, analytics, and visualisation platform development (e.g. Power BI/Tableau)
  • Evidence of high-level participation in database-orientated commercial projects
  • Proficient T-SQL development skills in Microsoft SQL Server from version 2016
  • Proficient in SSIS, SSRS and SSAS Development
  • Experience in using agentic AI environments, e.g. Cline, Copilot or Gemini (or similar)
  • Effective communication skills, both verbal and written, both external and internal, for example, during agile ceremonies, writing updates for internal and external users

I am looking to speak with good communicators who like to work collaboratively with a diverse range of technical experts within a highly effective technology team.

The role comes with a competitive salary and an outstanding benefits package which includes an enhanced pension, medical and healthcare, a bonus, good holiday allowance and much, much more!

Please note, to be considered for this role you will MUST have the Right to Work in the UK long-term without company sponsorship. Our customer is not able to sponsor candidates for this opportunity.

Please note that due to a high level of applications, we can only respond to applicants whose skills and qualifications are suitable for this position.

No terminology in this advert is intended to discriminate against any of the protected characteristics that fall under the Equality Act 2010.

Bowerford Associates Ltd is acting as an Employment Agency in relation to this vacancy.

Senior SQL DBA
Tec Partners
Norwich
In office
Senior
£50,000 - £55,000
RECENTLY POSTED

Role: Senior SQL Server DBA/Developer

Location: Norwich (onsite)

Salary: Up to £55k DOE

I’m working on behalf of a well-established UK organisation specialising in financial data and technology solutions, seeking an experienced SQL Server Database Administrator to join its internal IT team. The business provides critical financial product data used by major banks, regulators and government bodies across the UK and has been a leader in financial data services for more than 30 years.

Reporting to the Software Development Manager, this role will focus on maintaining and developing database infrastructure, ensuring reliability, security and performance across key systems while supporting new product development.

Key Responsibilities

  • Maintain and support existing SQL Server databases and infrastructure
  • Develop database modules and software components to meet client requirements
  • Design and propose database architecture and infrastructure improvements
  • Produce and maintain technical documentation including standards and procedures
  • Ensure development best practices including code reviews, testing and standards
  • Mentor and support other members of the technical team

Experience & Skills Required

  • Strong experience in a SQL Server DBA or similar role
  • Proven experience with SQL Server (2014/2017) database development and management
  • Solid understanding of database design and query optimisation
  • Experience with C# .NET desktop development
  • Strong analytical and problem-solving skills
  • Excellent attention to detail and ability to work under pressure
  • Strong communication skills and ability to work independently or within a team

Desirable Experience

  • Knowledge of VB6, VBA or web technologies
  • Experience with reporting tools, data warehousing or data mining
  • Experience working within Agile development environments
  • Exposure to financial services or financial products

Salary & Benefits

  • Competitive salary depending on experience
  • 25 days holiday + bank holidays (with additional long service entitlement)
  • Birthday day off
  • Enhanced workplace pension
  • Employee Assistance Programme and 24/7 GP access
  • Group life insurance
  • Ongoing training and development opportunities
  • Free onsite parking and electric vehicle charging points
  • Locker rooms with showers
  • Fully air-conditioned offices
  • Staff perks including Monday treats and discounted local bus travel
Frequently asked questions

What qualifications does a Data Engineer typically need?
Typically, a Data Engineer should have a strong background in computer science or related fields, proficiency in programming languages like Python or Java, and experience with data warehousing, ETL processes, and big data technologies such as Hadoop or Spark.

What kinds of Data Engineer roles does Haystack feature?
Haystack features a wide range of Data Engineer positions, including roles in startups, large enterprises, and remote opportunities. You can find jobs specializing in cloud data engineering, real-time data processing, data pipeline development, and more.

How can I improve my chances of landing a Data Engineer job?
To improve your chances, tailor your resume to highlight relevant skills and projects, gain hands-on experience with popular data tools, contribute to open-source projects, and stay updated with the latest trends in data engineering.

Are there entry-level Data Engineer roles on Haystack?
Yes, Haystack lists entry-level Data Engineer roles suitable for recent graduates or professionals transitioning into data engineering, as well as internships and junior positions to help you start your career.

Can I find remote Data Engineer jobs on Haystack?
Absolutely. Haystack offers many remote and flexible Data Engineer job listings to suit your preferred working style and location.