£450/day outside IR35
London-based - twice a week on-site
6 month initial contract
We are working with a financial services client who are looking to build a new AI/ML function, and are seeking an MLOps Engineer to design, develop and implement AI and ML applications.
Will require working cross-functionally to conceptualise, design, test and deploy AI projects.
Azure AI/ML Engineer, key responsibilities:
Build, develop and deploy AI applications using Python
Design and Develop AI services
Set up and develop data ingestion pipelines and components
Develop search related components using Azure AI Search
Developing and deploying AI/ML models
Build and maintain scalable, high-performance AI apps on Azure
Create and maintain a secure AI platform
Collaborate cross-functionally to understand requirements
Azure AI Engineer, Azure ML, Python
Reasonable Adjustments:
Respect and equality are core values to us. We are proud of the diverse and inclusive community we have built, and we welcome applications from people of all backgrounds and perspectives. Our success is driven by our people, united by the spirit of partnership to deliver the best resourcing solutions for our clients.
If you need any help or adjustments during the recruitment process for any reason, please let us know when you apply or talk to the recruiters directly so we can support you.
The Role
We are seeking a skilled OpenShift Telemetry Engineer to join our team. In this role, you will be responsible for implementing, managing, and optimizing the observability stack within a Red Hat OpenShift Container Platform environment to ensure system health, performance, and security.
You will act as a bridge between application monitoring and infrastructure observability, leveraging modern telemetry and data streaming tools.
Key Responsibilities
Required Skills & Experience
Preferred Skills:
Security and compliance practices for data pipelines, including:
Strong problem-solving and analytical skills.
Ability to work effectively in cross-functional teams.
Excellent communication and documentation skills.
We are Data Services. Our mission is to unlock the value of data by delivering high-quality, reliable, and secure data services that are accessible, understandable, and actionable. We continuously evolve our offerings, leveraging modern cloud-based technologies and fostering strong partnerships to help our colleagues in the Bank navigate the complexities of a data-driven world and achieve their strategic objectives.
Active SC Clearance
Job Description:
The world of data in Central Banking is evolving rapidly. With the rise of detailed data collection in financial regulation and the swift advancements in cloud-native data technologies, the demand for visionary data engineers is growing. We're seeking a senior Data Engineer to join our Data Engineering team and play a pivotal role in shaping the Bank's strategic cloud-first data platform.
As a senior member of the team, you will play a key role in designing and delivering robust, scalable data solutions that support the Bank's core responsibilities around monetary policy, financial stability, and regulatory supervision. You'll contribute to technical design decisions, mentor engineers, and collaborate across teams to ensure our data infrastructure continues to evolve and meet future demands.
Role Responsibilities
Your new company
An established and fast-growing technology organisation is on a mission to transform digital connectivity across the UK. With a focus on building and operating high-speed fibre networks, the business is committed to delivering world-class broadband services to communities and supporting a data-driven future. You’ll be joining a forward-thinking environment that values innovation, collaboration, and continuous improvement.
Your new role
As a Senior Data Engineer, you will play a pivotal role in shaping and enhancing the organisation's enterprise data platform. Working within a specialist Data Analytics & AI team, you'll be responsible for designing, building, and maintaining scalable data pipelines and models within Snowflake to support analytics, reporting, and data-led decision-making across the business.
You will translate data architecture strategies into high-quality technical solutions, optimise performance and cost, and ensure the data platform is reliable, secure, and well-structured. This includes developing ELT/ETL pipelines using tools such as dbt and Argo Workflows, implementing data quality and governance practices, and leveraging advanced Snowflake features to drive automation and efficiency.
Collaboration is key: you'll work closely with analysts, data consumers, and business stakeholders, enabling them through well-designed data models and providing technical support where needed. You'll also contribute to monitoring, CI/CD processes, and ongoing improvements to engineering standards across the team.
What you’ll need to succeed
Desirable:
What you need to do now
If you’re interested in this role, click ‘apply now’ to forward an up-to-date copy of your CV, or call us now.
If this job isn’t quite right for you, but you are looking for a new position, please contact us for a confidential discussion about your career.
Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers which can be found at hays.co.uk
Data Engineer (AWS Python Kafka) London / WFH to £85k
Are you a tech savvy Data Engineer with AWS expertise combined with client facing skills?
You could be joining a global technology consultancy with a range of banking, financial services and insurance clients in a senior, hands-on Data Engineer role.
As a Data Engineer you will design and build end-to-end real-time data pipelines using AWS native tools, Kafka and modern data architectures, applying AWS Well-Architected Principles to ensure scalability, security and resilience. You’ll collaborate directly with clients to analyse requirements, define solutions and deliver production grade systems, leading the development of robust, well tested and fault tolerant data engineering solutions.
Location / WFH:
There's a hybrid work from home model with two days a week in the City of London office (or at client site in London).
About you:
What’s in it for you:
As a Data Engineer you will earn a highly competitive package:
Apply now to find out more about this Data Engineer (AWS Python Kafka) opportunity.
At Client Server we believe in a diverse workplace that allows people to play to their strengths and continually learn. We’re an equal opportunities employer whose people come from all walks of life and will never discriminate based on race, colour, religion, sex, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. The clients we work with share our values.
Head of Data required to lead and evolve enterprise-wide data platforms within a global organisation in Glasgow. This is a senior role responsible for building scalable platforms, maintaining high standards of data governance and quality, leading a high-performing team, and shaping the organisation’s long-term data strategy.
The Organisation
This is a large, global organisation where data underpins critical business functions. Over the past five years, the organisation has been developing its data capabilities and recently launched its first enterprise data platform. The next phase is to replicate these platforms across multiple business domains while enhancing governance, reliability, and value from the data estate.
The firm continues to invest in cloud-based platforms, analytics, and AI, with a focus on secure, scalable, and high-quality data solutions. Senior technology leaders are trusted to guide strategy as well as deliver results operationally.
The Role
You will take end-to-end ownership of the enterprise data platforms, ensuring they are robust, reliable, and scalable, while driving improvements in data quality and governance. You’ll manage a team of data engineers and act as the central coordinator across data architects, governance, and reporting teams, ensuring alignment and successful delivery across the business.
In addition, you will contribute to developing and executing the firm’s data strategy, supporting innovation and longer-term ambitions including AI and advanced analytics initiatives.
What You’ll Be Doing
** Leading the design, implementation, and optimisation of enterprise data platforms
** Ensuring data governance, master data management, and quality standards are embedded across all platforms
** Managing, mentoring, and developing a team of data engineers, while coordinating cross-functional teams of architects, governance, and reporting specialists
** Building and maintaining scalable, reliable, and reusable data pipelines across multiple sources
** Collaborating with senior stakeholders to translate business priorities into actionable data initiatives
** Driving the adoption of cloud services, analytics, and AI to enhance the data estate
** Managing vendors and third-party partners to ensure delivery, performance, and value
What They’re Looking For
** Proven experience leading enterprise data platform initiatives
** Strong technical expertise in data engineering, data management, and cloud platforms (e.g., Azure, MS Fabric, Databricks)
** Track record of delivering complex, high-value data solutions with strong governance and quality controls
** Experienced people leader capable of managing teams and coordinating cross-functional stakeholders
** Skilled at shaping and delivering data strategy, with a vision for AI and advanced analytics
** Excellent stakeholder management and communication skills at senior and executive levels
** Understanding of business functions such as finance, HR, compliance, and operational processes
The Offer
A competitive salary and benefits package is on offer, alongside hybrid working (typically 2-3 days per week in their city centre office).
This is a senior, high-profile leadership role with the opportunity to shape the enterprise data landscape, build a high-performing team, and drive strategic innovation across the organisation.
If this sounds of interest, please apply or reach out to Murray Simpson.
Cathcart Technology is acting as an Employment Agency in relation to this vacancy.
The Post
The post holder will be responsible for managing the technical development of a web-based battery of measures of curiosity for children. The post holder will work with the project team to implement specified tasks and surveys into a unified web-based platform for data collection in schools, public spaces, and the lab; to develop the data architecture for the battery; and to produce the pipelines for wrangling and cleaning the data ready for analysis.
The finalised battery will be made available open source for other researchers and educators and resulting data should be shareable. A key part of the role will be documentation of the measures and the resulting data in line with Open Science practices, including preparation of preregistration documents. The post holder will develop training materials for the end users of the battery including researchers and educators. Additionally, the post holder may contribute to research activities such as data collection and data analysis.
The post holder will be an integral member of the dynamic research team at Stirling and Lancaster Universities and contribute to a line of studies assessing curiosity and its effects in primary school age children. The post holder will have opportunities to develop new skills, collaborations, and research ideas within the role.
The University of Stirling has an Agile Working Framework that will enable the post holder to work flexibly to deliver the project objectives.
Description of Duties
Essential Criteria
Desirable Criteria
Additional Information
Part time (40% FTE)
Fixed term for 12 months
Grade 6: £31,236 - £37,694 p.a. pro-rata
The closing date for applications is midnight on Sunday 05 April 2026.
Interviews are expected to take place on the week commencing Monday 20 April 2026.
There is an expectation that work will be undertaken in the UK.
This role will require membership of the PVG scheme. An offer of employment will be subject to a satisfactory outcome of this process.
This role is not eligible for sponsorship. Applicants must have an existing right to work in the UK.
The University of Stirling recognises that a diverse workforce benefits and enriches the work, learning and research experiences of the entire campus and greater community. We are committed to removing barriers and welcome applications from those who would contribute to further diversification of our staff and ensure that equality, diversity and inclusion is woven into the substance of the role. We strongly encourage applications from people from diverse backgrounds including gender, identity, race, age, class, and ethnicity.
For a full description of duties and essential/desirable criteria please click the apply button, which will take you directly to the University Website.
We are currently looking for an experienced ML & AI Engineer to join a major technology program delivering advanced AI-driven solutions within the banking sector. The role involves working on innovative AI initiatives, building scalable infrastructure, and developing intelligent systems that power agent-based workflows and conversational AI platforms.
You will collaborate with cross-functional teams to design and implement next-generation AI capabilities and help drive the evolution of AI-powered products.
Program Scope
Key Initiatives Include
Agent Summarisation
Develop advanced capabilities to summarise complex and nuanced customer conversations.
App Search Evolution
Transform existing vector search functionality into a fully generative AI-driven search experience, creating a single unified interface for users.
Evaluation Methods
Build automated evaluation frameworks to test and validate both deterministic and generative AI conversations at scale.
Required Skills & Experience
Must Have
Strong Python development skills, with 2+ years of experience building production-grade applications using Large Language Models (LLMs).
Solid understanding of software engineering principles, including:
Hands-on experience with AI engineering practices, including:
Experience in data engineering, including building scalable data pipelines using Python and Spark.
Strong knowledge of GCP-native services, including:
Nice to Have
Experience with Agentic AI frameworks, such as:
Experience building deployable AI solutions (production environments rather than notebook-only solutions).
Knowledge of data ontologies and graph-based data models.
Exposure to Agile or Scrum development methodologies.
ML & AI Engineer – Python, LLM, RAG, GCP
We are seeking a Machine Learning / AI Engineer to help build and deploy production-grade generative AI and LLM systems powering next-generation conversational digital experiences. This role focuses on designing and engineering LLM applications, RAG pipelines and scalable AI infrastructure using Python and Google Cloud (GCP). You will work on cutting-edge agentic AI and conversational AI platforms, building services that support intelligent assistants and automated customer interactions.
Key Responsibilities
Build and deploy LLM-based applications and generative AI solutions
Develop RAG (Retrieval Augmented Generation) pipelines
Engineer scalable microservices-based AI platforms
Design and maintain data pipelines using Python and Spark
Implement CI/CD pipelines for machine learning and AI systems
Work with GCP services including Vertex AI, BigQuery, Dataflow, Spanner and Firestore
Contribute to prompt engineering, LLM evaluation and monitoring
Build integrations between AI agents and enterprise systems
Required Skills
Strong Python development experience
Experience building LLM or generative AI applications
Knowledge of RAG pipelines and prompt engineering
Experience with microservices architecture and CI/CD
Hands-on experience with Google Cloud Platform (GCP)
Familiarity with Spark or large-scale data pipelines
Nice to Have
Experience with LangChain, LangGraph or agentic AI frameworks
Knowledge of multi-agent architectures or AI orchestration
Experience building production AI platforms
This is an opportunity to work on large-scale AI engineering challenges, delivering production AI systems and intelligent digital assistants using modern LLM, generative AI and cloud-native technologies.
Snowflake BI Developer - Contract - 250 per day
I'm contacting you to highlight a contract opportunity I'm currently recruiting for. My London-based client is looking for a Snowflake BI Developer who is immediately available to start.
As a Snowflake BI Developer you will have experience driving reporting across organisations utilising Snowflake to generate these reports.
Location: Hybrid - Central London
Length: 6 months with strong view to extend
Day Rate: 250 per day
IR35 Status: Inside of IR35
Required experience will include:
Desirables:
If you are interested in this Snowflake BI Developer role please apply with your most recent CV. Alternatively email me on Jordan co . uk. There are multiple roles available so feel free to recommend a friend or previous colleague.
Snowflake BI Developer - Contract - 250 per day
Randstad Technologies is acting as an Employment Business in relation to this vacancy.
Location: Hybrid - Shropshire or Sussex
Salary: Competitive salary plus benefits
We are currently supporting a leading technology business that delivers large-scale data solutions across complex and highly secure environments. Due to ongoing project growth, they are seeking to appoint a Data Engineer to join their expanding data engineering team.
This role will focus on designing and delivering robust data integration solutions, building scalable pipelines, and collaborating closely with client stakeholders to support data-driven decision-making across critical systems.
The Role
As a Data Engineer, you will be responsible for building and maintaining data pipelines and integration solutions within enterprise environments. The role covers the full delivery lifecycle, from gathering requirements through to deployment and operational support.
Key Responsibilities:
Experience Required
We are looking for engineers with strong fundamentals in data engineering and proven experience delivering solutions within complex environments.
Essential Skills and Experience:
Desirable Experience:
We are recruiting for a fast-growing, dynamic business who work across an impressive B2B client portfolio.
We are looking for a Digital Systems Administrator to support core business systems, websites, and data processes.
This role focuses on accuracy, continuous learning, and providing dependable day-to-day support to users and the wider Digital team. It is ideal for someone early in their systems or data career who wants to build technical capability within a commercially focused business.
If you are a recent graduate with strong systems knowledge and an interest in advanced Excel or coding, and are looking for a new career opportunity, please send us your CV today.
Key Responsibilities
Skills & Experience:
What Success Looks Like
We are unable to respond to all applications. If you have been shortlisted, we will contact you within 5 days of your application.
Who Are We?
Our client is a trusted and growing supplier to the National Security sector, delivering mission-critical solutions that help keep the nation safe, secure, and prosperous. You’ll work with cutting-edge technologies, including AI/Data Science, Cyber, Cloud, DevOps/SRE, and Platform Engineering. They have long-term contracts secured across the latest customer framework and are set for significant growth.
What will the Lead Data Engineer be Doing?
You will develop mission-critical data solutions for National Security clients, working with cutting-edge technologies such as AI/DS, Cyber, Cloud, DevOps/SRE, and Platform Engineering. You’ll collaborate directly with customers across National Security, Defence, and Intelligence to solve complex, high-stakes challenges. The role involves designing and implementing sophisticated data pipelines to connect operational systems with analytics and business intelligence platforms.
Responsibilities include:
The Lead Data Engineer Should Have:
Required experience in the following:
To be Considered:
Please either apply by clicking online or emailing me directly to . For further information please call me on / - I can make myself available outside of normal working hours to suit from 7am until 10pm. If unavailable, please leave a message and either myself or one of my colleagues will respond. By applying for this role, you give express consent for us to process & submit (subject to required skills) your application to our client in conjunction with this vacancy only. Also feel free to follow me on or connect with me on LinkedIn, just search Henry Clay-Davies (searchability). I look forward to hearing from you.
KEY SKILLS:
DATA ENGINEER / DATA ENGINEERING / DEFENCE / NATIONAL SECURITY / DATA STRATEGY / DATA PIPELINES / DATA GOVERNANCE / SQL / NOSQL / APACHE / NIFI / KAFKA / ETL / GLOUCESTER / DV / SECURITY CLEARED / DV CLEARANCE
Machine Learning/Data Engineer
£700-750/day overall assignment rate to umbrella
Fully remote
3-6 month initial
Apply today to join a forward-thinking, tech-driven FTSE 100 organisation using data science and AI to enhance customer experience, optimise supply chains and drive sustainable growth. With 40% of sales from sustainable products, this is a company that combines scale, innovation and purpose.
As a Machine Learning Engineer, you’ll help maintain the stability and performance of core data and ML systems across Europe. This technical engineering role focuses on reliability, optimisation and critical fixes, ideal if you excel at investigating and debugging complex data flows and ML issues in live production environments.
We’re looking for individuals with:
Experience: Proven background as a Machine Learning Engineer.
Technical Skills: Strong in SQL and Python (Pandas, Scikit-learn, Jupyter, Matplotlib).
Data transformation & manipulation: experience with Airflow, DBT and Kubeflow
Cloud: Experience with GCP and Vertex AI (developing ML services).
Expertise: Solid understanding of computer science fundamentals and time-series forecasting.
Machine Learning: Strong grasp of ML and deep learning algorithms (e.g. Logistic Regression, Random Forest, XGBoost, BERT, LSTM, NLP, Transfer Learning).
Senior Software Engineer
Salary: £75k-£110k (plus attractive bonus on top)
Location: London or Leeds (relaxed about hybrid or remote working, if preferred)
My client is a specialist provider of sports pricing and trading technology, developing advanced simulation-based models and risk tools that underpin the performance of major sports brands.
Role Overview
We're looking for a Senior Software Engineer to join the Modelling & Data Engineering group at a rapidly expanding sports-technology business. This is a hands-on role in a fast-moving environment where you'll help shape new modelling tools, improve existing systems, and contribute to the technical foundations that support the company's growth.
What you'll be working on
Developing high-quality, maintainable software using .NET technologies.
Taking ownership of greenfield initiatives, designing and building internal tools that support the company's modelling capabilities.
Helping to gather, process and structure the data that powers the modelling pipeline.
Introducing new technologies, improving architectural patterns and reducing technical debt to enhance performance and maintainability.
Collaborating closely with colleagues across the Modelling & Data Engineering function to manage the full lifecycle of internal tooling.
Qualifications
Essential
A degree in a STEM discipline (Computer Science preferred), or equivalent demonstrable programming ability.
Certifications or training aligned with the companys core tech stack (e.g., .NET, AWS).
Strong programming fundamentals, including data structures, performance-focused development, design patterns and SOLID principles.
Commercial experience working with .NET (ideally .NET 5+).
Good SQL knowledge and at least one year working with relational databases.
Experience with distributed streaming platforms such as Kafka.
Familiarity with in-memory storage solutions like Redis.
Hands-on experience with AWS services such as S3, Athena, ECS, CloudFormation, Lambda and CloudWatch.
Confident using Git in a multi-developer environment.
Background in systems integration, including APIs, networking and data migration.
A commitment to producing clean, well-documented, reproducible systems.
Strong communication, organisation and time-management skills, with the ability to work independently or as part of a team.
Analytical mindset and strong problem-solving ability.
Desirable
Interest in US sports (NFL, NBA, MLB, NHL, NCAAB, NCAAF), Cricket, Tennis or Football.
Experience collaborating with Data Scientists or Data Engineers.
Comfort with mathematical concepts such as probability, statistics and matrix operations.
Additional notes
While experience with C# is preferred, candidates with a strong Java background and relevant industry exposure - or a clear personal interest in betting, gaming or US sports - will also be considered.
Contract Machine Learning Engineer (LLM & GCP)
6-Month Contract | Outside IR35 | £600 per day
We are seeking an experienced Machine Learning Engineer to support the design and build of production-ready ML models on Google Cloud Platform (GCP). This is a hands-on delivery role, focused on turning models into scalable, reliable, production systems that solve real business problems.
The contract will run for at least 6 months, will be Outside IR35 at £600 per day, and we are looking to start the project at the beginning of March. This role suits a delivery-focused ML Engineer who enjoys taking models from concept through to production, rather than staying purely in research or experimentation.
Key Responsibilities
Design, build, and productionise machine learning models using GCP-native services
Translate business problems into deployable ML solutions
Develop and maintain end-to-end ML pipelines (training, testing, deployment, monitoring)
Work with data scientists and engineers to operationalise models at scale
Implement best practices for model performance, versioning, and lifecycle management
Ensure solutions are secure, scalable, and cost-efficient within GCP
Required Experience
Strong hands-on experience building and deploying ML models on Google Cloud Platform
Experience with services such as Vertex AI, BigQuery, Cloud Storage, and Cloud Functions / Cloud Run
Solid Python experience for ML and data engineering workloads
Experience productionising models (not just experimentation or notebooks)
Understanding of MLOps concepts: CI/CD, monitoring, retraining, and model governance
Ability to work independently in a contract environment and deliver at pace
Nice to Have
Experience with real-time or near-real-time ML use cases
Exposure to data pipelines and orchestration tools
Prior work in regulated or large-scale enterprise environments
Contract Details
Duration: 6 months
Rate: £600 per day
IR35: Outside
Start: March 2026
To learn more about this opportunity, please send your CV to Method Resourcing for consideration.
RSG Plc is acting as an Employment Business in relation to this vacancy.
Core Duties
Design and develop machine learning models for traditional ML use cases (forecasting, classification, anomaly detection) and GenAI/LLM applications
Lead experimentation cycles: define hypotheses, design experiments, evaluate results, and iterate rapidly while adhering to governance requirements
Transition validated experiments into production-ready solutions, working closely with other engineers on deployment and monitoring
Build and optimise ML pipelines using AWS services and experiment tracking tools
Develop and integrate LLM-powered solutions for tracing, evaluation, and production monitoring
Implement robust experiment tracking, model versioning, and reproducibility practices with full audit trails
Design feature engineering approaches and contribute to feature store development
Support production models through monitoring, performance analysis, and continuous improvement
Apply responsible AI practices, including model explainability and fairness assessment
Present experiment findings and production outcomes to stakeholders, articulating operational and strategic value
Mentor junior colleagues and share learnings across the team
About You
You will have experience in many of the following:
Hands-on experience developing and deploying ML models in Python using frameworks such as scikit-learn, XGBoost, PyTorch, or TensorFlow
Strong experience with AWS ML services (SageMaker, Lambda, S3) in production environments
Strong experiment design skills: hypothesis formulation, A/B testing methodology, and statistical evaluation
Proven track record transitioning models from experimentation to production with appropriate governance and quality controls
Experience with experiment tracking and MLOps tooling (MLflow, Weights & Biases, Data Version Control)
Experience developing LLM/GenAI applications, including prompt engineering and RAG architectures
It Would Be Great If You Also Had Experience In Some Of These, But If Not We'll Help You With Them
Experience with advanced LLM techniques: agents, tool use, and agentic workflows
Experience with vector databases (Pinecone, Weaviate, pgvector) for RAG applications
Experience with feature stores (Feast, AWS Feature Store)
Experience with containerisation (Docker) and orchestration (Kubernetes, ECS)
Familiarity with Infrastructure as Code (Terraform, CloudFormation)
Experience with data processing frameworks (Spark, Dask) for large-scale workloads
Understanding of data governance and compliance frameworks
Job Description: AI Engineer
Location: Erskine, Newcastle, Farnborough or London
Candidates are required to be eligible for clearance.
DXC Technology (DXC: NYSE) is the world's leading independent, end-to-end IT services company, helping clients harness the power of innovation to thrive on change. Created by the merger of CSC and the Enterprise Services business of Hewlett Packard Enterprise, DXC Technology serves nearly 6,000 private and public sector clients across 70 countries. The company's technology independence, global talent, and extensive partner network combine to deliver powerful next-generation IT services and solutions. DXC Technology is recognized among the best corporate citizens globally.
We're looking for a talented and forward-thinking AI Engineer to join our innovative team. This is a unique opportunity to work on cutting-edge AI technologies and contribute to transformative projects across multiple domains. In this role, you'll help design and build modern data pipelines that bring together information from a variety of systems, making data accessible, trustworthy, and ready for intelligent analytics and AI solutions. You will collaborate closely with teammates across disciplines and have the opportunity to learn from experienced engineers and leaders.
Key responsibilities include:
Supporting and contributing to data engineering projects, helping ensure delivery within scope and timelines.
Working alongside supportive team members to develop and maintain data pipelines and infrastructure.
Partnering with cross-functional teams to understand data needs and shape solutions.
Contributing to data quality, governance, and security initiatives.
Learning directly from specialists in AI and data engineering.
Helping to continuously improve and optimise data processes.
Staying current with emerging tools, trends, and technologies.
Contributing to a collaborative, inclusive, and growth-focused team culture.
What you'll bring:
A Bachelor's degree in a relevant field or equivalent experience.
Experience with modern data engineering tools and technologies.
A growth mindset and passion for continuous learning.
Understanding and hands-on experience with Transformer models and LLMs (e.g., GPT, LLaMA, Mistral, Claude).
Skills in fine-tuning, prompt engineering, and building RAG pipelines.
Familiarity with agent frameworks (LangChain, LlamaIndex, CrewAI, AutoGen).
Knowledge of reinforcement learning methods or tools (Q-learning, policy gradients, RLlib).
Why Join Us?
Work on AI solutions that make a meaningful impact across industries.
Be part of a supportive, collaborative, and forward-thinking team.
Access mentoring, continuous learning, and career-development opportunities.
Enjoy flexible working arrangements designed to support work-life balance.
Join a company committed to inclusion, wellbeing, and empowering your success.
I am searching for a talented Database Developer / SQL Developer to join our client on a full-time and permanent basis.
The role requires you in the office 2 days per week, so to be considered you will need to live within a commutable distance of Exeter or be in a position to relocate to the area.
As a Database Developer / SQL Developer you will design, develop and test high-quality database applications that support both internal systems and external business solutions.
You will be responsible for improving processes, solving complex problems whilst working collaboratively and supporting other team members.
Working in an agile environment, you will follow SCRUM and SOLID principles: you will take part in testing (including TDD), design reviews, code walkthroughs and inspections, and you will contribute to continuous improvement.
You will work with internal and external customers to capture requirements, communicating technical concepts to non-technical audiences.
You will support a 24/7 production environment, ensuring systems continue to meet the needs of new and existing platforms.
Additionally, you will:
Mentor colleagues in database design and coding.
Utilise agentic AI tools within software development to enhance productivity and efficiency.
About You
To be a success in this role you will need experience in the following key areas:
I am looking to speak with good communicators who like to work collaboratively with a diverse range of technical experts within a highly effective technology team.
The role comes with a competitive salary and an outstanding benefits package which includes an enhanced pension, medical and healthcare, a bonus, good holiday allowance and much, much more!
Please note, to be considered for this role you MUST have the Right to Work in the UK long-term without company sponsorship. Our customer is not able to sponsor candidates for this opportunity.
Please note that due to a high level of applications, we can only respond to applicants whose skills and qualifications are suitable for this position.
No terminology in this advert is intended to discriminate against any of the protected characteristics that fall under the Equality Act 2010.
Bowerford Associates Ltd is acting as an Employment Agency in relation to this vacancy.
Role: Senior SQL Server DBA/Developer
Location: Norwich (onsite)
Salary: Up to £55k DOE
I’m working on behalf of a well-established UK organisation specialising in financial data and technology solutions, seeking an experienced SQL Server Database Administrator to join its internal IT team. The business provides critical financial product data used by major banks, regulators and government bodies across the UK and has been a leader in financial data services for more than 30 years.
Reporting to the Software Development Manager, this role will focus on maintaining and developing database infrastructure, ensuring reliability, security and performance across key systems while supporting new product development.
Key Responsibilities
Experience & Skills Required
Desirable Experience
Salary & Benefits