Pandas Jobs
Overview
Discover top Pandas jobs with Haystack – your go-to IT job board for Python data professionals. Whether you're a data analyst or data scientist skilled in Pandas, find the latest roles that match your expertise. Start your search today and advance your career working with Pandas, the essential Python library for data manipulation and analysis.
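
As a flavour of the day-to-day work these listings describe, here is a minimal, purely illustrative Pandas snippet (the roles and salary figures are invented for the example):

```python
import pandas as pd

# Toy salary table, invented for illustration
df = pd.DataFrame({
    "role": ["Data Analyst", "Data Scientist", "Data Engineer", "Data Scientist"],
    "salary": [42000, 65000, 58000, 71000],
})

# Average salary per role, highest first
avg = df.groupby("role")["salary"].mean().sort_values(ascending=False)
print(avg)
```

The groupby/aggregate pattern shown here is the bread and butter of most Pandas-centric roles below.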
Data Reliability Engineer II
CME Technology Support Services Ltd
Belfast
Hybrid
Mid - Senior
Private salary
RECENTLY POSTED

Data Reliability Engineer II (dRE)

Role Overview:

A crucial role in CME’s Cloud transformation, the dRE II will be aligned to data product pods, ensuring that our data infrastructure is reliable, scalable, and efficient as the GCP data footprint expands rapidly.

Accountabilities:

  • Automate data tasks on Google Cloud Platform (GCP).
  • Work with data domain owners, data scientists, and other stakeholders to ensure that data is consumed effectively on GCP.
  • Design, build, secure, and maintain data infrastructure, including data pipelines, databases, data warehouses, and data processing platforms on GCP.
  • Measure and monitor the quality of data on GCP data platforms.
  • Implement robust monitoring and alerting systems to proactively identify and resolve issues in data systems.
  • Respond to incidents promptly to minimize downtime and data loss.
  • Develop automation scripts and tools to streamline data operations and make them scalable to accommodate growing data volumes and user traffic.
  • Optimize data systems to ensure efficient data processing, reduce latency, and improve overall system performance.
  • Collaborate with data and infrastructure teams to forecast data growth and plan for future capacity requirements.
  • Ensure data security and compliance with data protection regulations.
  • Implement best practices for data access controls and encryption.
  • Collaborate with data engineers, data scientists, and software engineers to understand data requirements, troubleshoot issues, and support data-driven initiatives.
  • Continuously assess and improve data infrastructure and data processes to enhance reliability, efficiency, and performance.
  • Maintain clear and up-to-date documentation related to data systems, configurations, and standard operating procedures.
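
One of the accountabilities above is measuring and monitoring data quality. As a rough illustration of what an automated check might look like (the column names and the 10% threshold are invented for this sketch), a simple null-rate gate could be:

```python
import pandas as pd

def null_rate_report(df: pd.DataFrame, threshold: float = 0.1) -> dict:
    """Return each column's null rate and whether it breaches the threshold."""
    rates = df.isna().mean()  # fraction of missing values per column
    return {col: (rate, rate > threshold) for col, rate in rates.items()}

# Invented example frame with a patchy column
df = pd.DataFrame({"trade_id": [1, 2, 3, 4], "price": [10.0, None, 12.5, None]})
report = null_rate_report(df)
```

In practice a check like this would run inside a pipeline orchestrator and raise an alert, rather than return a dict.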

Qualifications

  • Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, Data Science, or a related field, or equivalent practical experience.
  • Professional Experience: Experience as a Site Reliability Engineer or a similar role with a focus on data infrastructure management.
  • Methodologies: Understanding of Site Reliability Engineering (SRE) practices.
  • Data Technologies: Ideally some experience with data technologies such as relational databases, data warehousing, big data platforms (e.g., Hadoop), data streaming (e.g., Kafka), and cloud services (e.g., AWS, GCP, Azure).
  • Programming Skills: Programming experience in languages such as Python (NumPy, pandas, PySpark), Java (core Spark with Java, functional interfaces, collections), or Scala, along with experience in automation and scripting, would be an advantage.
  • Infrastructure Tools: Experience with containerization and orchestration tools like Docker and Kubernetes is a plus.
  • Compliance: Experience with data governance, data security, and compliance best practices.
  • Software Development: Understanding of software development methodologies and best practices, including version control (e.g., Git) and CI/CD pipelines.
  • Cloud Computing: Any experience in cloud computing and data-intensive applications and services, ideally Google Cloud Platform (GCP) would be highly beneficial.
  • Quality Assurance: Experience with data quality assurance and testing.
  • GCP Data Services: Any exposure to GCP data services (BigQuery, Dataflow, Data Fusion, Dataproc, Cloud Composer, Pub/Sub, Google Cloud Storage) is a bonus.
  • Monitoring Tools: Understanding of logging and monitoring using tools such as Cloud Logging, ELK Stack, AppDynamics, New Relic, and Splunk.
  • Advanced Tech: Knowledge of AI and ML tools is a plus.
  • Certifications: Google Associate Cloud Engineer or Data Engineer certification is a plus.
  • Domain Specifics: Experience in data engineering or data science.

Company Benefits:

  • Bonus Programme
  • Equity Programme
  • Employee Stock Purchase Plan (ESPP)
  • Private Medical and Dental coverage
  • Mental Health Benefit Programme
  • Group Pension Plan
  • Income Protection
  • Life Assurance
  • Cycle To Work
  • EV Car Benefit Scheme
  • Gym Membership
  • Family Leave
  • Education Assistance - MBA/Advanced Degree/Bachelor Degree
  • Ongoing Employee Development Training/Certification
  • Hybrid Working

CME Group: Where Futures are Made

CME Group is the world’s leading derivatives marketplace. But who we are goes deeper than that. Here, you can impact markets worldwide. Transform industries. And build a career by shaping tomorrow. We invest in your success and you own it - all while working alongside a team of leading experts who inspire you in ways big and small. Problem solvers, difference makers, trailblazers. Those are our people. And we’re looking for more.

At CME Group, we embrace our employees’ unique experiences and skills to ensure that everyone’s perspectives are acknowledged and valued. As an equal-opportunity employer, we consider all potential employees without regard to any protected characteristic.

Important Notice: Recruitment fraud is on the rise, with scammers using misleading promises of job offers and interviews to solicit money and personal information from job seekers. CME Group adheres to established procedures designed to maintain trust, confidence and security throughout our recruitment process.

To be considered for this role you will be redirected to and must complete the application process on our careers page. To start the process click the Continue to Application or Login/Register to apply button below.

Gen AI Lead
Axiom Software Solutions Ltd
London
Hybrid
Senior
Private salary
RECENTLY POSTED

Position: Gen AI Lead With Python
Location: London, UK (Hybrid 2 days onsite a week)
Duration: Full Time

Job Description:
Core AI/ML Foundations:
o Strong foundational knowledge in GenAI, Machine Learning (ML modelling), Data Science, Statistics, and AI fundamentals, including Natural Language Processing (NLP), Neural Networks, and Large Language Models (LLMs).

Generative AI & LLM Expertise:
o Extensive hands-on experience with leading LLMs such as Google Gemini, OpenAI models, Anthropic Claude, Mistral, Llama, and various other open-source LLMs.
o Critical: Deep working knowledge and hands-on experience with Retrieval-Augmented Generation (RAG) pipelines, including advanced RAG techniques and their detailed implementation.
o Proven ability to build, tune, and deploy LLM-based applications using platforms like Vertex AI, Hugging Face, etc.
o Expertise in developing robust prompt engineering strategies, prompt tuning, and creating reusable prompt templates.
o Hands-on experience with agentic framework-based use case implementation.
o Working knowledge of Guardrails and methodologies for assessing the performance and safety of GenAI features.
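
The RAG requirement above refers to retrieving relevant context before generation. Production systems use embedding vectors and a vector database, but the control flow can be illustrated with a deliberately simplified keyword-overlap retriever (the documents and scoring below are invented stand-ins, not a real implementation):

```python
# Minimal retrieval step of a RAG pipeline: score documents by term overlap
# with the query, then splice the best match into the prompt. Real systems
# replace the overlap score with embedding similarity from a vector store.

DOCS = [
    "Pandas DataFrames support vectorised column operations.",
    "Kafka topics distribute event streams across partitions.",
    "Kubernetes schedules containers across a cluster.",
]

def retrieve(query: str, docs: list[str]) -> str:
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str) -> str:
    context = retrieve(query, DOCS)
    return f"Context: {context}\nQuestion: {query}"

prompt = build_prompt("How do Kafka topics work?")
```

The advanced RAG techniques the ad mentions (reranking, query rewriting, hybrid search) all elaborate on this same retrieve-then-prompt loop.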

Programming & Data Engineering:
o Strong programming proficiency in Python is a must, including extensive experience with libraries such as Pandas, NumPy, scikit-learn, PyTorch, TensorFlow, Transformers, FastAPI, Seaborn, LangChain, and LlamaIndex.
o Proficiency in integrating generative AI with enterprise applications using APIs, knowledge graphs, and orchestration tools.
o Hands-on experience with various vector databases (eg, PG Vector, Pinecone, Mongo Atlas, Neo4j) for efficient data storage and retrieval.
o Experience in dealing with large amounts of unstructured data and designing solutions for high-throughput processing.

Deployment & MLOps:
o Critical: Hands-on experience deploying GenAI-based models to production environments.
o Strong understanding and practical experience with MLOps principles, model evaluation, and establishing robust deployment pipelines.
o Strong expertise in CI/CD principles and tools (eg, Jenkins, GitLab CI, Azure DevOps, ArgoCD) for automated builds, testing, and deployments.

Cloud & Containerization:
o Proven experience with container orchestration platforms like OpenShift or Kubernetes for deploying, managing, and scaling containerized applications in a cloud-native environment.

Soft Skills:
o Strong problem-solving abilities, excellent collaboration skills for working effectively with cross-functional teams, and the capability to work independently on complex, ambiguous problems.

AI Solution Architect
Raytheon
Warminster
Remote or hybrid
Senior - Leader
Private salary
RECENTLY POSTED

Key Responsibilities:

  • Lead the exploration of opportunities to develop and deploy AI-based solutions across our offerings.
  • Lead the identification, evaluation, and delivery of AI opportunities that enhance Army training capability and operational effectiveness.
  • Act as the technical authority and primary point of contact for AI architecture, governance, ethics, and regulatory compliance.
  • Define and own scalable, secure AI/ML architectures aligned with enterprise strategy, Omnia principles, and hybrid cloud environments.
  • Design end-to-end AI solutions, including data pipelines, feature engineering, model development, deployment, and operational monitoring.
  • Provide architectural leadership across the full lifecycle, ensuring alignment with DevSecOps, security, and engineering best practices.
  • Establish standards for model performance monitoring, reliability, risk management, and continuous improvement (MLOps).
  • Collaborate with Enterprise Architecture, engineering leaders, and Army/MoD stakeholders to integrate AI capabilities into existing platforms and services.
  • Evaluate and recommend AI platforms, tools, and frameworks to enable scalable experimentation and production deployment.
  • Lead technical horizon scanning, prototyping, and structured experimentation of emerging AI technologies.
  • Develop AI roadmaps and transition architectures to support the evolution of defence training systems.
  • Ensure AI solutions meet security, resilience, and compliance requirements within regulated defence environments.

Who we are looking for:

You’ll have a mission focus, and the enthusiasm and drive to ‘get things done’. You’ll want to work in collaboration with other defence training organisations, and the British Army. You won’t let bureaucracy get in the way of what needs to be done, you’ll learn lessons and share these lessons across the team. You’ll understand what it means to put the mission first.

Essential Skills and Experience:

  • Proven experience architecting AI solutions in secure or classified environments, with strong knowledge of data governance and access control.
  • Proven experience designing and delivering enterprise-scale AI/ML solutions from concept through to production.
  • Strong expertise in AI/ML architecture, including model development, deployment, monitoring, and lifecycle management (MLOps).
  • Experience designing scalable, secure solutions within cloud and hybrid environments (e.g. Azure, AWS, or equivalent).
  • Solid understanding of data architecture, including data pipelines, feature engineering, data governance, and model training strategies.
  • Experience integrating AI capabilities into complex legacy and enterprise systems.
  • Demonstrated application of DevSecOps principles, including CI/CD for AI/ML workloads and automated deployment pipelines.
  • Strong knowledge of AI ethics, responsible AI, and regulatory compliance, particularly within regulated or sensitive environments.
  • Experience selecting and evaluating AI platforms, frameworks, and tooling (e.g. Python, TensorFlow, PyTorch, MLflow, Kubeflow, etc.).
  • Ability to define and implement model performance monitoring, drift detection, and continuous improvement approaches.
  • Experience working within secure or regulated environments (e.g. defence, government, healthcare, finance).
  • Strong stakeholder engagement skills, with the ability to communicate complex technical concepts to senior technical and non-technical audiences.
  • Experience working within or alongside enterprise architecture frameworks and governance processes.
  • Demonstrated ability to lead technical design decisions and provide architectural oversight across multiple delivery teams.
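
One of the essential skills above is model performance monitoring with drift detection. A common approach compares a feature's recent distribution against its training-time baseline; a toy population-stability-index (PSI) style check over pre-binned proportions (the bin values and the 0.2 cutoff below are illustrative conventions, not from this listing) might look like:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over matching histogram bins (as proportions)."""
    eps = 1e-6  # avoid log(0) on empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

baseline = [0.25, 0.25, 0.25, 0.25]   # training-time bin proportions
recent   = [0.10, 0.20, 0.30, 0.40]   # serving-time bin proportions
drifted  = psi(baseline, recent) > 0.2  # a common rule-of-thumb cutoff
```

A check like this would typically run on a schedule per feature, feeding the continuous-improvement loop the role describes.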

Desirable Skills and Experience:

  • MSc in Computing, AI/ML, or equivalent professional accreditation (e.g., CEng).
  • Experience working with UK MoD, Defence Digital, or government programmes.
  • Familiarity with secure-by-design and classified environments.
  • Knowledge of simulation, training systems, or digital learning environments.
  • Relevant certifications (e.g. Azure/AWS Architect, TOGAF, AI/ML specialisations).
  • Experience in cloud platforms and container orchestration tools, particularly Azure, AWS, and Red Hat OpenShift.
  • Experience with Hugging Face Transformers to enable capabilities such as intelligent document processing, conversational AI, and semantic search.
  • Knowledge and experience of some of the following tools:
    • Apache Spark, Databricks, Pandas for data processing and feature engineering
    • SQL, MongoDB for structured and unstructured data management
    • SageMaker, Azure ML for model deployment, tracking, and monitoring.
    • Terraform, Ansible, GitLab CI/CD, Jenkins for infrastructure automation and DevSecOps

Trainee AI Engineer Placement Programme
ITOL Recruit
Multiple locations
Fully remote
Graduate
£30,000 - £45,000
RECENTLY POSTED

Trainee AI Engineer – No Experience Needed

Future-proof your career in Artificial Intelligence – starting today. Looking for a career change? Currently employed but want something better? Or maybe you're between jobs and ready for a fresh start? ITOL Recruit's AI Traineeship is designed to get you into one of the fastest-growing industries with zero experience required. Train online at your own pace and land your first AI Engineer role in 1-3 months. Please note this is a training course and fees apply. Job guaranteed – complete the programme and get a job or get your money back. Our candidates earn £30,000-£45,000.

Why AI?

AI is reshaping every industry you can think of. Healthcare, finance, retail, and manufacturing – they’re all scrambling for skilled professionals. The demand far outstrips supply, which means excellent salaries, flexible working arrangements, and genuine job security.

How It Works

  • Step 1 – AI Engineering Fundamentals: Start with the basics of AI, including neural networks and large language models, to build a solid foundation in AI engineering.
  • Step 2 – Data Fundamentals: Understand the data workflow, from collection to cleaning, and learn how to prepare data for AI applications.
  • Step 3 – Notebooks & IDEs: Get hands-on with industry-standard tools like Jupyter Notebooks and VS Code to develop AI systems.
  • Step 4 – Python Programming: Master Python, covering everything from the basics to object-oriented programming (OOP).
  • Step 5 – Python Streamlit Project: Apply your Python skills by building a car price prediction app using Python and Streamlit.
  • Step 6 – Python for Data: Learn essential Python libraries like NumPy, Pandas, and Matplotlib for data manipulation and visualisation.
  • Step 7 – AI Sentiment Analysis Project: Work with Hugging Face to build a sentiment analysis classifier using real-world AI techniques.
  • Step 8 – AI Prompt Engineering: Master prompt engineering, learning how to craft effective prompts for controlling AI outputs.
  • Step 9 – Retrieval-Augmented Generation (RAG): Learn how to integrate external knowledge into AI systems using RAG techniques and vector databases.
  • Step 10 – AI Specialised Customer Service Chatbot Project: Combine prompt engineering and RAG to build an AI-powered customer service chatbot, delivering intelligent responses using vector databases and knowledge bases.
  • Step 11 – Machine Learning Fundamentals: Understand machine learning principles and algorithms, and how to train and test models using scikit-learn.
  • Step 12 – Machine Learning Project: Put your machine learning knowledge into practice with a hands-on project.
  • Step 13 – AI & Data Ethics: Study the ethical considerations in AI, including issues of bias, fairness, and data privacy.
  • Step 14 – Oral Exam: Complete a virtual oral exam to assess your understanding and ability to apply your learning.
  • Step 15 – AWS Certified Cloud Practitioner: Finish with the AWS Certified Cloud Practitioner course and exam to gain essential cloud computing knowledge.

What You Get

  • 100% online, self-paced training
  • Microsoft AI-900 certification included
  • 1-to-1 tutor and recruitment support
  • Real-world project experience
  • Job guarantee – get a job or your money back
  • Starting salary of £30,000–£45,000

We Get You Hired!

We're not new to this. ITOL Recruit has 15+ years of experience and has placed over 5,000 people into new roles. Our job programmes include certified tutors, UK-accredited qualifications, and one-on-one support from a recruitment adviser focused on placing you. We don't believe in empty promises. Complete our programme, follow the process, and if you don't land a job, you get your money back.

"Five months from complete beginner to AI engineer. Best decision I ever made." – Jamie W., now working as a Junior AI Engineer in London

Ready to Start?

If you’re motivated, curious, and excited about technology, we’ll help you turn that into a career you can be proud of. Apply now, and one of our expert Career Advisors will be in touch within 4 working hours to guide you through your next steps.

Senior Data Scientist
Adria Solutions
Manchester
Hybrid
Senior
£75,000
RECENTLY POSTED

My client is a fast-growing UK business serving thousands of customers. They are investing heavily in their data capability and are now looking to appoint a Lead Data Scientist to drive end-to-end machine learning delivery within a regulated financial environment.

This is a hands-on role combining technical ownership and production-grade model deployment.

The Role

As Senior Data Scientist, you will:

  • Own end-to-end ML solutions - from problem framing and feature engineering to deployment, monitoring, and governance
  • Translate business objectives into modelling strategies aligned to risk appetite and operational constraints
  • Build and deploy models using Python, SQL, and AWS (SageMaker or equivalent)
  • Partner closely with Engineering, Data, and Risk/Financial Crime teams to ensure robust, production-ready solutions
  • Establish monitoring frameworks for performance, drift, and retraining
  • Drive clear documentation, traceability, and governance appropriate for a regulated environment

This role requires someone who thinks beyond experimentation - focusing on operational impact, adoption, and long-term model performance.

Essential Experience

  • Proven commercial ML/Data Science delivery with measurable impact
  • Experience taking models into production and managing performance over time
  • Prior experience leading or mentoring Data Scientists
  • Strong Python (pandas, numpy, scikit-learn or similar)
  • Strong SQL (complex joins, aggregations, analytical functions)
  • Solid grounding in applied statistics, evaluation design, calibration, bias/fairness
  • Experience working closely with Engineering/Data teams in production-first environments
  • Comfortable operating within regulated industries

Desirable

  • AWS experience (S3, Athena/Glue, IAM, Lambda)
  • SageMaker or equivalent ML platform experience
  • Financial services domain knowledge (risk, fraud, affordability, payments)
  • Experience with model explainability and governance documentation

Package & Benefits

  • Hybrid working model
  • Competitive pension
  • Additional paid leave (birthday, charity, wellbeing, life events)
  • Employee assistance programme & Virtual GP
  • Modern collaborative office environment

Interested? Please Click Apply Now!

Hedge Fund - Python Developer (Equities) - Trade life cycle - PnL - Kafka - Contract
Scope AT Limited
London
In office
Senior
Private salary
RECENTLY POSTED

Our Hedge Fund client is looking for a Python Developer/Engineer Contract role

This team is responsible for the firm's equity transaction data platform, including trade life cycle event processing, enrichment, and PnL calculations. The role is ideal for an engineer who enjoys building robust, high-throughput services and data pipelines in a fast-paced, delivery-focused environment.

Principal Responsibilities

  • Design and develop solutions for trade life cycle event processing, including corporate actions, expiries, and other post-trade events.
  • Build and operate Python-based services that perform large-scale data transformations and calculations.
  • Publish and distribute transaction and PnL data using Kafka, including AVRO-based schemas and streaming patterns.
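
As a purely illustrative sketch of the PnL-style calculations this role describes (the fills, prices, and average-cost convention below are invented for the example; real trade life cycle engines handle shorts, fees, and corporate actions), realized and unrealized PnL for a long-only position could be computed as:

```python
# Average-cost PnL for one long-only equity position.
# Fills are (signed_quantity, price): positive = buy, negative = sell.
# Assumes sells never exceed the open position (long-only toy model).

def position_pnl(fills: list[tuple[int, float]], mark: float) -> tuple[float, float]:
    qty, avg_cost, realized = 0, 0.0, 0.0
    for q, price in fills:
        if q > 0:  # buy: update weighted average cost
            avg_cost = (avg_cost * qty + price * q) / (qty + q)
            qty += q
        else:      # sell: realize against average cost
            realized += (-q) * (price - avg_cost)
            qty += q
    unrealized = qty * (mark - avg_cost)
    return realized, unrealized

# Buy 100 @ 10, buy 50 @ 12, sell 80 @ 13, marked at 11
realized, unrealized = position_pnl([(100, 10.0), (50, 12.0), (-80, 13.0)], mark=11.0)
```

In the role itself, a calculation like this would run at scale over pandas/Polars frames and its outputs would be published to Kafka.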

Required Skills

  • 6+ years of professional Python development experience, ideally in capital markets or a fintech firm.
  • Experience in finance: understanding of common financial asset classes; knowledge of equities corporate action processing, trade life cycle concepts, and/or P&L calculations is a strong plus.
  • Experience with Kafka (or equivalent streaming/messaging platforms) and schema-based event publishing (e.g., AVRO).
  • Strong experience performing large-scale data calculations in Python using libraries like pandas, Polars, and NumPy.
  • Experience building REST services using frameworks such as FastAPI and/or Flask.
  • Strong SQL skills and experience working with relational databases in production environments.
  • Hands-on experience with containerized deployments and modern infrastructure tooling (Docker, Kubernetes) and familiarity with cloud platforms.
  • Understanding of modern SDLC practices (testing strategy, CI/CD, release management, observability, and operational ownership).

Office based, 5 days per week.
Based in London.

Contract role inside IR35

By applying to this job you are sending us your CV, which may contain personal information. Please refer to our Privacy Notice to understand how we process this information. In short, in order to supply you with work-finding services, we will hold and process your personal data, and only with your express permission will we share this personal data with a client (or a third party working on behalf of the client) by email or by upload to the client's or third party's vendor management system. By giving us permission to send your CV to a client, this constitutes permission to share the personal data that would be necessary to consider your application, interview you (phone/video/face-to-face) and, if successful, hire you.
Scope AT acts as an employment agency for Permanent Recruitment and an employment business for the supply of temporary workers. By applying for this job you accept the Terms and Conditions, Data Protection Policy, Privacy Notice and Disclaimers which can be found at our website.

AI Engineer
Certain Advantage
London
Hybrid
Mid - Senior
Private salary
RECENTLY POSTED

Certain Advantage are recruiting on behalf of our Trading client for an AI Engineer on a contract basis for 6-12 months initially in London. This will require some onsite days in Central London during the week. We are seeking Engineers skilled in Python with a strong focus on GenAI and LLMs to lead the integration of cutting-edge language technologies into real-world applications. If you’re someone passionate about building scalable, responsible, and high-impact GenAI solutions then this could be for you!

We’re looking for Engineers offering competent core technical skills in Python programming, data handling with NumPy, Pandas, SQL, and use of Git/GitHub for version control. Any experience with these GenAI use cases would be relevant and desirable: chatbots, copilots, document summarisation, Q&A, content generation.

To help make your application as relevant as possible, please ensure your CV demonstrates any prior experience you have relating to the below:

System Integration & Deployment

  • Model Deployment: Flask, FastAPI, MLflow
  • Model Serving: Triton Inference Server, Hugging Face Inference Endpoints
  • API Integration: OpenAI, Anthropic, Cohere, Mistral APIs
  • LLM Frameworks: LangChain, LlamaIndex – for building LLM-powered applications
  • Vector Databases: FAISS, Weaviate, Pinecone, Qdrant (nice-to-have)
  • Retrieval-Augmented Generation (RAG): Experience building hybrid systems combining LLMs with enterprise data

MLOps & Infrastructure

  • MLOps: Model versioning, monitoring, logging
  • Bias Detection & Mitigation
  • Content Filtering & Moderation
  • Explainability & Transparency
  • LLM Safety & Guardrails: Hallucination mitigation, prompt validation, safety layers
  • Azure Cloud Experience

Collaboration & Delivery

  • Cross-functional Collaboration: Working with software engineers, DevOps, and product teams
  • Rapid Prototyping: Building and deploying MVPs
  • Understanding of ML & LLM Techniques: To support integration, scaling, and responsible deployment
  • Prompt Engineering: Designing and optimising prompts for LLMs across use cases

Model Evaluation & Monitoring

  • Evaluation Metrics: Perplexity, relevance, response quality, user satisfaction
  • Monitoring in Production: Drift detection, performance degradation, logging outputs
  • Evaluation Pipelines: Automating metric tracking via MLflow or custom dashboards
  • A/B Testing: Experience evaluating GenAI features in production environments

Does this sound like your next career move? Apply today!

Working with Certain Advantage

We go the extra mile to find the best people for the job. If you’re hunting for a role where you can make an impact and grow your career, we’ll work with you to find it. We work with businesses across the UK to find the best people in Finance, Marketing, IT and Engineering. If this job isn’t for you, head to (url removed) and register for job alerts and career guidance tips.

AI Engineer Placement Programme
Ad Warrior Ltd
Multiple locations
Fully remote
Graduate
£28,000 - £45,000
RECENTLY POSTED

AI Engineer Placement Programme - No Experience Needed

Future-proof your career in Artificial Intelligence - starting today. Looking for a career change? Currently employed but want something better? Or maybe you're between jobs and ready for a fresh start? Their AI Placement Programme is designed to get you into one of the fastest-growing industries with zero experience required. Train online at your own pace and land your first AI Engineer role in 1-3 months. Please note this is a training course and fees apply. Job guaranteed - complete the programme and get a job or get your money back. Candidates earn £28,000-£45,000.

How It Works

  • Step 1 - AI Engineering Fundamentals: Start with the basics of AI, including neural networks and large language models, to build a solid foundation in AI engineering.
  • Step 2 - Data Fundamentals: Understand the data workflow, from collection to cleaning, and learn how to prepare data for AI applications.
  • Step 3 - Notebooks & IDEs: Get hands-on with industry-standard tools like Jupyter Notebooks and VS Code to develop AI systems.
  • Step 4 - Python Programming: Master Python, covering everything from the basics to object-oriented programming (OOP).
  • Step 5 - Python Streamlit Project: Apply your Python skills by building a car price prediction app using Python and Streamlit.
  • Step 6 - Python for Data: Learn essential Python libraries like NumPy, Pandas, and Matplotlib for data manipulation and visualisation.
  • Step 7 - AI Sentiment Analysis Project: Work with Hugging Face to build a sentiment analysis classifier using real-world AI techniques.
  • Step 8 - AI Prompt Engineering: Master prompt engineering, learning how to craft effective prompts for controlling AI outputs.
  • Step 9 - Retrieval-Augmented Generation (RAG): Learn how to integrate external knowledge into AI systems using RAG techniques and vector databases.
  • Step 10 - AI Specialised Customer Service Chatbot Project: Combine prompt engineering and RAG to build an AI-powered customer service chatbot, delivering intelligent responses using vector databases and knowledge bases.
  • Step 11 - Machine Learning Fundamentals: Understand machine learning principles and algorithms, and how to train and test models using scikit-learn.
  • Step 12 - Machine Learning Project: Put your machine learning knowledge into practice with a hands-on project.
  • Step 13 - AI & Data Ethics: Study the ethical considerations in AI, including issues of bias, fairness, and data privacy.
  • Step 14 - Oral Exam: Complete a virtual oral exam to assess your understanding and ability to apply your learning.
  • Step 15 - AWS Certified Cloud Practitioner: Finish with the AWS Certified Cloud Practitioner course and exam to gain essential cloud computing knowledge.

They Get You Hired

They're not new to this. The company has 15+ years of experience and has placed over 5,000 people into new roles. Their job programmes include certified tutors, UK-accredited qualifications, and one-on-one support from a recruitment adviser focused on placing you. They don't believe in empty promises. Complete their programme, follow the process, and if you don't land a job, you get your money back.

"Five months from complete beginner to AI engineer. Best decision I ever made." - Jamie W., now working as a Junior AI Engineer in London

Ready to Start?

If you're motivated, curious, and excited about technology, they'll help you turn that into a career you can be proud of. Apply now, and one of their expert Career Advisors will be in touch within 4 working hours to guide you through your next steps.

Trainee AI Engineer
Ad Warrior Ltd
Multiple locations
Fully remote
Graduate - Junior
£28,000 - £45,000
RECENTLY POSTED

Trainee AI Engineer - No Experience Needed

Future-proof your career in Artificial Intelligence - starting today. Looking for a career change? Currently employed but want something better? Or maybe you're between jobs and ready for a fresh start? Their AI Traineeship is designed to get you into one of the fastest-growing industries with zero experience required. Train online at your own pace and land your first AI Engineer role in 1-3 months. Please note this is a training course and fees apply. Job guaranteed - complete the programme and get a job or get your money back. Candidates earn £28,000-£45,000.

How It Works

  • Step 1 - AI Engineering Fundamentals: Start with the basics of AI, including neural networks and large language models, to build a solid foundation in AI engineering.
  • Step 2 - Data Fundamentals: Understand the data workflow, from collection to cleaning, and learn how to prepare data for AI applications.
  • Step 3 - Notebooks & IDEs: Get hands-on with industry-standard tools like Jupyter Notebooks and VS Code to develop AI systems.
  • Step 4 - Python Programming: Master Python, covering everything from the basics to object-oriented programming (OOP).
  • Step 5 - Python Streamlit Project: Apply your Python skills by building a car price prediction app using Python and Streamlit.
  • Step 6 - Python for Data: Learn essential Python libraries like NumPy, Pandas, and Matplotlib for data manipulation and visualisation.
  • Step 7 - AI Sentiment Analysis Project: Work with Hugging Face to build a sentiment analysis classifier using real-world AI techniques.
  • Step 8 - AI Prompt Engineering: Master prompt engineering, learning how to craft effective prompts for controlling AI outputs.
  • Step 9 - Retrieval-Augmented Generation (RAG): Learn how to integrate external knowledge into AI systems using RAG techniques and vector databases.
  • Step 10 - AI Specialised Customer Service Chatbot Project: Combine prompt engineering and RAG to build an AI-powered customer service chatbot, delivering intelligent responses using vector databases and knowledge bases.
  • Step 11 - Machine Learning Fundamentals: Understand machine learning principles and algorithms, and how to train and test models using scikit-learn.
  • Step 12 - Machine Learning Project: Put your machine learning knowledge into practice with a hands-on project.
  • Step 13 - AI & Data Ethics: Study the ethical considerations in AI, including issues of bias, fairness, and data privacy.
  • Step 14 - Oral Exam: Complete a virtual oral exam to assess your understanding and ability to apply your learning.
  • Step 15 - AWS Certified Cloud Practitioner: Finish with the AWS Certified Cloud Practitioner course and exam to gain essential cloud computing knowledge.

They Get You Hired

They're not new to this. The company has 15+ years of experience and has placed over 5,000 people into new roles. Their job programmes include certified tutors, UK-accredited qualifications, and one-on-one support from a recruitment adviser focused on placing you. They don't believe in empty promises. Complete their programme, follow the process, and if you don't land a job, you get your money back.

"Five months from complete beginner to AI engineer. Best decision I ever made." - Jamie W., now working as a Junior AI Engineer in London

Ready to Start?

If you're motivated, curious, and excited about technology, they'll help you turn that into a career you can be proud of. Apply now, and one of their expert Career Advisors will be in touch within 4 working hours to guide you through your next steps.

Data / Machine Learning Ops Engineer
DXC
London
Hybrid
Junior - Mid
Private salary
+4

Location: Erskine, Scotland (Hybrid 2/3 days per week in the office)
Candidates must be eligible for clearance.

DXC Technology (NYSE: DXC) is a leading independent, end-to-end IT services company, helping organisations harness innovation to thrive through change. Serving nearly 6,000 private and public sector clients across 70 countries, DXC combines technology independence, global talent, and an extensive partner network to deliver next-generation IT services and solutions.

We are proud to be recognised globally for corporate responsibility and inclusive workplace practices.

The Role

Are you passionate about bringing machine learning solutions into real-world production environments? Do you enjoy collaborating with others to build scalable, reliable systems?

We are looking for a Machine Learning Ops Engineer to join our growing team. This role is ideal for someone who enjoys solving complex problems, working cross-functionally, and continuously developing their technical expertise in a supportive environment.

If you don't meet every single requirement listed below, we still encourage you to apply. We value potential, curiosity, and a willingness to learn.

What You'll Be Doing

  • Deploying, monitoring, and scaling machine learning models in production.
  • Collaborating with data scientists, engineers, and stakeholders to integrate AI solutions into scalable products.
  • Supporting the full ML lifecycle, from experimentation to deployment and optimisation.
  • Applying best practices in data engineering and contributing to architectural decisions.
  • Using modern MLOps tools and CI/CD approaches to improve reliability and efficiency.
  • Contributing to a culture of knowledge-sharing and continuous improvement.

Technical Experience

We're looking for experience in many of the following areas:

  • Strong Python skills and familiarity with ML libraries such as Pandas, NumPy, and scikit-learn.
  • Experience with frameworks such as TensorFlow, Keras, or PyTorch.
  • Exposure to gradient boosting tools such as XGBoost, LightGBM, or CatBoost.
  • Experience with model deployment tools (e.g., ONNX, TensorRT, TensorFlow Serving, TorchServe).
  • Familiarity with ML lifecycle tools such as MLflow, Kubeflow, or Azure ML Pipelines.
  • Experience working with distributed data processing (e.g., PySpark) and SQL.
  • Understanding of software engineering best practices, including version control (Git).
  • Knowledge of CI/CD principles in ML environments.
  • Experience with cloud-native ML platforms is advantageous.
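As a minimal illustration of the experimentation end of the ML lifecycle described above, the sketch below trains and scores a scikit-learn model on synthetic data. The data, model choice, and split are placeholders, not a description of DXC's actual stack:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real training set (seeded for repeatability).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Hold out a test set, fit, and report held-out accuracy -- the minimum
# evaluation loop that MLOps tooling then automates and tracks.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
score = model.score(X_test, y_test)
print(f"held-out accuracy: {score:.2f}")
```

In production, the same fit/score step would typically run inside a tracked pipeline (e.g. MLflow or Kubeflow, as listed above) rather than a standalone script.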

What We're Looking For

  • A collaborative mindset and strong communication skills.
  • A thoughtful, structured approach to problem solving.
  • A commitment to continuous learning and professional growth.
  • The confidence to contribute ideas while valuing diverse perspectives.

Why Join Us?

  • Work on meaningful AI projects with real-world impact.
  • Join a supportive, forward-thinking team that values inclusion and diverse perspectives.
  • Access structured learning, mentoring, and career development opportunities.
  • Flexible hybrid working arrangements.
  • A workplace culture that supports wellbeing and work-life balance.

What We Offer

  • Competitive salary.
  • Pension scheme.
  • DXC Select comprehensive benefits package including private medical insurance, gym membership, and more.
  • Perks at Work discounts on technology, groceries, travel and more.
  • DXC incentives recognition tools, employee lunches, and regular social events.

Ready to Shape the Future of AI?

We are committed to building diverse teams and creating an inclusive environment where everyone can thrive. If this role excites you, wed love to hear from you.

Apply today and bring your skills, perspective, and ambition to a team that values innovation, collaboration, and growth.

Data Solution Designer Data Science
Stackstudio Digital Ltd.
Norwich
Hybrid
Mid - Senior
£550/day - £575/day
+5

Role / Job Title: Data Solution Designer - Data Science
Work Location: Norwich, 3 days per week on site (flexible)
Duration of Assignment: 6 months

The Role

The Data Solution Designer - Data Science is responsible for designing end-to-end data science and advanced analytics solutions that translate complex business problems into scalable, secure, and high-performance data products. This role bridges business stakeholders, data engineering, data science, and IT architecture teams, ensuring solutions are production ready and aligned with enterprise standards.

Your Responsibilities

Solution & Data Model Design

1. Solution Design & Architecture

  • Design end to end data science solutions including data ingestion, feature engineering, model development, deployment, and monitoring
  • Define logical and physical architectures for analytics platforms, ML pipelines, and AI products
  • Ensure solutions are scalable, reusable, secure, and cost effective
  • Select appropriate ML/AI techniques (e.g., regression, classification, NLP, forecasting, clustering)
  2. Data & Analytics Engineering Alignment
  • Work closely with data engineers to define:
    • Data models and schemas
    • Data quality rules
    • ETL / ELT pipelines
  • Define feature stores, training datasets, and inference pipelines
  3. Model Development & Deployment Strategy
  • Guide data scientists on:
    • Model selection and evaluation strategies
    • Experiment tracking and reproducibility
  • Design MLOps frameworks for:
    • CI/CD of ML models
    • Model versioning and governance
    • Monitoring drift, accuracy, and bias
  4. Technology & Platform Governance
  • Define standards for:
    • Programming languages and frameworks
    • Cloud vs on prem deployments
    • Security, privacy, and compliance
  • Ensure adherence to data governance, regulatory, and risk controls (especially in BFSI)
  5. Documentation & Best Practices
  • Produce:
    • High level architecture diagrams
    • Low level design documents
    • Non functional requirement specifications
  • Establish best practices and reusable design patterns

Your Profile

Essential Skills / Knowledge / Experience

Data Science & ML

  • Supervised and unsupervised learning
  • Time series, NLP, recommendation systems (as applicable)

Programming

  • Python (NumPy, Pandas, Scikit learn)
  • Optional: R, SQL

Data Platforms

  • Relational & NoSQL databases
  • Big data frameworks (Spark, Hive, Databricks)

MLOps & Deployment

  • Model lifecycle management
  • CI/CD pipelines
  • Containerization (Docker, Kubernetes desirable)
  • Model packaging and REST APIs

Cloud & Tools (Any combination)

  • AWS / Azure / GCP analytics and ML services
  • MLflow, Azure ML, SageMaker, Vertex AI
  • Version control (Git)

Domain & Soft Skills

  • Strong analytical and problem solving skills
  • Ability to explain complex data science concepts in simple business language
  • Experience working in Agile / Scrum environments
  • Stakeholder management and decision facilitation

Preferred Qualifications

  • BFSI domain experience (risk, fraud, AML, credit, customer analytics)
  • Experience with regulatory data modelling and explainable AI (XAI)
  • Exposure to GenAI, LLMs, and vector databases

Desirable Skills / Knowledge / Experience

  • TOGAF or cloud architecture certifications
Senior Data Engineer
83zero Limited
London
Hybrid
Senior
£80,000
+2

Company Overview

We are working with an innovative organisation that recognises the increasing complexity of project delivery. Since 2013, our client has been helping companies of all sizes improve the way projects are delivered.

Their mission is to become the number one provider of innovative project solutions, driven by a community of experienced, caring, and passionate professionals who are committed to improving project delivery.

Why Join Our Client?

Our client is currently in an exciting phase of growth, making this an excellent time to join their journey.

They are building something special - scaling the business while maintaining a strong people-first approach. Investment in their teams is a key priority, creating an environment where development is encouraged and individuals are supported to grow with the organisation.

Their culture sets them apart from other consulting practices, and they are looking to build a team that is equally ambitious.

Position Overview

Our client is seeking a Senior Data Engineer who thrives on building scalable, cloud-first data systems.

In this role, you will design and manage data pipelines that support analytics, AI, and automation across complex infrastructure programmes. Your work will play a key part in enabling data-driven transformation across critical UK industries.

Core Responsibilities

  • Design, build, and optimise data pipelines using Azure Data Factory, Synapse, and Databricks
  • Develop and maintain ETL/ELT workflows to ensure high data quality and reliability
  • Collaborate with analysts and AI engineers to deliver robust and reusable data products
  • Manage data lakes and warehouses using formats such as Delta Lake and Parquet
  • Implement best practices for data governance, performance, and security
  • Continuously evaluate and adopt new technologies to evolve the organisation’s data platform
  • Provide technical guidance to junior engineers and contribute to team capability building
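As an illustration of the kind of data-quality gate an ETL/ELT workflow applies before loading, here is a minimal pandas sketch; the table and column names are invented, and a real pipeline would run equivalent logic in Data Factory, Synapse, or Databricks:

```python
import pandas as pd

# Hypothetical raw extract with a duplicate row and a bad value.
raw = pd.DataFrame({
    "order_id": [1, 2, 2, 3],
    "amount": ["10.5", "20.0", "20.0", "bad"],
})

# Common quality gates: deduplicate, coerce types, and quarantine
# rows that fail the cast instead of silently dropping them.
deduped = raw.drop_duplicates().copy()
deduped["amount"] = pd.to_numeric(deduped["amount"], errors="coerce")
clean = deduped.dropna(subset=["amount"])
rejected = deduped[deduped["amount"].isna()]
print(len(clean), len(rejected))  # 2 clean rows, 1 quarantined row
```

Keeping the rejected rows, rather than discarding them, is what makes reliability issues visible downstream.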

Technical Stack

Core:

  • Azure Data Factory
  • Azure Synapse Analytics
  • Azure Data Lake Storage Gen2
  • SQL Server
  • Databricks

Enhancements:

  • Python (PySpark, Pandas)
  • CI/CD (Azure DevOps)
  • Infrastructure as Code (Terraform, Bicep)
  • REST APIs
  • GitHub Actions

Desirable:

  • Microsoft Fabric
  • Delta Live Tables
  • Power BI dataset automation
  • DataOps practices

What You’ll Bring

  • Professional experience in data engineering or cloud data development
  • Strong understanding of data architecture, APIs, and modern data pipeline design
  • Hands-on experience within Microsoft’s Azure ecosystem, with an interest in emerging technologies such as Fabric, AI-enhanced ETL, and real-time data streaming
  • Proven ability to lead technical workstreams and mentor junior team members
  • A strong alignment with the organisation’s IDEAL values: Integrity, Drive, Empathy, Adaptability, and Loyalty

Ready to Apply?

This is a fantastic opportunity to join a forward-thinking organisation at a key stage of growth, working on impactful projects across critical industries.

If you’re looking to take the next step in your career within a collaborative and innovative environment, we’d love to hear from you.

Lead Data Scientist
Adria Solutions
Manchester
Hybrid
Senior
£60,000 - £80,000

My client is a fast-growing UK FinTech business serving thousands of customers. They are investing heavily in their data capability and are now looking to appoint a Lead Data Scientist to drive end-to-end machine learning delivery within a regulated financial environment.

This is a hands-on leadership role combining technical ownership, team development, and production-grade model deployment.

The Role

As Lead Data Scientist, you will:

  • Lead and develop a growing Data Science team, setting standards and delivery cadence
  • Own end-to-end ML solutions - from problem framing and feature engineering to deployment, monitoring, and governance
  • Translate business objectives into modelling strategies aligned to risk appetite and operational constraints
  • Build and deploy models using Python, SQL, and AWS (SageMaker or equivalent)
  • Partner closely with Engineering, Data, and Risk/Financial Crime teams to ensure robust, production-ready solutions
  • Establish monitoring frameworks for performance, drift, and retraining
  • Drive clear documentation, traceability, and governance appropriate for a regulated environment

This role requires someone who thinks beyond experimentation - focusing on operational impact, adoption, and long-term model performance.

Essential Experience

  • Proven commercial ML/Data Science delivery with measurable impact
  • Experience taking models into production and managing performance over time
  • Prior experience leading or mentoring Data Scientists
  • Strong Python (pandas, numpy, scikit-learn or similar)
  • Strong SQL (complex joins, aggregations, analytical functions)
  • Solid grounding in applied statistics, evaluation design, calibration, bias/fairness
  • Experience working closely with Engineering/Data teams in production-first environments
  • Comfortable operating within regulated industries
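To illustrate the evaluation and calibration checks listed above, here is a minimal sketch on synthetic data; the model, data, and the simple mean-probability calibration check are placeholders for the fuller evaluation design a regulated environment requires:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic binary-outcome data (seeded for repeatability).
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] + rng.normal(0, 0.5, 500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
probs = model.predict_proba(X)[:, 1]

# Discrimination: area under the ROC curve.
auc = roc_auc_score(y, probs)
# Crude calibration check: mean predicted probability vs observed rate.
gap = abs(probs.mean() - y.mean())
print(f"AUC={auc:.3f}, calibration gap={gap:.3f}")
```

A production framework would extend this with out-of-sample evaluation, binned reliability curves, and fairness slices across protected groups.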

Desirable

  • AWS experience (S3, Athena/Glue, IAM, Lambda)
  • SageMaker or equivalent ML platform experience
  • Financial services domain knowledge (risk, fraud, affordability, payments)
  • Experience with model explainability and governance documentation

Package & Benefits

  • Hybrid working model
  • Competitive pension
  • Additional paid leave (birthday, charity, wellbeing, life events)
  • Employee assistance programme & Virtual GP
  • Modern collaborative office environment

Interested? Please click Apply Now!

Quant Developer - OTC Pricing
James Joseph Associates
London
In office
Mid - Senior
£150,000 - £170,000
+1

A high-growth institutional trading business in the digital assets market is expanding its London team and hiring a Quant Developer to join its OTC pricing function. This is a great opportunity to join a successful firm adding headcount as it continues to grow, and to work on highly visible quantitative systems that sit at the core of pricing, hedging and liquidity decisions. The role offers a rare blend of quantitative research partnership and hands-on production engineering, making it ideal for someone who enjoys solving real market problems in a fast-paced trading environment.

THE ROLE: Quant Developer OTC Pricing

This is a senior-level quant development role within the OTC pricing team, focused on turning quantitative ideas into robust, production-grade trading and pricing solutions. The successful candidate will work very closely with quantitative researchers, contributing not only to implementation but also to the design and refinement of pricing and liquidity models.

The role combines research-oriented modelling in Python with production engineering in Java. You will use Python to analyse data, test ideas and support model development, while using Java to build and enhance high-performance pricing infrastructure used in a live, global trading environment.

You will be involved in the development of pricing logic, flow analysis, spread optimisation and automated hedging tools. The role also includes working on distributed systems challenges in a 24/7 multi-region environment, where reliability, consistency and performance are critical. This is a high-impact position for someone who enjoys applying quantitative thinking to real trading and pricing problems at scale.

KEY RESPONSIBILITIES: Quant Developer OTC Pricing

  • Build and enhance quantitative pricing, hedging and optimisation models within a high-performance Java framework
  • Work alongside quantitative researchers to analyse large datasets and translate research into production-ready solutions
  • Use Python for model prototyping, data analysis, signal investigation and backtesting activity
  • Develop and improve pricing skew, spread and liquidity optimisation logic
  • Design and implement automated hedging strategies, taking into account execution risk, liquidity and market impact
  • Support pricing system deployment across distributed, multi-region architecture with a focus on uptime and consistency
  • Analyse client trading behaviour, including flow quality, decay and pricing performance, to support more effective pricing decisions
  • Contribute to the ongoing evolution of tools and systems used in a live institutional trading environment
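As a toy illustration of the spread-optimisation idea above, the sketch below maximises expected revenue per quote under an assumed exponential fill-probability model. The decay rate and grid are invented for the example and bear no relation to the firm's actual models:

```python
import numpy as np

# Toy model: probability a quote is filled decays exponentially with the
# quoted spread. The decay rate k is an illustrative assumption.
k = 50.0
spreads = np.linspace(0.001, 0.10, 1000)   # candidate spreads in price units
fill_prob = np.exp(-k * spreads)
expected_pnl = spreads * fill_prob          # revenue per quote sent

# Grid search for the revenue-maximising spread; analytically this
# model peaks at spread = 1/k.
best = spreads[np.argmax(expected_pnl)]
print(f"revenue-maximising spread: {best:.4f}")
```

Production versions of this trade-off also price in hedging cost, adverse selection, and inventory risk rather than a single fill-rate curve.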

REQUIRED - SKILLS/EXPERIENCE: Quant Developer OTC Pricing

  • Strong Java development experience, ideally 5+ years
  • Deep understanding of object-oriented design, concurrency, and high-performance distributed systems
  • Proficient in Python, including use of libraries such as NumPy, SciPy, and Pandas for quantitative analysis and prototyping
  • Proven experience applying numerical optimisation techniques and/or machine learning methods to pricing, trading or market-related problems
  • Prior experience in client pricing, electronic trading, or a closely related quantitative trading environment
  • Exposure to liquid markets such as FX, equities, ETFs or crypto
  • Strong academic background in a quantitative discipline such as Mathematics, Physics or Quantitative Finance
  • Ability to operate effectively in a role that bridges quantitative modelling and production engineering

DESIRABLE - SKILLS/EXPERIENCE: Quant Developer OTC Pricing

  • Familiarity with low-latency system optimisation, such as GC tuning or tools/frameworks used in high-performance messaging environments
  • Understanding of derivatives pricing and risk management, particularly across products such as futures, forwards, NDFs and CFDs
  • KDB+/Q
  • Exposure to AWS, Docker and Kubernetes
Frequently asked questions

What types of Pandas roles are listed?
Our job board features a wide range of Pandas-related roles including data analyst, data scientist, machine learning engineer, and software developer positions that require expertise in Pandas for data manipulation and analysis.

Do I need advanced Pandas skills to apply?
While some positions require advanced knowledge of Pandas, many entry-level and intermediate roles are also available. Job descriptions typically specify the required level of expertise, so you can choose jobs that match your skill set.

Are remote Pandas jobs available?
Yes, our job board includes remote Pandas job listings from various companies worldwide, allowing you to work flexibly from your preferred location.

How often are new Pandas jobs posted?
New Pandas jobs are posted regularly as employers update their hiring needs. We recommend checking the board frequently or subscribing to job alerts to stay informed about the latest opportunities.

Which industries hire for Pandas skills?
Industries such as finance, healthcare, technology, e-commerce, and research frequently seek professionals proficient in Pandas for tasks involving data analysis, visualization, and machine learning.