Pandas Jobs
Trending Pandas jobs
Quant Developer - OTC Pricing
James Joseph Associates Limited
London
In office
Senior
£100,000
RECENTLY POSTED

A high-growth institutional trading business in the digital assets market is expanding its London team and hiring a Quant Developer to join its OTC pricing function. This is a great opportunity to join a successful firm adding headcount as it continues to grow, and to work on highly visible quantitative systems that sit at the core of pricing, hedging and liquidity decisions. The role offers a rare blend of quantitative research partnership and hands-on production engineering, making it ideal for someone who enjoys solving real market problems in a fast-paced trading environment.

THE ROLE: Quant Developer OTC Pricing

This is a senior-level quant development role within the OTC pricing team, focused on turning quantitative ideas into robust, production-grade trading and pricing solutions. The successful candidate will work very closely with quantitative researchers, contributing not only to implementation but also to the design and refinement of pricing and liquidity models.

The role combines research-oriented modelling in Python with production engineering in Java. You will use Python to analyse data, test ideas and support model development, while using Java to build and enhance high-performance pricing infrastructure used in a live, global trading environment.

You will be involved in the development of pricing logic, flow analysis, spread optimisation and automated hedging tools. The role also includes working on distributed systems challenges in a 24/7 multi-region environment, where reliability, consistency and performance are critical. This is a high-impact position for someone who enjoys applying quantitative thinking to real trading and pricing problems at scale.

KEY RESPONSIBILITIES: Quant Developer OTC Pricing

  • Build and enhance quantitative pricing, hedging and optimisation models within a high-performance Java framework
  • Work alongside quantitative researchers to analyse large datasets and translate research into production-ready solutions
  • Use Python for model prototyping, data analysis, signal investigation and backtesting activity
  • Develop and improve pricing skew, spread and liquidity optimisation logic
  • Design and implement automated hedging strategies, taking into account execution risk, liquidity and market impact
  • Support pricing system deployment across distributed, multi-region architecture with a focus on uptime and consistency
  • Analyse client trading behaviour, including flow quality, decay and pricing performance, to support more effective pricing decisions
  • Contribute to the ongoing evolution of tools and systems used in a live institutional trading environment

REQUIRED - SKILLS/EXPERIENCE:

  • Strong Java development experience, ideally 5+ years
  • Deep understanding of object-oriented design, concurrency, and high-performance distributed systems
  • Proficient in Python, including use of libraries such as NumPy, SciPy and Pandas for quantitative analysis and prototyping
  • Proven experience applying numerical optimisation techniques and/or machine learning methods to pricing, trading or market-related problems
  • Prior experience in client pricing, electronic trading, or a closely related quantitative trading environment
  • Exposure to liquid markets such as FX, equities, ETFs or crypto
  • Strong academic background in a quantitative discipline such as Mathematics, Physics or Quantitative Finance
  • Ability to operate effectively in a role that bridges quantitative modelling and production engineering
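
To give a flavour of the Python prototyping side of the role, here is a toy NumPy sketch of one classic spread-optimisation trade-off: wider quotes earn more per fill but fill less often. All numbers and the fill-probability model are invented for illustration, not the firm's actual methodology.

```python
import numpy as np

spreads = np.linspace(0.5, 10.0, 200)   # candidate half-spreads in bps (toy grid)
fill_prob = np.exp(-spreads / 3.0)      # invented model: fill rate decays with spread
expected_capture = spreads * fill_prob  # expected margin captured per quote

best = spreads[np.argmax(expected_capture)]  # spread maximising expected capture
```

With this toy decay model the optimum lands at a half-spread of 3 bps; in practice the fill model would be estimated from flow data rather than assumed.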

DESIRABLE - SKILLS/EXPERIENCE:

  • Familiarity with low-latency system optimisation, such as GC tuning or tools/frameworks used in high-performance messaging environments
  • Understanding of derivatives pricing and risk management, particularly across products such as futures, forwards, NDFs and CFDs
  • KDB+/Q
  • Exposure to AWS, Docker and Kubernetes
Senior Data Engineer - Commodities & Energy Trading
Anson McCade
London
Hybrid
Senior
£175,000
RECENTLY POSTED

Up to £175,000 GBP

Competitive bonus

Hybrid working

Location: Central London, Greater London, United Kingdom | Type: Permanent

Senior Data Engineer - Commodities & Energy Trading

London | Permanent | Hybrid Working

Our client is a global commodities and energy trading organisation operating an asset-light, highly diversified business model. The firm combines advanced analytics, proprietary technology, and robust risk management to support trading, optimisation, and risk-management decisions across energy and commodities markets.

As a Senior Data Engineer, you will join a highly technical Data, AI & Analytics function responsible for building the data platforms that underpin trading, quantitative research, predictive analytics, and machine-learning use cases. This is a hands-on role with ownership across data ingestion, transformation, storage, and distribution, working closely with Data Scientists, Traders, and technology teams.

You’ll have the opportunity to:

  • Build and maintain scalable data pipelines supporting trading and analytics use cases
  • Ingest structured and unstructured data from diverse internal and external sources
  • Support predictive analytics, systematic trading, and machine-learning workloads
  • Partner closely with Data Scientists and Trading teams to deliver high-quality datasets
  • Contribute to cloud-native data platforms using modern engineering practices
  • Drive improvements in data quality, performance, and self-service capabilities

Your Responsibilities

As a Senior Data Engineer, you will:

  • Design and implement data ingestion pipelines using ETL, streaming, scraping, and batch approaches
  • Clean, enrich, and transform datasets for analytical and operational consumption
  • Persist data across databases, warehouses, and data lakes
  • Distribute data internally via APIs, Python packages, and direct querying
  • Maintain and enhance production data pipelines and databases
  • Support post-processing automation, including analytics, models, and visualisation workflows
  • Enable Data Scientists through shared libraries, cloud resources, and documented data access
  • Maintain knowledge bases covering data sources, pipelines, and usage
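
As a small illustration of the ingest-clean-transform loop described above, here is a minimal Pandas sketch. The schema and values are invented for illustration only, not the client's actual data model.

```python
import io
import pandas as pd

# Invented raw feed: a duplicated row, mixed-case commodity names, a missing price
raw = io.StringIO(
    "trade_id,commodity,price\n"
    "1,Brent,82.4\n"
    "1,Brent,82.4\n"
    "2,gas,\n"
    "3,POWER,55.0\n"
)

df = pd.read_csv(raw)
df = df.drop_duplicates("trade_id")            # de-duplicate on the business key
df["commodity"] = df["commodity"].str.lower()  # normalise categorical values
df = df.dropna(subset=["price"])               # drop rows with no usable price
```

A production pipeline would add typed schemas, data-quality checks and persistence, but the shape of the work is the same.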

Key Requirements

As a Senior Data Engineer, you should have:

  • Strong engineering background in Data Engineering, Computer Science, or similar
  • Experience working in commodities, energy trading, or financial markets environments
  • Advanced Python skills, including extensive use of Pandas
  • Experience with analytical or time-series databases (e.g. Redshift, ClickHouse)
  • Hands-on experience with Docker and containerised workloads
  • Experience with Git and modern DevOps practices
  • Practical experience deploying infrastructure on AWS using IaC (e.g. CDK, CloudFormation)
  • A proactive, ownership-driven mindset with strong problem-solving skills

You will gain exposure to:

Working as a Senior Data Engineer, you will gain exposure to:

  • Direct collaboration with Traders and quantitative teams
  • Cloud-native data platforms supporting real-time and batch processing
  • Big-data and distributed processing tools
  • AWS services such as S3, Lambda, Athena, EMR, Kinesis, and EC2
  • Advanced analytics, visualisation, and data-science workflows
  • Emerging technologies across AI and machine learning

Why Join?

  • Work on data platforms that directly support trading and optimisation decisions
  • Own data engineering solutions end-to-end in a high-impact environment
  • Operate in a technically deep, collaborative engineering culture
  • Competitive compensation with performance-linked bonus
  • Hybrid working model supporting flexibility and collaboration

Interested? Apply now.

Trainee AI Engineer Placement Programme
ITOL Recruit
Multiple locations
Fully remote
Graduate
£30,000 - £45,000
RECENTLY POSTED

Trainee AI Engineer – No Experience Needed

Future-proof your career in Artificial Intelligence – starting today. Looking for a career change? Currently employed but want something better? Or maybe you're between jobs and ready for a fresh start? ITOL Recruit's AI Traineeship is designed to get you into one of the fastest-growing industries with zero experience required. Train online at your own pace and land your first AI Engineer role in 1-3 months. Please note this is a training course and fees apply. Job guaranteed: complete the programme and get a job, or get your money back. Our candidates earn £30,000-£45,000.

Why AI?

AI is reshaping every industry you can think of. Healthcare, finance, retail, and manufacturing – they're all scrambling for skilled professionals. The demand far outstrips supply, which means excellent salaries, flexible working arrangements, and genuine job security.

How It Works

  • Step 1 – AI Engineering Fundamentals: Start with the basics of AI, including neural networks and large language models, to build a solid foundation in AI engineering.
  • Step 2 – Data Fundamentals: Understand the data workflow, from collection to cleaning, and learn how to prepare data for AI applications.
  • Step 3 – Notebooks & IDEs: Get hands-on with industry-standard tools like Jupyter Notebooks and VS Code to develop AI systems.
  • Step 4 – Python Programming: Master Python, covering everything from the basics to object-oriented programming (OOP).
  • Step 5 – Python Streamlit Project: Apply your Python skills by building a car price prediction app using Python and Streamlit.
  • Step 6 – Python for Data: Learn essential Python libraries like NumPy, Pandas, and Matplotlib for data manipulation and visualisation.
  • Step 7 – AI Sentiment Analysis Project: Work with Hugging Face to build a sentiment analysis classifier using real-world AI techniques.
  • Step 8 – AI Prompt Engineering: Master prompt engineering, learning how to craft effective prompts for controlling AI outputs.
  • Step 9 – Retrieval-Augmented Generation (RAG): Learn how to integrate external knowledge into AI systems using RAG techniques and vector databases.
  • Step 10 – AI Specialised Customer Service Chatbot Project: Combine prompt engineering and RAG to build an AI-powered customer service chatbot, delivering intelligent responses using vector databases and knowledge bases.
  • Step 11 – Machine Learning Fundamentals: Understand machine learning principles and algorithms, and how to train and test models using scikit-learn.
  • Step 12 – Machine Learning Project: Put your machine learning knowledge into practice with a hands-on project.
  • Step 13 – AI & Data Ethics: Study the ethical considerations in AI, including issues of bias, fairness, and data privacy.
  • Step 14 – Oral Exam: Complete a virtual oral exam to assess your understanding and ability to apply your learning.
  • Step 15 – AWS Certified Cloud Practitioner: Finish with the AWS Certified Cloud Practitioner course and exam to gain essential cloud computing knowledge.

What You Get

  • 100% online, self-paced training
  • Microsoft AI-900 certification included
  • 1-to-1 tutor and recruitment support
  • Real-world project experience
  • Job guarantee – get a job or your money back
  • Starting salary of £30,000–£45,000

We Get You Hired!

We're not new to this. ITOL Recruit has 15+ years of experience and has placed over 5,000 people into new roles. Our job programmes include certified tutors, UK-accredited qualifications, and one-on-one support from a recruitment adviser focused on placing you. We don't believe in empty promises. Complete our programme, follow the process, and if you don't land a job, you get your money back.

"Five months from complete beginner to AI engineer. Best decision I ever made." – Jamie W., now working as a Junior AI Engineer in London

Ready to Start?

If you're motivated, curious, and excited about technology, we'll help you turn that into a career you can be proud of.
Apply now, and one of our expert Career Advisors will be in touch within 4 working hours to guide you through your next steps.
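
As a taste of the exercises covered in Steps 5-6, a toy version of the car price prediction project can be built with NumPy alone. The dataset here is invented for illustration; the real project uses Streamlit for the app layer.

```python
import numpy as np

# Invented toy data: car age in years vs asking price in GBP
age = np.array([1, 2, 3, 5, 7, 10], dtype=float)
price = np.array([18000, 16500, 15000, 12000, 9500, 6000], dtype=float)

# Fit a straight line: price ~ slope * age + intercept
slope, intercept = np.polyfit(age, price, 1)

# Predict the price of a 4-year-old car from the fitted line
predicted = slope * 4 + intercept
```

Depreciation shows up as a negative slope, and the 4-year prediction falls between the observed 3-year and 5-year prices, which is a quick sanity check on the fit.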

Research Physicist with Python - Freelance AI Trainer
Mindrift
UK
Fully remote
Mid - Senior
£35/hour
RECENTLY POSTED

Please submit your CV in English and indicate your level of English proficiency.

Mindrift connects specialists with project-based AI opportunities for leading tech companies, focused on testing, evaluating, and improving AI systems. Participation is project-based, not permanent employment.

What this opportunity involves

While each project involves unique tasks, contributors may:

  • Design original computational physics problems that simulate real physics research workflows;
  • Create problems requiring Python programming to solve (using Numpy, SciPy, Sympy);
  • Ensure problems are computationally intensive and cannot be solved manually within reasonable timeframes (days/weeks);
  • Develop problems requiring non-trivial reasoning chains in mechanics, electromagnetism, thermodynamics, and quantum mechanics;
  • Base problems on real research challenges or practical applications from physics practice;
  • Verify solutions using Python with standard physics simulation libraries;
  • Document problem statements clearly and provide verified correct answers.
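
A minimal sketch of the "verify solutions using Python" step above: integrate a simple harmonic oscillator with SciPy and check the numerical result against the closed-form answer. The parameters are invented; a real task would involve a problem with no closed-form solution, verified by convergence checks instead.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simple harmonic oscillator x'' = -(k/m) x with invented values m = 1, k = 4
k, m = 4.0, 1.0

def rhs(t, y):
    x, v = y
    return [v, -(k / m) * x]   # first-order system: (x, v)' = (v, -ω² x)

# Integrate from x(0) = 1, v(0) = 0 over t ∈ [0, 2] at tight tolerance
sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], rtol=1e-9, atol=1e-9)
x_numeric = sol.y[0, -1]

# Closed-form solution for these initial conditions: x(t) = cos(ωt), ω = √(k/m)
x_exact = np.cos(np.sqrt(k / m) * 2.0)
```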

What we look for

This opportunity is a good fit for physicists with Python experience who are open to part-time, non-permanent projects. Ideally, contributors will have:

  • Degree in Physics (Theoretical, Experimental, or Computational) or related fields;
  • Python proficiency for numerical validation (MATLAB, R, C, SQL, NumPy, Pandas, SciPy, domain-specific libraries, Stata, or knowledge of another programming language can be equivalent);
  • 2+ years of professional experience: applied, research, or teaching experience is applicable;
  • Experience with numerical simulation methods;
  • Ability to design problems that mirror real physics research workflows;
  • Creative thinking in problem design across diverse physics areas;
  • Familiarity with physics modeling and approximation techniques;
  • Strong written English (C1+).

How it works

Apply → Pass qualification(s) → Join a project → Complete tasks → Get paid

Project time expectations

For this project, tasks are estimated to require around 10–20 hours per week during active phases, based on project requirements. This is an estimate, not a guaranteed workload, and applies only while the project is active.

Compensation

On this project, contributors can earn up to $35 per hour equivalent, depending on their level and pace of contribution.

Compensation varies across projects depending on scope, complexity, and required expertise. Please note that other projects on the platform may offer different earning levels based on their requirements.

Python Engineer
McGregor Boyall
Penicuik
Remote or hybrid
Mid - Senior
£520/day
RECENTLY POSTED

Python Engineer - AI/ML Automation

Location - remote, with occasional trip to Edinburgh or Glasgow when required

Duration - 6 months with possible extensions

Day rate - circa £520, Outside IR35

We are looking for an experienced Python Engineer to join a forward-thinking agile team focused on building AI-driven automation solutions within a high-impact domain.

Key Responsibilities

  • Develop and enhance automation services using OCR, Object Detection, and Large Language Models (LLMs)
  • Build scalable system components to process and analyse complex document and text data
  • Design and maintain robust ETL pipelines and data processing workflows
  • Collaborate with cross-functional teams to integrate solutions into existing digital platforms
  • Support production systems, ensuring performance, reliability, and continuous improvement
  • Contribute to research and development of innovative AI/ML solutions
  • Write clean, maintainable, and well-tested code following best engineering practices
  • Participate in agile ceremonies, code reviews, and collaborative development activities
  • Mentor team members and share knowledge across the wider engineering community

Required Skills & Experience

  • Strong Python (3.9+) development experience
  • Hands-on experience with AI/ML technologies (OCR, LLMs, Object Detection)
  • Experience with libraries such as PyTorch, Hugging Face, OpenCV, and Pandas
  • Knowledge of AWS services (Lambda, S3, SQS, CloudWatch)
  • Experience building APIs using FastAPI
  • Solid understanding of data processing, system design, and cloud-based architectures
  • Familiarity with Agile methodologies and modern development practices (TDD, CI/CD)
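
As a small slice of the document-processing work described above, here is a stdlib-only sketch of normalising noisy OCR output and extracting fields from it. The field names and patterns are invented for illustration; the real pipeline would sit behind FastAPI and feed OCR/LLM components.

```python
import re

def extract_invoice_fields(ocr_text: str) -> dict:
    """Normalise ragged OCR text and pull out two invented example fields."""
    text = re.sub(r"\s+", " ", ocr_text).strip()       # collapse ragged whitespace
    ref = re.search(r"INV-\d{4}", text)                # hypothetical reference format
    total = re.search(r"TOTAL[:\s]+£?([\d,]+\.\d{2})", text)
    return {
        "reference": ref.group(0) if ref else None,
        "total": float(total.group(1).replace(",", "")) if total else None,
    }

# Simulated OCR output with line breaks and doubled spaces
fields = extract_invoice_fields("Invoice  INV-0042\n  TOTAL: £1,234.50")
```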

If this seems like a good fit, please apply today or email your CV to

McGregor Boyall is an equal opportunity employer and does not discriminate on any grounds.

Quant Developer - OTC Pricing
James Joseph Associates
London
In office
Mid - Senior
£150,000 - £170,000

The role description, responsibilities and skills for this position are as listed for the Quant Developer - OTC Pricing role above.
Data Engineer
Gold Group
London
Fully remote
Mid - Senior
£50,000 - £65,000

Data Engineer | Fully Remote | £50,000 - £65,000 + Car Allowance (£5,200) + 5% Bonus | 6-month initial FTC with a view to extend

Brief

A Data Engineer is needed for a large, well-known Facilities Management organisation. The role can be based fully remote from home, with all equipment provided. My client is looking to employ an experienced and well-rounded Data Engineer who takes pride in their work, with a proven 4 years' experience working in a fast-paced environment, preferably within a dynamic data engineering team. The successful candidate must have proven skills at managing day-to-day BAU tasks to maintain existing data processes/ETLs, while simultaneously contributing to strategic roadmap projects and initiatives, along with previous responsibility for end-to-end ownership of data pipelines and automation of data workflows. Finally, they should have 5 years' strong programming experience at an advanced level in Python, PySpark, and SQL scripting.

Benefits

  • Salary: £50,000 - £65,000 per annum
  • 25 days' holiday
  • Company car / allowance
  • Variable annual bonus of 5-15%
  • Pension plan
  • Career progression

What the role entails

Some of the main duties of the Data Engineer will include:

  • Develop high-performance data pipelines using Python (Pandas) or Spark within Palantir Foundry, ensuring seamless data transformation and integration to support analytics, reporting, and machine learning use cases across the enterprise.
  • Design and implement reusable, scalable, and well-documented data workflows that ingest, transform, and curate data within Palantir Foundry, enabling the business to generate actionable insights and drive strategic decision-making.
  • Lead workshops with business stakeholders to capture data requirements, translating these into flexible and scalable designs that utilise Palantir Foundry's advanced toolsets to deliver reliable, high-quality data solutions that align with strategic objectives.
  • Demonstrate Palantir Foundry's full potential by showcasing its capabilities to a diverse set of business stakeholders, guiding them in leveraging the platform's full range of functionality to deliver transformative business outcomes.
  • Ensure the availability, performance, and integrity of all data services within Palantir Foundry, continuously monitoring and optimising data processes to meet stringent service level agreements (SLAs) and business requirements.

What experience you need to be the successful Data Engineer

  • Proficiency in Palantir Foundry pipeline development: Extensive experience in building, maintaining, and optimising data pipelines and transformations within Palantir Foundry, leveraging tools such as Foundry Code Workbooks, Pipelines, and Object Explorer for efficient data processing and integration.
  • Strong coding skills in Python (Pandas) and PySpark: Essential experience in using Python (Pandas) and PySpark for data manipulation, transformation, and pipeline development within Palantir Foundry, ensuring high performance and scalability across various data flows.
  • Experience in data modelling and ontology development: Proven expertise in using Palantir Foundry's ontology to model data in ways that optimise its accessibility and usability, supporting robust reporting, analytics, and machine learning pipelines.
  • Deep understanding of Foundry's data governance and metadata management: Strong knowledge of how Palantir Foundry manages data governance, metadata, lineage, and auditing, ensuring compliance with enterprise-wide data governance policies and regulatory standards.
  • Data pipeline automation and orchestration: Experience with Foundry's Pipeline Builder for designing, developing, and automating ETL processes that ensure the availability of clean, curated data for business analytics and decision-making.
  • Code development, testing, and deployment in Foundry: Demonstrated ability to design, test, and deploy code using Foundry's integration with CI/CD pipelines, ensuring data processes are efficiently managed from development through to production.
  • Effective collaboration and communication skills: Ability to work with cross-functional teams and effectively communicate technical concepts related to data workflows in Palantir Foundry to both technical and non-technical stakeholders, ensuring a clear understanding of how data solutions drive business outcomes.
  • Deep familiarity with Palantir Foundry's tools and frameworks: Direct experience working within Palantir Foundry's platform, leveraging its comprehensive suite of tools, including Code Workbooks, Pipelines, Object Explorer, and Foundry's data governance and ontology capabilities.
  • Experience in multi-cloud and data migration environments: Experience working with data environments in Microsoft Azure, AWS, and legacy systems, and a proven ability to consolidate and migrate data processes and workloads into Palantir Foundry as part of a broader data strategy.
  • Data-driven business value: Demonstrated ability to deliver tangible business value through the development and deployment of data engineering solutions, leveraging Palantir Foundry's capabilities to support key analytics, reporting, and decision-making initiatives.

This really is a fantastic opportunity for a Data Engineer to progress their career. If you are interested, please apply as soon as possible, as this position will be filled quickly, so don't miss out! Services advertised by Gold Group are those of an Agency and/or an Employment Business.

We will contact you within the next 14 days if you are selected for interview. For a copy of our privacy policy please visit our website.

Senior Data Scientist
Adria Solutions Ltd
Manchester
Hybrid
Senior
£60,000 - £75,000

My client is a fast-growing UK business serving thousands of customers. They are investing heavily in their data capability and are now looking to appoint a Lead Data Scientist to drive end-to-end machine learning delivery within a regulated financial environment.

This is a hands-on role combining technical ownership and production-grade model deployment.

The Role

As Senior Data Scientist, you will:

  • Own end-to-end ML solutions - from problem framing and feature engineering to deployment, monitoring, and governance
  • Translate business objectives into modelling strategies aligned to risk appetite and operational constraints
  • Build and deploy models using Python, SQL, and AWS (SageMaker or equivalent)
  • Partner closely with Engineering, Data, and Risk/Financial Crime teams to ensure robust, production-ready solutions
  • Establish monitoring frameworks for performance, drift, and retraining
  • Drive clear documentation, traceability, and governance appropriate for a regulated environment
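
One common ingredient of the drift monitoring mentioned above is the Population Stability Index (PSI). A minimal NumPy version, with toy synthetic score samples standing in for real model outputs:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a new score sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)  # bin on the baseline
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.5, 0.1, 5000)   # toy training-time score distribution
same = rng.normal(0.5, 0.1, 5000)       # fresh sample, no drift
shifted = rng.normal(0.7, 0.1, 5000)    # fresh sample, drifted scores
```

A common rule of thumb treats PSI below 0.1 as stable and above 0.25 as material drift warranting investigation; thresholds in a regulated setting would be set and documented per model.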

This role requires someone who thinks beyond experimentation - focusing on operational impact, adoption, and long-term model performance.

Essential Experience

  • Proven commercial ML/Data Science delivery with measurable impact
  • Experience taking models into production and managing performance over time
  • Prior experience leading or mentoring Data Scientists
  • Strong Python (pandas, numpy, scikit-learn or similar)
  • Strong SQL (complex joins, aggregations, analytical functions)
  • Solid grounding in applied statistics, evaluation design, calibration, bias/fairness
  • Experience working closely with Engineering/Data teams in production-first environments
  • Comfortable operating within regulated industries

Desirable

  • AWS experience (S3, Athena/Glue, IAM, Lambda)
  • SageMaker or equivalent ML platform experience
  • Financial services domain knowledge (risk, fraud, affordability, payments)
  • Experience with model explainability and governance documentation

Package & Benefits

  • Hybrid working model
  • Competitive pension
  • Additional paid leave (birthday, charity, wellbeing, life events)
  • Employee assistance programme & Virtual GP
  • Modern collaborative office environment

Interested? Please click Apply Now!

Senior Python/GenAI Developer
Randstad Digital
Belfast
Hybrid
Senior
£75,000

Role: Senior Python/GenAI Developer

Location: Dublin or Belfast (Hybrid - 3 Days In-Office)

Role Type: Permanent / Full-Time (FTE)

Our client is looking for a Senior GenAI Application Developer/ Engineer to join their global technology hub in Dublin. This is a high-impact, permanent role designed for a Python expert who can move beyond basic AI experimentation and into the engineering of production-grade, autonomous systems.

What our client is looking for:

  • The Python Specialist: A developer with 6-10 years of professional experience. You must have ‘under-the-hood’ knowledge of Python, specifically for building high-throughput microservices and complex data pipelines using FastAPI, Pandas, and NumPy.
  • The RAG & Agentic Expert: This is the ‘Critical’ requirement. Our client needs someone with deep hands-on experience building Retrieval-Augmented Generation (RAG) pipelines and Agentic frameworks. You should know how to use LangChain or LlamaIndex to create AI that can execute multi-step tasks.
  • The Data Architect: Proficiency in Vector Databases is essential. You should be comfortable designing data persistence layers using PG Vector, Pinecone, Milvus, or Mongo Atlas to handle large amounts of unstructured data.
  • The MLOps Engineer: You don’t just write code; you ship it. Our client requires experience deploying GenAI models into production using Kubernetes (or OpenShift) and establishing robust CI/CD pipelines via Jenkins, GitLab, or Azure DevOps.
  • The AI Safety Advocate: A working knowledge of Guardrails is key. You should understand how to assess the performance and safety of GenAI features to ensure they meet the rigorous standards of a global bank.
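
To illustrate the retrieval half of the RAG pipelines described above, here is a toy cosine-similarity search over an in-memory "vector store". The random vectors stand in for real embeddings from an embedding model, and a production system would use a vector database such as those named above rather than a NumPy array:

```python
import numpy as np

rng = np.random.default_rng(42)
docs = ["refund policy", "shipping times", "account security"]
store = rng.normal(size=(3, 8))   # stand-in 8-dim document embeddings

def retrieve(query_vec, store, docs, k=1):
    """Return the k docs whose embeddings are most cosine-similar to the query."""
    sims = store @ query_vec / (
        np.linalg.norm(store, axis=1) * np.linalg.norm(query_vec)
    )
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

# Simulate a query whose embedding lands near the "shipping times" document
query = store[1] + rng.normal(scale=0.01, size=8)
top = retrieve(query, store, docs)
```

The retrieved text would then be injected into the LLM prompt; frameworks like LangChain or LlamaIndex wrap this same loop with chunking, prompt templates and agent tooling.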

If you are interested then please apply or share your updated CV at yogeshwari.sen@randstaddigital.com with your availability, and I will give you a call back to discuss the role further.

Randstad Technologies Ltd is a leading specialist recruitment business for the IT & Engineering industries. Please note that due to a high level of applications, we can only respond to applicants whose skills & qualifications are suitable for this position. No terminology in this advert is intended to discriminate against any of the protected characteristics that fall under the Equality Act 2010. For the purposes of the Conduct Regulations 2003, when advertising permanent vacancies we are acting as an Employment Agency, and when advertising temporary/contract vacancies we are acting as an Employment Business.

Data Engineer
Gold Group
City of London
Fully remote
Mid - Senior
£50,000 - £65,000

Fully Remote £50,000 - £65,000 + Car Allowance - £5,200 + 5% Bonus 6 Month initial FTC with view to extend Brief Data Engineer needed for a large well known Facilities Management organisation. The role I have can be based fully remote from home with all equipment provided. My is looking to employ an experienced and well-rounded Data Engineer that takes pride in their work with proven 4 years experience working in a fast-paced environment, preferably within a dynamic data engineering team. The successful candidate must have proven skills at managing day-to-day BAU tasks to maintain existing data processes/ETLs, while simultaneously contributing to strategic roadmap projects and initiatives. Along with previous responsibility for end-to-end ownership of data pipelines and automation of data workflows. Finally have 5 years Strong programming experience at an advanced level in Python, PySpark, and SQL scripting. Benefits Salary: £50,000 - £65,000 per annum 25 day's holiday Company car / Allowance Variable annual bonus based 5-15% Pension Plan Career Progression What the role entails: Some of the main duties of the Data Engineer will include: Develop high-performance data pipelines using Python (Pandas) or Spark within Palantir Foundry, ensuring seamless data transformation and integration to support analytics, reporting, and machine learning use cases across the enterprise. Design and implement reusable, scalable, and well-documented data workflows that ingest, transform, and curate data within Palantir Foundry, enabling the business to generate actionable insights and drive strategic decision-making. Lead workshops with business stakeholders to capture data requirements, translating these into flexible and scalable designs that utilise Palantir Foundry's advanced toolsets to deliver reliable, high-quality data solutions that align with strategic objectives. 
- Demonstrate Palantir Foundry's full potential by showcasing its capabilities to a diverse set of business stakeholders, guiding them in leveraging the platform's full range of functionality to deliver transformative business outcomes.
- Ensure the availability, performance, and integrity of all data services within Palantir Foundry, continuously monitoring and optimising data processes to meet stringent service level agreements (SLAs) and business requirements.

What experience you need to be the successful Data Engineer:
- Proficiency in Palantir Foundry pipeline development: Extensive experience in building, maintaining, and optimising data pipelines and transformations within Palantir Foundry, leveraging tools such as Foundry Code Workbooks, Pipelines, and Object Explorer for efficient data processing and integration.
- Strong coding skills in Python (Pandas) and PySpark: Essential experience in using Python (Pandas) and PySpark for data manipulation, transformation, and pipeline development within Palantir Foundry, ensuring high performance and scalability across various data flows.
- Experience in data modelling and ontology development: Proven expertise in using Palantir Foundry's ontology to model data in ways that optimise its accessibility and usability, supporting robust reporting, analytics, and machine learning pipelines.
- Deep understanding of Foundry's data governance and metadata management: Strong knowledge of how Palantir Foundry manages data governance, metadata, lineage, and auditing, ensuring compliance with enterprise-wide data governance policies and regulatory standards.
- Data pipeline automation and orchestration: Experience with Foundry's Pipeline Builder for designing, developing, and automating ETL processes that ensure the availability of clean, curated data for business analytics and decision-making.
- Code development, testing, and deployment in Foundry: Demonstrated ability to design, test, and deploy code using Foundry's integration with CI/CD pipelines, ensuring data processes are efficiently managed from development through to production.
- Effective collaboration and communication skills: Ability to work with cross-functional teams and effectively communicate technical concepts related to data workflows in Palantir Foundry to both technical and non-technical stakeholders, ensuring a clear understanding of how data solutions drive business outcomes.
- Deep familiarity with Palantir Foundry's tools and frameworks: Direct experience working within Palantir Foundry's platform, leveraging its comprehensive suite of tools, including Code Workbooks, Pipelines, Object Explorer, and Foundry's data governance and ontology capabilities.
- Experience in multi-cloud and data migration environments: Experience working with data environments in Microsoft Azure, AWS, and legacy systems, and a proven ability to consolidate and migrate data processes and workloads into Palantir Foundry as part of a broader data strategy.
- Data-driven business value: Demonstrated ability to deliver tangible business value through the development and deployment of data engineering solutions, leveraging Palantir Foundry's capabilities to support key analytics, reporting, and decision-making initiatives.

This really is a fantastic opportunity for a Data Engineer to progress their career. If you are interested, please apply as soon as possible, as this position will be filled quickly, so don't miss out! Services advertised by Gold Group are those of an Agency and/or an Employment Business. We will contact you within the next 14 days if you are selected for interview. For a copy of our privacy policy, please visit our website.
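For candidates gauging the level expected, the ingest-transform-curate pipeline work described above might, at its simplest, look like the following Pandas sketch. The column names, values, and the `curate_readings` helper are hypothetical, purely for illustration; a real Foundry pipeline would run inside Code Workbooks or Pipeline Builder rather than a standalone script.

```python
# Minimal sketch of an ingest -> transform -> curate step in Pandas.
# Column names and sample data are hypothetical, for illustration only.
import pandas as pd

def curate_readings(raw: pd.DataFrame) -> pd.DataFrame:
    """Clean raw readings and aggregate them per site."""
    df = raw.copy()
    # Coerce malformed values to NaN, then drop the unparseable rows.
    df["reading"] = pd.to_numeric(df["reading"], errors="coerce")
    df = df.dropna(subset=["reading"])
    # Curate: one averaged row per site, ready for reporting.
    return (
        df.groupby("site", as_index=False)["reading"]
          .mean()
          .rename(columns={"reading": "avg_reading"})
    )

raw = pd.DataFrame({
    "site": ["A", "A", "B", "B"],
    "reading": ["10", "20", "oops", "30"],
})
print(curate_readings(raw))
```

The same shape of transform, swapping `pd.DataFrame` for a Spark DataFrame, covers the PySpark variant the ad asks for.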

AI Engineer
Certain Advantage
London
Hybrid
Mid - Senior
Private salary

Certain Advantage are recruiting on behalf of our Trading client for an AI Engineer on a contract basis for 6-12 months initially in London. This will require some onsite days in Central London during the week.

We are seeking Engineers skilled in Python with a strong focus on GenAI and LLMs to lead the integration of cutting-edge language technologies into real-world applications. If you're someone passionate about building scalable, responsible, and high-impact GenAI solutions, then this could be for you!

We're looking for Engineers offering competent core technical skills in Python programming, data handling with NumPy, Pandas, and SQL, and use of Git/GitHub for version control. Any experience with these GenAI use cases would be relevant and desirable: chatbots, copilots, document summarisation, Q&A, content generation.

To help make your application as relevant as possible, please ensure your CV demonstrates any prior experience you have relating to the below:

System Integration & Deployment
- Model Deployment: Flask, FastAPI, MLflow
- Model Serving: Triton Inference Server, Hugging Face Inference Endpoints
- API Integration: OpenAI, Anthropic, Cohere, Mistral APIs
- LLM Frameworks: LangChain, LlamaIndex, for building LLM-powered applications
- Vector Databases: FAISS, Weaviate, Pinecone, Qdrant (nice-to-have)
- Retrieval-Augmented Generation (RAG): Experience building hybrid systems combining LLMs with enterprise data

MLOps & Infrastructure
- MLOps: Model versioning, monitoring, logging
- Bias Detection & Mitigation
- Content Filtering & Moderation
- Explainability & Transparency
- LLM Safety & Guardrails: Hallucination mitigation, prompt validation, safety layers
- Azure Cloud Experience

Collaboration & Delivery
- Cross-functional Collaboration: Working with software engineers, DevOps, and product teams
- Rapid Prototyping: Building and deploying MVPs
- Understanding of ML & LLM Techniques: To support integration, scaling, and responsible deployment
- Prompt Engineering: Designing and optimising prompts for LLMs across use cases

Model Evaluation & Monitoring
- Evaluation Metrics: Perplexity, relevance, response quality, user satisfaction
- Monitoring in Production: Drift detection, performance degradation, logging outputs
- Evaluation Pipelines: Automating metric tracking via MLflow or custom dashboards
- A/B Testing: Experience evaluating GenAI features in production environments

Does this sound like your next career move? Apply today!

Working with Certain Advantage: We go the extra mile to find the best people for the job. If you're hunting for a role where you can make an impact and grow your career, we'll work with you to find it. We work with businesses across the UK to find the best people in Finance, Marketing, IT and Engineering. If this job isn't for you, head to (url removed) and register for job alerts and career guidance tips.
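For applicants unfamiliar with the RAG pattern the ad lists, the core idea is: retrieve the most relevant enterprise text for a query, then ground the LLM prompt in it. A toy sketch follows, with bag-of-words vectors standing in for real embeddings and a plain list standing in for a vector database such as FAISS; the documents, the `retrieve` and `build_prompt` helpers, and the prompt wording are all hypothetical.

```python
# Toy sketch of Retrieval-Augmented Generation (RAG): retrieve the most
# relevant document for a query, then build a grounded prompt for an LLM.
# Bag-of-words cosine similarity stands in for real embeddings, and a
# Python list stands in for a vector database. All names are hypothetical.
import math
import re
from collections import Counter

def similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    va = Counter(re.findall(r"\w+", a.lower()))
    vb = Counter(re.findall(r"\w+", b.lower()))
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query."""
    return max(docs, key=lambda d: similarity(query, d))

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the LLM prompt in retrieved context to curb hallucination."""
    context = retrieve(query, docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Invoices are processed within 30 days of receipt.",
    "The London office is open Monday to Friday.",
]
print(build_prompt("When are invoices processed?", docs))
```

In production the similarity search would hit a vector store (FAISS, Weaviate, Pinecone, Qdrant) over dense embeddings, and the final prompt would go to an LLM API such as those listed above.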
