Remote Python Jobs
Overview
Discover top remote Python jobs on Haystack, your go-to IT job board for flexible, work-from-anywhere opportunities. Whether you're a Python developer, engineer, or software architect, find the latest remote Python roles that fit your skills and career goals. Start your remote Python job search today and take the next step in your tech career with Haystack!
Principal Pricing Analyst
Gerrard White
Peterborough
Remote or hybrid
Senior
Private salary
RECENTLY POSTED

Job Title: Principal Pricing Analyst

Locations: This can be a largely remote position, with occasional travel to the office closest to you. We have offices in Manchester, Stoke, London and Peterborough.

Role Overview

Markerstudy Group are looking for a Principal Pricing Analyst to join a quickly growing and developing pricing department working across a range of insurance lines.

You will use your technical expertise, in-depth knowledge of the insurance industry and market-leading tools to produce creative and actionable pricing solutions. The role also involves a large element of coaching team members and championing best practice across the department.

Reporting to our Associate Director, you will use WTW Radar and Emblem and take responsibility for the development and maintenance of predictive models (GLMs) and price optimisation, including machine learning algorithms (GBMs), LTV (Lifetime Value) modelling and fair pricing principles, ultimately creating value for our customers.
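
For a flavour of the LTV concept mentioned above, lifetime value is often framed as a retention-weighted, discounted sum of annual margins. A toy Python sketch with purely illustrative figures (not Markerstudy's methodology):

```python
def lifetime_value(annual_margin, retention_rate, discount_rate, horizon_years):
    """Toy customer lifetime value over a fixed horizon.

    Year t contributes margin * retention**t / (1 + discount)**t -- the
    margin earned only if the customer is still on the book.
    """
    return sum(
        annual_margin * retention_rate**t / (1 + discount_rate) ** t
        for t in range(horizon_years)
    )

# Hypothetical policy: £60 annual margin, 80% retention, 5% discount, 5 years
ltv = lifetime_value(60.0, 0.80, 0.05, 5)
```

Real LTV models would of course estimate retention and margin per customer from fitted models rather than assume flat rates.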

Bringing best-in-class pricing experience, you’ll be expected to provide pricing proposals that consider customer and commercial outcomes, communicating them in a compelling, impactful way to stakeholders at all levels to help us make the right decisions at the right times.

You’ll work on multiple priorities in a fast-paced, dynamic environment, managing stakeholder expectations alongside prioritising your own workload.

As a Principal Pricing Analyst, you will use your advanced analytical skills to:

  • Be a key stakeholder influencing the direction & outcome of projects
  • Provide technical leadership on WTW toolkit (in particular Radar Optimiser) to drive forward effective and efficient solutions
  • Provide thought leadership on optimisation and modelling concepts
  • Research, develop and champion the use of best practice methods and standards and ensure they are embedded throughout the department
  • Lead the development of the Group's pricing capability
  • Query large databases to extract and manipulate data that is fit for purpose
  • Oversee and assist in the development and implementation of the market leading methodologies you’ve identified
  • Continuously evaluate methodologies, understanding how they fit into the wider piece, and identify where they can be improved

Key Skills and Experience:

  • Previous experience within general insurance pricing
  • Experience with some of the following predictive modelling techniques: Logistic Regression, GBMs, Elastic Net GLMs, GAMs, Decision Trees, Random Forests, Neural Nets and Clustering
  • Experience in statistical and data science programming languages (e.g. R, Python, PySpark, SAS, SQL)
  • A quantitative degree (Mathematics, Statistics, Engineering, Physics, Computer Science, Actuarial Science)
  • Experience of WTW’s Radar software
  • Proficient at communicating results concisely, both verbally and in writing

About us

Markerstudy Group is a major force in the UK general insurance market, combining scale with innovation. The Group has deep product and distribution reach through multiple brands and an experienced leadership team coordinating diverse and fast-evolving business units. It employs more than 6,000 people across the UK.

Senior Python Developer
hireful
Leeds
Fully remote
Senior
£60,000
RECENTLY POSTED

Are you a Senior Python Developer looking to help shape the future of how schools use technology to empower learners across the UK? We are searching for a Senior Python Developer / Lead Python Developer who can guide a talented engineering team and push the boundaries of what cloud-native EdTech can be. You will be working with technologies such as Python, Flask, Vue and AWS. This is a fantastic opportunity to come in and shape the future of technology in education. Fancy taking ownership of building scalable Python applications, driving Agile delivery, and designing robust AWS-based systems using modern Infrastructure-as-Code approaches?

Role: Senior Python Developer / Python Software Engineer / Lead Python Developer
Location: 100% work from home / Remote
Salary: £55k - £62k base plus great benefits

What you will bring to the role:

You will champion engineering excellence, define best practices, inspire and mentor developers, and keep innovation at the heart of everything you do – from microservices and RESTful APIs to CI/CD pipelines and containerised deployments. With a strong background in Python development and deep knowledge of AWS, containers and automation, you'll bring both technical expertise and the confidence to steer a team through rapid growth. Experience with serverless, AI, or EdTech is a bonus, but passion and leadership are essential. You must bring experience of full-stack development, including the Flask framework and Vue.

Keen to join a group that cares about delivering real change in education? Click apply and send through a copy of your CV.

Azure Devops Engineer
Infoplus Technologies UK Ltd
Sheffield
Fully remote
Mid - Senior
£500/day - £520/day
RECENTLY POSTED

JD: The role requires an experienced systems engineer with strong technical leadership and collaboration skills. The ideal candidate will have significant experience in cloud platform management and infrastructure delivery pipelines (e.g. Azure, AWS, GCP; scripting in Bash, PowerShell, Python, Terraform, etc.).

In this role you will:

  • Act as a technical SME, designing and developing innovative automated solutions to complex problems utilising cloud environments.
  • Design and support custom-built applications in the Azure environment, ensuring secure, reliable and high-performing deployments.
  • Build and manage Azure infrastructure, including Virtual Machines, VM images, Virtual Networks (VNets), subnets, private endpoints and Azure Storage.
  • Develop and deploy Python functions within an Azure Functions App.
  • Develop Infrastructure-as-Code (IaC) such as ARM templates, Bicep, or Terraform.
  • Support CI/CD practices through deployment automation and version-controlled infrastructure in Azure DevOps.
  • Integrate monitoring, logging and diagnostics for custom applications using Azure Monitor, Application Insights and Log Analytics.
  • Integrate with AI-related Azure services such as OpenAI and contribute to integration strategies involving LLMs.
  • Ensure that custom-built applications are built and maintained in line with client standards, governance and controls, ensuring compliance with SDLC & DEPL controls, AI governance and legal & regulatory requirements.
  • Support and extend an existing architecture in close partnership with the principal architect and core development team.
  • Produce well-documented, maintainable infrastructure configurations and effectively communicate implementation details to engineers and stakeholders.
  • Work within an evolving technical landscape and contribute to the refinement and evolution of architecture and infrastructure decisions.
  • Utilise strong problem-solving skills, with the ability to investigate issues, troubleshoot deployment challenges, and propose scalable and secure solutions.
  • Promote a self-critical culture of continuous assessment and improvement, whereby weaknesses in the bank's control plane (people, process and technology) are brought to light and addressed effectively and in a timely manner.
  • Support engagement of global businesses and functions to drive a global uplift in cyber-security awareness and help evangelise cybersecurity efforts and successes.

To be successful in this role you should meet the following requirements:

  • Experience within an enterprise-scale organisation, including hands-on experience of complex data centre environments in a similar role (e.g. DevOps Engineer, Cloud Engineer, Security Engineer) is mandatory.
  • Expert-level knowledge of one or more leading cloud platforms, including Microsoft Azure, Amazon Web Services, Google Cloud Platform and Alibaba Cloud.
  • Expert-level knowledge and proven experience managing Azure App Services, Azure Virtual Machines, and Azure Storage solutions.
  • Hands-on experience in one or more programming or scripting languages (e.g. Python, PowerShell, Bash, Terraform).
  • Demonstrated experience of building and maintaining CI/CD pipelines to support efficient software delivery.


Lead Enterprise AI Engineer
SMS
Cardiff
Fully remote
Senior
Private salary

Why choose us?

Choosing to work for SMS means choosing to make a difference. We are changing how businesses and consumers use energy for the better, helping achieve a greener, more sustainable and more affordable energy system for everyone. Through our range of innovative energy solutions we are delivering the future of smart energy: working closely with private and public sector partners, we are playing a critical role in transforming and decarbonising the UK economy by 2050.

What’s in it for you?

  • 25 personal holiday days per year (with additional 8 public holidays) increasing to 30 personal days after 5 years of service (includes options to buy and sell)
  • Hybrid working options (for some positions).
  • Enhanced Maternity leave. Paternity and Adoption leave.
  • 24/7 free and confidential employee assistance service.
  • Simply Health plan offering a wide variety of benefits, from cashback on everyday healthcare treatments such as optical, dental and physio, to discounted gym memberships and a free 24/7 online GP.
  • Life Insurance (4 x annual salary)
  • Pension matching scheme (up to 5% of salary)


What’s the role?

Step into a role where strategy meets engineering excellence. As our Lead Enterprise AI Engineer, you’ll be the driving force turning bold business opportunities into production-ready AI solutions. You’ll shape intelligent agents, craft next-generation conversational experiences using Microsoft Copilot and Databricks Mosaic AI, and architect the Semantic Layer that ensures our models deliver accurate, trusted insights every time.

If you’re ready to take ownership of an organisation’s AI future, and to build the systems that make it real, this is your stage.

You will report to the Data, Analytics and AI Director and work remotely on a full-time, 40-hour contract.

Please note that travel is required for face-to-face meetings with your line manager.

Key responsibilities:

AI Solution Development & Agent Building

  • Design, build, and deploy low-code and pro-code AI agents using Microsoft Copilot tools to automate business workflows (e.g., HR queries, IT support, operational data retrieval).
  • Develop custom RAG (Retrieval-Augmented Generation) solutions within Databricks to allow LLMs to reason over proprietary SMS documents and data.
  • Integrate AI agents with enterprise systems (Dynamics 365, SharePoint, etc.) via APIs and Power Automate connectors.
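
The RAG pattern above can be illustrated with a deliberately tiny retrieval step (bag-of-words cosine scoring stands in for the vector search a Databricks deployment would use; the documents and question are hypothetical):

```python
import math
from collections import Counter

def _vector(text):
    # Toy bag-of-words "embedding"
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    """Rank documents by similarity to the query and keep the top k."""
    q = _vector(query)
    return sorted(documents, key=lambda d: _cosine(q, _vector(d)), reverse=True)[:k]

def build_prompt(query, documents):
    """Augment the question with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Annual leave is 25 days plus public holidays.",
    "The VPN requires multi-factor authentication.",
    "Expense claims must be filed within 30 days.",
]
prompt = build_prompt("How many days of annual leave do I get?", docs)
```

A production RAG system would use learned embeddings, a vector index, and chunked documents, but the retrieve-then-augment shape is the same.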

Semantic Modelling & Databricks Genie Curation

  • Own the creation and maintenance of Databricks Genie Spaces. This involves translating complex database schemas into business-friendly semantic models.
  • Define and govern standard metrics, dimensions, and synonyms within Unity Catalog to ensure the AI “speaks the language of the business.”
  • Continuously monitor Genie performance, reviewing “human feedback” on answers to refine the semantic model and improve accuracy over time.
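
To make the semantic-model idea concrete, term-to-column resolution can be pictured as a governed lookup. A toy sketch with hypothetical schema names (in practice this lives in Unity Catalog / Genie configuration, not application code):

```python
# Hypothetical warehouse columns behind business vocabulary
SEMANTIC_MODEL = {
    "revenue": "fact_sales.net_amount_gbp",
    "customers": "dim_customer.customer_id",
}
# Synonyms so the AI "speaks the language of the business"
SYNONYMS = {"income": "revenue", "turnover": "revenue", "clients": "customers"}

def resolve(term):
    """Resolve a business term (or synonym) to its governed column, if any."""
    canonical = SYNONYMS.get(term.lower(), term.lower())
    return SEMANTIC_MODEL.get(canonical)
```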

Business Engagement & Prototyping

  • Partner with business stakeholders (Finance, Operations, Commercial) to decompose high-level use cases into technical requirements.
  • Rapidly prototype AI solutions to demonstrate value and gain quick traction within business units.
  • Act as a technical evangelist, demonstrating to non-technical teams how to interact with Genie spaces and Copilots effectively.

AI Operations (LLMOps) & Governance

  • Implement monitoring frameworks to track the cost, latency, and quality of AI model outputs.
  • Ensure all AI solutions adhere to the organisation’s data governance and security standards (e.g., preventing data leakage via LLMs).
  • Manage the lifecycle of AI models and agents from development through to production and retirement.

To be considered for this role, we would love you to have:

  • Degree in Computer Science, Data Engineering, Artificial Intelligence, or related field, or equivalent industry experience.
  • Deep hands-on experience with Azure/Microsoft Fabric ecosystem (specifically Copilot Studio & Power Platform) and Databricks (SQL, Unity Catalog, Mosaic AI).
  • Strong ability to design data models for analytics. Experience with defining metrics layers (e.g., DBX semantic layer or Databricks Genie) is essential.
  • Practical experience with Large Language Models (LLMs), Prompt Engineering, and RAG architectures.
  • Proficient in Python (for data manipulation and API interaction) and SQL (for data modelling).
  • Experience working with REST APIs and connecting disparate business systems.
  • Ability to explain technical AI concepts to non-technical business users and translate their feedback into code.

#LI-Remote

Senior Customer Deployment Specialist
Head Resourcing
Edinburgh
Remote or hybrid
Senior
£45,000 - £65,000

Location: UK (Remote with occasional travel to UK & Europe)
Sector: Healthcare / Medical Technology / AI

About the Company

Our MedTech client helps healthcare organisations unlock the value of AI by providing access to a broad portfolio of market-leading imaging and operational AI solutions through a proven, enterprise-grade technology platform.
Seamlessly integrated into existing clinical systems, the platform simplifies the deployment, management, and scaling of both third-party and custom AI applications, reducing implementation time, cost, and ongoing operational overhead for healthcare providers.

The Role

The Senior Customer Deployment Specialist plays a critical role within the Customer Operations function, leading complex customer implementations and accelerating adoption of the platform across clinical environments.
This is a hands-on, customer-facing position requiring deep technical expertise in healthcare IT, clinical system integration, and cloud infrastructure. You will own deployments end-to-end, working closely with clinical, technical, and non-technical stakeholders to ensure high-quality, secure, and timely delivery.

Key Responsibilities

Software Deployment & Configuration

  • Lead complex platform and third-party application deployments from initiation through post-implementation review
  • Configure and optimise deployments to meet performance, security, and customer-specific requirements
  • Champion deployment best practices aligned with regulatory, quality, and compliance standards

Architecture & Technical Leadership

  • Design and oversee enterprise healthcare IT architectures integrating PACS, RIS, EMR, AI solutions, and cloud-native services
  • Lead implementation of HL7 v2 and DICOM workflows, with future expansion to FHIR standards
  • Share deployment learnings with Solution Architects to continuously improve delivery standards

Technical Stakeholder Engagement

  • Work closely with customers and partners to gather technical and integration requirements
  • Act as a trusted subject matter expert, building confidence through clear technical guidance

Platform Management

  • Maintain and support Windows Server and Linux-based systems, ensuring platform stability, performance, and security

Healthcare Data Standards & Integration

  • Enable seamless data exchange between imaging modalities, hospital systems, and external partners using DICOM and HL7 v2
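
For context, an HL7 v2 message is a set of pipe-delimited segments separated by carriage returns. A toy parser (illustrative only; real HL7 handling covers escaping, repetition, and component sub-fields, and the sample message is hypothetical):

```python
def parse_hl7(message):
    """Split an HL7 v2 message into segments keyed by segment ID."""
    segments = {}
    for line in message.strip().split("\r"):
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

# Hypothetical, heavily simplified ADT^A01 (patient admission) message
msg = (
    "MSH|^~\\&|PACS|HOSP|RIS|HOSP|202401011200||ADT^A01|12345|P|2.3\r"
    "PID|1||MRN001||Smith^John"
)
parsed = parse_hl7(msg)
```

In practice an integration engine (e.g. Mirth) handles this parsing, but the segment/field structure above is what flows over the wire.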

Interoperability & Web Services

  • Implement APIs and web services to support interoperability across healthcare systems

Tooling & Scripting

  • Develop and maintain scripts (Python, Bash, PowerShell) for automation, monitoring, and integration
  • Create and improve internal tooling to enhance deployment consistency, efficiency, and security

Troubleshooting

  • Respond rapidly to system issues, conduct root cause analysis, and implement corrective actions

Collaboration & Mentorship

  • Partner with Product, Engineering, and Customer Success teams to influence deployment readiness
  • Mentor junior team members and promote a culture of continuous improvement

Documentation & Compliance

  • Maintain detailed technical and deployment documentation
  • Adhere to all information security and acceptable use policies

Essential Experience & Qualifications

  • Significant experience in Healthcare IT, including enterprise software implementation
  • Degree in Computer Science, Software Engineering, or equivalent industry experience
  • Proven leadership delivering large-scale, complex deployments
  • Strong understanding of clinical workflows and system integration
  • Experience deploying software in virtualised environments (VMware, Hyper-V)

Technical Expertise

Microsoft Technologies

  • Expert-level Windows Server (2016+), Active Directory, scripting, and security

Linux Technologies

  • Expert-level Linux (Ubuntu, Red Hat, CentOS) including scripting and security
  • Strong experience configuring GPU resources in virtualised environments

Containers

  • Deep experience deploying Dockerised applications using Docker and Docker Compose

Healthcare Standards

  • Strong knowledge of DICOM, HL7 v2, VNA, PACS, and RIS systems

Web Services & APIs

  • RESTful services, XML, JSON

Cloud Infrastructure

  • Hands-on experience with AWS, Azure, or GCP (VMs, networking, security, storage)

Networking & Security

  • Solid understanding of networking, firewalls, VPNs, and healthcare data security
  • Experience working in regulated industries

Desirable

  • Cloud certifications (AWS, Azure, or GCP)
  • Kubernetes deployment and operations experience
  • Integration engines (e.g. Mirth, Rhapsody)
  • Infrastructure as Code (Terraform, ARM, Bicep)
  • Configuration management and monitoring tools

Personal Attributes

  • Strong ownership mindset with the ability to drive delivery independently
  • Excellent communication and stakeholder management skills
  • Structured, analytical, and detail-oriented approach
  • Collaborative, curious, and committed to continuous learning
  • Comfortable managing multiple projects and priorities

Deployments are primarily remote, with occasional travel to customer sites in the UK and Europe (historically limited).

OpenShift Architecture and Migration Design Specialist
Infoplus Technologies UK Ltd
Sheffield
Remote or hybrid
Senior - Leader
£450/day - £520/day

Skills: OCP, Ansible, IaC

We are seeking an experienced OpenShift Architecture and Migration Design Specialist to lead the design, planning, and execution of OpenShift architectures and migration strategies. The ideal candidate will have expertise in designing robust, scalable, and secure OpenShift environments, as well as creating and implementing migration plans for transitioning workloads and applications to OpenShift. Experience with VMware and Pure Storage is essential to ensure seamless integration with existing infrastructure.

Key Responsibilities:

1. Architecture Design:
  • Design the target architecture for OpenShift, including cluster topology, networking, and storage solutions.
  • Define and implement best practices for OpenShift cluster setup, including multi-zone and multi-region deployments.
  • Ensure the architecture supports high availability, fault tolerance, and disaster recovery.

2. Migration Design and Optimization:
  • Assess existing infrastructure, applications, and workloads to determine migration readiness.
  • Develop detailed migration plans, including strategies for containerization, workload transfer, and data migration.
  • Implement migration processes, ensuring minimal downtime and disruption to business operations.
  • Identify and mitigate risks associated with the migration process.

3. VMware and Pure Storage Integration Design:
  • Design and implement OpenShift solutions that integrate seamlessly with VMware virtualized environments.
  • Leverage VMware tools (e.g., vSphere, vCenter, NSX) to optimize OpenShift deployments.
  • Configure and manage Pure Storage solutions (e.g., FlashArray, FlashBlade) to provide high-performance, scalable storage for OpenShift workloads.
  • Ensure compatibility and performance optimization between OpenShift, VMware, and Pure Storage.

4. CI/CD Pipelines and DevOps Workflows:
  • Design and implement CI/CD pipelines tailored for the OpenShift environment.
  • Integrate DevOps workflows with OpenShift-native tools and third-party solutions.
  • Automate deployment, scaling, and monitoring processes to streamline application delivery.

5. Scalability and Security:
  • Ensure the architecture and migration plans are scalable to meet future growth and workload demands.
  • Implement security best practices, including role-based access control (RBAC), network policies, and encryption.
  • Conduct regular security assessments and audits to maintain compliance with organizational standards.

6. Collaboration and Documentation:
  • Work closely with development, DevOps, and operations teams to align architecture and migration plans with business needs.
  • Provide detailed documentation of the architecture, migration strategies, workflows, and configurations.
  • Offer technical guidance and training to teams on OpenShift architecture, migration, and best practices.

Required Skills and Qualifications:

  • Strong experience in designing and implementing OpenShift architectures and migration strategies.
  • In-depth knowledge of Kubernetes, containerization, and orchestration.
  • Expertise in VMware tools and technologies (e.g., vSphere, vCenter, NSX).
  • Hands-on experience with Pure Storage solutions (e.g., FlashArray, FlashBlade).
  • Expertise in networking concepts (e.g., ingress, load balancing, DNS) and storage solutions (e.g., persistent volumes, dynamic provisioning).
  • Hands-on experience with CI/CD tools (e.g., Jenkins, GitHub, ArgoCD) and DevOps workflows.
  • Strong understanding of high availability, scalability, and security principles in cloud-native environments.
  • Proven experience in workload and application migration to OpenShift or similar platforms.
  • Proficiency in scripting and automation (e.g., Bash, Python, Ansible, Terraform).
  • Excellent problem-solving and communication skills.

Preferred Qualifications:

  • OpenShift certifications (e.g., Red Hat Certified Specialist in OpenShift Administration).
  • Experience with multi-cluster and hybrid cloud OpenShift deployments.
  • Familiarity with monitoring and logging tools (e.g., OTel, Grafana, Splunk stack).
  • Knowledge of OpenShift Operators and Helm charts.
  • Experience with large-scale migration projects.

AI Infrastructure Architect
MicroTECH Global Ltd
Edinburgh
Remote or hybrid
Mid - Senior
Private salary

Responsibilities:

Design a unified AI Infra & Serving architecture platform for composite AI workloads such as LLM training & inference, RLHF, agents, and multimodal processing. This platform will integrate inference, orchestration, and state management, defining the technical evolution path for Serverless AI + Agentic Serving.

Design a heterogeneous execution framework across CPU/GPU/NPU for agent memory, tool invocation, and long-running multi-turn conversations and tasks. Build an efficient memory/KV-cache/vector store/logging and state-management subsystem to support agent retrieval, planning, and persistent memory.

Build a high-performance Runtime/Framework that defines the next-generation Serverless AI foundation through elastic scaling, cold start optimization, batch processing, function-based inference, request orchestration, dynamic decoupled deployment, and other features to support performance scenarios such as multiple models, multi-tenancy, and high concurrency.

Key Requirements:

  • Strong foundational knowledge of system architecture, computer architecture, operating systems, and runtime environments
  • Hands-on experience with Serverless architectures and cloud-native optimization technologies such as containers, Kubernetes, service orchestration, and autoscaling
  • Familiarity with mainstream LLM inference and serving frameworks (e.g., vLLM, SGLang, Ray Serve); understanding of common optimization concepts such as continuous batching, KV-Cache reuse, parallelism, and compression/quantization/distillation
  • Proficient in using Profiling/Tracing tools; experienced in analyzing and optimizing system-level bottlenecks regarding GPU utilization, memory/bandwidth, Interconnect Fabric, and network/storage paths
  • Proficient in at least one system-level language (e.g., C/C++, Go, Rust) and one scripting language (e.g., Python)
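
The "KV-Cache reuse" concept in the requirements above can be illustrated with a toy prefix cache (a hedged sketch: real serving stacks such as vLLM cache per-token attention key/value tensors in paged blocks, whereas here the computation is just a stand-in):

```python
class ToyKVCache:
    """Toy prefix cache showing the reuse pattern behind KV-cache sharing."""

    def __init__(self):
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get_or_compute(self, prefix_tokens, compute):
        # Requests sharing a prompt prefix reuse the cached result
        key = tuple(prefix_tokens)
        if key in self.store:
            self.hits += 1
        else:
            self.misses += 1
            self.store[key] = compute(prefix_tokens)
        return self.store[key]

cache = ToyKVCache()
compute = lambda tokens: [hash(t) for t in tokens]  # stand-in for attention K/V
cache.get_or_compute(["system", "prompt"], compute)
cache.get_or_compute(["system", "prompt"], compute)  # second request: cache hit
```

The same shared-prefix idea is what makes multi-tenant, high-concurrency serving economical.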
Senior Network Architect GCP (Virtual Data Center)
Gazelle Global Consulting Ltd
West Midlands
Remote or hybrid
Senior
Private salary

Senior GCP VDC Network Engineer

Public Cloud Platform | Enterprise Scale

We are recruiting a Senior GCP VDC Network Engineer to join a Public Cloud Platform function responsible for delivering compliant, secure, and efficient Google Cloud infrastructure and DevOps capabilities across the Group.

This role sits at the heart of large-scale Google Cloud adoption. You will help design and build reusable, enterprise-grade GCP network products, modernise cloud network services, and enable engineering teams to adopt Google Cloud quickly, safely, and at scale.

You will operate as part of a cross-disciplinary feature team while also acting as a senior technical authority within the wider engineering community.

Role Overview

As a GCP VDC Network Specialist, you will design and build the foundational network layer that underpins enterprise workloads on Google Cloud. This includes VDC network architecture, hybrid connectivity, policy enforcement, and automation using Infrastructure as Code and DevOps practices.

This is a hands-on senior engineering role with a strong consulting element. You are expected to influence design decisions, guide strategy, and set standards, not just execute tickets.

Key Responsibilities

  • Design, implement, and maintain enterprise-scale GCP Virtual Data Centre (VDC) network architectures.
  • Build and manage VPCs, subnets, firewall rules, routing, and VPC peering to enable secure, scalable connectivity.
  • Implement hybrid connectivity using Cloud VPN and Interconnect to support on-prem and multi-cloud integration.
  • Develop and maintain Infrastructure as Code for GCP network resources using Terraform and Terraform Cloud.
  • Automate network provisioning and configuration using Python scripting.
  • Define and enforce GCP Organisation Policies to meet security, compliance, and governance requirements.
  • Integrate network deployments into CI/CD pipelines for automated build, test, and release.
  • Implement policy-as-code guardrails using Sentinel or OPA to ensure consistent network governance.
  • Optimise network performance, resilience, and availability through monitoring, logging, and proactive tuning.
  • Partner with security teams to embed network security best practices, including firewall design, private access, and service perimeters.
  • Support migration of legacy network designs into standardised, reusable VDC templates.
  • Diagnose and resolve complex, multi-layer network issues across GCP environments.
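
As one small example of the Python automation mentioned above, a stdlib sketch that pre-flights planned subnet CIDRs for overlaps before provisioning (the address ranges are hypothetical, not the Group's design):

```python
import ipaddress
from itertools import combinations

def find_overlaps(subnets):
    """Return pairs of CIDR ranges that overlap -- a common pre-flight
    check before provisioning VPC subnets."""
    nets = {cidr: ipaddress.ip_network(cidr) for cidr in subnets}
    return [
        (a, b)
        for a, b in combinations(subnets, 2)
        if nets[a].overlaps(nets[b])
    ]

planned = ["10.0.0.0/24", "10.0.1.0/24", "10.0.0.128/25"]  # hypothetical plan
overlaps = find_overlaps(planned)  # the /25 sits inside the first /24
```

In a real pipeline a check like this would run in CI before Terraform applies the network change.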

Essential Skills and Experience

  • Strong, hands-on experience with GCP networking, including VPCs, subnets, firewall rules, routing, and peering.
  • Proven expertise in hybrid connectivity, specifically Cloud VPN and Interconnect.
  • Advanced experience using Terraform and Terraform Cloud for network IaC.
  • Python scripting for infrastructure automation.
  • Experience integrating infrastructure workflows into CI/CD pipelines using tools such as Jenkins, GitHub, or Harness.
  • Solid understanding of GCP Organisation Policy and policy-as-code frameworks such as Sentinel or OPA.
  • GCP certification, ideally Professional Cloud Network Engineer or equivalent.

Nice to have

  • Experience working with internal developer platforms or cloud engineering portals such as Backstage.

Desirable Profile

  • Senior-level GCP SME with experience operating as a technical consultant, influencing architecture, design, and cloud strategy.
  • Demonstrated thought leadership in cloud networking, automation, and platform engineering best practices.
  • Strong communicator and collaborator, comfortable working across engineering, security, and senior stakeholder groups.
  • Able to balance engineering rigour with pragmatism in a regulated enterprise environment.

Technical Pricing Manager
Gerrard White
Peterborough
Remote or hybrid
Senior
Private salary

Job Title: Technical Pricing Manager

Location: A large portion of the team is based in Peterborough; however, we are happy to take a largely remote approach to this role, with occasional travel if you are not local.

Role purpose

We are looking for a Technical Pricing Manager to generate incremental lifetime value from our portfolio through the delivery and development of retail pricing models and optimisations, using innovative, cutting-edge modelling approaches.

You will help continuously improve the pricing process and enhance the capabilities of the wider team, and will be involved in integrating and establishing advanced data science and statistical techniques to enhance pricing model accuracy and output.

Key Responsibilities

  • End to end production of pricing models using a tailor-made pricing pipeline
  • Use of Earnix to build predictive statistical models and intelligently optimise customer prices
  • Contribute and implement improvements to the pricing process to increase pricing performance and efficiency
  • Contribute and lead research and development opportunities to help innovate and improve current modelling and pricing methodologies
  • Evaluate and utilise tools and data items created by the data science teams
  • Ensure all activity is compliant with pricing governance and follows established controls
  • Work closely with the Commercial Pricing Team to ensure pricing models meet business objectives, and manage relationships with key stakeholders around the business
  • Manage, mentor and coach more junior members of the team
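
To make the price-optimisation idea concrete, here is a stylised grid search over a toy demand curve (hypothetical numbers and a deliberately naive model; tools like Earnix fit demand and optimise under commercial and regulatory constraints far more rigorously):

```python
def expected_profit(price, cost, demand):
    # Expected profit per quote: margin times probability of conversion
    return (price - cost) * demand(price)

def best_price(prices, cost, demand):
    """Grid-search the candidate price that maximises expected profit."""
    return max(prices, key=lambda p: expected_profit(p, cost, demand))

# Toy linear demand: conversion probability falls as price rises
demand = lambda price: max(0.0, 1.0 - price / 500.0)
optimum = best_price(range(200, 451, 25), cost=150, demand=demand)
```

Analytically this toy profit curve peaks at a price of 325, which the grid search recovers.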

About you:

  • Highly numerate with a graduate or postgraduate degree in Statistics, Mathematics or another analytical subject
  • Experience in a pricing or actuarial role within general insurance
  • Experience with price optimisation tools (Earnix/Radar)
  • Experience using and implementing advanced machine learning methods
  • Able to communicate complicated statistical concepts to an informed but non-technical audience
  • Experience with using software packages such as R or Python to solve problems
  • Proven ability to deliver commercial value through pricing insight
  • Proven ability to provide commercial uplift from research and development projects
  • Strong people management skills

Cyber Asset Management Engineer
Randstad Digital
Edinburgh
Remote or hybrid
Mid - Senior
Private salary

Cyber Asset Management Engineer

Remote (must be able to travel to Edinburgh when required)
12-month contract | Strong chance to go permanent
IR35: Inside
Rate: Open - focused on the right person

The Role

This is a hands-on cyber engineering role focused on asset visibility, integrations, automation, and real risk reduction - not just reporting.

You’ll help the business:

  • Know exactly what assets exist
  • Identify security gaps
  • Automate fixes wherever possible

What You’ll Do

Build full visibility across:

  • Devices
  • Users
  • Cloud & SaaS

Engineer solutions by:

  • Integrating systems via APIs
  • Connecting security tooling data
  • Creating dashboards & automations
  • Driving remediation of security gaps

Work with tools like SIEM, EDR, Vulnerability Management, CSPM and IAM
Automate using Python or PowerShell
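The asset-gap detection described above amounts to reconciling inventories from different tools. The sketch below is a toy stand-in, not any specific SIEM or EDR API: it compares a directory-of-record inventory against hosts an EDR agent reports on, flagging devices with no security coverage.

```python
# Hypothetical asset-coverage check. Hostnames are invented; in a real
# integration both lists would come from tool APIs (CMDB, EDR console).

def unmanaged_hosts(inventory, edr_hosts):
    """Return inventory hostnames that no EDR agent reports on."""
    covered = {h.lower() for h in edr_hosts}  # case-insensitive match
    return sorted(h for h in inventory if h.lower() not in covered)

cmdb = ["LAPTOP-01", "SRV-WEB-01", "SRV-DB-01"]
edr = ["laptop-01", "srv-web-01"]
print(unmanaged_hosts(cmdb, edr))  # ['SRV-DB-01']
```

Feeding a list like this into a ticketing or automation workflow is what turns asset visibility into the "automated fixes" the role aims for.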

Success Looks Like

  • Higher asset coverage
  • Fewer unknown devices
  • Automated detection of gaps
  • Automated / semi-automated fixes
  • Clear dashboards for leadership

Who They Want

  • Hands-on, proactive engineer
  • Comfortable with complex environments
  • Spots problems and solves them
  • Self-directed and organised
  • Able to document and explain work clearly

Interviews

Stage 1 - Technical with engineers
Stage 2 - Culture fit with the Senior Director

Randstad Technologies is acting as an Employment Business in relation to this vacancy.

Senior Data Engineer (AWS, Airflow, Python)
Triad
London
Remote or hybrid
Senior
£60,000 - £65,000

Based at client locations, working remotely, or based in our Godalming or Milton Keynes offices.
Salary up to £65k plus company benefits.

About Us

Triad Group Plc is an award-winning digital, data, and solutions consultancy with over 35 years’ experience primarily serving the UK public sector and central government. We deliver high-quality solutions that make a real difference to users, citizens and consumers.

At Triad, collaboration thrives, knowledge is shared, and every voice matters. Our close-knit, supportive culture ensures you’re valued from day one. Whether working with cutting-edge technology or shaping strategy for national-scale projects, you’ll be trusted, challenged, and empowered to grow.

We nurture learning through communities of practice and encourage creativity, autonomy, and innovation. If you’re passionate about solving meaningful problems with smart and passionate people, Triad could be the place for you.

Glassdoor score of 4.7

96% of our staff would recommend Triad to a friend

100% CEO approval

See for yourself some of the work that makes us all so proud:

Helping law enforcement with secure intelligence systems that keep the UK safe

Supporting the UK’s national meteorological service in leveraging supercomputers for next-level weather forecasting

Assisting a UK government department responsible for consumer product safety with systems to track unsafe products

Powering systems that help the government monitor and reduce greenhouse gas emissions from commercial transport

Role Summary

Triad is seeking a Senior Data Engineer to play a key role in delivering high-quality data solutions across a range of client assignments, primarily within the UK public sector. You will design, build, and optimise cloud-based data platforms, working closely with multidisciplinary teams to understand data requirements and deliver scalable, reliable, and secure data pipelines. This role offers the opportunity to shape data architecture, influence technical decisions, and contribute to meaningful, data-driven outcomes.

Key Responsibilities

Design, develop, and maintain scalable data pipelines to extract, transform, and load (ETL) data into cloud-based data platforms, primarily AWS.

Create and manage data models that support efficient storage, retrieval, and analysis of data.

Utilise AWS services such as S3, EC2, Glue, Aurora, Redshift, DynamoDB and Lambda to architect and maintain cloud data solutions.

Maintain modular Terraform-based IaC for reliable provisioning of AWS infrastructure.

Develop, optimise and maintain robust data pipelines using Apache Airflow.

Implement data transformation processes using Python to clean, preprocess, and enrich data for analytical use.

Collaborate with data analysts, data scientists, developers, and other stakeholders to understand and integrate data requirements.

Monitor, optimise, and tune data pipelines to ensure performance, reliability, and scalability.

Identify data quality issues and implement data validation and cleansing processes.

Maintain clear and comprehensive documentation covering data pipelines, models, and best practices.

Work within a continuous integration environment with automated builds, deployments, and testing.
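The responsibilities above follow a classic extract-transform-load shape. The sketch below is a self-contained, in-memory stand-in to show that flow: in the actual role each function would be an Airflow task reading from and writing to AWS services (S3, Redshift, DynamoDB), and the data here is invented.

```python
# Minimal ETL sketch. In production, extract() would read from S3 or a
# source API, and load() would be a Redshift COPY or DynamoDB batch write.

def extract():
    # Stand-in for pulling raw records from a source system.
    return [
        {"id": "1", "amount": " 10.50 "},
        {"id": "2", "amount": "bad-value"},
        {"id": "3", "amount": "7"},
    ]

def transform(rows):
    # Clean and validate: parse numerics, drop rows that fail.
    clean = []
    for row in rows:
        try:
            clean.append({"id": row["id"], "amount": float(row["amount"])})
        except ValueError:
            continue  # in a real pipeline: quarantine and log the row
    return clean

def load(rows, sink):
    # Stand-in for writing validated rows to the warehouse.
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded)  # 2 rows survive validation
```

Keeping extract, transform, and load as separate pure functions is what makes each step independently testable and easy to map onto Airflow tasks.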

Skills and Experience

  • Strong experience designing and building data pipelines on cloud platforms, particularly AWS.
  • Excellent proficiency in developing ETL processes and data transformation workflows.
  • Strong SQL skills (PostgreSQL) and advanced Python coding capability (essential).
  • Experience working with AWS services such as S3, EC2, Glue, Aurora, Redshift, DynamoDB and Lambda (essential).
  • Understanding of Terraform codebases to create and manage AWS infrastructure.
  • Experience developing, optimising, and maintaining data pipelines using Apache Airflow.
  • Familiarity with distributed data processing systems such as Spark or Databricks.
  • Experience working with high-performing, low-latency, or large-volume data systems.
  • Ability to collaborate effectively within cross-functional, agile, delivery-focused teams.
  • Experience defining data models, metadata, and data dictionaries to ensure consistency and accuracy.

Qualifications & Certifications

  • A degree or equivalent qualification in Computer Science, Data Science, or a related discipline (desirable).
  • Due to the nature of this position, you must be willing and eligible to achieve a minimum of SC clearance. To be eligible, you must have been a resident in the UK for a minimum of 5 years and have the right to work in the UK.

Triad’s Commitment to You

As a growing and ambitious company, Triad prioritises your development and well-being:

  • Continuous Training & Development: Access to top-rated Udemy Business courses.
  • Work Environment: Collaborative, creative, and free from discrimination.
  • Benefits:
    • 25 days of annual leave, plus bank holidays.
    • Matched pension contributions (5%).
    • Private healthcare with Bupa.
    • Gym membership support or Lakeshore Fitness access.
    • Perkbox membership.
    • Cycle-to-work scheme.

What Our Colleagues Have to Say

Please see for yourself on Glassdoor and our “Day in the Life” videos at the bottom of our Careers Page.

Our Selection Process

After applying for the role, our in-house talent team will contact you to discuss Triad and the position. If shortlisted, you will be invited for:

  1. A technical test including numerical, logical and verbal reasoning
  2. A technical interview with our consultants
  3. A management interview to assess cultural fit

We aim to complete interviews and progress candidates to offer stage within 2-3 weeks of the initial conversation.

Other Information

If this role is of interest to you or you would like further information, please contact Ryan Jordan and submit your application now.

Triad is an equal opportunities employer and welcomes applications from all suitably qualified people regardless of sex, race, disability, age, sexual orientation, gender reassignment, religion, or belief. We are proud that our recruitment process is inclusive and accessible to disabled people who meet the minimum criteria for any role. Triad is a signatory to the Tech Talent Charter and a Disability Confident Leader.

Trainee AI Engineer Placement Programme
ITOL Recruit
Multiple locations
Fully remote
Graduate
£30,000 - £45,000

Trainee AI Engineer – No Experience Needed. Future-proof your career in Artificial Intelligence – starting today. Looking for a career change? Currently employed but want something better? Or maybe you're between jobs and ready for a fresh start? ITOL Recruit's AI Traineeship is designed to get you into one of the fastest-growing industries with zero experience required. Train online at your own pace and land your first AI Engineer role in 1-3 months. Please note this is a training course and fees apply. Job guaranteed - complete the programme and get a job or get your money back. Our candidates earn £30,000-£45,000.

Why AI? AI is reshaping every industry you can think of. Healthcare, finance, retail, and manufacturing – they’re all scrambling for skilled professionals. The demand far outstrips supply, which means excellent salaries, flexible working arrangements, and genuine job security.

How It Works

  1. AI Engineering Fundamentals – Start with the basics of AI, including neural networks and large language models, to build a solid foundation in AI engineering.
  2. Data Fundamentals – Understand the data workflow, from collection to cleaning, and learn how to prepare data for AI applications.
  3. Notebooks & IDEs – Get hands-on with industry-standard tools like Jupyter Notebooks and VS Code to develop AI systems.
  4. Python Programming – Master Python, covering everything from the basics to object-oriented programming (OOP).
  5. Python Streamlit Project – Apply your Python skills by building a car price prediction app using Python and Streamlit.
  6. Python for Data – Learn essential Python libraries like NumPy, Pandas, and Matplotlib for data manipulation and visualisation.
  7. AI Sentiment Analysis Project – Work with Hugging Face to build a sentiment analysis classifier using real-world AI techniques.
  8. AI Prompt Engineering – Master prompt engineering, learning how to craft effective prompts for controlling AI outputs.
  9. Retrieval-Augmented Generation (RAG) – Learn how to integrate external knowledge into AI systems using RAG techniques and vector databases.
  10. AI Specialised Customer Service Chatbot Project – Combine prompt engineering and RAG to build an AI-powered customer service chatbot, delivering intelligent responses using vector databases and knowledge bases.
  11. Machine Learning Fundamentals – Understand machine learning principles and algorithms, and how to train and test models using scikit-learn.
  12. Machine Learning Project – Put your machine learning knowledge into practice with a hands-on project.
  13. AI & Data Ethics – Study the ethical considerations in AI, including issues of bias, fairness, and data privacy.
  14. Oral Exam – Complete a virtual oral exam to assess your understanding and ability to apply your learning.
  15. AWS Certified Cloud Practitioner – Finish with the AWS Certified Cloud Practitioner course and exam to gain essential cloud computing knowledge.

What You Get

  • 100% online, self-paced training
  • Microsoft AI-900 certification included
  • 1-to-1 tutor and recruitment support
  • Real-world project experience
  • Job guarantee – get a job or your money back
  • Starting salary of £30,000–£45,000

We Get You Hired! We're not new to this. ITOL Recruit has 15+ years of experience and has placed over 5,000 people into new roles. Our job programmes include certified tutors, UK-accredited qualifications, and one-on-one support from a recruitment adviser focused on placing you. We don't believe in empty promises. Complete our programme, follow the process, and if you don't land a job, you get your money back.

"Five months from complete beginner to AI engineer. Best decision I ever made." – Jamie W., now working as a Junior AI Engineer in London

Ready to Start? If you’re motivated, curious, and excited about technology, we’ll help you turn that into a career you can be proud of. Apply now, and one of our expert Career Advisors will be in touch within 4 working hours to guide you through your next steps.
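Sentiment analysis, one of the course projects mentioned above, uses a Hugging Face model in the programme itself; the toy lexicon-based classifier below is only a stand-in to show the input/output shape of the task without downloading a model. The word lists are illustrative.

```python
# Toy sentiment classifier: counts positive vs negative words.
# A real project would use a pretrained transformer instead.

POSITIVE = {"great", "love", "excellent", "good"}
NEGATIVE = {"bad", "hate", "terrible", "poor"}

def classify(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "POSITIVE"
    if score < 0:
        return "NEGATIVE"
    return "NEUTRAL"

print(classify("I love this course, the tutors are great"))  # POSITIVE
print(classify("terrible experience"))  # NEGATIVE
```

A pretrained model replaces the hand-built word lists with learned weights, but the interface — text in, label out — is the same.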

Geospatial Software Engineer
ISR RECRUITMENT LIMITED
Manchester
Remote or hybrid
Mid - Senior
£60,000 - £90,000

The Opportunity:

You’ll join an experienced, collaborative consultancy team delivering greenfield, enterprise-scale digital services for high-profile public and private sector clients. This opportunity is ideal for a practical, adaptable Geospatial Full Stack Engineer who enjoys working across disciplines and solving complex problems and challenges that will have a real-world impact. Collaboration sits at the heart of how our client operates, so you’ll be partnering closely with colleagues across Software Engineering, User-Centred Design, Delivery Management, Data Science and Live Services to deliver outcomes that genuinely make a difference in today’s society.

As a consultancy, they are technology-agnostic by design, focusing on choosing the right tools for each problem, rather than forcing one stack everywhere. Their teams regularly work with .NET, Java, Python, Node.js, AWS and Azure, giving you genuine scope to broaden your skills and develop your career across a range of languages and platforms. Many of their projects also involve Geographic Information Systems (GIS) and open-source geospatial technologies, helping clients unlock the value of location-based data through mapping, spatial analysis and data-driven decision making.

Skills and Experience:

Essential

  • 3+ years’ experience in a Full Stack Engineering role
  • Strong development skills in .NET, Java or Python, alongside modern JavaScript frameworks/libraries
  • Experience working in Agile environments (Scrum, Kanban, TDD)
  • Solid understanding of architectural and design patterns, including microservices and serverless
  • Hands-on experience designing and delivering solutions on AWS or Azure
  • Experience working with GIS systems or geospatial data, and familiarity with tools such as Leaflet, OpenLayers, QGIS, GeoServer, PostGIS, etc.
  • A collaborative mindset and experience working in multi-disciplinary teams

Desirable

  • Experience working in a consultancy environment
  • Exposure to public sector projects
  • Familiarity with CI/CD tooling (e.g. Jenkins, Terraform)
  • Awareness of the Digital Service Standard and Technology Code of Practice, particularly in geospatial or public sector contexts

Role and Responsibilities:

This is a varied role suited to someone who enjoys the pace, responsibility and collaboration of consultancy. You will be involved with the following types of activity:

  • Design and deliver high-quality solutions: building, enhancing and maintaining software, infrastructure and deployment pipelines that are robust, secure and scalable. Projects may include solutions involving geospatial data, GIS platforms and open-source mapping tools.
  • Work collaboratively across disciplines: partnering with Senior and Lead Engineers, Delivery Managers, Designers and Data Scientists to shape solutions, contribute to technical documentation and deliver against agreed plans.
  • Apply standards and best practice: follow established engineering approaches, contribute accurate technical estimates and proactively identify and escalate risks or issues.
  • Communicate clearly and build relationships: present ideas, prototypes and progress updates to stakeholders, while building strong working relationships with colleagues, clients and partner organisations.

Applications:

Please contact Edward here at ISR to learn more about our client and how they are leading the way in developing the next generation of technical solutions through innovation and transformational technology.
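To illustrate the geospatial side of the role above: the most common spatial query — is a point inside a region? — is what PostGIS's `ST_Contains` answers at database scale. Below is a minimal pure-Python ray-casting sketch of the same test, with invented coordinates; real projects would use PostGIS or a library like Shapely rather than hand-rolled geometry.

```python
# Ray-casting point-in-polygon test: count how many polygon edges a
# horizontal ray from the point crosses; an odd count means "inside".

def point_in_polygon(x, y, polygon):
    """Is (x, y) inside `polygon`, given as a list of (x, y) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]  # wrap to close the ring
        if (y1 > y) != (y2 > y):  # edge spans the ray's y-level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))  # True
print(point_in_polygon(5, 2, square))  # False
```

The same check over millions of geometries, with spatial indexing, is exactly what PostGIS and GeoServer provide out of the box.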

Junior Data Analyst
Newto Training
Multiple locations
Remote or hybrid
Junior
Private salary

Ready to start your career as a Data Analyst?

The demand for skilled data professionals in the UK is booming - and organisations are searching for people who can turn raw data into meaningful insight. If you’re looking for a career with purpose and strong growth, our Data Analyst Career Programme is built for you, with a job guarantee on completion.

Why this programme matters

We focus on equipping you with both the tools and the real-world experience you need to hit the ground running. With industry-recognised certifications, live instruction and project work you’ll be ready for business challenges from day one.

What you’ll get:

  • Seven training modules, covering Excel, SQL, Python, Tableau, Power BI and more.
  • Three official certifications: Microsoft Azure Data Fundamentals, Microsoft Power BI Data Analyst Associate and Microsoft Azure AI Fundamentals.
  • Real-world project work to enhance your CV and show our end employers you can deliver.
  • Job guarantee: If you complete the programme and don’t receive a job offer, we’ll refund 100% of your course fee.

Your investment:

  • Course cost: £2,795
  • Payment plan: From £232.91 per month (interest-free)

No prior tech-job experience? No problem.

You don’t need to come from a data background. If you bring curiosity, communication skills, and a willingness to learn, this programme will equip you for a transition into a demanding and rewarding role.

Take the next step now.

Click ‘Apply Now’ and embark on a career where data drives decisions, and you drive your future.

Full Stack Developer - Typescript
Computer Futures
Coventry
Fully remote
Senior
£70,000 - £90,000

Job Title: Senior Full Stack Developer (TypeScript, Node.js, AWS)

Location: Remote (must be UK citizen)
Contract Type: Permanent
Experience Level: 10-20 years

About the Role

We are seeking an exceptional Senior Full Stack Developer with a proven track record in designing and delivering scalable, high-performance applications. This role requires deep technical expertise, strong architectural skills, and the ability to collaborate effectively across teams.

Key Responsibilities

  • Design, develop, and maintain robust full-stack applications and services.
  • Architect and implement scalable cloud-based solutions leveraging AWS.
  • Optimise system performance, reliability, and security.
  • Collaborate with developers, DevOps engineers, and product managers to deliver high-quality solutions.
  • Conduct code reviews and mentor team members to uphold best practices.
  • Drive continuous improvement through automation and modern development methodologies.
  • Troubleshoot and resolve complex technical issues efficiently.

Essential Skills & Experience

  • TypeScript expertise is mandatory.
    If you do not have strong, demonstrable experience with TypeScript, your CV will not progress beyond initial screening.
  • Minimum 10 years of hands-on software development experience (10-20 years preferred).
  • Strong back-end development skills using Node.js.
  • Proven experience with AWS and cloud-based architectures.
  • Full-stack proficiency with modern frameworks (e.g., React).
  • Solid understanding of software architecture, design principles, and microservices.
  • Experience with serverless architecture, containers (Docker, Kubernetes), and CI/CD pipelines.
  • Excellent problem-solving, debugging, and communication skills.

Preferred Qualifications

  • Experience with databases such as PostgreSQL, Redis, TimescaleDB.
  • Familiarity with additional languages (Python, Java, C/C++).
  • Knowledge of infrastructure as code (IaC), DevOps methodologies, and security best practices.
  • Exposure to monitoring tools (Prometheus, Nagios) and API design (GraphQL, REST).

To find out more about Computer Futures please visit (url removed)

Computer Futures, a trading division of SThree Partnership LLP is acting as an Employment Business in relation to this vacancy Registered office 8 Bishopsgate, London, EC2N 4BQ, United Kingdom Partnership Number OC(phone number removed) England and Wales

Frequently asked questions
What types of remote Python jobs are available on Haystack?
Haystack features a wide range of remote Python jobs, including roles in web development, data science, machine learning, automation, and backend engineering, suitable for various experience levels.

How do I apply for a remote Python job?
To apply, simply create a profile on Haystack, upload your resume, and use the search filters to find remote Python jobs that match your skills. You can then submit your application directly through our platform.

Does Haystack list both full-time and freelance roles?
Our job board includes both full-time remote Python positions and freelance or contract opportunities, allowing you to choose the work arrangement that best fits your needs.

Are the listings genuinely remote?
Yes, we vet all job postings to ensure they are genuinely remote or offer remote flexibility, so you can confidently apply to positions that suit your remote work preferences.

Can I set up alerts for new jobs?
Absolutely! You can create customized job alerts on Haystack to receive email notifications when new remote Python jobs matching your criteria are posted.