Apache Kafka Jobs
Overview
Discover top Apache Kafka jobs on Haystack, your go-to IT job board for expert roles in data streaming and real-time processing. Explore the latest Apache Kafka developer, engineer, and architect positions to advance your career in this high-demand technology. Start your search now and connect with leading employers looking for skilled Apache Kafka professionals!
Integration Architect
Stackstudio Digital Ltd.
UK
Hybrid
Senior - Leader
Private salary
RECENTLY POSTED

Role/Job Title: Integration Architect
Work Location: Norwich (2 to 3 days a week)
Duration of Assignment: 6 Months

The Role

The Integration Architect is responsible for designing, governing, and delivering enterprise-scale integration solutions across distributed systems. This role requires deep expertise in event-driven architecture (EDA), real-time streaming, and cloud-native integration patterns using Kafka and AWS messaging/streaming services such as EventBridge, SQS, SNS, and Kinesis. The Integration Architect partners with engineering, product, and cloud teams to create scalable, secure, and resilient integration landscapes.

Your Responsibilities:

  • Define enterprise integration architecture using event-driven, microservices, and real-time streaming patterns.
  • Architect solutions using Kafka, AWS EventBridge, SQS/SNS, Kinesis Streams/Firehose, and Kafka Connect.
  • Establish integration standards, best practices, reusable frameworks, and governance models.
  • Design solutions that ensure high availability, scalability, observability, and security.
  • Evaluate system integration options and recommend optimal patterns (Pub/Sub, CQRS, event sourcing, streaming analytics, request-response APIs, batch).
  • Lead the end-to-end delivery of integration platforms and streaming pipelines.
  • Define event schemas, streaming topologies, routing logic, partitions, consumer groups, and throughput targets.
  • Guide development teams in building producers, consumers, connectors, and stream processing applications.
  • Review designs/code to ensure alignment with architecture guidelines.
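
The partition and consumer-group design work above rests on one Kafka property: records with the same key always land on the same partition, so per-key ordering is preserved. A minimal, dependency-free Python sketch of that mapping (Kafka's default partitioner actually uses murmur2 hashing; md5 stands in here purely for illustration):

```python
import hashlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Stable key -> partition mapping. Kafka's default partitioner uses
    murmur2 hashing; md5 is used here only to keep the sketch dependency-free."""
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# All events for one customer hash to one partition, preserving their order.
events = [(b"customer-42", "created"), (b"customer-7", "created"),
          (b"customer-42", "paid")]
placement = [partition_for(key, 6) for key, _ in events]
```

This is also why throughput targets and partition counts are set together: partitions are the unit of consumer-group parallelism, so the chosen key determines both ordering guarantees and how evenly load spreads.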

Architect integration workloads using:

  • AWS Kinesis Streams & Kinesis Firehose
  • AWS EventBridge event bus
  • SQS/SNS for messaging patterns
  • Kafka clusters (Confluent, MSK, or open-source)
  • Work closely with cloud engineering teams on infrastructure design, IaC (Terraform/CloudFormation), performance tuning, and cost optimization.
  • Implement monitoring using CloudWatch, Grafana, Prometheus, or OpenTelemetry.

Enforce integration security practices:

  • Authentication/Authorization
  • IAM policies
  • Encryption at rest/in transit
  • Data governance & lineage
  • Ensure solutions meet RPO, RTO, resiliency, disaster recovery, and failover requirements.
  • Establish observability using tracing, logging, alerting, and dashboards.
  • Collaborate with product owners, domain architects, delivery managers, and business stakeholders.
  • Translate business requirements into scalable integration architectures.
  • Provide technical leadership across teams and mentor integration engineers.

Your Profile

Essential skills/knowledge/experience:

Strong hands-on experience with Kafka:

  • Topics, partitions, consumer groups
  • Kafka Streams, ksqlDB
  • Schema Registry & Avro/JSON/Protobuf
  • Kafka Connect connectors

Deep expertise in AWS streaming & messaging:

  • Amazon Kinesis Data Streams, Firehose
  • EventBridge (rules, event buses, routing)
  • SQS/SNS with dead letter queues
  • AWS Lambda event-based integrations
  • Experience designing event-driven and microservices architectures.
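
The dead-letter-queue pattern listed above (configured in SQS via a RedrivePolicy with a maxReceiveCount) can be illustrated in plain Python: a message that keeps failing is retried a bounded number of times, then parked on the DLQ rather than blocking the queue forever. A hypothetical sketch of the semantics, not AWS API code:

```python
from collections import deque

MAX_RECEIVE_COUNT = 3  # mirrors the SQS RedrivePolicy "maxReceiveCount" setting

def handler(msg: dict) -> None:
    """Invented processing step that always fails for the poison message."""
    if msg["body"] == "poison":
        raise ValueError("cannot process")

def drain(queue: deque, dlq: list) -> None:
    """Receive until empty; a message that fails MAX_RECEIVE_COUNT times is
    moved to the dead-letter queue instead of being retried forever."""
    while queue:
        msg = queue.popleft()
        msg["receive_count"] += 1
        try:
            handler(msg)
        except ValueError:
            if msg["receive_count"] >= MAX_RECEIVE_COUNT:
                dlq.append(msg)      # park for inspection or later redrive
            else:
                queue.append(msg)    # back on the queue for another attempt

queue = deque([{"body": "ok", "receive_count": 0},
               {"body": "poison", "receive_count": 0}])
dlq = []
drain(queue, dlq)
```

In AWS the same behaviour comes for free once the source queue's redrive policy points at a DLQ; the sketch just makes the retry/park decision explicit.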

Strong knowledge of:

  • API integration patterns (REST, GraphQL)
  • ETL/ELT and data pipelines
  • Distributed system design
  • High throughput and low-latency data streaming
  • Good understanding of Java/Python/Node.js for integration logic.
  • Familiarity with containerization & orchestration (Docker, Kubernetes).
  • Working knowledge of CI/CD, DevOps, IaC.
  • Architectural thinking with strong problem-solving abilities.
  • Ability to lead teams, mentor developers, and influence decisions.
  • Strong communication and stakeholder engagement skills.
  • Experience working in Agile environments.

Senior Data Engineer
Lynx Recruitment Limited
Sutton
Fully remote
Senior
£80,000
RECENTLY POSTED

Senior Data Engineer - Remote Working

Lynx are currently working with a large IT consultancy to help them source a Data Engineer. Our client is a large global consultancy who work with enterprise clients, solving their business and technology problems using cutting-edge solutions.

As a Data Engineer, you will play a key role in designing, building, and maintaining robust data platforms and pipelines. You'll work alongside data scientists, analysts, and engineers to ensure reliable, scalable, and high-quality data solutions.

What you'll be doing:

  • Designing and building data pipelines, models, and architectures
  • Developing and maintaining ETL/ELT processes using modern tools
  • Implementing data storage solutions across relational, non-relational, and cloud platforms
  • Ensuring data quality, reliability, and operational excellence
  • Leading the design and delivery of new data products and pipelines
  • Collaborating closely with stakeholders and technical teams
  • Providing technical leadership and mentoring within delivery teams

What they're looking for:

  • Strong experience in data engineering roles
  • Proven leadership or line management experience
  • Hands-on experience with cloud platforms (AWS or Azure; GCP beneficial)
  • Strong SQL skills and experience with relational databases
  • Python development experience and familiarity with frameworks (e.g. Flask, Django)
  • Experience building and orchestrating ETL pipelines (e.g. Airflow, Luigi, Argo)
  • Exposure to big data technologies (e.g. Spark, Kafka, Hadoop)
  • Experience working in Agile environments
  • Solid understanding of data architecture and data quality best practices

Senior Database Administrator / Engineer Sybase - Trading
client server
London
Hybrid
Senior
£100,000
RECENTLY POSTED

Senior Database Administrator / Engineer (Sybase DBA) London, to £200k+, 12 month FTC

Do you have expertise with Sybase databases?

You could be progressing your career at a hugely profitable Hedge Fund.

As a Senior Database Administrator / Engineer you will be responsible for managing and improving a mission-critical, diverse data platform as part of a small, talented team. You’ll have a broad scope from providing operational support through to leading continuous design and implementation to enhance scalability, reduce redundancy and ensure fault tolerance of data systems.

You will construct and manage integrations with monitoring and alerting systems to increase internal visibility and automatically respond to incidents in various environments. You will also build and enhance security controls, governance processes and management interfaces to reduce the attack perimeter and improve the risk profile.

As a senior team member you’ll also act as a subject matter expert and mentor more junior DBAs and developers to ensure the platform is supported and used appropriately, including designing and building tools to automate operational processes.

Location / WFH:

You’ll join colleagues in prestigious Central London offices (with onsite restaurant, gym etc.) with flexibility to work from home once a week.

About you:

  • You have achieved a 2.1 or above in a relevant field e.g. Data, Mathematics, Statistics, Data Analysis or other quantitative discipline
  • You have commercial experience in a similar role at an investment firm or asset manager with a good understanding of various asset classes such as Equities, Fixed Income, Derivatives, and Commodities
  • You have strong RDBMS experience, including Sybase expertise, Postgres and SQL
  • You have a good knowledge of Linux OS
  • You have experience with modern tools such as Kafka, MongoDB, SAP IQ, Ansible and Python
  • You have excellent communication and client facing skills
  • You enjoy learning and seeking continuous improvement

What’s in it for you:

Please note this role is on a 12 month Fixed Term Contract basis, with benefits.

As a Senior Database Administrator / Engineer (Sybase DBA) you will earn a highly competitive salary plus a fully comprehensive benefits package including:

  • Salary to £200k
  • Generous Pension contribution
  • Life Assurance, Critical Illness cover
  • Childcare vouchers
  • Enhanced Paternity package and Adoption Assistance
  • Charitable fundraising matching and much more

Apply now to find out more about this Senior Database Administrator / Engineer (Sybase DBA) opportunity.

At Client Server we believe in a diverse workplace that allows people to play to their strengths and continually learn. We’re an equal opportunities employer whose people come from all walks of life and will never discriminate based on race, colour, religion, sex, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. The clients we work with share our values.

Lead Data Engineer
Canada Life UK
Multiple locations
Remote or hybrid
Senior
Private salary
RECENTLY POSTED

Canada Life UK looks after the retirement, investment and protection needs of individuals, families and companies. We help to build better futures for our customers, our intermediaries and our employees by operating as a modern, agile and welcoming organisation.

Part of our parent company Great-West Lifeco, Canada Life UK has operated in the United Kingdom since 1903. We have hundreds of respected and supported employees committed to doing the right thing for our customers and colleagues.

Canada Life UK is transforming to create a more customer-focused business by providing our customers with expertise on financial and tax planning, offering home finance and annuities propositions, and providing collective fund solutions to third party customers.

Job Purpose

The Lead Data Engineer will provide hands-on technical leadership in Azure cloud and Databricks-based solutions within our Enterprise Data Platform. The role requires strong expertise in Azure cloud services, Databricks, data engineering, and DevOps, leading a cross-functional team to build, deploy, and support high-performance data-driven solutions.

The role involves:

  • Interpreting Outcomes and user stories and translating them into technical solutions.
  • Creating innovative solution designs for domain and enterprise data products.
  • Overseeing Data Analysts to support detailed data discovery.
  • Overseeing data modelling for Finance and Enterprise data products.
  • Leading Product Increment planning to break down solutions into Features and Epics for incremental delivery.
  • Designing and implementing scalable data solutions on Azure and Databricks within the assigned domain.
  • Ensuring appropriate engineering standards are applied to maintain data quality, performance and reliability.

Duties/Responsibilities

  • Work with Product Owners and Business Analysts to understand Outcomes and refine user stories.
  • Lead solution design for Finance and Enterprise data products, ensuring alignment with enterprise patterns and guardrails.
  • Direct and collaborate with Data Analysts on detailed data discovery, source understanding and requirements refinement.
  • Oversee logical and physical data modelling for Finance and Enterprise data products, working closely with architecture where required.
  • Implement and maintain data pipelines and ETL workflows in Databricks (PySpark, Delta Lake).
  • Contribute to CI/CD pipelines for data applications using Azure DevOps and infrastructure-as-code (Terraform) in line with established patterns.
  • Apply security, access control and compliance standards for Azure and Databricks in collaboration with platform and security teams.
  • Support monitoring, logging and basic cost optimisation for the team’s data products.
  • Support the development of DevOps practices within the team, including reducing technical debt and improving automation over time.
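
The Databricks pipeline work above typically leans on Delta Lake's MERGE (upsert) semantics: target rows whose key matches an incoming row are updated, the rest are inserted. A toy illustration of those semantics in plain Python (in Databricks itself this would be `MERGE INTO` SQL or `DeltaTable.merge`; the `policies` data is invented):

```python
def merge_upsert(target: dict, updates: list, key: str) -> dict:
    """MERGE semantics in miniature: rows whose key matches are updated,
    the rest are inserted. `target` maps key -> row."""
    merged = {k: dict(v) for k, v in target.items()}
    for row in updates:
        # existing fields survive, incoming fields win on conflict
        merged[row[key]] = {**merged.get(row[key], {}), **row}
    return merged

policies = {"P1": {"id": "P1", "premium": 100}}
updates = [{"id": "P1", "premium": 120},   # matched  -> updated
           {"id": "P2", "premium": 90}]    # unmatched -> inserted
result = merge_upsert(policies, updates, "id")
```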

Skills, Knowledge and Experience

Lead Data Engineers are expected to have strong capability in at least three of the following areas of engineering practice.

Core skills

  • Automation including testing of data pipelines and data products.
  • Strong teamwork, communication and problem-solving skills to collaborate effectively with cross-functional teams.
  • Awareness of security principles and best practices to ensure secure data solutions.
  • Commitment to continuous learning and staying current with Azure, Databricks and data engineering trends.
  • Strong experience working within an agile development methodology, ideally Scaled Agile (SAFe or similar).
  • Excellent time and self-management through effective planning and prioritisation of tasks.
  • Proven and demonstrable data engineering capability.
  • Ability to influence within the team and communicate clearly with technical and non-technical stakeholders.

Data Engineer (New Technology / Microsoft)

  • Strong experience with Databricks (Spark, PySpark, Delta Lake, and Unity Catalog advantageous).
  • Proficiency in Azure data services (Azure Data Factory, Data Lake, Azure Functions advantageous).
  • Experience contributing to CI/CD pipelines (Azure DevOps, GitHub Actions, Terraform).
  • Scripting and programming skills (Python advantageous).
  • Good understanding of DevOps and automation concepts (e.g. YAML pipelines, IaC).
  • Solid understanding of cloud security, compliance and governance principles.
  • Experience working with Databricks and Azure in a product or Scaled Agile delivery environment.

Qualifications

  • Degree level IT or technical/scientific subject (or equivalent experience).
  • Microsoft Azure Data Engineer or Solutions Architect certification (desirable).
  • Databricks Certified Data Engineer or Machine Learning Associate (desirable).
  • Experience with streaming solutions (Kafka, Event Hubs, Spark Streaming) (desirable).
  • Knowledge of machine learning and AI on Databricks (desirable).

Benefits of working at Canada Life

We believe in recognising and rewarding our people, so we offer a competitive salary and benefits package that’s regularly reviewed. As a Canada Life UK colleague, you’ll receive a competitive salary and comprehensive reward package including a generous pension and bonus scheme, along with, income protection, private medical insurance and life assurance. We have a fantastic number of other benefits and support services as well as regular personal and professional development.

How we work at Canada Life

Our culture is unique and incredibly important to us. We care about doing the right thing for our people, customers and community and helping others to build better futures. Our blueprint behaviours shape and influence how we work, and are central to the relationships we have with others. Every day we are encouraged to be more curious, own the outcome, face into things together and find a way forward.

We want colleagues to have rewarding careers with us so we invest in the development of our people, technology and workplaces. That’s why we offer a range of training, flexible working and opportunities to grow and develop.

Diversity and inclusion

Building an inclusive workplace with a diverse workforce, where everyone can feel they belong and achieve their potential regardless of gender, ethnicity or any other characteristic, is a key commitment for us. We are proud of the progress we're making in DEI, and it continues to be a significant focus.

“At Canada Life we believe in the power of great people from different backgrounds, experiences and perspectives coming together to build better futures. Emerging talent is crucial to our growth and creating an environment that continues to inspire us all.” Nick Harding, Chief People Officer, Canada Life UK

We appreciate that everyone has different work and life responsibilities. We’re happy to discuss flexible working arrangements, including part time, for any of our roles should this be a requirement for you.

Salesforce Developer
Ventula Consulting
Farnborough
Hybrid
Mid - Senior
£550/day
RECENTLY POSTED

Salesforce Developer/Technical Specialist - Automotive - £550 per day Inside IR35

Our client within the automotive sector is seeking an experienced Salesforce Developer/CRM Technical Specialist to support a major CRM transformation programme.

This role will work closely with the CRM SME on the data migration from Legacy CRM platforms and the build of new services within a modern, cloud-based IT landscape.

The successful candidate will be a hands-on technical specialist with strong database knowledge and proven experience delivering end-to-end CRM solutions across development, testing, and rollout.

Key Skills:

Strong database expertise, including:

  • Oracle
  • PostgreSQL
  • Proven experience delivering end-to-end CRM/application development
  • Hands-on experience with data migration and Legacy system decommissioning
  • Experience working with CRM platforms (Siebel experience highly beneficial)

Solid understanding of modern integration architectures, including:

  • AWS
  • APIs
  • Kafka integrations
  • Experience integrating cloud services with Legacy/on-prem Oracle environments
  • Automotive experience ideally

Rate: £550 per day Inside IR35

Duration: 3 months initially

Location: Hybrid/Farnborough (2 days per week on site)

This is an excellent opportunity to join a high-profile transformation programme within a globally recognised automotive environment, working on modern cloud-led architecture and large-scale CRM change.

Software Developer - Rust & Python
IO Associates
Brackley
Hybrid
Mid - Senior
£350/day - £400/day
RECENTLY POSTED

Developer - Rust & Python

2 days a week onsite in Banbury

£350 to £400 per day - Inside IR35

8 month contract

Our client is looking for an experienced Developer to join them on a contract basis.

They are a household name, operating at the forefront of the IT and engineering sectors, specialising in the development of advanced engineering software and digital tools.

This critical role is designed to support the development of high-performance, maintainable software that underpins innovative engineering applications. The position offers an exciting chance to be at the forefront of technological advancement, shaping the tools and solutions used in complex engineering environments. Your work will have a tangible impact on engineering excellence and operational efficiency.

We are looking for:

  • Proven experience in designing and developing software in Rust, complemented by practical knowledge of other languages such as C# and Python
  • Strong understanding of object-oriented programming and agile development methodologies
  • Experience deploying and managing microservices in containerised environments, particularly Kubernetes
  • Proficiency in database design and query optimisation for systems like MS SQL Server or Postgres
  • Familiarity with message queuing systems including RabbitMQ or Kafka
  • Knowledge of software development tools such as Azure DevOps and experience setting up CI/CD pipelines

Interested to hear more? I would be more than happy to discuss the role in more detail, along with any other opportunities you may be open to!

SC needed Fully Remote Solution Architect - AWS/Data/Event Driven Architecture/App/Dev Background
Scope AT Limited
Not Specified
Fully remote
Senior
Private salary
RECENTLY POSTED

(SC needed) Fully Remote Solution Architect - AWS/Data/Event Driven Architecture/Solution architects - Application/Dev background

Our client is looking for a strong generalist Solution Architect with an active SC Clearance ideally with strong Data Platform experience, Integration Architect experience, Application and Technical Architect skills.

Skills

  • AWS preferred.
  • Kafka ideal.
  • Open-source technologies.
  • Experience from large scale CRM data platform environments (any massive customer database exposure).
  • A range of Solution, Application, Application Integration, with any Cloud (AWS ideal) from a Dev/Application angle with infra understanding.
  • Long term multi-year program of work.
  • Required to bring a clear understanding and practical application of architecture and delivery methodologies such as TOGAF, Zachman, MODAF or SAFe.
  • The contract will be REMOTE with some client visits. You could be based anywhere in the UK.
  • Must already have an active SC Clearance

Contract role - Inside IR35 - Remote

By applying to this job you are sending us your CV, which may contain personal information. Please refer to our Privacy Notice to understand how we process this information. In short, in order to supply you with work-finding services, we will hold and process your personal data, and only with your express permission will we share this personal data with a client (or a third party working on behalf of the client) by email or by upload to the client's/third party's vendor management system. By giving us permission to send your CV to a client, this constitutes permission to share the personal data that would be necessary to consider your application, interview you (phone/video/face to face) and, if successful, hire you.
Scope AT acts as an employment agency for Permanent Recruitment and an employment business for the supply of temporary workers. By applying for this job you accept the Terms and Conditions, Data Protection Policy, Privacy Notice and Disclaimers which can be found at our website.

Lead Java Backend Engineer
Harvey Nash
Newcastle upon Tyne
Hybrid
Senior
£85,000
RECENTLY POSTED

This is an opportunity for a Lead Java Backend Engineer to shape the backend technical direction, bring teams together, and introduce modern engineering practices that support scalable, secure, and high-performing systems.

What You’ll Do

  • Lead key engineering initiatives and introduce modern best practices (design patterns, new languages, modern architecture approaches)
  • Guide backend architecture and ensure high standards in performance, security, and reliability
  • Support engineers through pair programming, technical guidance, and resolving high-priority issues
  • Drive solution architecture and influence major technical and design decisions
  • 30-40% hands-on, 60% leadership/strategy
  • Mentor engineers, run design reviews, and foster continuous improvement

Top Priority Skills

  • Java (11+)
  • Spring Boot
  • Solution architecture experience
  • Strong communicator able to influence decisions

Tech Stack & Environment

  • Java, Spring Boot, REST APIs, microservices
  • Kafka & event-driven systems
  • AWS (preferred), Kubernetes, containers
  • SQL & NoSQL databases
  • CI/CD pipelines, automated testing, modern deployment practices
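
The Kafka and event-driven part of the stack above boils down to one pattern: producers publish events to a topic without knowing who consumes them, and any number of subscribers react independently. A minimal in-process sketch of that decoupling (Python here for brevity, though the role itself is Java/Spring; the topic and handlers are invented):

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process publish/subscribe: producers emit events by topic
    and every handler subscribed to that topic is invoked independently."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
audit_log, emails = [], []
# Two independent consumers of the same event, neither aware of the other.
bus.subscribe("order.created", lambda e: audit_log.append(e["id"]))
bus.subscribe("order.created", lambda e: emails.append(e["id"]))
bus.publish("order.created", {"id": "o-1"})
```

Kafka adds durability, partitioned ordering and consumer groups on top of this shape, but the producer/consumer decoupling is the same.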

This opportunity is paying up to £85,000 and will be hybrid working.
If this sounds like the right next step for you, get in touch today to find out more.

TECHNICAL LEAD COLLIBRA
Infoplus Technologies UK Ltd
Norwich
In office
Senior
Private salary
RECENTLY POSTED

Role: TECHNICAL LEAD COLLIBRA

Location: Norwich, UK

Contract

Inside IR35

The Role

We are seeking a Senior Technical Lead with strong expertise in data governance, data quality, metadata management, and broader data management practices, with hands-on experience delivering solutions using the Collibra platform. The role involves defining solution architectures, leading configuration and integration of Collibra modules, implementing governance workflows, establishing metadata standards, and working closely with enterprise architects, data stewards, and platform teams.

Your responsibilities:

Solution Architecture & Design

  • Lead the design and implementation of Collibra-based data governance solutions, including:
    • Data Catalog
    • Business Glossary
    • Policy & Standards
    • Data Quality integration
    • Operating models (roles, responsibilities, stewardships)
  • Define Collibra architecture, metadata models, custom domains, and asset types.
  • Create reusable frameworks for metadata ingestion, lineage, and governance workflows.

Collibra Configuration & Development

  • Configure Collibra components, including:
    • Custom asset models
    • Attribute definitions
    • Relations and hierarchies
    • Workflows (BPMN-based)
    • Data quality dashboards & certification workflows
  • Develop and maintain Collibra REST APIs, Connect integrations, and catalog crawlers.
  • Implement metadata onboarding processes and automated connectors to cloud/on-prem data platforms.

Data Governance & Data Quality Leadership

  • Collaborate with data governance councils, data stewards, and business SMEs to define:
    • Data standards
    • Business glossary terms
    • Data ownership & stewardship models
    • Data quality rules, SLAs, and data issue management workflows
  • Ensure alignment with enterprise data governance frameworks and compliance requirements.

Integration & Metadata Lineage

  • Implement automated metadata ingestion from:
    • Data lakes
    • ETL tools (Glue, Informatica, Talend, IICS)
    • Warehouses (Redshift, Snowflake, BigQuery)
    • BI tools (Power BI, Tableau)
    • Streaming sources (Kafka)
  • Build lineage diagrams across ETL pipelines, APIs, datasets, and downstream consumption layers.
  • Ensure metadata completeness, accuracy, and operational SLAs.
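
Lineage, as described above, is essentially a directed graph from each asset to its upstream sources, and building lineage diagrams amounts to walking that graph. A small sketch with invented asset names:

```python
# Invented lineage edges: each asset maps to its direct upstream sources.
LINEAGE = {
    "dashboard.sales": ["warehouse.orders"],
    "warehouse.orders": ["lake.raw_orders", "kafka.orders_topic"],
}

def upstream_lineage(asset: str, edges: dict) -> set:
    """Walk the graph to collect every transitive upstream source of an asset."""
    seen, stack = set(), [asset]
    while stack:
        for parent in edges.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

sources = upstream_lineage("dashboard.sales", LINEAGE)
```

In a catalog tool the edges would be harvested automatically from ETL jobs, BI reports and streaming topics rather than declared by hand, but the traversal is the same.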

Your Profile

Essential skills/knowledge/experience:

  • Data lakes
  • ETL tools (Glue, Informatica, Talend, IICS)
  • Warehouses (Redshift, Snowflake, BigQuery)
  • BI tools (Power BI, Tableau)
  • Streaming sources (Kafka)
AWS Devops Engineer (SC Cleared)
scrumconnect ltd
Newcastle upon Tyne
Remote or hybrid
Mid - Senior
£400,000 - £450,000
RECENTLY POSTED

About Scrumconnect Consulting

Scrumconnect Consulting is a multi-award-winning digital consultancy, recognised for delivering impactful technology solutions across UK government departments. Our work has positively influenced the lives of over 40 million UK citizens.

With a strong commitment to user-centred design, secure engineering, and agile delivery, we continue to build innovative digital services that truly matter.

Preferred Technical Expertise

Cloud Infrastructure

  • AWS (EKS, RDS, Aurora, ElastiCache, Kafka, IAM)

Secure Hosting

  • Experience working within air-gapped or government-secure environments

Container & Cluster Management

  • Docker, Kubernetes, Rancher
  • Helm
  • Jenkins

Monitoring & Observability

  • Prometheus
  • Grafana
  • ELK Stack
  • Dynatrace

Secrets & Identity Management

  • HashiCorp Vault
  • Keycloak

CI/CD & DevOps Tooling

  • Jenkins
  • Git
  • ServiceNow
  • Trivy
  • Terraform

Streaming & Messaging

  • Apache Kafka (including Kafka Replication)

Data Technologies

  • PostgreSQL
  • Redis
  • RDLs

Automation & Scripting

  • Infrastructure as Code (IaC)
  • Pipeline automation
  • Event relay tooling
  • Bash, Python, Groovy
  • AWS Lambda

Key Responsibilities

  • Run, manage, and continuously evolve AWS and secure on-premise environments to ensure high availability

  • Lead Level 3 (L3) production support, including non-production maintenance and 24/7 on-call support

  • Ensure services comply with government security standards and change/release governance models

  • Build and maintain infrastructure components including:

    • Kafka event streaming
    • Aurora, RDS, and Redis databases
    • Identity management (Keycloak)
    • Caching and secure data layers
  • Enhance and maintain CI/CD pipelines and developer self-service tooling

  • Proactively manage technical debt in collaboration with governance bodies

  • Improve automation, observability, and testing coverage across platform services

  • Align infrastructure delivery with product roadmaps and platform strategy

  • Support critical national infrastructure tasks including:

    • Deployments
    • Incident, problem, and change management (ITIL-aligned)
    • Continuous service improvement
  • Use and integrate ServiceNow (or successor platforms) for operational governance

Diversity & Inclusion

At Scrumconnect Consulting, we believe diversity drives innovation. We are committed to creating an inclusive environment where every individual is respected, valued, and supported.

We welcome applications from candidates of all backgrounds and experiences and actively encourage applications from women, people with disabilities, underrepresented communities, and those seeking flexible working arrangements.

Java Software Developer (Inside IR35) - Contract
Stealth IT Consulting Limited
Manchester
Hybrid
Mid - Senior
£476/day
RECENTLY POSTED

Contract: 6 months (likely extension)

Rate: £476 per day (Inside IR35)

Location: Manchester 1 day per week onsite

Start Date: 1st April

Interview Process: 2 stages

Project: Large-scale public sector programme

Overview

We are looking for experienced Java Software Developers to support a major digital transformation programme within the public sector. The role involves building secure, scalable services and contributing to a high-performing Agile delivery team.

Key Responsibilities

  • Develop, enhance, and maintain Java-based applications and microservices.
  • Work within cross-functional Agile teams (Developers, BAs, QAs, DevOps).
  • Contribute to API development, integration, and backend service build.
  • Deliver clean, testable, maintainable code aligned with best practices.
  • Participate in code reviews, pair programming, and continuous improvement.
  • Collaborate closely with architects and technical leads to ensure robust designs.
  • Support release processes and CI/CD pipelines.

Essential Skills & Experience

  • Strong commercial experience in Java (8 or 11+).
  • Hands-on experience with Spring / Spring Boot frameworks.
  • Experience building RESTful APIs and microservices.
  • Good understanding of AWS, Azure, or similar cloud platforms.
  • Familiarity with CI/CD pipelines (Jenkins, GitLab, GitHub Actions, etc.).
  • Experience with TDD/BDD and unit testing frameworks (JUnit, Mockito).
  • Background in Agile/Scrum delivery environments.
  • Ability to work within secure, complex, large-scale government/enterprise settings.

Desirable Skills

  • Experience with public sector or GDS-aligned projects.
  • Knowledge of Docker, Kubernetes, or containerised environments.
  • Exposure to messaging technologies (Kafka, SNS/SQS, RabbitMQ).
  • Understanding of monitoring/logging tools (ELK, Grafana, Prometheus).

Additional Information

  • Role is Inside IR35, payable via umbrella.
  • Onsite requirement: 1 day per week in Manchester.
  • Must be eligible to work in the UK and pass standard BPSS checks.

Data Engineer - SC Cleared
Sanderson Government and Defence
London
Hybrid
Junior - Mid
£70,000
RECENTLY POSTED

Data Engineer

Salary: £40K - £72K + Benefits
Location: London or Manchester (aligned to office for client on-site requirements)
Working Pattern: Hybrid / On-site depending on client needs
Security Clearance: SC Clearance required

You will join a people-focused digital consultancy supporting data-driven services across the UK public sector. The organisation values collaboration, inclusion, and work-life balance, and actively supports continuous learning and professional development through access to training, modern engineering tools, and supportive multidisciplinary teams.

The consultancy works closely with government departments and public sector organisations to design, build, and operate secure, scalable data platforms that enable advanced analytics, data science, and machine learning. Diversity and inclusion are core values, and hiring decisions are based on skills, experience, and potential. Empowering individuals and building strong teams are central to delivering meaningful outcomes for clients and citizens.

This role is suited to Data Software Engineers with a strong technical foundation and an interest in data engineering, data science, and machine learning. You will work within agile, multidisciplinary teams alongside data scientists, platform engineers, and stakeholders to build robust data processing systems in secure environments.

Role Responsibilities

Design, build, and maintain scalable data processing and integration systems to support data science and analytics workloads.

Develop high-quality, well-tested software using a Test-Driven Development (TDD) approach.

Collaborate closely with data scientists to enable effective use of data for analytics and machine learning.

Build and operate cloud-based solutions, with a strong focus on AWS services.

Work with messaging, streaming, or data flow technologies to support real-time and batch data processing.

Contribute to infrastructure and platform automation using Infrastructure as Code.

Participate in agile ceremonies, technical design discussions, and code reviews.

Ensure solutions meet security, performance, and reliability requirements within public sector environments.

What You Will Bring to the Team

A strong interest in data, particularly data engineering, data science, or machine learning.

A solid technical background, with experience in Java, Python, TypeScript, or similar languages.

Experience developing software using TDD or a strong willingness to adopt TDD practices.

Strong problem-solving skills and the ability to work collaboratively within multidisciplinary teams.

Good communication skills, with the ability to explain technical concepts to both technical and non-technical stakeholders.

A proactive mindset with attention to detail and a commitment to quality.

Desirable Skills and Experience

Experience working with cloud platforms, ideally AWS.

A strong Linux background.

Experience with data integration and messaging technologies such as Apache NiFi, Apache Kafka, RabbitMQ, or similar tools.

Experience using Infrastructure as Code tools such as Terraform or CloudFormation.

Previous experience working in a consultancy or public sector delivery environment.

Familiarity with secure or regulated environments.

Reasonable Adjustments:

Respect and equality are core values to us. We are proud of the diverse and inclusive community we have built, and we welcome applications from people of all backgrounds and perspectives. Our success is driven by our people, united by the spirit of partnership to deliver the best resourcing solutions for our clients.

If you need any help or adjustments during the recruitment process for any reason, please let us know when you apply or talk to the recruiters directly so we can support you.

Splunk Enterprise and ITSI Expert
Stealth IT Consulting Limited
London
Hybrid
Mid - Senior
ÂŁ500/day
RECENTLY POSTED

Location: Hybrid, 3 days onsite per week in Sheffield, Birmingham, or London (UK)

Contract Duration: 8 months

Day Rate: ÂŁ450-ÂŁ500 per day (Inside IR35)

Role Overview

This is a specialist role focused on designing, deploying, and optimising Splunk Enterprise and Splunk IT Service Intelligence (ITSI) in complex hybrid Kubernetes/OpenShift environments. You will handle large-scale data onboarding, build advanced ITSI service models and monitoring views, tune platform performance, implement secure governance, and integrate with modern observability pipelines. The position supports critical observability, reliability, and cost management for containerised workloads in a high-stakes enterprise setting.

Key Responsibilities

  • Design, deploy, and operate Splunk Enterprise and ITSI in hybrid Kubernetes/OpenShift environments.
  • Onboard data at scale using HEC, Universal Forwarders/Deployment Server; align to Common Information Model (CIM); enforce RBAC, retention policies, and cost guardrails.
  • Build ITSI service decompositions, KPIs (including multi-KPI), adaptive/time-based thresholds, NEAP policies, glass tables, deep dives, and service health scoring.
  • Create OpenShift-specific executive and operations views: cluster health (API/etcd), node readiness/pressure, pod restart hotspots, network/storage errors, capacity, quotas, and bursting visibility.
  • Tune search/platform performance: workload rules, concurrency limits, Data Model Acceleration (DMA), summary indexing, and scheduling optimisation.
  • Implement alerting, event enrichment, routing to ITSM/ChatOps, suppression windows, maintenance schedules, and runbook automation.
  • Govern data ingest and security: allow/deny lists, PII handling, TLS/mTLS, token/cert governance, index/role mapping, and data quality SLAs.
  • Integrate upstream sources/pipelines: OpenTelemetry (OTLP), Prometheus exporters, Fluentd/Fluent Bit/Vector, Kafka (with TLS), CMDB/ITSM enrichments, and AIOps/ML anomaly detection.

Essential Skills & Experience

  • Deep Splunk Enterprise expertise: SPL mastery, CIM alignment, KV stores/lookups/macros, saved searches, index/retention/RBAC design, search performance tuning.
  • Advanced Splunk ITSI knowledge: Service trees/decompositions, KPIs/thresholds (adaptive/time-based), NEAP tuning, glass tables, deep dives, Service Analyzer configuration.
  • Strong OpenShift/Kubernetes observability: Cluster/control-plane metrics, kube events/logs, workload/node/network/storage correlations, capacity/noisy-neighbor detection.
  • Experience with data pipelines/collectors: OpenTelemetry, Prometheus scraping, Fluentd/Fluent Bit/Vector, Kafka (TLS-secured), HEC/UF/DS onboarding.
  • Reliability & SLOs: Golden signals, rollout/rollback health checks, SLO/KPI mapping to namespaces/apps, executive/ops dashboards.
  • Performance & cost optimisation: Workload rules, DMA, summary indexing, schedule hygiene, license/cost guardrails.
  • Security & compliance: TLS/mTLS, token/cert management, PII controls, auditability, role/index mappings.
  • Automation & integrations: ITSM/ChatOps routing, runbooks, CMDB enrichment, webhook/AIOps integrations.

Preferred / Desirable

  • Hands-on experience in regulated/financial services environments.
  • Certifications: Splunk Enterprise Certified Architect, Splunk ITSI Certified Admin, or equivalent.
  • Familiarity with AIOps/ML features in Splunk for anomaly detection.
  • Previous work with container platforms (Kubernetes/OpenShift) for observability at scale.

Success Measures

  • High-quality, scalable Splunk/ITSI deployments with optimised performance and cost controls.
  • Effective service health monitoring via ITSI (accurate KPIs, glass tables, deep dives).
  • Reduced alerting noise, improved incident response through enriched routing and automation.
  • Strong governance, security compliance, and traceability in data ingest/observability pipelines.

This role is ideal for a Splunk specialist with proven expertise in ITSI and container observability, who can deliver robust, production-grade monitoring solutions in dynamic hybrid environments. Applications must be PAYE via Umbrella.

Lead Python Data Engineer - Leading Technology AI Brand
MLR Associates
London
In office
Senior
ÂŁ70,000 - ÂŁ100,000
RECENTLY POSTED
+1
  • Senior Engineer/Architect
  • Leading Technology AI Brand
  • SaaS - Platform based Technology Services
  • London/City
  • ÂŁ70-100k salary + equity package

Our client, a global technology leader, is currently looking for a Senior/Lead Data Engineer to work with the dev team and guide software development for an exciting new AI product.

Key Responsibilities:

  • Architect and build scalable data pipelines and infrastructure
  • Design and maintain data ingestion, transformation, and storage architectures for operational and AI workloads.
  • Develop and manage batch and real-time data pipelines.
  • Build and optimize systems for vector search, retrieval, and ML data pipelines.
  • Ensure data reliability, security, and governance across the platform.
  • Collaborate with AI and back-end engineering teams to support training, inference, and product features.
  • Implement monitoring, observability, and data quality frameworks.

Core Experience:

  • 7+ years of experience in data engineering or back-end engineering roles.
  • Strong experience designing and building data pipelines and distributed data systems.
  • Experience working with relational databases (PostgreSQL preferred, but MySQL or similar is acceptable).
  • Experience with NoSQL databases.
  • Experience with vector databases used in modern AI systems.
  • Strong programming experience in Python.

Frameworks/Infrastructure:

  • Apache Spark
  • Apache Airflow
  • Kafka
  • Elasticsearch/OpenSearch

Team Lead Engineer/Developer (Back End) - Hybrid or Remote
VANRATH
Belfast
Remote or hybrid
Senior
ÂŁ90,000
RECENTLY POSTED
+4

Job Description

My client, a leading technology provider within the sports streaming and iGaming sector, is hiring a Team Lead Backend Developer to join an established engineering team based in Belfast. This is a brand-new position within a growing division, focused on building and scaling ultra-low latency, high-availability backend systems that power real-time sports content delivery and sportsbook integrations.
* Competitive salary up to ÂŁ75k
* Hybrid working - 3 days a week in the office
* Flexible working
* Bonus scheme
* Clear career progression
You will be part of a global technology team delivering innovative, high-performance streaming and API solutions to major sportsbook operators worldwide. My client is passionate about building scalable, low-latency systems using cutting-edge cloud and distributed technologies.
As a Team Lead Backend Developer, you will combine hands-on development with technical leadership responsibilities. You will design and develop robust microservices and APIs, optimise systems for real-time performance, and mentor a team of backend engineers. You will contribute to architectural decisions, drive engineering best practice, and ensure delivery of highly available, scalable backend solutions across cloud environments.

The Person

* Strong backend development experience (6+ years) using Java, C#, .NET, Golang, or Node.js
* Previous experience in a technical lead, senior, or mentoring capacity
* Experience building microservices and distributed systems
* Strong understanding of REST APIs and event-driven architecture
* Experience with messaging systems (Kafka, RabbitMQ, etc.)
* Containerisation experience (Docker, Kubernetes, etc.)
* Cloud platform experience (AWS, Azure, or GCP)
* Experience optimising systems for performance, scalability, and low latency
* Familiarity with CI/CD pipelines and DevOps practices
* Experience working in Agile/Scrum environments
* Strong communication and stakeholder management skills
Desirable:
* Experience within iGaming, sportsbook, fintech, or streaming platforms
* Knowledge of real-time systems or low-latency environments
* Experience working in high-availability production systems

For further information on this job, or any other Software Development roles in Belfast or Northern Ireland, apply via the link or contact Kelly Nixon for a confidential chat today.
VANRATH acts as an agency and employment business for permanent recruitment and the supply of temporary workers. Successful applicants may be required to satisfactorily complete pre-employment checks (such as references, criminal record checks, right to work checks) in line with the client or VANRATH's policy.

Benefits:
Work From Home

TECHNICAL LEAD L1
Wipro
Belfast
In office
Senior
Private salary
RECENTLY POSTED
+10

Job Description

Job Title: TECHNICAL LEAD L1
City: Belfast
State/Province: Belfast
Posting Start Date: 3/5/26

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit .

Your Responsibilities

  • Design, develop, test, and maintain high-quality full-stack applications
  • Work within an Agile delivery team to achieve sprint goals
  • Build scalable microservices and APIs using modern frameworks
  • Develop intuitive user interfaces using React and JavaScript
  • Contribute to CI/CD pipelines and DevOps practices
  • Apply containerisation using Docker and orchestration with Kubernetes
  • Collaborate with technical leadership and stakeholders
  • Identify and resolve complex technical issues
  • Leverage AI-assisted development tools to improve productivity

Mandatory Skills

  • Java, Spring Boot, and microservices architecture
  • React, JavaScript, HTML5, and CSS
  • Object-oriented design and data structures
  • RESTful APIs and event-driven services
  • CI/CD tools such as Git, Maven, Jenkins, and Docker
  • SQL and relational databases (Oracle preferred)
  • Messaging platforms such as Kafka or MQ

Desirable Skills

  • Cloud platforms such as AWS
  • Infrastructure-as-Code tools (Terraform, CloudFormation)
  • Kubernetes and OpenShift
  • Automated testing frameworks
  • Agile and Scrum methodologies

Mandatory Skills: Full-stack Java Enterprise.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention.

Senior Software Developer
Queen Square Recruitment Ltd
Newcastle upon Tyne
Hybrid
Senior
ÂŁ350/day - ÂŁ400/day
RECENTLY POSTED
+6

Location: Hybrid - Newcastle office preferred. Other UK locations available, including London.
Start date: ASAP
Contractor rate: ÂŁ400 per day inside IR35
Duration: 6 months, initially

Role Overview

As a Senior Software Developer, you'll design, build, and maintain full-stack digital services that support citizen-facing government platforms. Working within a multi-disciplinary Agile team, you'll contribute to solution design while ensuring the stability, security, and resilience of the products you deliver. You'll work across modern cloud environments, develop scalable backend services, build high-quality UIs, and support continuous delivery through strong engineering practices and automated testing.

Key Responsibilities
  • Develop and maintain digital services and contribute to solution design
  • Build backend services using Java (8+), Node.js, and Spring Boot
  • Develop UI components with performance, usability, and security in mind
  • Build secure, scalable solutions using AWS or Azure
  • Work with APIs, RESTful services, and microservice architectures
  • Use Docker/Kubernetes for containerisation
  • Create automated test suites to support CI/CD
  • Follow engineering best practices and code quality standards
  • Collaborate across Agile ceremonies with BA/DevOps/QA teams
  • Ensure accessibility standards (WCAG 2.2 AAA) are met

Skills & Experience

Essential
  • Full-stack development experience
  • Strong hands-on Java (8+), Node.js, Spring Boot
  • Experience with AWS and/or Azure
  • Knowledge of SQL and NoSQL databases
  • Docker and Kubernetes experience
  • Strong CI/CD and automated testing capability
  • Experience with APIs and microservices
  • Understanding of accessibility standards
  • Strong Agile delivery experience

Desirable
  • Event-driven architecture (e.g., Kafka)
  • ETL or external systems integration
  • Experience with SonarQube or similar tools
  • Background in accessible digital services
  • Cloud-native/serverless familiarity
  • Awareness of AI/ML-enabled development
  • Strong communication and hybrid-working collaboration skills

Solution Architect
Uniting Ambition
Manchester
Hybrid
Senior - Leader
ÂŁ90,000 - ÂŁ100,000
RECENTLY POSTED

Solution Architect - Shape the Future of a High-Scale Platform

Kafka, SQL, C#, .NET, Golang, TypeScript

Are you ready to define and drive the architecture of a platform built to handle millions of transactions per second, powering international growth and long-term scalability?

We're embarking on a full systems transformation of a mission-critical, large-scale platform, with major investment and ambitious expansion plans. This role offers a rare opportunity to step into the heart of a greenfield rebuild, setting the foundations for the next decade of growth.

This isn't just a coding role. We're looking for a Solution Architect who can bridge hands-on engineering with architectural vision, someone who thrives in shaping strategy, designing resilient systems, and guiding teams towards world-class delivery.

What You'll Do

  • Define solution architecture and technical strategy for a complex, high-scale platform
  • Own end-to-end design across distributed, event-driven systems (Kafka, SQL, cloud-native)
  • Partner with product, engineering, and business stakeholders to translate vision into architecture
  • Lead technical planning, standards, and best practices across multiple teams
  • Provide architectural oversight and mentorship while influencing build vs. buy decisions
  • Ensure performance, scalability, and reliability are built into the DNA of the platform

What We're Looking For

  • Proven track record designing or architecting distributed, high-volume systems
  • Experience with event-driven architectures and high-throughput data pipelines (Kafka)
  • Strong knowledge of at least one of: C#, .NET, Golang, or TypeScript, plus SQL/databases
  • Ability to balance hands-on technical leadership with high-level solution design
  • Excellent communication and stakeholder management skills
  • Experience in platform modernisation, migrations, or full-scale rebuilds is highly desirable
  • Has already started to explore AI in a previous role

Why Join Us?

  • Hybrid working, with offices in Manchester or Staffordshire
  • Major ownership over architectural direction in a pivotal transformation
  • Chance to shape the backbone of a fast-scaling, international tech business

If you're ready to step into a strategic, solution-focused role at the intersection of architecture and engineering leadership, we'd love to hear from you.

Confluent Engineer
LA International Computer Consultants Ltd
London
Hybrid
Senior
ÂŁ403/day
RECENTLY POSTED
+2

Role Title: Confluent Engineer
Location: London
Duration: until 27/05/2026
Days on site: 2-3
Rate: ÂŁ402.75 per day (Inside IR35)

MUST BE THROUGH UMBRELLA

Role Description:

* Role Title: Senior Software Engineer - Confluent Streaming Platform

Role Purpose
* We are seeking a Senior Software Engineer with strong hands-on experience in Confluent Platform, Apache Kafka, and Apache Flink to support the introduction and evolution of Intact's enterprise streaming capabilities. This role sits within the Integration function, responsible for enabling real-time data, event-driven architecture, and high-performance integrations across the organisation.
* The ideal candidate will contribute to the design, development, and scaling of Intact's Confluent-based streaming platform, supporting teams across the organisation in adopting event-driven approaches.
* This opportunity sits within a significant cloud-modernisation programme, leveraging Agile and DevOps practices to continuously deliver business value.

Key Accountabilities
* Deliver engineering tasks across the Confluent Streaming Platform, including the design, development, testing, and deployment of event-driven services and data pipelines.
* Develop Kafka topics, schemas, and streaming applications using Kafka, Kafka Connect, Schema Registry, and Flink.
* Collaborate with architects and platform teams to shape the event streaming roadmap.
* Provide subject-matter expertise in distributed streaming and event-driven architecture.
* Review streaming applications produced by other engineers, ensuring quality and best practices.
* Troubleshoot production streaming issues and conduct root-cause analysis.
* Promote platform standards, governance models, and reusable patterns.
* Collaborate with vendor teams and Confluent professional services.
* Actively participate in Agile ceremonies and technical discussions.
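The topic and partition design work listed above rests on one core Kafka idea: records with the same key are routed to the same partition, which preserves per-key ordering for consumers. The sketch below is a hypothetical, self-contained Python model of that routing, not the Confluent client API (Kafka's real default partitioner uses murmur2 hashing, not MD5); all names here are illustrative.

```python
# Illustrative sketch of Kafka-style key partitioning (NOT the Confluent
# client; Kafka's default partitioner uses murmur2, MD5 is used here only
# because it is in the standard library and deterministic).
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Deterministically map a record key to a partition."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

def route(events, num_partitions=3):
    """Route (key, value) events into per-partition logs, preserving order."""
    partitions = {p: [] for p in range(num_partitions)}
    for key, value in events:
        partitions[partition_for(key, num_partitions)].append((key, value))
    return partitions

events = [("order-1", "created"), ("order-2", "created"),
          ("order-1", "paid"), ("order-1", "shipped")]
logs = route(events)
# Every "order-1" event lands in the same partition, in publish order,
# so a single consumer in the group sees that order's lifecycle in sequence.
```

This is why key choice matters when defining throughput targets: a skewed key distribution concentrates load on one partition regardless of how many consumers are in the group.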

Customer Conduct Framework
* Understand how FCA Conduct Rules apply to this role and consistently demonstrate behaviours that support positive customer outcomes and safe data handling.

Functional/Technical Skills
* 6+ years of software engineering experience, including 3+ years hands-on with Confluent/Apache Kafka.
* Experience designing and building distributed streaming applications using Confluent Platform components.
* Strong understanding of event-driven architecture concepts (event streaming, event sourcing, Pub/Sub, stream processing).
* Experience with Avro/JSON/Protobuf, schema evolution, and Schema Registry.
* Integration experience with back-end systems such as SQL/NoSQL databases, APIs, and cloud data platforms.
* Familiarity with API design, data modelling, and microservice integration patterns.
* Proficient with Git, Jira, Azure DevOps, Docker, Kubernetes.
* Working knowledge of AWS and/or Azure.
* Strong understanding of clean code, reusable component design, and Agile/DevOps practices.
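The schema evolution requirement above can be made concrete with a minimal sketch. This is a hypothetical Python check in the spirit of Schema Registry's backward-compatibility rule for Avro, not the real registry API: a consumer on a new schema version can still read records written with the old one as long as every field the new version adds carries a default.

```python
# Hypothetical backward-compatibility check, loosely modelled on Schema
# Registry's Avro rules (assumption: schemas are plain dicts of field
# definitions, not real Avro objects).

def is_backward_compatible(old_fields: dict, new_fields: dict) -> bool:
    """New schema can read old data iff every added field has a default."""
    added = set(new_fields) - set(old_fields)
    return all("default" in new_fields[name] for name in added)

v1 = {"id": {"type": "string"}}
v2 = {"id": {"type": "string"},
      "region": {"type": "string", "default": "eu-west-1"}}
v3 = {"id": {"type": "string"},
      "amount": {"type": "double"}}  # added without a default

# v2 adds a defaulted field, so old records remain readable; v3 does not.
```

In practice the registry enforces this (and richer rules such as forward and full compatibility) at publish time, rejecting incompatible schema versions before producers can use them.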

Decision-Making Authority
* Makes decisions on design and implementation of streaming components within agreed architecture and standards.
* Provides expert guidance impacting platform reliability, performance, and integration quality.
* Escalates risks, issues, and architectural concerns appropriately.

Please send your latest CV.

LA International is an HMG-approved ICT Recruitment and Project Solutions Consultancy, operating globally from the largest single site in the UK as an IT Consultancy or as an Employment Business & Agency depending upon the precise nature of the work. For security cleared jobs or non-clearance vacancies, LA International welcomes applications from all sections of the community and from people with diverse experience and backgrounds.

Award-winning LA International, winner of the Recruiter Awards for Excellence, Best IT Recruitment Company, Best Public Sector Recruitment Company and overall Gold Award winner, has now secured the most prestigious business award that any business can receive, The Queen's Award for Enterprise: International Trade, for the second consecutive period.

Confluent Engineer
eTeam Workforce Limited
London
Hybrid
Senior
ÂŁ402/day
RECENTLY POSTED
+2

We are a Global Recruitment specialist that provides support to the clients across EMEA, APAC, US and Canada. We have an excellent job opportunity for you.

Role Title: Confluent Engineer
Location: London
Duration: until 27/05/2026
Days on site: 2-3
Rate: ÂŁ402/day (Inside IR35)

MUST BE PAYE THROUGH UMBRELLA

Role Description:

Role Title: Senior Software Engineer - Confluent Streaming Platform

Role Purpose
We are seeking a Senior Software Engineer with strong hands-on experience in Confluent Platform, Apache Kafka, and Apache Flink to support the introduction and evolution of Intact's enterprise streaming capabilities. This role sits within the Integration function, responsible for enabling real-time data, event-driven architecture, and high-performance integrations across the organisation.
The ideal candidate will contribute to the design, development, and scaling of Intact's Confluent-based streaming platform, supporting teams across the organisation in adopting event-driven approaches.
This opportunity sits within a significant cloud-modernisation programme, leveraging Agile and DevOps practices to continuously deliver business value.

Key Accountabilities
Deliver engineering tasks across the Confluent Streaming Platform, including the design, development, testing, and deployment of event-driven services and data pipelines.
Develop Kafka topics, schemas, and streaming applications using Kafka, Kafka Connect, Schema Registry, and Flink.
Collaborate with architects and platform teams to shape the event streaming roadmap.
Provide subject-matter expertise in distributed streaming and event-driven architecture.
Review streaming applications produced by other engineers, ensuring quality and best practices.
Troubleshoot production streaming issues and conduct root-cause analysis.
Promote platform standards, governance models, and reusable patterns.
Collaborate with vendor teams and Confluent professional services.
Actively participate in Agile ceremonies and technical discussions.

Customer Conduct Framework
Understand how FCA Conduct Rules apply to this role and consistently demonstrate behaviours that support positive customer outcomes and safe data handling.

Functional/Technical Skills
6+ years of software engineering experience, including 3+ years hands-on with Confluent/Apache Kafka.
Experience designing and building distributed streaming applications using Confluent Platform components.
Strong understanding of event-driven architecture concepts (event streaming, event sourcing, Pub/Sub, stream processing).
Experience with Avro/JSON/Protobuf, schema evolution, and Schema Registry.
Integration experience with back-end systems such as SQL/NoSQL databases, APIs, and cloud data platforms.
Familiarity with API design, data modelling, and microservice integration patterns.
Proficient with Git, Jira, Azure DevOps, Docker, Kubernetes.
Working knowledge of AWS and/or Azure.
Strong understanding of clean code, reusable component design, and Agile/DevOps practices.

Decision-Making Authority
Makes decisions on design and implementation of streaming components within agreed architecture and standards.
Provides expert guidance impacting platform reliability, performance, and integration quality.
Escalates risks, issues, and architectural concerns appropriately.

If you are interested in this position and would like to learn more, please send through your CV and we will get in touch with you as soon as possible. Please note, candidates are often shortlisted within 48 hours.

Java Developer eTrading
Hays Specialist Recruitment Limited
London
Hybrid
Mid - Senior
Private salary

Join a leading independent technology and services provider as a Java Developer - e-Trading!

Job Overview

Join a fast-paced and highly collaborative eFX Quantitative Developer team within Global Markets. You'll help build and enhance a best-in-class eFX trading platform, working closely with quants and traders to drive automation, performance, and revenue. This role sits within the Principal Flow Trading stream and offers hands-on ownership of the full development lifecycle.

Location: London (Hybrid)
Daily Rate: Flexible
Contract Length: Until Dec 2026
Start Date: ASAP

Key Responsibilities
  • Develop and enhance a low-latency, high-throughput eFX algorithmic trading platform
  • Own initiatives end-to-end: analysis, design, implementation, delivery
  • Design, implement, and back-test pricing and execution strategies
  • Build analytics to monitor model and platform performance
  • Enhance the proprietary eTrading framework used across Global Markets
  • Collaborate closely with quants, traders, and technology teams in an agile environment

Key Requirements
  • Strong business knowledge of electronic trading, ideally eFX
  • Proven experience building low-latency, event-driven algorithmic trading platforms
  • Advanced Java expertise, including lock-free and low-garbage techniques
  • Experience working with quant teams and implementing algorithmic models
  • Familiarity with messaging protocols such as Aeron, Kafka, FIX, SBE, ITCH, OUCH
  • Knowledge of time-series databases (preferably KDB) and Python for analytics
  • Full-stack experience (React) is beneficial for building trader tools and dashboards

Additional Information

Interview Process: Typically 2 stages - technical interview, system design, and team discussion

How to Apply

If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us today. If this job isn't quite right for you, but you are looking for a new position, please contact us for a confidential discussion about your career.
Hays Talent Solutions is a trading division of Hays Specialist Recruitment Limited and acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this job you accept the T&C's, Privacy Policy and Disclaimers which can be found at hays.co.uk

Frequently asked questions
What types of Apache Kafka jobs are available on Haystack?
Haystack features a wide range of Apache Kafka jobs including developer, architect, administrator, and DevOps roles, suited for various experience levels from junior to senior.

Can I filter Apache Kafka jobs by experience level?
You can filter Apache Kafka job listings by experience level using the search filters on our platform. Simply select your preferred experience range, such as entry-level, mid-level, or senior positions, to tailor your job search.

Are there remote Apache Kafka jobs on Haystack?
Yes, Haystack includes remote Apache Kafka job opportunities. Use the 'Remote' filter option to find jobs that allow you to work from anywhere.

What skills are commonly required for Apache Kafka jobs?
Commonly required skills for Apache Kafka jobs include strong knowledge of Kafka architecture, experience with stream processing, proficiency in programming languages like Java or Scala, and familiarity with other big data technologies.

How do I apply for Apache Kafka jobs on Haystack?
To apply for Apache Kafka jobs on Haystack, create a profile, upload your resume, and click the 'Apply' button on the job listing. Some employers may redirect you to their application portals.