
Apache Kafka Jobs in London

Overview

Looking for Apache Kafka jobs in London? Explore top IT opportunities on Haystack, the leading job board connecting Apache Kafka professionals with top employers in the London tech scene. Find your next career move today!
Filters applied: London, Apache Kafka
Data Engineer (National Security)
Sanderson Government and Defence
London
Hybrid
Mid - Senior
£65,000
RECENTLY POSTED
mongodb
kafka
python
nosql
java
apache-nifi
+1
The Role
As a Data Engineer, you’ll be responsible for designing, building, and maintaining robust data pipelines and architectures. You will work closely with stakeholders to understand complex data challenges, transform raw data into meaningful insights, and support analytics and reporting. This includes working with batch, streaming, real-time, and unstructured data, applying distributed compute techniques to handle large datasets efficiently.
Key Responsibilities
Develop and maintain data ingestion pipelines and orchestration workflows
Design database schemas and data models
Integrate and enrich data from multiple sources, ensuring consistency and quality
Design and implement ETL/ELT processes (e.g., using Apache NiFi)
Produce reusable, maintainable code with a test-driven approach
Maintain and enhance existing data platforms and services
Investigate and resolve operational issues in integrated datasets
Implement data security measures to protect sensitive information
Support Agile delivery, breaking down user requirements into actionable tasks
Monitor and optimise system performance for reliability and efficiency
Required Skills
Apache Kafka
Apache NiFi
SQL and NoSQL databases (e.g., MongoDB)
ETL/ELT development with Groovy, Python, or Java
About the Employer
With over 60 years of experience supporting government and defence programmes, this employer delivers deep technical expertise in sensors, communications, cyber, and advanced analytics. The organisation applies innovation, technology, and data to help clients make informed decisions and protect critical systems and infrastructure.
Clearances
Due to the nature of this role, we require you to be eligible to achieve the highest level of security clearance.
Benefits & Culture
Work at the cutting edge of technology in defence and national security
Opportunity to spend time on innovative R&D projects and concept creation
Collaborative, geeky, and creative environment that celebrates technical brilliance
Competitive bonus scheme up to £3,000 / 6% of salary
Generous holiday: 30 days + bank holidays, 3.5 days over Christmas, option to buy/sell extra leave
Supportive and engaging culture, focused on growth and innovation
Hybrid working: 3 days in the office, 2 days from home, flexible to work fully on-site if needed
Reasonable Adjustments:
Respect and equality are core values to us. We are proud of the diverse and inclusive community we have built, and we welcome applications from people of all backgrounds and perspectives. Our success is driven by our people, united by the spirit of partnership to deliver the best resourcing solutions for our clients. If you need any help or adjustments during the recruitment process for any reason, please let us know when you apply or talk to the recruiters directly so we can support you.
Lead Data Scientist
Mastercard
London
Hybrid
Senior
Private salary
RECENTLY POSTED
tensorflow
kafka
python
pandas
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.
Title and Summary
Lead Data Scientist – Financial Crime
Who is Mastercard?
Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all.
Overview
In the Financial Crime Solutions team at Mastercard, we build and deliver products and services powered by payments data to find and stop financial crime. We’re an award-winning team with a proven track record of combining data science techniques with an intimate knowledge of payments data to aid financial institutions in their fight against money laundering and fraud. Headquartered in the City of London, and operating globally, we craft bespoke algorithms that help our clients gain an understanding of the underlying criminal behaviour that drives financial crime, empowering them to take action.
Role
As a Data Scientist, you will join one of the first teams in the world looking at payments data in the UK and across the world. In the research discipline you will help build systems that expose money laundering and detect fraud, as well as work with the other data scientists and clients to understand the underlying behaviours employed by criminals. You will be product focused, working in close collaboration with our engineering and operations data scientists as well as the wider sales, consulting, and product teams.
In this position, you will:
Perform proof-of-concept projects, engage in product design and build prototypes.
Use the full range of data science-based techniques to develop new and novel algorithms to aid existing and new financial crime products.
Perform novel research to help us and our clients understand the different criminal behaviours in payments data.
Think about how derived insights can be turned into new products and services we can offer to external clients.
Be ready to learn new technologies as required and engage with legacy and future technology stacks, in the UK and internationally.
Write white papers, patents, and client-facing data visualisations.
Consider the full impact of your work. This means considering privacy, security, and regulation, as well as the performance of your code and the accuracy of your models.
Skills Required
Your passion is focused on the design of algorithms to solve real, pressing problems using data. You will have an interest in the financial services industry and want to tackle financial crime in the wider economy. You are excited by building products for clients and are keen to engage in the design processes this involves. Specifically:
You can write Python to a high standard and are familiar with the standard data science libraries such as pandas, scikit-learn and networkx.
You are capable of developing new algorithms in novel situations and can demonstrate previous work to evidence this.
You are keen to understand the data we work with and have a keen interest in how to model the behaviours it exposes.
You are able to communicate with non-technical colleagues about technical matters, and you are comfortable putting yourself in other people’s shoes.
You are happy and excited to explore new programming languages, technologies, and techniques.
You have a can-do attitude, can be pragmatic where necessary, and are excited to work as part of a specialist team. You can engage in constructive criticism and aren’t afraid to have your code reviewed.
As we are often breaking new ground, both for Mastercard and more widely in our sector, we strongly encourage exploring new technologies and techniques. Some of the following experience is therefore desirable:
Practical experience using streaming technologies, including streaming platforms (e.g. Kafka), online algorithms (e.g. stochastic gradient descent), and fixed-memory data structures (e.g. Bloom Filters).
Experience using next-generation machine learning techniques and tools, including Deep Neural Networks and TensorFlow.
Exposure to Network Theory, especially social network analysis and graph diffusion analysis.
Ability to build custom data visualisations, prototype browser-based UX/UI, and the server-side microservices to support them.
Corporate Security Responsibility
Every person working for, or on behalf of, Mastercard is responsible for information security. All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and therefore, it is expected that the successful candidate for this position must:
Abide by Mastercard’s security policies and practices;
Ensure the confidentiality and integrity of the information being accessed;
Report any suspected information security violation or breach; and
Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
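For candidates curious about the “fixed-memory data structures (e.g. Bloom Filters)” this listing mentions, here is a minimal illustrative sketch in Java. It is not Mastercard code; the bit-array size, the hash count, and the double-hashing trick are all assumptions chosen for brevity.

```java
import java.util.BitSet;

/** Minimal Bloom filter: fixed-memory set membership with false positives but no false negatives. */
public class BloomFilter {
    private final BitSet bits;
    private final int size;    // number of bits
    private final int hashes;  // number of hash functions

    public BloomFilter(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    // Derive k indices from two base hashes (the Kirsch-Mitzenmacher double-hashing technique).
    private int index(String item, int i) {
        int h1 = item.hashCode();
        int h2 = h1 >>> 16 | h1 << 16; // cheap second hash; fine for a sketch
        return Math.floorMod(h1 + i * h2, size);
    }

    public void add(String item) {
        for (int i = 0; i < hashes; i++) bits.set(index(item, i));
    }

    /** True means "probably seen"; false means "definitely not seen". */
    public boolean mightContain(String item) {
        for (int i = 0; i < hashes; i++) {
            if (!bits.get(index(item, i))) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        BloomFilter seen = new BloomFilter(1 << 20, 5); // ~1M bits, 5 hashes (illustrative sizing)
        seen.add("txn-0001");
        System.out.println(seen.mightContain("txn-0001")); // true
        System.out.println(seen.mightContain("txn-9999")); // almost certainly false
    }
}
```

The appeal in a streaming context is that memory stays fixed no matter how many transaction IDs flow past, at the cost of a tunable false-positive rate.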
Sr. Data Engineer
Cognizant
Multiple locations
Remote or hybrid
Senior
Private salary
RECENTLY POSTED
aws
terraform
github
grafana
kafka
python
+4
We are hiring a senior Data Engineer to lead the development of intelligent, scalable data platforms for Industry 4.0 initiatives. This role will drive integration across OT/IT systems, enable real-time analytics, and ensure robust data governance and quality frameworks. The engineer will collaborate with cross-functional teams to support AI/ML, GenAI, and IIoT use cases in manufacturing and industrial environments.
Key Responsibilities
Architect and implement cloud-native data pipelines on AWS or Azure for ingesting, transforming, and storing industrial data.
Integrate data coming from OT systems (SCADA, PLC, MES, Historian) and IT systems (ERP, CRM, LIMS) using protocols like OPC UA, MQTT, and REST.
Design and manage data lakes, warehouses, and streaming platforms for predictive analytics, digital twins, and operational intelligence.
Define and maintain asset hierarchies, semantic models, and metadata frameworks for contextualized industrial data.
Implement CI/CD pipelines for data workflows and ensure lineage, observability, and compliance across environments.
Collaborate with AI/ML teams to support model training, deployment, and monitoring using MLOps frameworks.
Establish and enforce data governance policies, stewardship models, and metadata management practices.
Monitor and improve data quality using rule-based profiling, anomaly detection, and GenAI-powered automation.
Support GenAI initiatives through data readiness, synthetic data generation, and prompt engineering.
Mandatory Skills
Cloud Platforms: Deep experience with AWS (S3, Lambda, Glue, Redshift) and/or Azure (Data Lake, Synapse).
Programming & Scripting: Proficiency in Python, SQL, PySpark, etc.
ETL/ELT & Streaming: Expertise in technologies like Apache Airflow, Glue, Kafka, Informatica, and EventBridge.
Industrial Data Integration: Familiarity with OT data schemas originating from OSIsoft PI, SCADA, MES, and Historian systems.
Information Modeling: Experience in defining semantic layers, asset hierarchies, and contextual models.
Data Governance: Hands-on experience.
Data Quality: Ability to implement profiling, cleansing, standardization, and anomaly detection frameworks.
Security & Compliance: Knowledge of data privacy, access control, and secure data exchange protocols.
MLOps: Defining and creating MLOps pipelines.
Good to Have Skills
GenAI Exposure: Experience with LLMs, LangChain, HuggingFace, synthetic data generation, and prompt engineering.
Digital Twin Integration: Familiarity with NVIDIA Omniverse, AWS TwinMaker, Azure Digital Twins or similar platforms and concepts.
Visualization Tools: Power BI, Grafana, or custom dashboards for operational insights.
DevOps & Automation: CI/CD tools (Jenkins, GitHub Actions), infrastructure-as-code (Terraform, CloudFormation).
Industry Standards: ISA-95, Unified Namespace (UNS), FAIR data principles, and DataOps methodologies.
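As a taste of the streaming ingestion this advert describes, below is a minimal sketch of publishing one industrial telemetry reading to Kafka with the standard Java producer client. The broker address, topic name, and JSON payload are placeholder assumptions, not details from the role.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SensorProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all"); // wait for full replication; favours durability over latency

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Key by machine id so readings from one machine stay ordered on one partition.
            String payload = "{\"machine\":\"press-07\",\"temp_c\":81.4}"; // invented example reading
            producer.send(new ProducerRecord<>("plant.telemetry", "press-07", payload));
        }
    }
}
```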
Lead Software Developer
Ncounter LTD
Bexley
Remote or hybrid
Senior
£100,000
RECENTLY POSTED
processing-js
c++
kafka
python
postgresql
Software Developer - Post-Trade Automation
Ncounter are supporting a globally recognised systematic investment firm in hiring a Python-focused Software Developer to join their Post-Trade engineering group. This team builds and operates the pipelines that sit at the heart of post-trade processing, powering millions of transactions and ensuring accuracy, timeliness and compliance across global markets.
You will design and deliver high-performance components that automate the full lifecycle of post-trade workflows, from ingesting transaction feeds through to booking, enrichment, reconciliation and analytics. This is a hands-on engineering role where you will own production systems, improve reliability, optimise data flows and help shape the architecture behind one of the most advanced trading environments in the industry.
The environment is deeply technical, collaborative and quality driven, with strong expectations around software engineering standards, data integrity, and scalable distributed design.
Core Requirements
Strong Computer Science background with 5 years of Python engineering in trading or post-trade domains
Proven experience building data-intensive services and working with PostgreSQL and data frame tooling
Background in trade booking and FIX protocol integration
Ability to design and implement scalable, high-availability and distributed architectures
Experience building reporting and reconciliation tooling and working with large transactional datasets
Understanding of OTC products including CDS, Interest Rate Swaps and Variance Swaps
Highly Desirable
Experience with C++, Spark, Kafka or other distributed compute tools
Exposure to position keeping, risk, or PnL systems
Strong debugging, profiling and optimisation capability across data pipelines
If you want to work at the intersection of software engineering, distributed systems and financial markets and enjoy taking ownership of critical production platforms, Ncounter would like to speak with you. Please get in touch for a confidential discussion.
Java eTrading Strategist - Rates Front Office London
McGregor Boyall
London
Hybrid
Mid - Senior
Private salary
RECENTLY POSTED
java
kafka
A leading global investment bank is seeking a Java Developer/eTrading Strategist to join its London Rates eTrading team. This Front Office role sits at the intersection of quantitative research, trading, and technology, focused on delivering high-performance Java systems for pricing and electronic execution across the bank’s global Rates business.
The Role
You will design, build, and optimise low-latency Java components supporting real-time pricing, algorithmic execution, and market connectivity. Working closely with quants and traders, you’ll transform quantitative models and execution logic into robust, production-grade trading applications. The role requires deep technical expertise and an interest in market microstructure and electronic execution dynamics.
Key Responsibilities
Engineer low-latency, multithreaded Java applications powering Rates pricing and execution.
Partner with quants to integrate and enhance pricing models and execution algorithms.
Develop and tune smart order routers, auto-quoting, and market-making components.
Profile and optimise Java systems for throughput, GC efficiency, and predictable latency.
Implement real-time monitoring, logging, and performance diagnostics.
Collaborate across technology and trading teams to continuously refine execution performance and market response.
Candidate Profile
Degree in Computer Science, Engineering, Mathematics, or a related quantitative field.
10+ years’ experience in Java development for low-latency or electronic trading systems.
Deep expertise in Java concurrency, GC tuning, memory management, and NIO.
Strong grasp of market microstructure, FIX protocols, and order handling.
Proven record of delivering production-grade Java trading platforms in Front Office environments.
Experience in Rates eTrading (cash or derivatives) is strongly preferred, although candidates from other asset classes such as Equities and FX will be considered.
Preferred Technical Skills
Core Java 11+, Multithreading, lock-free programming.
Low-latency messaging (Aeron, Chronicle Queue, Kafka).
Market-making and execution algo frameworks.
Familiarity with distributed architectures and cloud-native Java.
What’s on Offer
Direct Front Office impact on the Rates eTrading desk.
Close collaboration with quants and traders on model integration and execution logic.
Work on cutting-edge low-latency and algo engineering challenges.
Hybrid setup - around 2 days per month in the London office.
Competitive compensation and clear progression opportunities.
If you are passionate about Java, pricing, execution, and low-latency trading, we’d love to hear from you.
McGregor Boyall is an equal opportunity employer and does not discriminate on any grounds.
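For illustration only (not the bank’s actual stack): on the Kafka option named in the low-latency messaging bullet above, a pricing path might start from a poll loop like the sketch below. The topic and group names are invented, and a real system would tune far more than this.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class QuoteFeedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "pricing-engine");          // hypothetical consumer group
        props.put("enable.auto.commit", "false");         // commit explicitly, after handoff
        props.put("fetch.min.bytes", "1");                // deliver as soon as data exists; favours latency
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("rates.quotes")); // hypothetical topic
            while (true) {
                // Short poll keeps the loop responsive; a tuned system would pin threads and avoid allocation here.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1));
                for (ConsumerRecord<String, String> r : records) {
                    // r.key()/r.value() would be handed to the pricing path here.
                }
                consumer.commitSync();
            }
        }
    }
}
```

This is a starting point, not a tuned production loop; per-poll synchronous commits, for instance, trade latency for simpler delivery semantics.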
Lead Data Engineer
JLA Resourcing Ltd
Multiple locations
Hybrid
Senior
£75,000
RECENTLY POSTED
fabric
unity-3d
kafka
powerbi
azure-databricks
Lead Data Engineer - £70-77k + bonus + benefits - Basingstoke, 3 days a week
The Opportunity:
We are looking for a Lead Data Engineer to join a Basingstoke-based organisation who are investing heavily in their Digital Transformation Programme.
The Role:
You’ll play a proactive role in the delivery of next-generation data platforms, will manage and mentor the existing team member, and will drive the design, development and governance of the data pipelines. You’ll be working really closely with stakeholders across the technology function and within the business, and will ensure the availability, integrity and compliance of the systems. You’ll play a key role in the ownership of the core architecture / engineering across the new Azure Databricks ecosystem. You’ll ensure that the data platform architecture supports availability and growth targets and that the platforms leverage advances in AI and Machine Learning capability. They are currently working with a third-party data partner who have recommended a number of improvements - you’ll work closely with them selecting, implementing and managing technology, so it’s a great opportunity to really make a difference.
The Person:
Key to this is proactivity - they’re really looking for someone who is always asking “what’s next”: are there new tools or functionality that will help the business move forward? Other key background / attributes include:
In-depth experience of modern data solution architecture design and delivery in a hybrid cloud environment, predominantly Azure / Databricks
Experience of pipeline orchestration management
Strong experience with Databricks
Experience of implementing machine learning and AI
Tooling such as Purview and Unity Catalog as well as the use of observability tools such as Monte Carlo, Fabric Monitoring and Log Analytics
Mentoring / Leading / Management experience initially with a small team but with a view that this will grow
Ideally, skills in Delta Live Tables, Kafka, Azure Stream Analytics, Azure ML and Power BI, plus financial modelling experience.
Senior Backend Engineer - Systems Design - Golang
Ventula Consulting Limited
Multiple locations
Hybrid
Senior
£100,000
RECENTLY POSTED
golang
aws
kubernetes
c++
restful
kafka
+2
Senior Backend Engineer
We are seeking a deeply technical and security-minded Senior Backend Engineer to join a newly-founded, high-impact AI joint venture. Backed by five of the world’s leading telecommunications giants, our mission is to restore trust in global voice communication.
This is not a standard backend role. We are seeking a foundational engineer to own our single greatest strategic asset: our unique, privileged access to network-level intelligence via the GSMA CAMARA API standard. This is our right to win, and you will be the engineer responsible for building the bridge to it.
You will be the Critical Path Owner for Track Zero, the 30-day foundational sprint to validate and integrate the first-ever CAMARA-based signals (like sim-swap and device-roaming-status) from our telco founders. Your success is the Go/No-Go gate for our MWC 2026 launch. You will be directly responsible for building the out-of-band data path that enables our flagship Telco-Verified Security Shield and its sub-500ms Time-to-Trust metric, our core differentiator that no over-the-top competitor can replicate.
This position offers a unique opportunity to define a new category of network-aware security, working directly with the world’s leading carriers to turn their network data into a real-time defense against global fraud.
Key Responsibilities
Telco Integration & Architecture
Own and build the Security Signal Ingestion path, the secure, low-latency, and out-of-band data channel connecting to our founding members’ network API gateways.
Architect and implement a carrier-agnostic, vendor-agnostic connector layer to consume RESTful APIs from a heterogeneous global landscape of telco partners and IMS vendors (e.g., Nokia, Ericsson, Mavenir).
Serve as the primary technical liaison to the engineering teams at our telco founders (Deutsche Telekom, Singtel, SKT, etc.), working hand-in-glove to navigate, validate, and productionize their new CAMARA network APIs.
Design and build the high-throughput microservices that will query, ingest, normalize, and cache network signals (e.g., sim-swap, device-roaming-status) to be used in our real-time Scam_Score model.
Implement a mandatory Zero Trust security model for this critical integration, our most sensitive asset. This includes mTLS, least-privilege IAM, and network micro-segmentation.
System Ownership & Performance
Serve as the Critical Path Owner for Track Zero, our 30-day sprint to validate and integrate real-time signals from at least two telco partners, culminating in a Go/No-Go demo.
Ensure all network API integrations meet the stringent P99 latency budgets (e.g., < 150ms) required to support our sub-500ms Time-to-Trust product goal.
Collaborate with the platform team to build a parallel development path using mocked data to mitigate risks of network API delays.
Define and own the data contracts and pipelines that feed this “ground-truth” network data from the integration layer to our core AI Service Bus (Apache Kafka).
Cross-functional Collaboration
Work closely with the Scam Detection Service and AI/ML teams to define the feature vectors and data payloads needed from the network to power our proprietary machine learning models.
Partner with product and leadership to define the Phase 2 (post-MWC) roadmap for co-developing new, proprietary network APIs (like Caller_Velocity from CDRs) that will become our long-term, indefensible moat.
Document integration architectures, data schemas, and security controls to create setup guides for our Founding Member partners.
Collaborate with our external InfoSec vendor to ensure the integration layer is continuously validated and hardened against threats.
Required QualificationsEducation & Experience
Bachelor’s degree in Computer Science, Engineering, or a related field.
7+ years of hands-on experience in backend engineering, with a proven track record of building and maintaining high-performance, distributed systems in production.
Required Technical Skills
A minimum of 5 years of production experience with C++ or Go (Golang).
Strong, demonstrable experience with real-time, low-latency data processing. You obsess over milliseconds and understand the trade-offs.
Proficiency with cloud platforms (AWS, GCP, or Azure) and containerization technologies (Kubernetes, Docker).
Deep understanding of API design (REST, gRPC, Webhooks) and API security (OAuth 2.0, mTLS, JWTs).
Knowledge of (or deep, demonstrable curiosity about) telecommunications protocols and architectures. You must be comfortable talking to network engineers.
Experience with high-throughput messaging or streaming platforms (e.g., Kafka, Pulsar).
This is a permanent position with hybrid working of two days a week in the central London office and the rest WFH. The salary is very much dependent on experience, with a guide between £130k-£160K basic + package.
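To make the listing’s P99 latency budget concrete, here is a hedged Java sketch of querying a network-signal endpoint with a hard per-request timeout and a graceful fallback. The URL is hypothetical, and no real CAMARA request/response shape is shown.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;
import java.time.Duration;

public class SimSwapCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofMillis(50)) // connection-setup slice of the budget
                .build();

        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://api.example-telco.test/sim-swap/v1/check")) // hypothetical endpoint
                .timeout(Duration.ofMillis(150)) // hard per-request budget, echoing the listing's P99 target
                .header("Accept", "application/json")
                .GET()
                .build();

        try {
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        } catch (HttpTimeoutException e) {
            // Budget blown: degrade to a neutral trust signal rather than stalling the call flow.
            System.out.println("sim-swap lookup timed out; degrading gracefully");
        }
    }
}
```

The design point this illustrates: when an upstream signal cannot arrive inside its slice of the overall 500ms budget, the caller-facing product should fall back rather than wait.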
Databricks Engineer (SC Cleared)
Syntax Consultancy Limited
London
Hybrid
Mid - Senior
£550/day - £600/day
RECENTLY POSTED
mysql
git
kubernetes
kafka
jenkins
docker
+5
London (Hybrid)
6 Month Contract
£550-600/day (Inside IR35)
Databricks Engineer needed with active SC Security Clearance for a 6 Month Contract based in Central London (Hybrid). You will be developing a cutting-edge Azure Databricks platform for economic data modelling, analysis, and forecasting. Start ASAP in Dec 2025 / Jan 2026.
Hybrid Working - 2 days/week remote (WFH), and 3 days/week working on-site from the Central London office.
A chance to work with a leading global IT and Digital transformation business specialising in Government projects:
In-depth Data Engineering + strong hands-on Azure Databricks expertise.
Azure Data Services, Azure Data Factory, Azure Blob Storage + Azure SQL Database.
Designing, developing, building + optimising data pipelines, implementing data transformations, ensuring data quality and reliability.
Deep Data Warehousing knowledge including data modelling techniques + data integration patterns.
Experience of working with complex data pipelines, large data sets, data pipeline optimization + data architecture design.
Implementing complex data transformations using Spark, PySpark or Scala + working with SQL / MySQL databases.
Experience with data quality, data governance processes, Git version control + Agile development environments.
Azure Data Engineer certification preferred, e.g. Azure Data Engineer Associate.
Advantageous skills: Azure Event Hubs, Kafka, data visualisation tools, Power BI, Tableau, Azure DevOps, Docker, Kubernetes, Jenkins.
C++ Software Developer
Ncounter Limited
London
In office
Mid - Senior
£160,000 - £170,000
RECENTLY POSTED
c++
linux
kafka
python
bash
C++ Software Developer, Risk Technology
£160,000 to £170,000
Join a high-performance engineering group responsible for the systems that sit at the heart of a global trading operation. This team owns the core risk platform, handling everything from trade intake and real-time position tracking to PnL calculation, inventory control and internal routing logic. The platform processes heavy market data flows and fast-changing state across both live and historical workloads, so efficient memory management, intelligent data structures and tight control of latency are essential.
Significant investment is now reshaping this architecture into a modern, service-oriented environment. We are looking for engineers who enjoy solving problems at scale and want to design robust components that can ingest, compute and distribute data across a distributed Linux estate. You will be involved in building services that must respond predictably under load, propagate risk metrics across internal systems and support a front office that depends on accuracy and speed.
You will work in an engineering culture that values clarity, simplicity and strong design. Expect to collaborate with colleagues across regions, refine performance at a system level and contribute to a platform that is constantly evolving as trading strategies and data volumes grow.
What you bring
Strong C++ development experience in Linux environments, ideally 4 to 6 years.
A deep grounding in algorithms, multithreading and performance optimisation.
Experience contributing to large scale or distributed systems.
Familiarity with modern messaging technologies such as Kafka, AMPS or QPID.
Awareness of service-oriented patterns and how to build clean interfaces between compute components.
Exposure to Python or bash for tooling and automation. Any experience with Q or KDB is valuable.
A collaborative approach and the curiosity to explore new engineering techniques.
This is an opportunity to build high-impact software in a research-driven trading environment. If you want to work on complex, data-intensive systems that reward strong engineering, get in touch to explore the role with Ncounter.
Director - Lead Software Engineer (Java, Equities)
Huxley Associates
London
In office
Leader
£120,000 - £180,000
RECENTLY POSTED
java
processing-js
kubernetes
kafka
python
docker
Location: London
Division: Investment Banking
Type: Full-time
About the Role
We are seeking a Director-level Lead Software Engineer to join our Equities Technology team within the Front Office. This is a hands-on, independent contributor role where you will design and build high-performance systems that support our equities trading business. You will work closely with traders, quants, and other technologists to deliver innovative solutions in a fast-paced environment.
Key Responsibilities
Lead the design and development of Java-based trading and risk platforms for equities.
Deliver low-latency, high-throughput systems for order management and execution.
Collaborate with front-office stakeholders to understand business requirements and translate them into technical solutions.
Ensure best practices in software architecture, performance optimization, and scalability.
Mentor junior engineers and contribute to technical strategy while remaining hands-on in coding.
Requirements
Expert-level Java development skills with experience in multi-threading, concurrency, and performance tuning.
Strong understanding of equities trading workflows, market data, and order execution.
Proven experience building front-office systems in an investment banking environment.
Solid knowledge of distributed systems, messaging (e.g., Kafka), and real-time processing.
Degree in Computer Science, Engineering, or related field.
Nice to Have
Exposure to low-latency trading systems and algorithmic execution.
Familiarity with Python for scripting and data analysis.
Knowledge of cloud technologies and containerisation (Kubernetes, Docker).
What We Offer
Competitive Director-level compensation package.
Opportunity to work on mission-critical systems in a global investment bank.
Collaborative, high-performance culture with direct impact on trading outcomes.
To find out more about Huxley, please visit (url removed). Huxley, a trading division of SThree Partnership LLP, is acting as an Employment Business in relation to this vacancy. Registered office: 8 Bishopsgate, London, EC2N 4BQ, United Kingdom. Partnership Number OC(phone number removed), England and Wales.
Software Engineer Full Stack .Net AWS JavaScript
client server
St Albans
Hybrid
Mid
£100,000
RECENTLY POSTED
aws
javascript
dot-net
react
redis
kubernetes
+5
Software Engineer / Full Stack Developer (C# .Net Core AWS JavaScript)
St Albans / WFH, to £110k
Opportunity to progress your career in a mid-level, hands-on Software Engineer role at a technology-driven trading company that invests in sports betting markets, with a flat structure where you will get your voice heard and can make a real impact on the bottom line, earning significant bonuses.
What’s in it for you:
As a Software Engineer you will earn a competitive package:
Salary to £110k
Significant bonus
25 days holiday, rising to 30 after 2 years
Enhanced parental leave
Contributory pension scheme
Private Medical
MSDN subscription
Discounts for gym membership, travel and cinema
Sabbatical after 10 years of service
Flexible working with 2 days work from home per week
Your role:
As a Software Engineer you’ll join an Agile development team to design and develop new features and enhancements to complex payments and client systems within a microservices environment (300 services). You’ll be working with a modern tech stack using C# .Net Core, AWS, Kubernetes, Kafka, Redis and TypeScript / Angular; using the right tool for the job, you’ll be able to pick up new technologies and make recommendations for improvements.
WFH Policy:
You’ll join colleagues in St Albans, Hertfordshire (parking available), a 10-minute walk from the local station, with the flexibility to work from home twice a week in a hybrid model.
About you:
You have strong C# .Net Core backend development skills
You have JavaScript experience, combined with Angular or React
You have experience with AWS and microservices
You have a thorough understanding of Computer Science fundamentals such as OOP, Design Patterns, Data Structures and Algorithms
You enjoy collaborating, learning new things and sharing knowledge
You are degree educated in Computer Science or a closely related discipline
Apply now to find out more about this Software Engineer / Developer (C# .Net Core AWS JavaScript) opportunity.
Senior Backend Engineer - Platform Security
Ventula Consulting
London
Hybrid
Senior
£130,000 - £160,000
RECENTLY POSTED
processing-js
aws
kubernetes
c++
restful
kafka
+3
Senior Backend Engineer
We are seeking a deeply technical and security-minded Senior Backend Engineer to join a newly-founded, high-impact AI joint venture. Backed by five of the world’s leading telecommunications giants, our mission is to restore trust in global voice communication.
This is not a standard backend role. We are seeking a foundational engineer to own our single greatest strategic asset: our unique, privileged access to network-level intelligence via the GSMA CAMARA API standard. This is our right to win, and you will be the engineer responsible for building the bridge to it.
You will be the Critical Path Owner for Track Zero, the 30-day foundational sprint to validate and integrate the first-ever CAMARA-based signals (like sim-swap and device-roaming-status) from our telco founders. Your success is the Go/No-Go gate for our MWC 2026 launch. You will be directly responsible for building the out-of-band data path that enables our flagship Telco-Verified Security Shield and its sub-500ms Time-to-Trust metric, our core differentiator that no over-the-top competitor can replicate.
This position offers a unique opportunity to define a new category of network-aware security, working directly with the world’s leading carriers to turn their network data into a real-time defense against global fraud.
Key Responsibilities
Telco Integration & Architecture
Own and build the Security Signal Ingestion path, the secure, low-latency, and out-of-band data channel connecting to our founding members’ network API gateways.
Architect and implement a carrier-agnostic, vendor-agnostic connector layer to consume RESTful APIs from a heterogeneous global landscape of telco partners and IMS vendors (e.g., Nokia, Ericsson, Mavenir).
Serve as the primary technical liaison to the engineering teams at our telco founders (Deutsche Telekom, Singtel, SKT, etc.), working hand-in-glove to navigate, validate, and productionize their new CAMARA network APIs.
Design and build the high-throughput microservices that will query, ingest, normalize, and cache network signals (e.g., sim-swap, device-roaming-status) to be used in our real-time Scam_Score model.
Implement a mandatory Zero Trust security model for this critical integration, our most sensitive asset. This includes mTLS, least-privilege IAM, and network micro-segmentation.
System Ownership & Performance
Serve as the Critical Path Owner for Track Zero, our 30-day sprint to validate and integrate real-time signals from at least two telco partners, culminating in a Go/No-Go demo.
Ensure all network API integrations meet the stringent P99 latency budgets (e.g., < 150ms) required to support our sub-500ms Time-to-Trust product goal.
Collaborate with the platform team to build a parallel development path using mocked data to mitigate risks of network API delays.
Define and own the data contracts and pipelines that feed this “ground-truth” network data from the integration layer to our core AI Service Bus (Apache Kafka).
Cross-functional Collaboration
Work closely with the Scam Detection Service and AI/ML teams to define the feature vectors and data payloads needed from the network to power our proprietary machine learning models.
Partner with product and leadership to define the Phase 2 (post-MWC) roadmap for co-developing new, proprietary network APIs (like Caller_Velocity from CDRs) that will become our long-term, indefensible moat.
Document integration architectures, data schemas, and security controls to create setup guides for our Founding Member partners.
Collaborate with our external InfoSec vendor to ensure the integration layer is continuously validated and hardened against threats.
Required QualificationsEducation & Experience
Bachelor’s degree in Computer Science, Engineering, or a related field.
7+ years of hands-on experience in backend engineering, with a proven track record of building and maintaining high-performance, distributed systems in production.
Required Technical Skills
A minimum of 5 years of production experience with C++ or Go (Golang).
Strong, demonstrable experience with real-time, low-latency data processing. You obsess over milliseconds and understand the trade-offs.
Proficiency with cloud platforms (AWS, GCP, or Azure) and containerization technologies (Kubernetes, Docker).
Deep understanding of API design (REST, gRPC, Webhooks) and API security (OAuth 2.0, mTLS, JWTs).
Knowledge of (or deep, demonstrable curiosity about) telecommunications protocols and architectures. You must be comfortable talking to network engineers.
Experience with high-throughput messaging or streaming platforms (e.g., Kafka, Pulsar).
This is a permanent position with hybrid working of two days a week in the central London office and the rest WFH. The salary is very much dependent on experience, with a guide between £130k-£160K basic + package.
Senior Backend Engineer (Golang)
Ventula Consulting
London
Hybrid
Senior
£130,000 - £160,000
RECENTLY POSTED
golang
android
aws
kubernetes
c++
restful
+3
Overview
We are seeking a deeply technical and security-minded Senior Backend Engineer to join a newly-founded, high-impact AI joint venture. Backed by five of the world’s leading telecommunications giants, our mission is to restore trust in global voice communication by building a network-native intelligence layer to stop fraud and create a new category of secure, AI-powered user experiences. We are a Phase 2 company, meaning we are building the foundational engineering team that will define our culture and our success.
This role is the Core Logic Owner for our flagship AI experience: the Personal Welcome Manager. While our Security Shield product leverages unique, pre-call telco data to provide sub-500ms trust indicators, the Welcome Manager is our AI innovation showcase and primary monetization driver. This product features an AI agent that screens unknown callers as they ring, provides a live transcript, and allows for mid-call user intervention.
As the Senior Backend Engineer for Application & AI Orchestration, your central challenge will be solving the cumulative latency puzzle. You will be singularly responsible for orchestrating a best-in-class, “buy, don’t build” AI stack (STT → LLM → TTS) to deliver a seamless, natural, and responsive voice-to-voice (V2V) experience, targeting a sub-1000ms latency from the moment a caller stops speaking to the moment our AI begins its response. You won’t just be integrating APIs; you will be architecting the real-time, high-throughput nervous system of our core AI product.
Key Responsibilities
AI Orchestration & Core Logic
Own the end-to-end Back End architecture for the Personal Welcome Manager, our flagship conversational AI product.
Design, build, and optimize the core AI orchestration pipeline (STT → LLM → TTS) to meet our 1000ms V2V latency target.
Serve as the technical lead for our “Buy, Don’t Build” AI strategy, integrating, managing, and evaluating third-party AI services from providers like Vapi, Deepgram, and Cartesia.
Implement the core application logic for all Welcome Manager features, including intent-capture, post-call summarization with entity extraction, and error handling.
API & Application Development
Design, build, and scale the mission-critical, high-availability RESTful APIs that serve our mobile clients (Android-first).
Architect and manage the real-time streaming data flows (audio and text) required for the live transcription feature, utilizing technologies like WebSockets.
Implement the backend logic for user-facing features, such as mid-call intervention (allowing a user to accept a call during pre-screening).
System Architecture & Quality
Collaborate on the design of our cloud-native, microservices-based architecture to ensure all application services are scalable, resilient, and cost-efficient.
Work with the Platform & Security engineer to ensure all services adhere to our mandatory Zero Trust security architecture and protect sensitive user data.
Implement comprehensive monitoring, logging, and alerting for the entire AI pipeline, with a focus on our key latency (V2V), quality, and cost metrics.
Cross-functional Collaboration
Partner closely with the Senior Android Engineers to define robust API contracts and ensure a seamless, low-latency experience for the live transcript and mid-call intervention features.
Collaborate with the Senior Backend Engineer (Telco Integration) to consume and orchestrate the proprietary network signals that power our Security Shield.
Work with the Product Manager and UX/UI Designer to translate complex product requirements and the 1000ms UX flow into a concrete technical reality.
Collaborate with the DevOps Engineer on CI/CD pipelines, Kubernetes-based service deployment, and infrastructure scaling.
Required QualificationsEducation & Experience
Bachelor’s degree in Computer Science or a related technical field, or equivalent practical experience.
7+ years of hands-on experience in backend development, building and maintaining high-availability, high-throughput systems in production.
Technical & Backend Skills
Expert-level proficiency and at least 5 years of production experience with C++ or Go (Golang).
Strong, proven experience designing, building, and scaling high-availability RESTful APIs for mobile or web clients.
Deep experience with distributed systems, microservices architecture, and event-driven patterns.
Demonstrated experience with real-time applications and data streaming technologies (e.g., WebSockets, gRPC, Kafka).
Experience integrating third-party AI/ML APIs (e.g., LLMs, STT/TTS, or other complex services) into production systems.
Experience with containerization (Docker, Kubernetes) and deploying/managing services on a major cloud platform (AWS, GCP, or Azure).
This is a permanent position with hybrid working of two days a week in the central London office and the rest WFH. The salary is very much dependent on experience with a guide between £130k-£160K basic + package.
Seniority Level: Director
Industry: Telecommunications; Information Services; Technology, Information and Media
Employment Type: Full-time
Job Functions: Information Technology
Skills: REST
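To illustrate the kind of real-time text streaming the live-transcript feature implies, here is a minimal sketch using the JDK’s built-in WebSocket client. The endpoint URL is invented, and a production service would also handle reconnects, backpressure, and authentication.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class TranscriptTap {
    public static void main(String[] args) throws InterruptedException {
        WebSocket.Listener listener = new WebSocket.Listener() {
            @Override
            public CompletionStage<?> onText(WebSocket ws, CharSequence data, boolean last) {
                // In the product, each fragment would be forwarded to the mobile client immediately.
                System.out.println("transcript fragment: " + data);
                ws.request(1); // ask the client for the next message
                return null;
            }
        };

        HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(URI.create("wss://transcripts.example.test/calls/123"), listener) // hypothetical endpoint
                .join();

        Thread.sleep(60_000); // keep the demo alive briefly; a real service manages lifecycle properly
    }
}
```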
Senior Go Developer
Fruition Group
London
In office
Senior
Private salary
RECENTLY POSTED
aws
kubernetes
cassandra
kafka
docker
grpc
+1
London, UK
6 Month Contract
An incredible opportunity for an experienced Senior Go Developer with strong backend engineering skills to join a prestigious tech client on a contract basis. Known for its high bar for engineering quality, this isn’t your typical banking or fintech gig - contractors here are genuinely embedded, involved, and making meaningful impact from day one.
As a Senior Go Developer, you’ll play a key role in building mission-critical, data-driven services that power core parts of the platform. You’ll take ownership across the full development lifecycle, from system design and implementation to optimisation and release.
You’ll be working in an open, fast-moving environment where ideas get tested quickly and good engineering practices are taken seriously. If you like seeing your work go live within days - not weeks - you’ll fit right in. Contractors are treated as part of the team, contributing to technical decisions, stand-ups, and everything in between.
Reporting into an Engineering Manager, you’ll be hands-on with technologies like AWS, Kubernetes, Docker, and Kafka, and will collaborate closely with product and infrastructure teams to ship well-architected, scalable microservices.
This is a team that values clean systems, clear thinking, and code that lasts. You’ll have the freedom to shape how things are built and the trust to get on with it - whether that’s improving infrastructure, shipping new features, or quietly refactoring something that needs it.
Senior Go Developer - Key Requirements:
Significant professional experience in software development, with a strong focus on backend systems
Proficiency in Go / Golang and proven expertise in AWS, Kubernetes, and Docker
Experience with end-to-end software engineering, including system design and architecture
Hands-on experience working on complex, data-intensive applications
A product-focused mindset and familiarity with working in technology-driven organisations or start-ups
Experience with Kafka, Cassandra, gRPC, and microservices architecture will also be beneficial, as well as experience contributing to open-source projects
If you’re a Senior Go Developer looking for a fast-paced, collaborative contract role where your work will ship quickly and matter to real users - apply now. Our client is ready to move quickly for the right person.We are an equal opportunities employer and welcome applications from all suitably qualified persons regardless of their race, sex, disability, religion/belief, sexual orientation, or age.
Senior C++ Contract Engineer - SQL, Unix/Linux, Oracle, Kafka, Finance, C#, GUI
Scope AT Limited
London
Hybrid
Senior
£700/day - £750/day
RECENTLY POSTED
c++
kafka
csharp
sql
Our financial services client is looking for an experienced Senior C++ Software Engineer to help grow and enhance their platform.
As a Senior C++ Software Engineer, you will be involved in:
Analysis of user requirements and translation into solution design
Estimating and breaking down tasks into manageable chunks
Implementation of new features and feature enhancements
Leading code reviews and enforcing best practice within a small, agile, focused team
A background in financial services is essential.
Hybrid role inside IR35, Central London based.
By applying to this job you are sending us your CV, which may contain personal information. Please refer to our Privacy Notice to understand how we process this information. In short, in order to supply you with work-finding services, we will hold and process your personal data, and only with your express permission will we share this personal data with a client (or a third party working on behalf of the client) by email or by upload to the client/third party’s vendor management system. By giving us permission to send your CV to a client, this constitutes permission to share the personal data that would be necessary to consider your application, interview you (phone/video/face to face) and, if successful, hire you.
Scope AT acts as an employment agency for Permanent Recruitment and an employment business for the supply of temporary workers. By applying for this job you accept the Terms and Conditions, Data Protection Policy, Privacy Notice and Disclaimers which can be found at our website.
Senior Go Developer
Fruition Group
Multiple locations
Hybrid
Senior
Private salary
RECENTLY POSTED
aws
kubernetes
cassandra
kafka
docker
grpc
+1
6 Month Contract
London Hybrid or Fully Remote
An exciting opportunity for a highly skilled Senior Go Developer to join a leading technology business on a contract basis. This organisation is recognised for its engineering excellence and is seeking a Senior Go Developer to help scale distributed systems and deliver high-performance solutions.
In this role, the Senior Go Developer will design, develop, and implement data-intensive applications across the full engineering life cycle. You’ll architect and deliver microservices-based systems using Go (Golang), AWS, Kubernetes, Docker, and Kafka, working closely with cross-functional teams to build scalable, reliable, and resilient platforms.
You’ll also play a key role in optimising system performance, improving reliability, and ensuring scalability, while contributing to code reviews, design discussions, and knowledge sharing across the engineering function.
Senior Go Developer - Key Requirements:
Strong commercial experience in Back End development
Advanced skills in Go (Golang), with proven expertise in AWS, Kubernetes, and Docker
End-to-end software engineering experience, including system design and architecture
Background in complex, large-scale, data-driven applications
Product-focused approach, ideally within fast-paced tech organisations (start-ups or scale-ups)
Knowledge of Kafka, Cassandra, gRPC, and microservices is a strong advantage
Open-source contributions are beneficial
If you’re a Senior Go Developer looking for a challenging 6-month contract with a forward-thinking tech company, apply today.We are an equal opportunities employer and welcome applications from all suitably qualified persons regardless of their race, sex, disability, religion/belief, sexual orientation, or age.
Senior Snowflake Data Engineer - Remote - £competitive
Tenth Revolution Group
London
Fully remote
Senior
£75,000 - £85,000
RECENTLY POSTED
snowflake
processing-js
aws
git
kafka
python
+4
Senior Snowflake Data Engineer - Remote - £competitive
About the Role
We are looking for an experienced Senior Snowflake Data Engineer to join a dynamic team working on cutting-edge data solutions. This is an exciting opportunity to design, build, and optimise high-performance data pipelines using Snowflake, dbt, and modern engineering practices. If you are passionate about data engineering, test-driven development, and cloud technologies, we’d love to hear from you.
Key Responsibilities
Design, develop, and optimise scalable data pipelines in Snowflake.
Build and maintain dbt models with robust testing and documentation.
Apply test-driven development principles for data quality and schema validation.
Optimise pipelines to reduce processing time and compute costs.
Develop modular, reusable transformations using SQL and Python.
Implement CI/CD pipelines and manage deployments via Git.
Automate workflows using orchestration tools such as Airflow or dbt Cloud.
Configure and optimise Snowflake warehouses for performance and cost efficiency.
Required Skills & Experience
7+ years in data engineering roles.
3+ years hands-on experience with Snowflake.
2+ years production experience with dbt (mandatory).
Advanced SQL and strong Python programming skills.
Experience with Git, CI/CD, and DevOps practices.
Familiarity with ETL/ELT tools and cloud platforms (AWS, Azure).
Knowledge of Snowflake features such as Snowpipe, streams, tasks, and query optimisation.
Preferred Qualifications
Snowflake certifications (SnowPro Core or Advanced).
Experience with dbt Cloud and custom macros.
Exposure to real-time streaming (Kafka, Kinesis).
Familiarity with data observability tools and BI integrations (Tableau, Power BI).
What We Offer
Opportunity to work with modern data technologies and large-scale architectures.
Professional development and certification support.
Collaborative, engineering-focused culture.
Competitive salary and benefits package.
Interested? Apply now with your CV highlighting your Snowflake, dbt, and DevOps experience.
Cloudera Administrator - Inside IR35 - Stratford
Intuition IT Solutions Ltd
London
Hybrid
Mid - Senior
£425/day
RECENTLY POSTED
linux
prometheus
ansible
grafana
kafka
python
+2
Role: Cloudera Administrator
Duration: 3-5 months (freelance/contract)
Location: Stratford, London - 3 days per week on-site, 2 days remote
Project Context
The Cloudera Administrator will support and maintain the organisation’s Hadoop/Cloudera platform, ensuring high availability, performance, and compliance. The role is part of a data engineering & platform team working on platform stabilisation, optimisation, and upgrades.
Responsibilities
Administer, configure, and maintain Cloudera Data Platform (CDP)/Cloudera Manager clusters.
Monitor cluster health, performance, resource utilisation, and capacity planning.
Troubleshoot platform issues and ensure SLAs for uptime and reliability.
Manage services such as HDFS, YARN, Hive, Impala, Kafka, Spark, Ranger, Knox, ZooKeeper, etc.
Handle cluster upgrades, patching, backups, security hardening, and user provisioning.
Implement governance, auditing, and access control following best practices.
Automate operational tasks using scripting (Python, Bash) or Ansible.
Work closely with Data Engineering and Infrastructure teams on deployments and incident resolution.
Required Skills & Experience
Strong experience as a Cloudera Administrator in production environments.
Solid knowledge of Hadoop ecosystem services (HDFS, YARN, Hive, Impala, Spark).
Expertise with Cloudera Manager, cluster configuration, and life cycle management.
Strong Linux administration skills (RedHat/CentOS/Ubuntu).
Good understanding of networking, Kerberos, security policies, and RBAC.
Experience with monitoring tools and automation (Prometheus, Grafana, Ansible, Shell Scripts).
Ability to troubleshoot complex distributed system issues.
Excellent communication skills and comfort working onsite with technical teams.
Work Environment
Hybrid model: 3 days per week on-site in Stratford (London).
Collaborative engineering environment with data engineers, architects, and operations teams.
Potential extension depending on platform roadmap.
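Since Kafka is one of the services this role administers, here is a minimal Java sketch of a cluster health probe using Kafka’s AdminClient. The bootstrap address is a placeholder, and a real check would cover far more than broker count.

```java
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;

public class KafkaHealthProbe {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker

        try (AdminClient admin = AdminClient.create(props)) {
            DescribeClusterResult cluster = admin.describeCluster();
            // Bounded waits so a dead cluster fails the probe instead of hanging it.
            String clusterId = cluster.clusterId().get(10, TimeUnit.SECONDS);
            int brokerCount = cluster.nodes().get(10, TimeUnit.SECONDS).size();
            System.out.printf("cluster %s reachable with %d broker(s)%n", clusterId, brokerCount);
        }
    }
}
```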
Senior Data Engineer - Insurance - Remote
Michael Page
London
Fully remote
Senior
£80,000 - £120,000
kafka
python
java
sql
hadoop
scala
Senior Data Engineer
The Senior Data Engineer will play a crucial role in designing, implementing, and maintaining scalable data pipelines and infrastructure. This position is ideal for those with strong technical expertise and a passion for working in the Insurance / Financial services industry.
Client Details
The employer is a medium-sized organisation operating in the financial services sector. They focus on delivering innovative solutions and maintaining a strong reputation for excellence in analytics and data-driven decision-making.
Description
Develop and maintain robust and scalable data pipelines and ETL processes.
Optimise data workflows and ensure efficient data storage solutions.
Collaborate with analytics and engineering teams to meet business objectives.
Ensure data integrity and implement best practices for data governance.
Design and implement data models to support analytical and reporting needs.
Monitor and troubleshoot data systems to ensure reliability and performance.
Evaluate and implement new tools and technologies to improve data infrastructure.
Provide technical guidance and mentorship to junior team members.
Profile
A successful Senior Data Engineer should have:
Experience within the Insurance industry
Strong proficiency in programming languages such as Python, Java, or Scala.
Experience with cloud platforms like Azure.
Knowledge of big data technologies such as Hadoop, Spark, or Kafka.
Proficiency in SQL and database management systems.
Familiarity with data warehousing concepts and tools.
Ability to work collaboratively with cross-functional teams.
A solid understanding of data security and privacy standards.
A degree in Computer Science, Engineering, or a related field.
Job Offer
Competitive salary ranging from £80,000 to £120,000, depending on experience.
Equity options as part of the compensation package.
Comprehensive benefits package.
Opportunity to work remotely.
Be part of a collaborative and innovative team in the Insurance sector.
If you are passionate about data engineering and are excited to work in a challenging and rewarding role, we encourage you to apply today!
Senior Backend Engineer (Telco Integration Lead) Golang
Ventula Consulting Limited
Multiple locations
Hybrid
Senior
£100,000
golang
aws
kubernetes
restful
kafka
python
+3
Senior Backend Engineer (Telco Integration Lead)
We are seeking a deeply technical and security-minded Senior Backend Engineer to join a newly-founded, high-impact AI joint venture. Backed by five of the world’s leading telecommunications giants, our mission is to restore trust in global voice communication.
This is not a standard backend role. We are seeking a foundational engineer to own our single greatest strategic asset: our unique, privileged access to network-level intelligence via the GSMA CAMARA API standard. This is our right to win, and you will be the engineer responsible for building the bridge to it.
You will be the Critical Path Owner for Track Zero, the 30-day foundational sprint to validate and integrate the first-ever CAMARA-based signals (like sim-swap and device-roaming-status) from our telco founders. Your success is the Go/No-Go gate for our MWC 2026 launch. You will be directly responsible for building the out-of-band data path that enables our flagship Telco-Verified Security Shield and its sub-500ms Time-to-Trust metric, our core differentiator that no over-the-top competitor can replicate.
This position offers a unique opportunity to define a new category of network-aware security, working directly with the world’s leading carriers to turn their network data into a real-time defense against global fraud.
Key Responsibilities
Telco Integration & Architecture
Own and build the Security Signal Ingestion path, the secure, low-latency, and out-of-band data channel connecting to our founding members’ network API gateways.
Architect and implement a carrier-agnostic, vendor-agnostic connector layer to consume RESTful APIs from a heterogeneous global landscape of telco partners and IMS vendors (e.g., Nokia, Ericsson, Mavenir).
Serve as the primary technical liaison to the engineering teams at our telco founders (Deutsche Telekom, Singtel, SKT, etc.), working hand-in-glove to navigate, validate, and productionize their new CAMARA network APIs.
Design and build the high-throughput microservices that will query, ingest, normalize, and cache network signals (e.g., sim-swap, device-roaming-status) to be used in our real-time Scam_Score model.
Implement a mandatory Zero Trust security model for this critical integration, our most sensitive asset. This includes mTLS, least-privilege IAM, and network micro-segmentation.
System Ownership & Performance
Serve as the Critical Path Owner for Track Zero, our 30-day sprint to validate and integrate real-time signals from at least two telco partners, culminating in a Go/No-Go demo.
Ensure all network API integrations meet the stringent P99 latency budgets (e.g., < 150ms) required to support our sub-500ms Time-to-Trust product goal.
Collaborate with the platform team to build a parallel development path using mocked data to mitigate risks of network API delays.
Define and own the data contracts and pipelines that feed this “ground-truth” network data from the integration layer to our core AI Service Bus (Apache Kafka).
Cross-functional Collaboration
Work closely with the Scam Detection Service and AI/ML teams to define the feature vectors and data payloads needed from the network to power our proprietary machine learning models.
Partner with product and leadership to define the Phase 2 (post-MWC) roadmap for co-developing new, proprietary network APIs (like Caller_Velocity from CDRs) that will become our long-term, indefensible moat.
Document integration architectures, data schemas, and security controls to create setup guides for our Founding Member partners.
Collaborate with our external InfoSec vendor to ensure the integration layer is continuously validated and hardened against threats.
Required Qualifications
Education & Experience
Bachelor’s degree in Computer Science, Engineering, or a related field.
7+ years of hands-on experience in backend engineering, with a proven track record of building and maintaining high-performance, distributed systems in production.
Required Technical Skills
A minimum of 5 years of production experience with Go (Golang). Experience in other languages (e.g., Python, Node.js) is valued, but will not replace this core Go requirement.
Strong, demonstrable experience with real-time, low-latency data processing. You obsess over milliseconds and understand the trade-offs.
Proficiency with cloud platforms (AWS, GCP, or Azure) and containerization technologies (Kubernetes, Docker).
Deep understanding of API design (REST, gRPC, Webhooks) and API security (OAuth 2.0, mTLS, JWTs).
Knowledge of (or deep, demonstrable curiosity about) telecommunications protocols and architectures. You must be comfortable talking to network engineers.
Experience with high-throughput messaging or streaming platforms (e.g., Kafka, Pulsar).
This is a permanent position with hybrid working of two days a week in the central London office and the rest WFH. The salary is very much dependent on experience, with a guide between £110k-£140K basic + package.

Frequently asked questions

What types of Apache Kafka jobs are available in London?
In London, you can find a wide range of Apache Kafka job opportunities including Kafka Developer, Kafka Engineer, Data Engineer, DevOps Engineer, and Streaming Data Architect roles across various industries such as finance, technology, and media.
What are the typical skills and qualifications required for Apache Kafka jobs in London?
Employers generally look for experience with Apache Kafka clusters, proficiency in Java or Scala, knowledge of stream processing tools like Kafka Streams or Apache Flink, understanding of distributed systems, and familiarity with cloud platforms such as AWS or Azure.
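For a concrete taste of the Kafka Streams skill mentioned in this answer, here is a minimal Java topology that routes records from one topic to another through a filter. The topic names, application id, and GBP predicate are illustrative assumptions only.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class PaymentsFilter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "payments-filter"); // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> payments = builder.stream("payments"); // hypothetical input topic
        payments.filter((key, value) -> value.contains("\"currency\":\"GBP\"")) // naive JSON check, for brevity
                .to("payments-gbp"); // hypothetical output topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close)); // clean shutdown
    }
}
```

Interviewers often probe exactly this kind of topology thinking: how records flow topic-to-topic, and where serialisation and state live.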
What is the average salary for Apache Kafka professionals in London?
Salaries for Apache Kafka roles in London vary depending on experience and seniority, but typically range from £50,000 to £90,000 per year. Senior or specialized positions can command salaries above £100,000.
Are there remote or flexible working options for Apache Kafka positions in London?
Yes, many companies offering Apache Kafka jobs in London provide remote, hybrid, or flexible working arrangements, especially following the rise of remote work policies in the tech industry.
How can I stay updated on the latest Apache Kafka job openings in London?
You can stay informed by regularly visiting IT job boards like Haystack, subscribing to job alerts, and following relevant LinkedIn groups or company career pages focused on Apache Kafka roles in London.