Splunk Jobs in Sheffield
Overview
Looking for top Splunk jobs in Sheffield? Discover the latest opportunities in one of the UK’s leading tech hubs. Whether you’re a Splunk developer, engineer, or administrator, our Sheffield Splunk job board connects you with employers seeking your skills. Start your next career move today with Haystack – your go-to source for Splunk vacancies in Sheffield.
Lead Data Engineer
Stealth IT Consulting Limited
Sheffield
Hybrid
Senior
£400/day - £445/day
RECENTLY POSTED

Location: Hybrid (60% Office / 40% Remote) Sheffield
Contract: 6 Month Contract (Extension possible)
Rate: £400+ per day inside IR35

Role Overview

We are seeking an experienced Lead Data Engineer to design, develop, and optimise enterprise-scale data platforms for large, regulated organisations, ideally in banking, financial services, or other regulated sectors. This role requires hands-on technical expertise, leadership, and a consulting mindset to deliver scalable, resilient data solutions while promoting best practices and operational excellence.

Key Responsibilities

  • Lead the design, development, and optimisation of enterprise data engineering platforms
  • Build and maintain robust ETL/ELT pipelines integrating large, complex datasets
  • Work with structured, semi-structured, and unstructured data across SQL and NoSQL technologies
  • Develop solutions using Hadoop, Spark, and Splunk in large-scale environments
  • Write maintainable Python code, applying object-oriented and functional programming principles
  • Implement and maintain CI/CD pipelines, automated testing, and version control
  • Collaborate with BI, Analytics, and downstream teams to support reporting and insights
  • Pair program and mentor other engineers to promote knowledge sharing and code quality
  • Define and maintain technical test plans including unit and integration tests
  • Promote SRE principles to ensure service resilience, sustainability, and recoverability
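The ETL/ELT pipeline work described above can be pictured as a small extract-transform-load chain. The sketch below is illustrative only (field names like custId and amount_gbp are invented), uses plain Python generators rather than Spark, and writes to an in-memory sink instead of a real warehouse:

```python
import json
from datetime import datetime, timezone

def extract(raw_lines):
    """Parse newline-delimited JSON records, skipping malformed lines."""
    for line in raw_lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue  # in production, route to a dead-letter store instead

def transform(records):
    """Normalise field names and types, and stamp each record on load."""
    for rec in records:
        yield {
            "customer_id": str(rec.get("custId") or rec.get("customer_id", "")),
            "amount_gbp": round(float(rec.get("amount", 0.0)), 2),
            "loaded_at": datetime.now(timezone.utc).isoformat(),
        }

def load(records, sink):
    """Append transformed records to a sink, returning the count loaded."""
    count = 0
    for rec in records:
        sink.append(rec)
        count += 1
    return count

raw = ['{"custId": 42, "amount": "19.999"}', 'not json', '{"customer_id": "7", "amount": 5}']
sink = []
loaded = load(transform(extract(raw)), sink)
```

In an enterprise setting each stage would typically be a Spark job or an orchestrated task, with malformed records routed to a dead-letter queue rather than silently skipped.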

Essential Skills & Experience

  • Proven experience as a Lead or Senior Data Engineer in enterprise-scale environments
  • Hands-on expertise with Hadoop, Spark, and Splunk
  • Advanced Python development skills
  • Experience designing and optimising high-performance data pipelines
  • Strong understanding of CI/CD, source control, and automated testing
  • Analytical, problem-solving, and leadership skills
  • Experience working in regulated, enterprise environments (e.g., banking, fintech, government)
  • Agile delivery experience (Scrum/Kanban)

Consulting & Soft Skills

  • Ability to mentor and uplift team performance
  • Strong communication and stakeholder engagement skills
  • Collaborative, delivery-focused mindset with high accountability
  • Knowledge of control, compliance, and regulatory requirements
  • Up-to-date awareness of modern tools, cybersecurity, and data privacy regulations
  • Champions innovation, advanced technologies, and best practices

If this role aligns with your skills and experience, we’d love to hear from you. Apply today to be considered.

Lead Data Engineer
Vallum Associates Limited
Sheffield
Remote or hybrid
Senior
Private salary
RECENTLY POSTED

We are seeking a Lead Data Engineering Consultant with proven experience in leading and developing data engineering platforms. The ideal candidate will possess hands-on expertise in the following areas:

  • Extensive enterprise experience with Hadoop, Spark, and Splunk.
  • Proficiency in object-oriented and functional scripting, particularly in Python.
  • Skilled in handling raw, structured, semi-structured, and unstructured data (SQL and NoSQL).
  • Experience integrating large, disparate datasets using modern tools and frameworks.
  • Strong background in building and optimizing ETL/ELT data pipelines.
  • Familiarity with source control and implementing Continuous Integration, Delivery, and Deployment via CI/CD pipelines.
  • Experience supporting and collaborating with BI and Analytics teams in fast-paced environments.
  • Ability to pair program and work effectively with other engineers.
  • Excellent analytical and problem-solving abilities.
  • Knowledge of agile methodologies such as Scrum or Kanban is a plus.
  • Comfortable representing the team in standups and problem-solving sessions.
  • Capable of driving the creation of technical test plans and maintaining records, including unit and integration tests, within automated test environments to ensure high code quality.
  • Promote SRE (Site Reliability Engineering) culture by addressing challenges through data engineering.
  • Ensure service resilience, sustainability, and adherence to recovery time objectives for all delivered software solutions.

SPLUNK Enterprise and ITSI Expert
Experis
Sheffield
Hybrid
Mid - Senior
£470/day - £520/day

Location: 3 days on site in either Sheffield/Birmingham/London
Duration: until 30/11/2026
Rate: £529 per day

MUST BE PAYE THROUGH UMBRELLA
Key Responsibilities

  • Design, deploy, and operate Splunk Enterprise and ITSI for hybrid Kubernetes/OpenShift environments.
  • Onboard data at scale (HEC, Universal Forwarder/Deployment Server), align to CIM, and enforce RBAC, retention, and cost guardrails.
  • Build ITSI service decompositions, KPIs/multi-KPI thresholds, NEAP policies, glass tables, deep dives, and service health scoring.
  • Create OpenShift-focused exec/ops views: cluster health (API/etcd), node readiness/pressure, pod restart hotspots, network/storage errors, capacity and quota/bursting visibility.
  • Tune search and platform performance: workload rules, concurrency, DMA, summary indexing, and scheduling hygiene.
  • Implement alerting, enrichment, routing to ITSM/ChatOps, suppression windows, maintenance schedules, and runbook automation.
  • Govern ingest and security: allow/deny lists, PII handling, TLS, token governance, index/role mapping, and data quality SLAs.
  • Integrate upstream sources and pipelines: OpenTelemetry, Prometheus exporters, Fluentd/Fluent Bit/Vector, Kafka, CMDB/ITSM enrichments, AIOps/ML anomaly detection.

Required Skills

    • Splunk Enterprise: SPL mastery, CIM alignment, KV/lookups/macros, saved searches, index/retention/RBAC design, search performance tuning.
    • Splunk ITSI: Service trees, KPIs, adaptive/time-based thresholds, NEAP tuning, glass tables, deep dives, Service Analyzer configuration.
    • OpenShift/Kubernetes observability: Cluster/control-plane metrics, kube events/logs, workload/node/network/storage correlation, capacity and noisy-neighbor detection.
    • Data pipelines & collectors: OpenTelemetry (OTLP), Prometheus scraping, Fluentd/Fluent Bit/Vector, Kafka (TLS), HEC/UF/DS onboarding.
    • Reliability & SLOs: Golden signals, rollout/rollback health checks, SLO/KPI mapping to namespaces/apps, executive and ops dashboards.
    • Performance & cost optimization: Workload rules, DMA, summary indexing, schedule optimization, license/cost guardrails.
    • Security & compliance: TLS/mTLS, token and cert hygiene, PII controls, auditability, role/index mappings.
    • Automation & integrations: ITSM/ChatOps routing, runbooks, CMDB enrichment, webhook/AIOps integrations.

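To make the ITSI service health scoring mentioned above concrete, here is a deliberately simplified sketch of importance-weighted KPI aggregation. The severity impact values and KPI weights are invented for illustration; ITSI's actual health score algorithm is more involved:

```python
# Illustrative service health scoring in the spirit of Splunk ITSI:
# each KPI carries an importance weight and a current severity.
# The impact values below are made up; real ITSI logic differs.
SEVERITY_IMPACT = {"normal": 0, "low": 10, "medium": 30, "high": 50, "critical": 80}

def service_health(kpis):
    """Return a 0-100 health score: 100 minus the importance-weighted
    average impact of all KPI severities for the service."""
    total_weight = sum(weight for _, weight, _ in kpis)
    if total_weight == 0:
        return 100.0  # no KPIs defined: treat the service as healthy
    weighted_impact = sum(SEVERITY_IMPACT[sev] * weight for _, weight, sev in kpis)
    return round(100 - weighted_impact / total_weight, 1)

kpis = [
    ("cpu_utilisation", 7, "medium"),
    ("error_rate", 11, "critical"),
    ("request_latency", 5, "normal"),
]
score = service_health(kpis)
```

The idea this illustrates is that a critical severity on a high-importance KPI (here, error_rate) drags the service score down far more than a degraded low-importance KPI would.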
Splunk and OpenShift Observability Engineer
CBSbutler Holdings Limited trading as CBSbutler
Multiple locations
Remote or hybrid
Mid - Senior
£400/day - £490/day

We’re looking for a Splunk & OpenShift Observability Engineer to design, deploy, and optimise enterprise-grade monitoring across hybrid Kubernetes and OpenShift environments.

This is a high-impact role where you’ll shape observability strategy, enhance service intelligence, and ensure platform reliability at scale - balancing performance, cost efficiency, and security governance.

You’ll work at the intersection of platform engineering, observability, and service intelligence, helping to transform raw telemetry into actionable insight. This is an opportunity to influence reliability strategy, improve operational maturity, and deliver measurable value across a modern cloud-native estate.

What You’ll Be Doing

  • Design, deploy, and operate Splunk Enterprise and ITSI across hybrid Kubernetes/OpenShift platforms
  • Onboard and normalise data at scale (HEC, Universal Forwarder, Deployment Server), aligning to CIM standards
  • Build and optimise ITSI service models: service trees, KPIs, adaptive thresholds, NEAP policies, glass tables, deep dives, and health scoring
  • Deliver OpenShift-focused executive and operational dashboards, including:
  • Cluster/API/etcd health
  • Node readiness and resource pressure
  • Pod restart trends and noisy-neighbour detection
  • Network and storage error visibility
  • Capacity, quota, and burst analysis
  • Optimise search and platform performance (workload rules, DMA, summary indexing, scheduling hygiene, concurrency tuning)
  • Implement intelligent alerting and automated routing into ITSM and ChatOps platforms, including enrichment, suppression windows, and maintenance scheduling
  • Govern data ingestion and security controls (RBAC, retention, PII handling, TLS, token governance, index and role mapping)
  • Integrate telemetry pipelines including OpenTelemetry, Prometheus, Fluentd/Fluent Bit/Vector, Kafka, CMDB and AIOps/ML solutions
  • Drive SLO/KPI alignment, golden signal monitoring, rollout/rollback health validation, and executive reporting
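Much of the data-onboarding work above revolves around Splunk's HTTP Event Collector. As a hedged sketch of the batched JSON payload HEC accepts (the index and sourcetype names here are placeholders, and no network call is made):

```python
import json
import time

def hec_batch(events, index="k8s_logs", sourcetype="openshift:pod"):
    """Serialise events into Splunk HEC batch format: one JSON object
    per event, newline-joined (HEC accepts concatenated events in a
    single request). Index/sourcetype names are illustrative only."""
    payload = []
    for host, event in events:
        payload.append(json.dumps({
            "time": round(time.time(), 3),  # epoch seconds, ms precision
            "host": host,
            "index": index,
            "sourcetype": sourcetype,
            "event": event,
        }))
    return "\n".join(payload)

body = hec_batch([
    ("worker-0", {"msg": "pod restarted", "namespace": "payments"}),
    ("worker-1", {"msg": "etcd leader change"}),
])
```

A real forwarder would POST this body to the collector endpoint (typically https://<splunk-host>:8088/services/collector) with an `Authorization: Splunk <token>` header; token handling and TLS are exactly the governance concerns the role lists.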

What You’ll Bring

  • Deep expertise in Splunk Enterprise (SPL mastery, CIM alignment, saved searches, macros, KV stores, index/retention/RBAC design, performance tuning)
  • Strong experience with Splunk ITSI (service trees, KPIs, adaptive/time-based thresholds, NEAP tuning, Service Analyzer configuration)
  • Proven OpenShift/Kubernetes observability experience across control-plane metrics, events, logs, workload correlation, and capacity management
  • Hands-on experience with telemetry pipelines (OpenTelemetry/OTLP, Prometheus exporters, Fluentd/Fluent Bit/Vector, Kafka with TLS, HEC/UF/DS onboarding)
  • Strong understanding of reliability engineering principles (golden signals, SLO design, namespace/application KPI mapping)
  • Experience optimising performance and licensing costs using workload rules, DMA, and summary indexing
  • Solid security and compliance knowledge (TLS/mTLS, certificate/token hygiene, PII controls, auditability, role/index mapping)
  • Automation and integration expertise across ITSM, ChatOps, webhooks, CMDB enrichment, and AIOps tooling

OpenShift Architecture and Migration Design Specialist
Infoplus Technologies UK Ltd
Sheffield
Remote or hybrid
Senior - Leader
£450/day - £520/day

Skills: OCP, Ansible, IaC

We are seeking an experienced OpenShift Architecture and Migration Design Specialist to lead the design, planning, and execution of OpenShift architectures and migration strategies. The ideal candidate will have expertise in designing robust, scalable, and secure OpenShift environments, as well as creating and implementing migration plans for transitioning workloads and applications to OpenShift. Experience with VMware and Pure Storage is essential to ensure seamless integration with existing infrastructure.

Key Responsibilities

1. Architecture Design
  • Design the target architecture for OpenShift, including cluster topology, networking, and storage solutions.
  • Define and implement best practices for OpenShift cluster setup, including multi-zone and multi-region deployments.
  • Ensure the architecture supports high availability, fault tolerance, and disaster recovery.

2. Migration Design and Optimization
  • Assess existing infrastructure, applications, and workloads to determine migration readiness.
  • Develop detailed migration plans, including strategies for containerization, workload transfer, and data migration.
  • Implement migration processes, ensuring minimal downtime and disruption to business operations.
  • Identify and mitigate risks associated with the migration process.

3. VMware and Pure Storage Integration Design
  • Design and implement OpenShift solutions that integrate seamlessly with VMware virtualized environments.
  • Leverage VMware tools (e.g., vSphere, vCenter, NSX) to optimize OpenShift deployments.
  • Configure and manage Pure Storage solutions (e.g., FlashArray, FlashBlade) to provide high-performance, scalable storage for OpenShift workloads.
  • Ensure compatibility and performance optimization between OpenShift, VMware, and Pure Storage.

4. CI/CD Pipelines and DevOps Workflows
  • Design and implement CI/CD pipelines tailored for the OpenShift environment.
  • Integrate DevOps workflows with OpenShift-native tools and third-party solutions.
  • Automate deployment, scaling, and monitoring processes to streamline application delivery.

5. Scalability and Security
  • Ensure the architecture and migration plans are scalable to meet future growth and workload demands.
  • Implement security best practices, including role-based access control (RBAC), network policies, and encryption.
  • Conduct regular security assessments and audits to maintain compliance with organizational standards.

6. Collaboration and Documentation
  • Work closely with development, DevOps, and operations teams to align architecture and migration plans with business needs.
  • Provide detailed documentation of the architecture, migration strategies, workflows, and configurations.
  • Offer technical guidance and training to teams on OpenShift architecture, migration, and best practices.

Required Skills and Qualifications

  • Strong experience in designing and implementing OpenShift architectures and migration strategies.
  • In-depth knowledge of Kubernetes, containerization, and orchestration.
  • Expertise in VMware tools and technologies (e.g., vSphere, vCenter, NSX).
  • Hands-on experience with Pure Storage solutions (e.g., FlashArray, FlashBlade).
  • Expertise in networking concepts (e.g., ingress, load balancing, DNS) and storage solutions (e.g., persistent volumes, dynamic provisioning).
  • Hands-on experience with CI/CD tools (e.g., Jenkins, GitHub, ArgoCD) and DevOps workflows.
  • Strong understanding of high availability, scalability, and security principles in cloud-native environments.
  • Proven experience in workload and application migration to OpenShift or similar platforms.
  • Proficiency in scripting and automation (e.g., Bash, Python, Ansible, Terraform).
  • Excellent problem-solving and communication skills.

Preferred Qualifications

  • OpenShift certifications (e.g., Red Hat Certified Specialist in OpenShift Administration).
  • Experience with multi-cluster and hybrid cloud OpenShift deployments.
  • Familiarity with monitoring and logging tools (e.g., OTel, Grafana, Splunk stack).
  • Knowledge of OpenShift Operators and Helm charts.
  • Experience with large-scale migration projects.
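Assessing migration readiness, as this role requires, is usually a workshop-and-spreadsheet exercise, but the idea can be sketched in code. The criteria and weights below are invented for illustration, not a formal assessment framework:

```python
def migration_readiness(workload):
    """Score (0-100) how ready a VMware-hosted workload looks for an
    OpenShift move, using a few common containerisation blockers.
    Criteria and weights are illustrative only."""
    score = 100
    if workload.get("os") not in ("rhel", "ubuntu", "alpine"):
        score -= 40  # non-Linux guests need rehosting, not replatforming
    if workload.get("stateful") and not workload.get("storage_class"):
        score -= 30  # stateful apps need a persistent-volume strategy first
    if workload.get("hardcoded_ips"):
        score -= 20  # hardcoded addresses break Service/Route networking
    if not workload.get("health_endpoint"):
        score -= 10  # liveness/readiness probes need a health endpoint
    return max(score, 0)

app = {"os": "rhel", "stateful": True, "storage_class": "pure-block",
       "hardcoded_ips": False, "health_endpoint": True}
```

A stateful RHEL app with a defined storage class (here a hypothetical Pure Storage-backed class) scores clean, while a Windows guest with no probe endpoint flags immediately for deeper rework.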

Data Engineer Lead (OpenShift)
Infoplus Technologies UK Ltd
Sheffield
Remote or hybrid
Senior
£450/day - £480/day

Key Responsibilities

  • Design, implement, and maintain data pipelines to ingest and process OpenShift telemetry (metrics, logs, traces) at scale.
  • Stream OpenShift telemetry via Kafka (producers, topics, schemas) and build resilient consumer services for transformation and enrichment.
  • Engineer data models and routing for multi-tenant observability; ensure lineage, quality, and SLAs across the stream layer.
  • Integrate processed telemetry into Splunk for visualization, dashboards, alerting, and analytics to achieve Observability Level 4 (proactive insights).
  • Implement schema management (Avro/Protobuf), governance, and versioning for telemetry events.
  • Build automated validation, replay, and backfill mechanisms for data reliability and recovery.
  • Instrument services with OpenTelemetry; standardize tracing, metrics, and structured logging across platforms.
  • Use LLMs to enhance observability capabilities (e.g., query assistance, anomaly summarization, runbook generation).
  • Collaborate with platform, SRE, and application teams to integrate telemetry, alerts, and SLOs.
  • Ensure security, compliance, and best practices for data pipelines and observability platforms.
  • Document data flows, schemas, dashboards, and operational runbooks.

Required Skills

  • Hands-on experience building streaming data pipelines with Kafka (producers/consumers, schema registry, Kafka Connect/KSQL/Kafka Streams).
  • Proficiency with OpenShift/Kubernetes telemetry (OpenTelemetry, Prometheus) and CLI tooling.
  • Experience integrating telemetry into Splunk (HEC, UF, sourcetypes, CIM), building dashboards and alerting.
  • Strong data engineering skills in Python (or similar) for ETL/ELT, enrichment, and validation.
  • Knowledge of event schemas (Avro/Protobuf/JSON), contracts, and backward/forward compatibility.
  • Familiarity with observability standards and practices; ability to drive toward Level 4 maturity (proactive monitoring, automated insights).
  • Understanding of hybrid cloud and multi-cluster telemetry patterns.
  • Security and compliance for data pipelines: secret management, RBAC, encryption in transit/at rest.
  • Good problem-solving skills and ability to work in a collaborative team environment.
  • Strong communication and documentation skills.
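The backward/forward schema-compatibility requirement above can be illustrated with a minimal check. Real Avro/Protobuf schema registries enforce much richer rules (defaults, aliases, unions); this sketch covers only two basic backward-compatibility constraints, with field specs invented for the example:

```python
def backward_compatible(old_schema, new_schema):
    """Minimal backward-compatibility check for JSON-style event schemas:
    a new version may add fields or keep existing ones, but must not
    remove a required field or change an existing field's type.
    (Schema registries enforce far richer rules; this is a sketch.)"""
    for name, spec in old_schema.items():
        if name not in new_schema:
            if spec.get("required", False):
                return False  # removed a required field: old consumers break
            continue  # dropping an optional field is tolerated here
        if new_schema[name]["type"] != spec["type"]:
            return False  # type change breaks existing consumers
    return True

v1 = {"pod": {"type": "string", "required": True},
      "restarts": {"type": "int", "required": True}}
v2 = dict(v1, node={"type": "string", "required": False})  # additive change
v3 = {"pod": {"type": "string", "required": True}}         # drops "restarts"
```

Under this rule, v2 is a safe evolution of v1 (purely additive), while v3 is not, because consumers reading old events still expect the required "restarts" field.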

CGEMJP00330718 Lead Data Engineer
CBSbutler Holdings Limited trading as CBSbutler
Sheffield
Hybrid
Senior
£430/day

Role Title: Lead Data Engineer
Location: Sheffield/hybrid (3 days on site)
Duration: 9 months
Rate: £430 per day inside IR35

We are seeking a Lead Data Engineering Consultant with proven experience in leading and developing data engineering platforms.

Experience required:

  • Extensive enterprise experience with Hadoop, Spark, and Splunk.
  • Proficiency in object-oriented and functional scripting, particularly in Python.
  • Skilled in handling raw, structured, semi-structured, and unstructured data (SQL and NoSQL).
  • Experience integrating large, disparate datasets using modern tools and frameworks.
  • Strong background in building and optimizing ETL/ELT data pipelines.
  • Familiarity with source control and implementing Continuous Integration, Delivery, and Deployment via CI/CD pipelines.
  • Experience supporting and collaborating with BI and Analytics teams in fast-paced environments.
  • Ability to pair program and work effectively with other engineers.
  • Excellent analytical and problem-solving abilities.
  • Knowledge of agile methodologies such as Scrum or Kanban is a plus.
  • Comfortable representing the team in standups and problem-solving sessions.
  • Capable of driving the creation of technical test plans and maintaining records, including unit and integration tests, within automated test environments to ensure high code quality.
  • Promote SRE (Site Reliability Engineering) culture by addressing challenges through data engineering.
  • Ensure service resilience, sustainability, and adherence to recovery time objectives for all delivered software solutions.

Soft Skills (Consultant):

  • Demonstrated ability and enthusiasm for enhancing team performance.
  • Strong active listening and effective communication skills.
  • Self-mastery, with a focus on positive mindsets and professional behaviours.
  • Maintains up-to-date expertise in current tools, technologies, and key areas such as cybersecurity, data privacy, consent, and data residency regulations.
  • Engages with industry groups and external vendors to represent and advance HSBC's interests and influence.
  • Takes accountability for ensuring control and compliance throughout the engineering process.
  • Champions innovation and the adoption of advanced technologies and best practices within the domain.

If you are interested in this role or wish to apply, please submit your CV.

CGEMJP Lead Data Engineer
CBSbutler Holdings Limited trading as CBSbutler
Sheffield
Hybrid
Senior
£430/day

Role Title: Lead Data Engineer

Location: Sheffield/hybrid (3 days on site)

Duration: 9 months

Rate: £430 per day inside IR35

We are seeking a Lead Data Engineering Consultant with proven experience in leading and developing data engineering platforms.

Experience required:

  • Extensive enterprise experience with Hadoop, Spark, and Splunk.
  • Proficiency in object-oriented and functional scripting, particularly in Python.
  • Skilled in handling raw, structured, semi-structured, and unstructured data (SQL and NoSQL).
  • Experience integrating large, disparate datasets using modern tools and frameworks.
  • Strong background in building and optimizing ETL/ELT data pipelines.
  • Familiarity with source control and implementing Continuous Integration, Delivery, and Deployment via CI/CD pipelines.
  • Experience supporting and collaborating with BI and Analytics teams in fast-paced environments.
  • Ability to pair program and work effectively with other engineers.
  • Excellent analytical and problem-solving abilities.
  • Knowledge of agile methodologies such as Scrum or Kanban is a plus.
  • Comfortable representing the team in standups and problem-solving sessions.
  • Capable of driving the creation of technical test plans and maintaining records, including unit and integration tests, within automated test environments to ensure high code quality.
  • Promote SRE (Site Reliability Engineering) culture by addressing challenges through data engineering.
  • Ensure service resilience, sustainability, and adherence to recovery time objectives for all delivered software solutions.

Soft Skills (Consultant):

  • Demonstrated ability and enthusiasm for enhancing team performance.
  • Strong active listening and effective communication skills.
  • Self-mastery, with a focus on positive mindsets and professional behaviours.
  • Maintains up-to-date expertise in current tools, technologies, and key areas such as cybersecurity, data privacy, consent, and data residency regulations.
  • Engages with industry groups and external vendors to represent and advance HSBC’s interests and influence.
  • Takes accountability for ensuring control and compliance throughout the engineering process.
  • Champions innovation and the adoption of advanced technologies and best practices within the domain.

If you are interested in this role or wish to apply, please feel free to submit your CV.

Lead Data Engineer - Hadoop - Spark - Python
Square One Resources
Sheffield
Hybrid
Senior
£600/day - £617/day

Job Title: Lead Data Engineer - Hadoop, Spark, Python
Location: Sheffield - 3 days per week in the office
Salary/Rate: Up to £617 per day inside IR35
Start Date: 02/03/2026
Job Type: Contract until November

We are seeking a Lead Data Engineering Consultant with proven experience in leading and developing data engineering platforms.

The ideal candidate will possess hands-on expertise in the following areas:

  • Extensive enterprise experience with Hadoop, Spark, and Splunk.
  • Proficiency in object-oriented and functional scripting, particularly in Python.
  • Skilled in handling raw, structured, semi-structured, and unstructured data (SQL and NoSQL).
  • Experience integrating large, disparate datasets using modern tools and frameworks.
  • Strong background in building and optimizing ETL/ELT data pipelines.
  • Familiarity with source control and implementing Continuous Integration, Delivery, and Deployment via CI/CD pipelines.
  • Experience supporting and collaborating with BI and Analytics teams in fast-paced environments.
  • Ability to pair program and work effectively with other engineers.
  • Excellent analytical and problem-solving abilities.
  • Knowledge of agile methodologies such as Scrum or Kanban is a plus.
  • Comfortable representing the team in standups and problem-solving sessions.
  • Capable of driving the creation of technical test plans and maintaining records, including unit and integration tests, within automated test environments to ensure high code quality.
  • Promote SRE (Site Reliability Engineering) culture by addressing challenges through data engineering.
  • Ensure service resilience, sustainability, and adherence to recovery time objectives for all delivered software solutions.

If you are interested in this opportunity, please apply now with your updated CV in Microsoft Word/PDF format.

Disclaimer
Notwithstanding any guidelines given to level of experience sought, we will consider candidates from outside this range if they can demonstrate the necessary competencies.
Square One is acting as both an employment agency and an employment business, and is an equal opportunities recruitment business. Square One embraces diversity and will treat everyone equally. Please see our website for our full diversity statement.

Frequently asked questions

What types of Splunk roles are available in Sheffield?
In Sheffield, you can find a variety of Splunk roles, including Splunk Engineer, Splunk Administrator, Splunk Developer, and Splunk Analyst positions, across industries such as finance, healthcare, and technology.

What skills and qualifications do Splunk jobs in Sheffield require?
Most Splunk jobs in Sheffield require proficiency with Splunk software, experience in IT operations or data analysis, and relevant certifications such as Splunk Core Certified User or Splunk Certified Power User. Additional skills in scripting, networking, or cybersecurity can be beneficial.

How do I apply for a Splunk job in Sheffield?
Browse the available Splunk job listings in Sheffield on our platform, click on the job you're interested in, and follow the application instructions provided by the employer. You may need to upload your CV and cover letter.

Are remote or hybrid Splunk roles available?
Yes, many employers offer remote or hybrid working arrangements for Splunk roles based in Sheffield. You can filter job listings on our site to show only remote or flexible opportunities.

How often are new Splunk jobs added?
We update our job board regularly with new Splunk positions as soon as employers post them. We recommend checking back frequently or setting up job alerts to stay informed about the latest opportunities.