Location: Sheffield, hybrid (60% office / 40% remote)
Contract: 6 Month Contract (Extension possible)
Rate: £400+ per day inside IR35
Role Overview
We are seeking an experienced Lead Data Engineer to design, develop, and optimise enterprise-scale data platforms for large organisations, ideally in banking, financial services, or other regulated sectors. This role requires hands-on technical expertise, leadership, and a consulting mindset to deliver scalable, resilient data solutions while promoting best practices and operational excellence.
Key Responsibilities
- Drive the creation of technical test plans and maintain records, including unit and integration tests, within automated test environments to ensure high code quality.
- Promote SRE (Site Reliability Engineering) culture by addressing challenges through data engineering.
- Ensure service resilience, sustainability, and adherence to recovery time objectives for all delivered software solutions.
Essential Skills & Experience
The ideal candidate will possess hands-on expertise in the following areas:
- Extensive enterprise experience with Hadoop, Spark, and Splunk.
- Proficiency in object-oriented and functional scripting, particularly in Python.
- Skilled in handling raw, structured, semi-structured, and unstructured data (SQL and NoSQL).
- Experience integrating large, disparate datasets using modern tools and frameworks.
- Strong background in building and optimizing ETL/ELT data pipelines.
- Familiarity with source control and implementing Continuous Integration, Delivery, and Deployment via CI/CD pipelines.
- Experience supporting and collaborating with BI and Analytics teams in fast-paced environments.
- Ability to pair program and work effectively with other engineers.
Consulting & Soft Skills
- Excellent analytical and problem-solving abilities.
- Knowledge of agile methodologies such as Scrum or Kanban is a plus.
- Comfortable representing the team in standups and problem-solving sessions.
If this role aligns with your skills and experience, we’d love to hear from you. Apply today to be considered.
Location: 3 days on site in Sheffield, Birmingham, or London
Duration: Until 30/11/2026
Rate: £529 per day
MUST BE PAYE THROUGH UMBRELLA
We’re looking for a Splunk & OpenShift Observability Engineer to design, deploy, and optimise enterprise-grade monitoring across hybrid Kubernetes and OpenShift environments.
This is a high-impact role where you’ll shape observability strategy, enhance service intelligence, and ensure platform reliability at scale, balancing performance, cost efficiency, and security governance.
You’ll work at the intersection of platform engineering, observability, and service intelligence, helping to transform raw telemetry into actionable insight. This is an opportunity to influence reliability strategy, improve operational maturity, and deliver measurable value across a modern cloud-native estate.
What You’ll Be Doing
- Design, implement, and maintain data pipelines to ingest and process OpenShift telemetry (metrics, logs, traces) at scale.
- Stream OpenShift telemetry via Kafka (producers, topics, schemas) and build resilient consumer services for transformation and enrichment.
- Engineer data models and routing for multi-tenant observability; ensure lineage, quality, and SLAs across the stream layer.
- Integrate processed telemetry into Splunk for visualization, dashboards, alerting, and analytics to achieve Observability Level 4 (proactive insights).
- Implement schema management (Avro/Protobuf), governance, and versioning for telemetry events.
- Build automated validation, replay, and backfill mechanisms for data reliability and recovery.
- Instrument services with OpenTelemetry; standardize tracing, metrics, and structured logging across platforms.
- Use LLMs to enhance observability capabilities (e.g. query assistance, anomaly summarization, runbook generation).
- Collaborate with platform, SRE, and application teams to integrate telemetry, alerts, and SLOs.
- Ensure security, compliance, and best practices for data pipelines and observability platforms.
- Document data flows, schemas, dashboards, and operational runbooks.
What You’ll Bring
- Hands-on experience building streaming data pipelines with Kafka (producers/consumers, schema registry, Kafka Connect, KSQL, Kafka Streams).
- Proficiency with OpenShift/Kubernetes telemetry (OpenTelemetry, Prometheus) and CLI tooling.
- Experience integrating telemetry into Splunk (HEC, UF, sourcetypes, CIM), building dashboards and alerting.
- Strong data engineering skills in Python (or similar) for ETL/ELT, enrichment, and validation.
- Knowledge of event schemas (Avro/Protobuf/JSON), contracts, and backward/forward compatibility.
- Familiarity with observability standards and practices; ability to drive toward Level 4 maturity (proactive monitoring, automated insights).
- Understanding of hybrid cloud and multi-cluster telemetry patterns.
- Security and compliance for data pipelines: secret management, RBAC, encryption in transit/at rest.
- Good problem-solving skills and ability to work in a collaborative team environment.
- Strong communication and documentation skills.
Skills: OCP, Ansible, IaC
We are seeking an experienced OpenShift Architecture and Migration Design Specialist to lead the design, planning, and execution of OpenShift architectures and migration strategies. The ideal candidate will have expertise in designing robust, scalable, and secure OpenShift environments, as well as creating and implementing migration plans for transitioning workloads and applications to OpenShift. Experience with VMware and Pure Storage is essential to ensure seamless integration with existing infrastructure.
Key Responsibilities:
1. Architecture Design:
- Design the target architecture for OpenShift, including cluster topology, networking, and storage solutions.
- Define and implement best practices for OpenShift cluster setup, including multi-zone and multi-region deployments.
- Ensure the architecture supports high availability, fault tolerance, and disaster recovery.
2. Migration Design and Optimization:
- Assess existing infrastructure, applications, and workloads to determine migration readiness.
- Develop detailed migration plans, including strategies for containerization, workload transfer, and data migration.
- Implement migration processes, ensuring minimal downtime and disruption to business operations.
- Identify and mitigate risks associated with the migration process.
3. VMware and Pure Storage Integration Design:
- Design and implement OpenShift solutions that integrate seamlessly with VMware virtualized environments.
- Leverage VMware tools (e.g. vSphere, vCenter, NSX) to optimize OpenShift deployments.
- Configure and manage Pure Storage solutions (e.g. FlashArray, FlashBlade) to provide high-performance, scalable storage for OpenShift workloads.
- Ensure compatibility and performance optimization between OpenShift, VMware, and Pure Storage.
4. CI/CD Pipelines and DevOps Workflows:
- Design and implement CI/CD pipelines tailored for the OpenShift environment.
- Integrate DevOps workflows with OpenShift-native tools and third-party solutions.
- Automate deployment, scaling, and monitoring processes to streamline application delivery.
5. Scalability and Security:
- Ensure the architecture and migration plans are scalable to meet future growth and workload demands.
- Implement security best practices, including role-based access control (RBAC), network policies, and encryption.
- Conduct regular security assessments and audits to maintain compliance with organizational standards.
6. Collaboration and Documentation:
- Work closely with development, DevOps, and operations teams to align architecture and migration plans with business needs.
- Provide detailed documentation of the architecture, migration strategies, workflows, and configurations.
- Offer technical guidance and training to teams on OpenShift architecture, migration, and best practices.
Required Skills and Qualifications:
- Strong experience in designing and implementing OpenShift architectures and migration strategies.
- In-depth knowledge of Kubernetes, containerization, and orchestration.
- Expertise in VMware tools and technologies (e.g. vSphere, vCenter, NSX).
- Hands-on experience with Pure Storage solutions (e.g. FlashArray, FlashBlade).
- Expertise in networking concepts (e.g. ingress, load balancing, DNS) and storage solutions (e.g. persistent volumes, dynamic provisioning).
- Hands-on experience with CI/CD tools (e.g. Jenkins, GitHub, ArgoCD) and DevOps workflows.
- Strong understanding of high availability, scalability, and security principles in cloud-native environments.
- Proven experience in workload and application migration to OpenShift or similar platforms.
- Proficiency in scripting and automation (e.g. Bash, Python, Ansible, Terraform).
- Excellent problem-solving and communication skills.
Preferred Qualifications:
- OpenShift certifications (e.g. Red Hat Certified Specialist in OpenShift Administration).
- Experience with multi-cluster and hybrid cloud OpenShift deployments.
- Familiarity with monitoring and logging tools (e.g. OTel, Grafana, Splunk stack).
- Knowledge of OpenShift Operators and Helm charts.
- Experience with large-scale migration projects.
Role Title: Lead Data Engineer
Location: Sheffield/hybrid (3 days on site)
Duration: 9 months
Rate: £430 per day inside IR35
We are seeking a Lead Data Engineering Consultant with proven experience in leading and developing data engineering platforms.
Experience required:
- Extensive enterprise experience with Hadoop, Spark, and Splunk.
- Proficiency in object-oriented and functional scripting, particularly in Python.
- Skilled in handling raw, structured, semi-structured, and unstructured data (SQL and NoSQL).
- Experience integrating large, disparate datasets using modern tools and frameworks.
- Strong background in building and optimizing ETL/ELT data pipelines.
- Familiarity with source control and implementing Continuous Integration, Delivery, and Deployment via CI/CD pipelines.
- Experience supporting and collaborating with BI and Analytics teams in fast-paced environments.
- Ability to pair program and work effectively with other engineers.
- Excellent analytical and problem-solving abilities.
- Knowledge of agile methodologies such as Scrum or Kanban is a plus.
- Comfortable representing the team in standups and problem-solving sessions.
- Capable of driving the creation of technical test plans and maintaining records, including unit and integration tests, within automated test environments to ensure high code quality.
- Promote SRE (Site Reliability Engineering) culture by addressing challenges through data engineering.
- Ensure service resilience, sustainability, and adherence to recovery time objectives for all delivered software solutions.
Soft Skills (Consultant):
- Demonstrated ability and enthusiasm for enhancing team performance.
- Strong active listening and effective communication skills.
- Self-mastery, with a focus on positive mindsets and professional behaviours.
- Maintains up-to-date expertise in current tools, technologies, and key areas such as cybersecurity, data privacy, consent, and data residency regulations.
- Engages with industry groups and external vendors to represent and advance HSBC's interests and influence.
- Takes accountability for ensuring control and compliance throughout the engineering process.
- Champions innovation and the adoption of advanced technologies and best practices within the domain.
If you are interested in this role or wish to apply, please feel free to submit your CV.
Job Title: Lead Data Engineer - Hadoop, Spark, Python
Location: Sheffield - 3 days per week in the office
Salary/Rate: Up to £617 per day inside IR35
Start Date: 02/03/2026
Job Type: Contract until November
We are seeking a Lead Data Engineering Consultant with proven experience in leading and developing data engineering platforms.
The ideal candidate will possess hands-on expertise in the following areas:
If you are interested in this opportunity, please apply now with your updated CV in Microsoft Word/PDF format.
Disclaimer
Notwithstanding any guidelines given as to the level of experience sought, we will consider candidates from outside this range if they can demonstrate the necessary competencies.
Square One is acting as both an employment agency and an employment business, and is an equal opportunities recruitment business. Square One embraces diversity and will treat everyone equally. Please see our website for our full diversity statement.