Job Description
My client, a global financial services organisation, is hiring a Data Ops Engineer to join their Technology team. This is a critical role focused on building and maintaining data pipelines that support both trade and communications surveillance systems in a highly regulated environment.
- Competitive salary (DOE)
- Hybrid working
- Flexible working options
- Opportunity to work on large-scale, business-critical data platforms
- Strong career progression within a global organisation
You will be part of a cross-functional team responsible for delivering robust, scalable, and compliant data solutions across structured and unstructured datasets. The role sits at the intersection of data engineering and operations, ensuring data pipelines are reliable, auditable, and aligned with regulatory requirements.
As a Data Ops Engineer, you will design, build, and optimise end-to-end data pipelines across multiple data sources, including transactional, reference, market, and communications data. You will implement data quality, validation, and reconciliation processes to ensure completeness, accuracy, and timeliness of data ingestion.
You will work closely with engineering, data, and compliance teams to ensure data governance, lineage, and auditability standards are met. The role also involves building cloud infrastructure, monitoring pipelines, resolving data issues, and supporting surveillance data initiatives, including persisting alert data into data lake environments for analytics.
The Role
- Design, build, and maintain scalable end-to-end data pipelines (ETL/ELT)
- Implement data quality checks, validation rules, and reconciliation processes
- Provide visibility into data completeness and surface processing issues
- Identify critical data elements and implement failover and recovery strategies
- Build and manage cloud infrastructure using Infrastructure-as-Code (Terraform/CDK)
- Develop and maintain CI/CD pipelines and automated testing frameworks
- Monitor, troubleshoot, and resolve data anomalies across pipelines
- Collaborate with analysts, developers, and business stakeholders to translate requirements into technical solutions
- Implement data governance, lineage, and auditability frameworks
- Ensure compliance with regulatory, legal, and security standards
- Optimise performance and cost in collaboration with cloud and infrastructure teams
The Person
- Strong experience building and maintaining ETL/ELT data pipelines
- Proficiency in Python or Java, and strong SQL skills
- Experience with data pipeline frameworks (e.g. Airflow, dbt, Spark)
- Hands-on experience with AWS ecosystem
- Strong understanding of CI/CD and software engineering best practices
- Experience working in Data Engineering or DataOps roles
- Excellent problem-solving skills in fast-paced environments
- Strong communication skills with both technical and non-technical stakeholders
- Experience working in Agile delivery environments
- Degree in Computer Science, Engineering, Data Science, or related field
Desirable Experience
- Experience with event-driven and streaming architectures (Kafka, Kinesis)
- Knowledge of market data, trade/order systems, or financial services
- Experience with AWS services such as EKS, Lambda, S3, Glue, DynamoDB, Step Functions
- Familiarity with Terraform or CDK (Infrastructure-as-Code)
- Experience with monitoring and observability tools (e.g. Grafana)
- Experience with communications platforms (e.g. enterprise messaging systems)
- Understanding of data governance, security, and regulatory compliance
- Familiarity with GitLab, data cataloguing tools, or BI platforms
- Relevant certifications in cloud or DataOps