
Delta Lake Jobs

Overview

Discover the best Delta Lake jobs on Haystack, your go-to IT job board for data engineering careers. Whether you're an experienced data engineer or looking to break into the world of Delta Lake, we connect you with top employers seeking skilled professionals in building scalable, reliable data lakes. Explore current Delta Lake job opportunities and take the next step in your data career today!
Filters applied
Delta Lake
Data Architect
Amtis Professional Ltd
Birmingham
Hybrid
Mid - Senior
£90,000
RECENTLY POSTED
processing-js
fabric
delta-lake
vault
microsoft-azure
Data Architect – Azure & Databricks | Permanent | Up to £90,000
Overview: We’re looking for a Data Architect to design and deliver modern data solutions across an Azure ecosystem. You’ll play a key role in shaping the data strategy, owning the architecture of scalable, secure, and high-performing data platforms that enable advanced analytics and business insights.
Key Responsibilities:
Lead the design and architecture of end-to-end data platforms using Azure services (e.g., Azure Data Lake, Data Factory, Synapse, Fabric, Key Vault).
Architect and optimise Databricks environments for data engineering & analytics.
Develop data models, standards, and best practices ensuring data quality, governance, and reliability.
Collaborate with data engineers, analysts, and business stakeholders to translate requirements into scalable solutions.
Oversee data integration, ingestion, and transformation pipelines across batch and streaming workloads.
Ensure security, compliance, and cost optimisation throughout the data estate.
Skills & Experience:
Strong experience architecting cloud-native data solutions on Microsoft Azure.
Hands-on expertise with Databricks (Delta Lake, Spark, notebooks, cluster management).
Deep understanding of data modelling, warehousing, and distributed data processing.
Experience with Python/SQL for data engineering and solution design.
Familiarity with CI/CD, DevOps, and Infrastructure-as-Code (Terraform/ARM/Bicep) is beneficial.
If this role sounds of interest, please apply with a copy of your CV and contact information.
Azure Data Engineer
Opus Recruitment Solutions
Bristol
Hybrid
Mid - Senior
£350/day - £450/day
RECENTLY POSTED
azure-databricks
delta-lake
sql
Azure Data Engineer | £(Apply online only) per day | Bristol | Hybrid | 6‑Month Initial Term
A large‑scale data transformation programme is underway, and our client is looking for an experienced Azure Data Engineer to support the rebuild of their cloud data platform. This role is hands‑on and delivery‑focused — you’ll be designing and developing Azure‑native data pipelines, working extensively with Databricks, and shaping scalable data models across the Microsoft ecosystem. The role requires you to be on site in Bristol 3 days per week.
What you’ll be doing
Build, enhance and maintain data pipelines using Azure Databricks, Data Factory, and Delta Lake
Develop and optimise Lakehouse components and cloud‑based data flows
Create robust data models to support analytics, MI and downstream reporting
Assist in migrating legacy warehouse assets into a modern Azure environment
Contribute to cloud architecture decisions, data standards and best‑practice engineering patterns
What you’ll bring
Strong hands‑on experience across Azure Data Services (ADF, ADLS, Synapse, Databricks)
Excellent SQL skills, with experience in performance tuning and optimisation
Solid understanding of data modelling (star schema, medallion, ETL frameworks)
Ability to work with complex, inconsistent or legacy data sources
Experience building scalable, production‑ready pipelines in a cloud environment
Senior Data Engineer
Inspire People
Coventry
Hybrid
Senior
£60,000
RECENTLY POSTED
fabric
terraform
python
talend
delta-lake
scala
HM Land Registry (HMLR) is undertaking one of the largest transformation programmes in government, modernising the digital systems that support over £7 trillion of property ownership. As a Senior Data Engineer, you will help establish a new data engineering capability, contributing to the development of reliable data pipelines and products that improve data access, integrity and value across the organisation. Your work will support programmes that shape how HMLR manages and uses its data for years to come. Salary up to £60,800, 29% employer pension contribution plus full Civil Service benefits. Flexible, hybrid working from Plymouth, Croydon or Coventry.
About the role
This role has been created as HMLR embarks on a significant modernisation of its core services and data infrastructure. With new funding secured and a dedicated Data Engineering capability being formed for the first time, there is a crucial need to build strong, reliable data systems that can support future services and national programmes.
As a Senior Data Engineer, you’ll design and deliver robust data systems, pipelines and products that support analytics and operational decision-making. Working in agile teams, you’ll provide technical leadership, guide colleagues and help shape solutions across the organisation. You’ll also support opportunity discovery, develop prototypes and production-ready solutions, and continually improve data engineering practices and systems in production.
If you would like to find out more about the role, the Data Engineering capability and what it’s like to work at HMLR, a Hiring Manager Q&A session where you can virtually ‘meet the team’ will be held via Teams on Tuesday, 6th of January at 12:30pm.
Follow the apply link to register your interest.
Key Responsibilities
Identify opportunities to reuse and optimise data flows, including building streaming systems, managing databases and improving code performance.
Lead the development and maintenance of data engineering solutions, advising teams as a subject matter expert and ensuring alignment with HMLR standards and approved technologies.
Collaborate with senior colleagues to understand where data engineering adds value, supporting strategic and operational decision-making.
Support and guide junior team members, contribute to the data engineering community and advocate for data quality, maintainability and reusable components.
Continuously improve data systems and maintain awareness of best practice and emerging approaches.
Essential Skills
Experience with large-scale analytics engines (e.g. Spark/PySpark) and scripting languages such as Python or Scala.
Hands-on use of cloud data stacks (e.g. SageMaker Notebooks, S3, Glue, Athena) and modern storage formats or frameworks (e.g. Parquet, Delta Lake, Fabric).
Experience with DevOps/DataOps tooling and practices (e.g. Terraform) and testing data pipelines, including end-to-end, data quality, monitoring, unit and contract testing.
Experience managing the full data lifecycle, including development, analysis, modelling, integration and metadata management.
Communicating clearly with technical and non-technical stakeholders, leading discussions in multidisciplinary teams and managing differing viewpoints.
Profiling data, analysing source systems and creating data models that suit different organisational needs.
Desirable Skills
IBM DB2/QREP
BMC Control-M
Experience of Talend Data Integration (e.g. ETL build and deployment using Talend Cloud).
Experience of leadership
Experience of metadata management
Experience of testing
Location
Expectation is to be working from any of the advertised locations 60% of your time across the month (typically three days per week).
Hours are flexible and condensed hours are an option.
Locations available: Croydon, Coventry, Plymouth
Salary
Civil Service Grade: SEO
Dependent upon assessment at interview, your starting salary will be one of the following:
Developing: £49,600
Proficient: £55,200
Accomplished: £60,800
Benefits
Over 29% employer pension contribution
Annual leave of 28.5 days per year plus 8 public holidays
A clear progression pathway including personalised training and development plans
Expensed accreditations with dedicated training days
Flexi-time scheme (you decide what working hours work best for you)
Opportunity to work condensed hours
Social and sports clubs
Access to an Employee Assistance Programme for counselling and support
Interest-free season ticket loan
Cycle to Work scheme (salary sacrifice)
HMLR has a strong and positive culture, a commitment to inclusivity, a focus on continuous learning and development, and flexible ways of working.
Further Information
Application deadline: 11:55pm Thursday 8th of January 2026
Please apply with a CV that provides evidence against the essential skills.
HMLR does not hold a UK Visa & Immigration (UKVI) Skilled Worker Licence and is unable to sponsor individuals for Skilled Worker sponsorship.
If you are a motivated and experienced data professional who enjoys leading on complex data challenges, shaping technical approaches and working collaboratively in agile teams, this is your chance to make a significant impact. Join HM Land Registry and play a crucial role in developing and guiding the data capabilities that support property ownership and public services across England and Wales. Apply now in complete confidence.
Data Platform Engineer
McCabe & Barton
London
Hybrid
Mid - Senior
£100,000
RECENTLY POSTED
delta-lake
sql
Data Platform Engineer | Permanent | Hybrid (3 days in the office, 2 days WFH) | London
McCabe & Barton are partnering with a leading financial services client to recruit an experienced Data Platform Engineer. This is an excellent opportunity to join a forward-thinking team driving innovation with modern cloud-based data technologies.
Role Overview
As a Data Platform Engineer, you will design, build, and maintain scalable cloud-based data infrastructure using Azure and Databricks. You’ll play a key role in ensuring that data pipelines, architecture, and analytics environments are reliable, performant, and secure.
Key Responsibilities
Platform Development & Maintenance
Design and implement data pipelines using Azure Data Factory, Databricks, and related Azure services.
Build ETL/ELT processes to transform raw data into structured, analytics-ready formats.
Optimise pipeline performance and ensure high availability of data services.
Infrastructure & Architecture
Architect and deploy scalable data lake solutions using Azure Data Lake Storage.
Implement governance and security measures across the platform.
Leverage Terraform or similar IaC tools for controlled and reproducible deployments.
Databricks Development
Develop and optimise data jobs using PySpark or Scala within Databricks.
Implement the medallion architecture (bronze, silver, gold layers) and use Delta Lake for reliable data transactions.
Manage cluster configurations and CI/CD pipelines for Databricks deployments.
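For candidates less familiar with the medallion pattern this ad references, here is a minimal sketch of the bronze → silver → gold flow in plain Python. In a real Databricks pipeline each layer would be a Delta table transformed with PySpark; the sample order records below are invented for illustration.

```python
# Bronze: raw ingested records, kept as-is (duplicates and bad rows included)
bronze = [
    {"order_id": "1", "amount": "100.0", "region": "UK"},
    {"order_id": "1", "amount": "100.0", "region": "UK"},   # duplicate
    {"order_id": "2", "amount": "250.5", "region": "UK"},
    {"order_id": "3", "amount": None,    "region": "DE"},   # invalid record
]

# Silver: validated, deduplicated, correctly typed
seen = set()
silver = []
for row in bronze:
    if row["amount"] is None or row["order_id"] in seen:
        continue  # skip invalid and duplicate rows
    seen.add(row["order_id"])
    silver.append({**row, "amount": float(row["amount"])})

# Gold: business-level aggregate (revenue per region)
gold = {}
for row in silver:
    gold[row["region"]] = gold.get(row["region"], 0.0) + row["amount"]

print(gold)  # {'UK': 350.5}
```

The point of the layered design is that each stage only ever reads from the one before it, so bad source data can be replayed and re-cleaned without touching downstream reports.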
Monitoring & Operations
Implement monitoring solutions using Azure Monitor, Log Analytics, and Databricks tools.
Optimise performance, ensure SLAs are met, and establish disaster recovery and backup strategies.
Collaboration & Documentation
Partner with data scientists, analysts, and business stakeholders to deliver effective solutions.
Document technical designs, data flows, and operational procedures for knowledge sharing.
Essential Skills & Experience
5+ years of experience with Azure services (Azure Data Factory, ADLS, Azure SQL Database, Synapse Analytics).
Strong hands-on expertise in Databricks, Delta Lake, and cluster management.
Proficiency in SQL and Python for pipeline development.
Familiarity with Git/GitHub and CI/CD practices.
Understanding of data modelling, data governance, and security principles.
Desirable Skills
Experience with Terraform or other Infrastructure-as-Code tools.
Familiarity with Azure DevOps or similar CI/CD platforms.
Experience with data quality frameworks and testing.
Azure Data Engineer or Databricks certifications.
Please apply with an updated CV if you align to the key skills required!
Senior Data Engineer / Power BI
Head Resourcing
Glasgow
Hybrid
Senior
£60,000 - £80,000
RECENTLY POSTED
powerbi
processing-js
fabric
unity-3d
git
python
Lead Data Engineer - Azure & Databricks Lakehouse
Glasgow (3/4 days onsite) | Exclusive Role with a Leading UK Consumer Business
A rapidly scaling UK consumer brand is undertaking a major data modernisation programme, moving away from legacy systems, manual Excel reporting and fragmented data sources into a fully automated Azure Enterprise Landing Zone + Databricks Lakehouse. They are building a modern data platform from the ground up using Lakeflow Declarative Pipelines, Unity Catalog, and Azure Data Factory, and this role sits right at the heart of that transformation. This is a rare opportunity to join early, influence architecture, and help define engineering standards, pipelines, curated layers and best practices that will support Operations, Finance, Sales, Logistics and Customer Care. If you want to build a best-in-class Lakehouse from scratch, this is the one.
What You’ll Be Doing
Lakehouse Engineering (Azure + Databricks)
Engineer scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark, and Spark SQL across a full Medallion Architecture (Bronze → Silver → Gold).
Implement ingestion patterns for files, APIs, SaaS platforms (e.g. subscription billing), SQL sources, SharePoint and SFTP using ADF + metadata-driven frameworks.
Apply Lakeflow expectations for data quality, schema validation and operational reliability.
Curated Data Layers & Modelling
Build clean, conformed Silver/Gold models aligned to enterprise business domains (customers, subscriptions, deliveries, finance, credit, logistics, operations).
Deliver star schemas, harmonisation logic, SCDs and business marts to power high-performance Power BI datasets.
Apply governance, lineage and fine-grained permissions via Unity Catalog.
Orchestration & Observability
Design and optimise orchestration using Lakeflow Workflows and Azure Data Factory.
Implement monitoring, alerting, SLAs/SLIs, runbooks and cost-optimisation across the platform.
DevOps & Platform Engineering
Build CI/CD pipelines in Azure DevOps for notebooks, Lakeflow pipelines, SQL models and ADF artefacts.
Ensure secure, enterprise-grade platform operation across Dev → Prod, using private endpoints, managed identities and Key Vault.
Contribute to platform standards, design patterns, code reviews and future roadmap.
Collaboration & Delivery
Work closely with BI/Analytics teams to deliver curated datasets powering dashboards across the organisation.
Influence architecture decisions and uplift engineering maturity within a growing data function.
Tech Stack You’ll Work With
Databricks: Lakeflow Declarative Pipelines, Workflows, Unity Catalog, SQL Warehouses
Azure: ADLS Gen2, Data Factory, Key Vault, vNets & Private Endpoints
Languages: PySpark, Spark SQL, Python, Git
DevOps: Azure DevOps Repos, Pipelines, CI/CD
Analytics: Power BI, Fabric
What We’re Looking For
Experience
5-8+ years of Data Engineering with 2-3+ years delivering production workloads on Azure + Databricks.
Strong PySpark/Spark SQL and distributed data processing expertise.
Proven Medallion/Lakehouse delivery experience using Delta Lake.
Solid dimensional modelling (Kimball) including surrogate keys, SCD types 1/2, and merge strategies.
Operational experience: SLAs, observability, idempotent pipelines, reprocessing, backfills.
Mindset
Strong grounding in secure Azure Landing Zone patterns.
Comfort with Git, CI/CD, automated deployments and modern engineering standards.
Clear communicator who can translate technical decisions into business outcomes.
Nice to Have
Databricks Certified Data Engineer Associate
Streaming ingestion experience (Auto Loader, structured streaming, watermarking)
Subscription/entitlement modelling experience
Advanced Unity Catalog security (RLS, ABAC, PII governance)
Terraform/Bicep for IaC
Fabric Semantic Model / Direct Lake optimisation
Senior Azure Data Engineer
Youngs Employment Services
London
Hybrid
Senior
£70,000 - £80,000
RECENTLY POSTED
processing-js
r
python
azure-databricks
delta-lake
sql
Hybrid - Work From Home and West London | Circa £70,000 - £80,000 + Range of benefits
A well-known and prestigious business is looking to add a Senior Azure Data Engineer to their data team. This is an exciting opportunity for a Data Engineer who is not just technical, but also enjoys directly engaging and collaborating with stakeholders from across business functions. Having nearly completed the process of migrating data from their existing on-prem databases to an Azure Cloud based platform, the Senior Data Engineer will play a key role in helping make best use of the data by gathering and agreeing requirements with the business to build data solutions that align accordingly. Working with diverse data sets from multiple systems and overseeing their integration and optimisation will require development, management and optimisation of data pipelines using tools in the Azure Cloud. Our client has expanded rapidly in recent years; they’re an iconic business with a special work environment that has fostered a strong and positive culture among the whole workforce.
This is a hybrid role where the postholder can work from home 2 or 3 days per week; the other days will be based onsite in West London, just a few minutes’ walk from a Central Line tube station.
The key responsibilities for the post include:
Develop, construct, test and maintain data architectures within large scale data processing systems.
Develop and manage data pipelines using Azure Data Factory, Delta Lake and Spark.
Utilise Azure Cloud architecture knowledge to design and implement scalable data solutions.
Utilise Spark, SQL, Python, R, and other data frameworks to manipulate data and gain a thorough understanding of the dataset’s characteristics.
Interact with API systems to query and retrieve data for analysis.
Collaborate with business users / stakeholders to gather and agree requirements.
To be considered for the post you’ll need at least 5 years’ experience, ideally with 1 or 2 years at a senior / lead level. You’ll need to be goal driven and able to take ownership of work tasks without the need for constant supervision. You’ll be engaging with multiple business areas, so the ability to communicate effectively to understand requirements and build trusted relationships is a must.
It’s likely you’ll have most, if not all, of the following:
Experience as a Senior Data Engineer or similar
Strong knowledge of Azure Cloud architecture and Azure Databricks, DevOps and CI/CD.
Experience with PySpark, Python, SQL and other data engineering development tools.
Experience with metadata-driven pipelines and SQL serverless data warehouses.
Knowledge of querying API systems.
Experience building and optimising ETL pipelines using Databricks.
Strong problem-solving skills and attention to detail.
Understanding of data governance and data quality principles.
A degree in computer science, engineering, or equivalent experience.
Salary will be dependent on experience and is likely to be in the region of £70,000 - £80,000, although the client may consider higher for an outstanding candidate. Our client can also provide a vibrant, rewarding, and diverse work environment that supports career development.
Candidates must be authorised to work in the UK and not require sponsorship either now or in the future. For further information, please send your CV to Wayne Young at Young’s Employment Services Ltd. Young’s Employment Services acts in the capacity of both an Employment Agent and Employment Business.
Data Engineer
Head Resourcing
Glasgow
Hybrid
Mid
£45,000 - £55,000
fabric
unity-3d
terraform
git
graphql
python
Mid-Level Data Engineer (Azure / Databricks) | NO VISA REQUIREMENTS
Location: Glasgow (3+ days) | Reports to: Head of IT
My client is undergoing a major transformation of their entire data landscape, migrating from legacy systems and manual reporting into a modern Azure + Databricks Lakehouse. They are building a secure, automated, enterprise-grade platform powered by Lakeflow Declarative Pipelines, Unity Catalog and Azure Data Factory. They are looking for a Mid-Level Data Engineer to help deliver high-quality pipelines and curated datasets used across Finance, Operations, Sales, Customer Care and Logistics.
What You’ll Do
Lakehouse Engineering (Azure + Databricks)
Build and maintain scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark and Spark SQL.
Work within a Medallion architecture (Bronze → Silver → Gold) to deliver reliable, high-quality datasets.
Ingest data from multiple sources including ChargeBee, legacy operational files, SharePoint, SFTP, SQL, REST and GraphQL APIs using Azure Data Factory and metadata-driven patterns.
Apply data quality and validation rules using Lakeflow Declarative Pipelines expectations.
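Several of these ads mention "metadata-driven" ingestion. The idea is that, instead of one hard-coded pipeline per source, a config table describes each source and a single generic routine iterates over it. A minimal Python sketch follows; the source names and fields are invented for illustration, and in an ADF/Databricks setup the config would typically live in a control table driving parameterised pipeline runs.

```python
# Hypothetical source metadata: one row per feed, driving a generic loader.
SOURCES = [
    {"name": "chargebee_invoices", "kind": "rest", "target": "bronze.invoices", "incremental": True},
    {"name": "legacy_orders_csv",  "kind": "sftp", "target": "bronze.orders",   "incremental": False},
    {"name": "crm_accounts",       "kind": "sql",  "target": "bronze.accounts", "incremental": True},
]

def ingest(source: dict) -> str:
    """Build the ingestion step for one source. A real implementation would
    dispatch on 'kind' to the right connector (REST client, SFTP fetch,
    JDBC read); here we just describe the planned step."""
    mode = "incremental" if source["incremental"] else "full"
    return f"{source['kind']}:{source['name']} -> {source['target']} ({mode})"

# Adding a new feed means adding one metadata row, not a new pipeline.
plan = [ingest(s) for s in SOURCES]
for step in plan:
    print(step)
```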
Curated Layers & Data Modelling
Develop clean and conforming Silver & Gold layers aligned to enterprise subject areas.
Contribute to dimensional modelling (star schemas), harmonisation logic, SCDs and business marts powering Power BI datasets.
Apply governance, lineage and permissioning through Unity Catalog.
Orchestration & Observability
Use Lakeflow Workflows and ADF to orchestrate and optimise ingestion, transformation and scheduled jobs.
Help implement monitoring, alerting, SLAs/SLIs and runbooks to support production reliability.
Assist in performance tuning and cost optimisation.
DevOps & Platform Engineering
Contribute to CI/CD pipelines in Azure DevOps to automate deployment of notebooks, Lakeflow Declarative Pipelines, SQL models and ADF assets.
Support secure deployment patterns using private endpoints, managed identities and Key Vault.
Participate in code reviews and help improve engineering practices.
Collaboration & Delivery
Work with BI and Analytics teams to deliver curated datasets that power dashboards across the business.
Contribute to architectural discussions and the ongoing data platform roadmap.
Tech You’ll Use
Databricks: Lakeflow Declarative Pipelines, Lakeflow Workflows, Unity Catalog, Delta Lake
Azure: ADLS Gen2, Data Factory, Event Hubs (optional), Key Vault, private endpoints
Languages: PySpark, Spark SQL, Python, Git
DevOps: Azure DevOps Repos & Pipelines, CI/CD
Analytics: Power BI, Fabric
What We’re Looking ForExperience
Commercial and proven data engineering experience.
Hands-on experience delivering solutions on Azure + Databricks.
Strong PySpark and Spark SQL skills within distributed compute environments.
Experience working in a Lakehouse/Medallion architecture with Delta Lake.
Understanding of dimensional modelling (Kimball), including SCD Type 1/2.
Exposure to operational concepts such as monitoring, retries, idempotency and backfills.
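The SCD Type 1/2 requirement above refers to how dimension tables record attribute changes: Type 1 overwrites, Type 2 closes the current row and opens a new one so history is preserved. A minimal illustration of the Type 2 logic in plain Python (standing in for a Delta Lake MERGE; the customer data is invented):

```python
def scd2_apply(dim: list, update: dict, as_of: str) -> list:
    """Apply one change to a Type 2 dimension: close the current row for
    the key if the tracked attribute changed, then append a new current row."""
    key = update["customer_id"]
    for row in dim:
        if row["customer_id"] == key and row["is_current"]:
            if row["city"] == update["city"]:
                return dim             # no change: nothing to do
            row["is_current"] = False  # close out the old version
            row["end_date"] = as_of
    dim.append({**update, "start_date": as_of, "end_date": None, "is_current": True})
    return dim

# A customer moves city: the old row is closed, a new current row is added.
dim = [{"customer_id": 7, "city": "Leeds",
        "start_date": "2023-01-01", "end_date": None, "is_current": True}]
dim = scd2_apply(dim, {"customer_id": 7, "city": "Glasgow"}, as_of="2024-06-01")

current = [r for r in dim if r["is_current"]]
print(len(dim), current[0]["city"])  # 2 Glasgow
```

In Delta Lake the same close-and-insert step is usually expressed as a single MERGE statement against the dimension table.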
Mindset
Keen to grow within a modern Azure Data Platform environment.
Comfortable with Git, CI/CD and modern engineering workflows.
Able to communicate technical concepts clearly to non-technical stakeholders.
Quality-driven, collaborative and proactive.
Nice to Have
Databricks Certified Data Engineer Associate.
Experience with streaming ingestion (Auto Loader, event streams, watermarking).
Subscription/entitlement modelling (e.g., ChargeBee).
Unity Catalog advanced security (RLS, PII governance).
Terraform or Bicep for IaC.
Fabric Semantic Models or Direct Lake optimisation experience.
Why Join?
Opportunity to shape and build a modern enterprise Lakehouse platform.
Hands-on work with Azure, Databricks and leading-edge engineering practices.
Real progression opportunities within a growing data function.
Direct impact across multiple business domains.
Azure Data Engineer - £500 - Hybrid
Tenth Revolution Group
Newcastle upon Tyne
Hybrid
Mid - Senior
£450/day - £550/day
processing-js
fabric
terraform
github
git
kafka
Azure Data Engineer - £500PD - Hybrid
We are seeking an Azure Data Engineer with strong experience in Databricks to design, build, and optimize scalable data pipelines and analytics solutions on the Azure cloud platform. The ideal candidate will have hands-on expertise across Azure data services, data modeling, ETL/ELT development, and collaborative engineering practices.
Key Responsibilities
Design, develop, and maintain scalable data pipelines using Azure Databricks (Python, PySpark, SQL).
Build and optimize ETL/ELT workflows that ingest data from various on-prem and cloud-based sources.
Work with Azure services including Azure Data Lake Storage, Azure Data Factory, Azure Synapse Analytics, Azure SQL, and Event Hub.
Implement data quality validation, monitoring, metadata management, and governance processes.
Collaborate closely with data architects, analysts, and business stakeholders to understand data requirements.
Optimize Databricks clusters, jobs, and runtimes for performance and cost efficiency.
Develop CI/CD workflows for data pipelines using tools such as Azure DevOps or GitHub Actions.
Ensure security best practices for data access, data masking, and role-based access control.
Produce technical documentation and contribute to data engineering standards and best practices.
Required Skills and Experience
Proven experience as a Data Engineer working with Azure cloud services.
Strong proficiency in Databricks, including PySpark, Spark SQL, notebooks, Delta Lake, and job orchestration.
Strong SQL and data modeling skills (e.g., dimensional modeling, data vault).
Experience with Azure Data Factory or other orchestration tools.
Understanding of data lakehouse architecture and distributed computing principles.
Experience with CI/CD pipelines and version control (Git).
Knowledge of REST APIs, JSON, and event-driven data processing.
Solid understanding of data governance, data lineage, and security controls.
Ability to solve complex technical problems and communicate solutions clearly.
Preferred Qualifications
Industry certifications (e.g., Databricks Data Engineer Associate/Professional, Azure Data Engineer Associate).
Experience with Azure Synapse SQL or serverless SQL pools.
Familiarity with streaming technologies (e.g., Spark Structured Streaming, Kafka, Event Hub).
Experience with infrastructure-as-code (Terraform or Bicep).
Background in BI or analytics engineering (Power BI, dbt) is a plus.
To apply for this role please submit your CV or contact Dillon Blackburn on (phone number removed) or at (url removed). Tenth Revolution Group are the go-to recruiter for Data & AI roles in the UK, offering more opportunities across the country than any other recruitment agency. We’re the proud sponsor and supporter of SQLBits, Power Platform World Tour, and the London Fabric User Group. We are the global leaders in Data & AI recruitment.
Data Engineer
Hunter Selection
Birmingham
Hybrid
Senior
£50,000 - £65,000
fabric
confluence
postgresql
powerbi
delta-lake
sql
Birmingham - Hybrid, 2 days in office per week
Azure Data Warehouse | SQL - PostgreSQL | Fabric
I am working with a client who is on a journey to transform how data powers decision-making across the business. They are building an Azure data warehouse and are looking for a skilled Data Engineer to help them take it to the next level.
Benefits:
26 days holiday plus bank holidays
Health plan
Annual bonus scheme
Hybrid working
Why You’ll Love This Role
Play a central role in building a scalable, high-performing data ecosystem.
Work with modern Microsoft technologies (Azure Data Factory, Microsoft Fabric).
Collaborate with IT and business teams to deliver insights that drive smarter decisions.
Influence how data is used across the entire organisation.
What You’ll Do
Oversee the data warehouse roadmap and the data warehouse itself.
Lead on quality and best practice.
Develop and maintain ETL processes and data integrations.
Optimise data solutions for performance and scalability.
Ensure data integrity, security, and compliance.
Act as a subject matter expert on business data.
Keep documentation up to date in Confluence.
What We’re Looking For
Strong experience in data engineering, ideally at senior level.
Hands-on expertise with Azure Data Factory and Microsoft Fabric.
Advanced SQL skills and data modelling experience.
Familiarity with CI/CD pipelines and API integrations.
Excellent problem-solving and communication skills.
Bonus Points For
Power BI experience.
Knowledge of GDPR and ISO27001.
Microsoft certifications (Fabric Data Engineer Associate, Azure Fundamentals).
Financial Services experience
This is an urgent vacancy; if you would like to be considered then please apply quoting reference AR(phone number removed).
Azure Data Warehouse, Azure Data Factory, Microsoft Fabric, Fabric, SQL, PostgreSQL, API, CI/CD, PowerBI, ETL, ELT, Data Engineer, Azure fundamentals, Fabric Data Engineer, ADF, ADL, Data Lake, Delta Lake
If you are interested in this position please click ‘apply’. Hunter Selection Limited is a recruitment consultancy with offices UK wide, specialising in permanent & contract roles within Engineering & Manufacturing, IT & Digital, Science & Technology and Service & Sales sectors. Please note, as we receive a high level of applications we can only respond to applicants whose skills & qualifications are suitable for this position. No terminology in this advert is intended to discriminate against any of the protected characteristics that fall under the Equality Act 2010. For the purposes of the Conduct Regulations 2003, when advertising permanent vacancies we are acting as an Employment Agency, and when advertising temporary/contract vacancies we are acting as an Employment Business.

Frequently asked questions

What types of Delta Lake jobs are available on this platform?
Our platform features a variety of Delta Lake job opportunities, including Data Engineer, Big Data Developer, Data Architect, and Analytics Engineer roles focused on building and maintaining Delta Lake data pipelines.
What skills are commonly required for Delta Lake positions?
Employers typically look for proficiency in Apache Spark, Delta Lake, Scala or Python, SQL, ETL processes, cloud platforms like AWS or Azure, and experience with data lakes and lakehouses.
Can I find remote Delta Lake jobs here?
Yes, we list a variety of remote Delta Lake job opportunities alongside onsite and hybrid positions to accommodate different work preferences.
How can I improve my chances of getting hired for a Delta Lake role?
Enhance your profile by highlighting your experience with Delta Lake implementations, Spark workflows, and relevant cloud technologies. Pursuing certifications and contributing to open-source Delta Lake projects can also boost your visibility.
Does the job board offer resources to learn more about Delta Lake?
While our primary focus is job listings, we occasionally share blogs, tutorials, and webinars related to Delta Lake to help you stay updated and sharpen your skills.