Develop, maintain, and optimise ETL processes for data extraction, transformation, and loading
Create and manage data models and data warehousing solutions
Write and maintain structured queries and content-scraping projects
Utilise programming languages such as Python and SQL for data-processing tasks
Enable integration of apps and services by connecting to internal and third-party APIs
Collaborate with cross-functional teams to ensure seamless integration of data processes
Optimise data pipelines for performance and efficiency
Work closely with data scientists and analysts to support their data needs
Build pipelines to transform raw, unstructured data into useful information by leveraging AI and LLMs
Proven experience in data engineering and proficiency in designing and implementing scalable data architectures
Strong experience with ETL processes, data modelling, and data warehousing (we use Airflow, dbt, and Redshift)
Expertise in database technologies, both relational (SQL) and NoSQL
Knowledge of cloud platforms (AWS)
Solid understanding of data security measures and compliance standards
Extensive hands-on Python experience
Collaborative skills to work closely with data scientists and analysts
Ability to optimise data pipelines for performance and efficiency
Ability to build, test, and maintain pipeline tasks and projects
Experience with version control systems such as Git