As we enter a stage of rapid commercialisation and customer account growth, we have a number of exciting new offerings to launch to customers. We’re looking for an exceptional person to help us continuously deliver features that provide value to our customers. Our ideal engineer loves engaging with interesting software problems, has an interest in data-related development, and is passionate about building and shaping the future within a collaborative, community-based environment. We operate a highly agile development approach, giving you wide scope to be involved in hands-on system design, test-driven development, deployment and operations.
Our data sources and problems are many and varied. We have some simple but high-throughput data sources (e.g. over 5,000,000,000 rows a day and growing rapidly), complex unstructured and semi-structured data, and complex application data from our various microservices.
Our aim is to allow our business and our customers to answer increasingly complex questions and gain new insights, based on our data, additional external data, and the lessons and models we can extract from that data.
What you’ll do
Build and maintain data pipelines that deliver key data and insights to the business and our customers
Advocate for best data practices throughout the organisation
Integrate new data sources into the data platform through APIs, CDC or bulk data transfer
Build and maintain testing and documentation frameworks for our data sources
Work with the business to scope and deliver new data engineering projects and requirements
Maintain and build on our existing data infrastructure and tools
Support the internationalisation of our data infrastructure as we continue to grow globally
Contribute to the software engineering and data engineering culture here at Kraken
Collaborate regularly with colleagues with many different professional specialities, including software engineers and data scientists, to create innovative solutions that delight our customers and colleagues
Work as part of a globally distributed team of engineers, regularly seeking feedback and growing your skills as a technical professional
What you’ll need
In-depth industry experience in a data engineering role
Experience with data processing and/or analytics technologies, e.g. Databricks, dbt, AWS Glue, Spark, Airflow, Redshift, SQL, Parquet/Delta (don’t worry, we don’t expect or want you to have them all; experience with other technologies doing the same jobs is also interesting)
Industry experience in software development & design
A drive to get things done in a collaborative, agile development environment
An interest in working with large data sets, both processing and analysing, and building data products for our customers
A proven ability to perform and communicate well in a fast-paced environment
Excellent analytical and multitasking skills
It would be great if you had
Experience with BI and/or analytics tools, e.g. Athena, Tableau, QuickSight
Experience working with data lakes and data at scale
Experience migrating data between platforms at scale
Experience with AWS or similar cloud providers, and serverless technologies e.g. AWS Lambda, Kinesis, DynamoDB, API Gateway
Experience developing, securing or operating cloud-scale applications or infrastructure, ideally with Terraform or CloudFormation
Experience working with a team distributed across multiple continents and time zones