Our Gross Margin team is responsible for producing key financial outputs to report on the profitability of the business and the different products we offer to customers.
To do this, the Gross Margin pipeline produces a highly granular view of consumption, revenue and costs for every meter and half hour on supply since Octopus started back in 2016. By running this regularly, we can break down the progression of historical and forecasted financial reporting by meter configuration (e.g. smart vs dumb meters), agreement type (e.g. fixed vs variable customers), billing status, cost types (e.g. wholesale and distribution) and consumption type (estimated vs actual).
Due to the growing scale of our customer base, the complexity of the UK electricity and gas markets, and the shift towards tracking consumption at a half-hourly level using smart meter data, we are looking for someone to take on this great opportunity to combine data engineering and financial analytics to deliver on a business-critical need.
What you’ll do
Ensuring the Gross Margin team gets the financial outputs they need on a weekly/monthly basis to support accounting and audit requirements;
Finding, troubleshooting and solving problems that cause shifting trends or irregularities in the gross margin numbers, whether from changes in Kraken (our customer management and billing system), industry processes or coding logic;
Working with a wide variety of teams across Kraken and OE Business to align and improve logic where possible, particularly with regards to customer edge cases;
Comparing volumes to flows and costs to invoices to make sure the output aligns with how the industry is settling Octopus Energy;
Designing and implementing easy-to-use processes that allow for flexible adjustments to the GM output for spontaneous accounting requirements;
Expanding the use cases of the output and making it easily accessible so other UK teams can benefit from its granular detail and computing investment;
Setting up dashboards to automate reporting, track data integrity and monitor variances against industry and Kraken;
Improving the pipeline architecture and integrating automated tests to make it faster and more resilient to upstream data changes;
Supporting gross margin accountants and analysts to level up their coding skills.
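To give a flavour of the reconciliation work above (comparing volumes to flows and monitoring variances against industry), here is a minimal sketch in plain Python. Everything in it — the function name, data shapes and tolerance — is hypothetical, not Octopus's actual implementation:

```python
# Hypothetical sketch: flag meters whose billed volumes differ from
# industry-settled volumes by more than a tolerance.

def flag_variances(billed, settled, tolerance=0.05):
    """Return meters whose billed vs settled volumes differ by more
    than `tolerance` (as a fraction of the settled volume)."""
    flagged = {}
    for meter, settled_kwh in settled.items():
        billed_kwh = billed.get(meter, 0.0)
        if settled_kwh == 0:
            continue  # avoid division by zero; handle zero-settlement meters separately
        variance = (billed_kwh - settled_kwh) / settled_kwh
        if abs(variance) > tolerance:
            flagged[meter] = round(variance, 4)
    return flagged

# Hypothetical per-meter volumes (kWh), aggregated from half-hourly data
billed = {"meter_a": 100.0, "meter_b": 52.0, "meter_c": 75.0}
settled = {"meter_a": 101.0, "meter_b": 48.0, "meter_c": 75.0}

print(flag_variances(billed, settled))  # → {'meter_b': 0.0833}
```

In practice this kind of check would run over the full pipeline output (e.g. as a dbt test or Databricks workflow) rather than on in-memory dictionaries, and the flagged meters would feed the monitoring dashboards described below.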
Who we're looking for:
Strong proficiency in SQL and Python;
A keen eye for detail to find the root causes of shifts in the pipeline's output numbers;
Ability to quickly understand new domain areas and visualise data effectively;
Ability to translate technical code logic for a variety of business stakeholders;
Team player excited at the idea of ownership across lots of different projects and tools;
Passion for driving towards Net Zero;
A drive for knowledge sharing and documentation to build a more effective platform;
Experience with dbt, Airflow and Spark would be a plus.
Our Data Stack:
This is the tool suite we currently use, and which we'll help you learn in this role:
SQL-based pipelines built with dbt on Databricks
Analysis via Python jupyter notebooks
Pyspark in Databricks workflows for heavy lifting
Streamlit and Python for dashboarding
Airflow DAGs with Python for ETL
Notion for data documentation