Senior Data Engineer - We're scaling fast, so come join us! - Remote

Paddle
Posted 3 years ago  • London, UK

About the Team:

The Data Engineering team at Paddle is a new team responsible for building and owning the core data infrastructure and data processing pipelines that support Paddle’s data products and business insights. The team supports all areas of the business, both from a data warehousing perspective and by helping deliver solutions on our data streaming infrastructure. The team will initially be made up of two people and will scale over the coming months, so it is a huge opportunity to build something from the ground up.

Your Role:

Reporting to the Data Engineering Team Lead, you will be responsible for delivering the technical solutions that implement Paddle’s data systems. You will collaborate with the wider engineering team and support analysts decentralized across the organization.

What you’ll do:

  • Leverage your experience and skills to establish the best architecture.
  • Work closely with decentralized analysts (Commercial, Finance, etc.) to identify requirements and develop the necessary data solutions to deliver against those requirements.
  • Build, maintain and run efficient data pipelines.
  • Apply data transformation logic including advanced aggregations and data wrangling techniques.
  • Practise DevOps: you’re responsible for getting your code to production and maintaining it.
  • Explore and use the right tools for the job, backing your choices constructively.
  • Help design a stable platform to support phenomenal growth.

We'd love to hear from you if you have:

  • Significant proven experience as a Data or Software Engineer in a fast-paced, growing company, and a passion for data.
  • Solid development background with Python.
  • Good experience working with IaC tools (we use Terraform).
  • A track record of designing and building systems to handle high traffic at scale in a cloud-based environment on AWS. Experience with Jenkins, Kibana, Grafana, and Prometheus is highly desirable.
  • Good understanding of data modelling. Experience with Redshift and/or Snowflake is a plus.
  • Experience with batch processing frameworks; familiarity with DBT, Apache Airflow, or similar is a plus.
  • Experience with Fivetran, Matillion, Stitch, or similar (we use Fivetran) is a plus.
  • Experience with message brokers and stream processing technologies (e.g., Kinesis).
  • Familiarity with BI tools such as Looker, Tableau, Sisense, or similar.
  • Strong attention to detail to highlight and address data quality issues.