Join our Data Engineering team and take full ownership of architecting, building and maintaining the pipelines for the various data sources that power Slido’s operations. By doing so, you will contribute to our team’s larger mission: using Slido data to solve problems for our product, our users and our people.
Your role
- You will be responsible for architecting, building and maintaining pipelines for the various data sources that are essential to Slido’s operations
- Be the driver in using data to solve problems for our users and the wider Slido team
- Work alongside our team on projects related to large-scale data processing, machine learning and analytics
- You will contribute to a range of products, from small internal tools to large-scale web applications used by hundreds of thousands of people
- Look for new ways to transform our customers’ needs into useful software
- Learn by doing and do by (not just machine) learning
Your profile
- You are an experienced engineer with a passion and curiosity for large-scale data processing, Big Data (whatever that may mean) and/or machine learning
- You have previously worked with Python (but Go or Rust experience works just as well) and know your way around the UNIX command-line environment, version control (Git) and CI/CD
- You are familiar with OLTP (e.g. MySQL, PostgreSQL, …) and OLAP (e.g. AWS Redshift, Google BigQuery, Snowflake, AWS Athena) databases
- You can read (and perhaps even write) TypeScript and PHP (no, we don’t use these for data processing, but they can be found at Slido)
- You have already worked with various machine learning algorithms (from Random Forests to Deep Neural Networks) and frameworks (such as Scikit-Learn, TensorFlow or PyTorch), or are eager to do so in your next large(r) project
- Bonus points if you have previously worked with any of the following: cloud computing (preferably on AWS) and container technologies (Docker, Kubernetes), workflow orchestrators (Airflow, Luigi, Prefect, …), data formats such as Parquet, Avro or ORC, distributed systems for large datasets (such as Hadoop, BigTable, Cassandra, …) or testing methodologies applied to data processing
Why join us
- We are a team of 170+ people who are passionate about what they do and care about each other
- You will have the opportunity to make an impact on a world-class product used by thousands of people around the world
- We share knowledge within our team of talented programmers, so you will grow much faster
- We use agile software development practices
- We are fans of the Lean Startup methodology and we love Jira and Slack
- We have a strong culture of Freedom & Responsibility