**What you will do**
- Leveraging your experience building and maintaining complex data pipelines, you will drive the development of our analytics platform, currently built on AWS Firehose, Kafka, S3, Athena, Airflow, EMR, PySpark, and TimescaleDB.
- We are looking for someone who is eager to:
  - Collaborate with other developers to ship new features
  - Own the overall architecture of our data pipelines
  - Ensure we have the right tests and structure in place so we can move quickly without breaking things
  - Share your knowledge of data engineering principles and best practices with the team
  - Keep learning new technologies and stay on the lookout for new ideas we should try out
**What we are looking for**
- Expert-level knowledge of Spark
- Experience with complex data pipelines and orchestration in the cloud
- Quality-oriented mindset: testing, code reviews, code quality, etc.
- Awareness of performance considerations
- A passion for simple, maintainable, and readable code that balances pragmatism and performance