Senior Data Engineer - Remote

Jam.gg
Jam.gg is a social, low-tech-friendly cloud gaming platform. It has been designed to be accessible to everyone: available directly from a web browser, it does not require a high-speed internet connection to deliver a seamless multiplayer gaming experience. Jam.gg is all about bringing people together through games, with a touch of childhood playfulness and creativity.

We are led by an experienced team of former Google, Twitter, Amazon, Docker, EA and King employees, among other top tech companies. Jam.gg is a Y Combinator company backed by top VC firms and LEGO Ventures, and has already established itself as a new go-to platform for cloud gaming in some countries.

This is an incredible opportunity to join a booming company. Driven by a strong, inclusive culture, we welcome self-starting, fast-learning, talented people who want to start and manage unique and challenging projects where collaboration (internal and external) is everything.

We are looking for a talented Senior Data Engineer to join our growing team, act as the main driver of our data culture and lead data initiatives around our key activities:

  • Jam.gg (cloud gaming platform): data analysis support for the product and marketing teams, plus maintenance and evolution of the existing data infrastructure.
  • JamLand (upcoming mobile game): design, build and deploy the foundations of our data infrastructure from scratch.

Location: The candidate must be based in Europe, within ±2 hours of the CET time zone.
Start date: As soon as possible

What you will be doing:
Own and drive data culture
  • Share data-driven actionable insights at bi-weekly All Hands
  • Collaborate with analysts and other stakeholders to understand their data needs and requirements, and provide technical solutions and insights.
  • Develop tutorials for data consumers across the organization and support them in using our analytics tools, with the aim of increasing the autonomy of data users.
  • Mentor and provide technical guidance to junior members of the data team.
  • Write documentation on our data processes for both technical and non-technical users.
Design, develop, deploy and maintain data infrastructure
  • Design, implement and maintain scalable data pipelines and workflows on GCP to process and analyze large volumes of data in real-time and batch modes.
  • Maintain and continuously improve our data warehouse (BigQuery), data lake (Cloud Storage) and data marts (Metabase).
  • Develop, deploy, manage and orchestrate data microservices and pipelines that allow for the processing of both internal and external data into our data warehouse.
  • Stream event-driven trackers using third-party tools (Segment, RudderStack) or our own APIs and infrastructure (Cloud Functions, Pub/Sub, Dataflow, Dataproc, API gateways).
Ensure data quality and compliance
  • Develop and maintain data quality and monitoring processes to ensure consistency and accuracy.
  • Parse and examine logs to identify potential problems in our service that could have downstream effects on our data generation.
  • Ensure data security, compliance and privacy requirements are met by implementing appropriate data governance and access controls.

What we are looking for:
  • Experience at a startup is a plus.
  • Experience in the video game and/or content industries is a plus.
  • Master’s degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field and 5+ years of experience in a Data Engineering role.
  • Advanced working knowledge of and experience with SQL and NoSQL databases. As a bonus, experience with Firestore and BigQuery.
  • Hands-on experience building and maintaining data infrastructure on GCP using tools such as BigQuery, Cloud Storage, Cloud Functions, Pub/Sub and Dataflow. GCP experience specifically is a bonus: we are also interested in talented people whose experience is with AWS.
  • Experience in large-scale data processing and analytics using Apache Spark and BigQuery.
  • Experience in managing and orchestrating numerous data pipelines using orchestration tools such as Airflow.
  • Familiarity with containerization, orchestration, and deployment using Docker and Kubernetes.
  • A successful history of manipulating, processing and extracting value from large, disconnected datasets.
  • Strong programming skills in Python and shell scripting. Bonus: familiarity with JavaScript (React), Go and/or Unity.
  • Strong autonomy, project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.

Benefits:
  • Unlimited holiday leave (minimum 5 weeks).
  • Monthly well-being allowance (mental well-being, sports, massage, etc.).
  • Home office allowance.
  • Fully remote & flexible working hours.
  • Equal pay policy.
  • Equal maternity and paternity leave (18 weeks) after 1 year with the company.
  • Maternity/paternity subsidy of €3,000 after 1 year with the company.
  • Stock option plan.
  • Health insurance compensation on a one-to-one basis, depending on geographical location & company policy.
  • Additional benefits depending on the geographical location.