Position: Senior Backend Engineer
Total Comp: $150K - $250K (dependent on experience level and work location)
Total Comp comprises: base salary, Cisco RSUs, a 12% performance bonus, 401(k) matching (up to 4.5%), and a sign-on bonus
Location: Fully remote anywhere in the US or Canada. Partial onsite work is available in SF, San Jose, Chicago, or Austin (post-COVID)
About the role
In near real time, Cisco Meraki collects massive amounts of data from its devices all over the world, writing nearly 7 million data points each second. With this data, we power the Dashboard and give our customers insight into the state of their networks. As a member of the Data Engineering team, you will develop, scale, and maintain our dashboard and the lower-layer ETL pipelines that power it. You will work with people across Engineering and throughout Cisco Meraki to build the infrastructure and data pipelines behind their data-driven decisions. You will also help scale our systems to handle ever-increasing volumes of data points and requests.
What you will do:
- Design, build, and maintain scalable data pipelines that ingest all the data powering our dashboard.
- Build systems that process raw data and create intuitive and interesting insights for our customers.
- Design systems to ensure fast, reliable, and scalable delivery of data across our cloud.
- Work with various groups within Cisco Meraki to understand their data requirements and requests, as well as those of our customers.
About you
- Experience and passion for analyzing, scaling, and debugging large systems.
- Excitement for working with large data sets in real-time.
- Basic understanding of SQL, including experience working with one or more relational databases (e.g., PostgreSQL or MySQL).
- Experience in object-oriented and/or functional programming languages (e.g., Scala, Ruby, Go).
- You take a focused, organized approach to development, testing, and quality.
- You're passionate about what you're doing and ignite people around you.
Bonus points for:
- Experience working with real-time compute and streaming infrastructure (Kafka, Flink, Storm, Spark, etc.).
- Experience with microservice architectures.
- Experience with, or willingness to work in, an agile environment (Scrum, Kanban, etc.).
- Personal projects or contributions to open-source projects.