Senior Data Engineer - Remote

Virtuous
About Us

At Virtuous, we are committed to helping charities reimagine generosity. We believe that charitable giving is about personal connections, not sales transactions. Generosity is driven by our passions and relationships – and givers want to feel like they are part of a movement bigger than themselves. We are the Generosity Operating System at the heart of charity. We are the Donor Management System that is putting the joy back in fundraising.

Position Summary

Virtuous is looking for an experienced, highly motivated Senior Data Engineer to join our burgeoning Data Operations Team. The position will report to the Director of Data Operations. The ideal candidate will have direct experience building data pipelines and architecting data lakes & data warehouses that support key business functions and promote data visibility & insights across all teams.

This position should excite someone who is ready to take ownership of all aspects of data warehousing and, alongside the Director of Data Operations, provide long-term strategic direction for Virtuous’s data operations and reporting capabilities. To be successful, a candidate will need to manage and translate terabytes of complex structured and unstructured data into actionable business metrics, enjoy collaborating with others, and demonstrate a passion for Virtuous’s work and mission.

Candidates willing to commute and work out of our downtown Phoenix, AZ office are preferred, though we are accepting resumes from candidates working remotely in other states.

Responsibilities

  • Own, design, deploy, and optimize all aspects of data pipelines, data lakes, data warehouses, and data marts
  • Translate complex business concepts & reporting needs into data warehousing models that enable a self-service BI reporting structure for all Virtuous teams
  • Publish and maintain documentation, data dictionaries, and best practices for consumption by the layperson
  • Optimize ETL & reporting processes and capabilities with an emphasis on security, accuracy, and extensibility while minimizing latency
  • Implement automated data validation / QA processes to ensure 100% accuracy in reporting outputs and foster trust across all teams

Requirements

  • 5+ years of direct experience building data pipelines and architecting data lakes/warehouses, or equivalent experience in a related field
  • Authoritative in ETL optimization and in designing, coding, and tuning big data processes using Apache Spark, R, Python, C#, and/or similar technologies
  • Expert in writing and optimizing SQL 
  • Strong written, verbal and interpersonal communication skills with an ability to communicate key insights from complex analyses in summarized business terms
  • Experience assembling terabytes of complex datasets that meet non-functional and functional business requirements
  • Ability to identify, design, and implement internal process improvements, including redesigning infrastructure for greater scalability, optimizing data delivery, and automating manual processes
  • Experience with agile development, sprint planning, and estimating story points
  • Startup environment/SaaS experience preferred 
  • Experience with Power BI, Tableau or similar BI tools
  • Independent self-starter who thrives in a fast-paced environment

What We Offer

  • Hybrid schedule for local employees within Arizona (3 days in office, 2 from home)
  • Work from home for employees outside of Arizona
  • 401(k) with match
  • Unlimited PTO
  • Paid volunteer time
  • Medical/Dental/Vision benefits; dependents are also eligible for coverage
  • HSA/FSA offerings
  • One Medical, Talkspace, & Teladoc memberships
  • Fun company outings and events