Data Quality Engineer - remote

Scrapinghub
Posted 4 years ago
Stack Overflow

Data QA is an important function within Scrapinghub. The Data QA team works to ensure that the quality and usability of the data scraped by our web scrapers meets and exceeds the expectations of our enterprise clients. 

Are you passionate about data and data quality and integrity?

Do you enjoy using programming languages and tools to analyse and manipulate data, detect data quality issues, and visualise your findings?

Are you highly customer-focused with excellent attention to detail?

Owing to growing business and the need for ever more sophisticated Data QA, we are looking for a talented Data Scientist to join our team. As a Scrapinghub Engineer, you will apply primarily automated and semi-automated data wrangling, data manipulation, and data visualisation techniques to verify and validate the quality of data extracted from the web.

Job Responsibilities:

  • Understand customer web scraping and data requirements; map these requirements to custom scripts in your language/tool of choice, with a view to establishing the degree of data quality and uncovering data quality issues.
  • Draw conclusions about data quality by producing descriptive and inferential statistics, summaries, and visualisations.
  • Supplement existing manual QA and schema validation techniques with advanced data wrangling and manipulation.
  • As needed, perform complementary manual and semi-automated verification.
  • Collaborate with developers to further troubleshoot and pinpoint solutions.
  • Present findings and conclusions to stakeholders at various levels (other members of the QA department, developers, project managers, account managers, customers).
  • Write high-quality, well-structured code that is maintainable and extensible.
  • Manage code with version control, using GitHub, Bitbucket, or other services as applicable.

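The kind of scripted quality check described above can be sketched in a few lines of pandas. This is a hypothetical illustration, not a real Scrapinghub project: the field names (url, title, price) and the checks (null rates, duplicate URLs, out-of-range prices) are assumptions chosen to show the idea.

```python
import pandas as pd

# Toy sample of scraped product records (illustrative only).
records = pd.DataFrame({
    "url": ["a.com/1", "a.com/2", "a.com/2", "a.com/3"],
    "title": ["Widget", None, "Gadget", "Gizmo"],
    "price": [9.99, 4.50, 4.50, -1.00],
})

# Descriptive summary of common data quality issues:
# per-field null rates, duplicate keys, and invalid values.
report = {
    "rows": len(records),
    "null_rate": records.isna().mean().round(2).to_dict(),
    "duplicate_urls": int(records["url"].duplicated().sum()),
    "negative_prices": int((records["price"] < 0).sum()),
}
print(report)
```

A real script would load records from a scraping job's output rather than an inline frame, and the resulting summary would feed the statistics and visualisations presented to stakeholders.
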
Required Skills:

    • Highly proficient in one or more of Pandas, SQL, R, Excel.
    • BS degree in Computer Science, Engineering, Mathematics, or equivalent.
    • Demonstrable programming knowledge and experience, minimum of 3 years (please provide code samples in your application - ideally pertaining to data analysis - via a link to GitHub or other publicly-accessible service).
    • Background in data profiling.
    • Strong analytical skills with unstructured data.
    • Experience in data management, data integration and data quality verification.
    • Experience in data quality visualisation and the visualisation of data quality issues.
    • Ability to work with very large datasets (into the millions of records).
    • Strong knowledge of software QA methodologies, tools, and processes.
    • Excellent written and spoken English; a confident communicator, able to communicate at both technical and non-technical levels with various stakeholders on all matters of QA.
    • Outstanding attention to detail and ability to meet deadlines.

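Working with datasets in the millions of records usually means not loading everything into memory at once. A minimal sketch, assuming pandas and a CSV export: the chunked read and the specific checks (required columns, non-empty title) are illustrative, not a prescribed workflow.

```python
import io
import pandas as pd

# Stand-in for a multi-million-row export; a real run would pass a
# file path to read_csv instead of an in-memory buffer.
csv_data = io.StringIO(
    "url,title,price\n"
    "a.com/1,Widget,9.99\n"
    "a.com/2,,4.50\n"
)

REQUIRED = {"url", "title", "price"}
bad_rows = 0

# chunksize streams the file in pieces, keeping memory use bounded.
for chunk in pd.read_csv(csv_data, chunksize=1):
    assert REQUIRED.issubset(chunk.columns), "schema drift detected"
    bad_rows += int(chunk["title"].isna().sum())

print(f"rows failing title check: {bad_rows}")
```

In practice the chunk size would be in the tens or hundreds of thousands of rows, and the per-chunk checks would mirror the customer's schema.
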
Desired Skills:

    • Prior experience in a Data QA role (where the focus was on verifying data quality, rather than testing application functionality).
    • Familiarity with Jupyter and JupyterLab.
    • Experience with dashboard and monitoring tools such as Grafana, Kibana, FineReport, etc.
    • Experience building your own dashboards.
    • Interest in and flair for Data Science concepts as they pertain to data analysis and data validation (machine learning, inferential statistics, etc.); if you have ideas, mention them in your application.
    • Experience with Spark, BigQuery, and other big data technologies.
    • Knowledge of and experience in other technologies that support a modern cloud-based 

Since you have read this far, the role clearly interests you. In your cover letter, describe in detail what appeals to you about the role, and some of your previous experience (ideally with public links to code or examples) in the area of data analysis, data visualisation, and/or data quality verification.