DATA AND BACKEND ENGINEER - remote

OTTOmate ltd
We are looking for a Software Engineer who will work on collecting, storing, processing and analyzing data sets related to the independent (mainly digital) music industry. 

You will work with our chosen frameworks and tools, find the optimal solutions, and integrate them into the overall system via a microservice architecture, while improving and modernising the platform. 

We have undertaken a major project called OTTO GLIDER, which focuses on expanding our suite of products and services to serve the industry in the most flexible way. We are looking for a passionate individual keen on making data transparency slick and clean, so that distributors and independent labels of any size can process and deliver reporting to their artists. 

Qualifications / Skills 
  • ETL applications and microservice architecture 
  • Good AWS knowledge (in particular Lambda, EC2 and ingestion pipelines)
  • Spark proficiency
  • Parquet
  • Broad software architecture knowledge
  • Docker
  • Python/PHP senior level
  • MySQL, PostgreSQL
  • Flask 
  • REST API development
  • Java/Spring
  • OpenAPI
  • BigQuery
Our technology and tools 
We are currently using the following languages and tools:
  • MongoDB, MySQL and Parquet
  • Google Cloud Platform
  • Python, Scala and PHP
  • Spark, AWS Glue, Lambda
  • Laravel 8.0
  • Jira, Slack and the RT ticketing system for ticket tracking and support 
Responsibilities
  • Selecting and integrating any Big Data tools and frameworks required to provide requested capabilities
  • Monitoring performance and advising on any necessary infrastructure changes
  • Creating scripts to automate the ingestion of data sets and the generation of aggregated results using Spark
  • Creating scripts to automate the management of big data processing tasks
  • Developing a web service API that allows ingestion and processing of data sets
  • Documenting the implemented solutions and the management procedures
  • Providing basic visualizations of the generated aggregated data
Your skills and experience
You are proficient in designing and architecting data-intensive applications. 
  • 5+ years of experience in software development, ideally in the music industry or in data-driven enterprises
  • Hands-on experience setting up and monitoring data pipelines
  • Proficient understanding of distributed computing principles
  • Setup and management of a processing cluster, with all associated services
  • Ability to solve any ongoing issues with operating the cluster
  • Expertise with AWS cloud services: EC2, IAM, EKS, Developer Tools, etc.
  • Experience with the Apache Spark processing model 
  • Good knowledge of Big Data querying tools based on Spark
  • Experience integrating data from multiple data sources
  • Experience with SQL and NoSQL databases
  • Knowledge of any of the big data cloud platforms: Cloudera, MapR, Hortonworks, Google Cloud or AWS
  • Previous experience in the music industry dealing with metadata and databases is a strong plus
  • Agile and thorough with documentation, and committed to following and improving code discipline
  • You know how to work remotely and efficiently, both independently and as part of a team 
  • Fully proficient in written English; Spanish is a plus