Create and maintain optimal and scalable data pipeline architecture
Enhance the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Amazon Redshift, and other data warehousing technologies.
Create and implement data tools for analytics and data science team members that help them build and optimize our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.
Required Skills and Qualifications
3+ years of experience in data engineering, cloud/web engineering, or a related field
Experience building, maintaining, and optimizing data pipelines and the broader data platform
Experience with data orchestration tools such as AWS Lambda and Cloud Functions, and schedulers such as Airflow
Experience implementing data collection strategies, data modelling, data storage, and ETL/ELT solutions
Knowledge of at least one popular programming language used for statistical analysis, such as Python or R.
Advanced SQL knowledge, ideally in MySQL, PostgreSQL, or similar.
Experience working with data lakes, data warehouses, and data mesh architectures.
Experience with cloud infrastructure providers such as AWS, Microsoft Azure, or Google Cloud Platform.
Additional Skills
Knowledge of reporting and data visualization best practices
Familiarity with business intelligence tools such as Redash, Power BI, Tableau, Periscope, or Mode Analytics
Familiarity with dbt or similar analytics workflow tools