3+ years of enterprise software engineering experience with object-oriented design, coding, and testing patterns, as well as experience engineering (commercial or open-source) software platforms and large-scale data infrastructure solutions.
3+ years of software engineering and architecture experience within a cloud environment (Azure, AWS).
3+ years of enterprise data engineering experience within any Big Data environment (preferred).
3+ years of software development experience using Python.
2+ years of experience working in an Agile environment (Scrum, Lean or Kanban).
3+ years of experience working on large-scale data integration and analytics projects, including using cloud (e.g. AWS Redshift, S3, EC2, Glue, Kinesis, EMR) and data-orchestration (e.g. Oozie, Apache Airflow) technologies.
3+ years of experience in implementing distributed data processing pipelines using Apache Spark.
3+ years of experience in designing relational/NoSQL databases and data warehouse solutions.
2+ years of experience in writing and optimizing SQL queries in a business environment with large-scale, complex datasets.
2+ years of Unix/Linux operating system knowledge (including shell programming).
1+ years of experience in automation/configuration management tools such as Terraform, Puppet or Chef.
1+ years of experience in container development and management using Docker.