- Expert-level SQL knowledge with strong complex query writing skills
- 4-5 years of experience with big data – preferably in a large complex organization
- 4-5 years of Data Engineering/Data Analytics/Business Intelligence
- 4-5 years of legacy ETL experience
- 4-5 years of hands-on Pipeline Development
- Experience with Apache Spark
- 4-5 years of experience with RDBMSs such as Oracle, DB2, MySQL, Teradata, or Hive
- 4-5 years of experience with NoSQL databases such as HBase, DynamoDB, MongoDB, or Cassandra
- Experience with cloud data warehouse systems such as Snowflake or Redshift
- Distributed computational framework experience required
- AWS solutions and data experience preferred, e.g., AWS Glue
- Experience with distributed event streaming platforms such as Kafka or Kinesis