Big Data, Hadoop, Spark, Python, Hive, GBQ, Kafka, Apache Airflow, and GCP
Job Description:
12+ years of overall IT experience, including hands-on development experience
Strong hands-on experience in big data ETL technologies using Spark, Python, and Hive
Strong hands-on experience in streaming technologies such as Kafka
Strong hands-on experience with GBQ and Oracle
Proven experience driving large-scale data ingestion pipeline projects
Strong communication and analytical skills, with experience in onsite-offshore coordination
Works with Enterprise Architects, CDOs, business partners, product development teams, and senior designers to capture requirements, define solutions that integrate with downstream applications, review prototypes, and develop iterative revisions