3-8 years of professional Big Data/Hadoop development experience is preferred.
Expertise with Big Data ecosystem services such as Spark (Scala/Python), Hive, Kafka, and Unix, and experience with a cloud stack, preferably GCP (BigQuery, Dataproc) or AWS (Glue, EMR, Redshift).
Experience with object-oriented programming and component-based development in Java.
Experience working with large cloud data lakes.
Experience with large-scale data processing, complex event processing, and stream processing.