The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
Overall 4 to 10 years of software development experience, with hands-on working knowledge of Big Data technologies such as PySpark, Hive, Hadoop, HBase, Spark, NiFi, Scala, Kafka, and Python
Excellent knowledge of Java, SQL, and Linux shell scripting
Strong communication, interpersonal, learning, and organizing skills, combined with the ability to manage stress, time, and people effectively
Proven experience coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment
Ability to manage a diverse and challenging stakeholder community
Coordinate SIT and UAT testing; gather feedback and provide necessary remediation and recommendations in a timely manner
Drive small projects individually; coordinate changes and deployments on schedule