Strong programming experience with Scala and Spark.
In-depth knowledge of, and hands-on experience implementing, various Java/J2EE/EAI patterns.
Designed, architected, and implemented complex projects processing considerable data volumes (GBs to PBs).
Hands-on experience building batch and real-time processing systems with technologies and tools such as Solr, Hadoop, NoSQL databases (Cassandra, HBase, MongoDB, etc.), Spark, Storm, and Kafka (see the sketch after this list).
Strong experience in performance monitoring, evaluation, and improvement.
Sound knowledge of clustered deployment architectures, with the ability to propose deployment solutions based on customer needs.
Capable of working both as an individual contributor and within a team.
Excellent knowledge of, and delivery experience with, the major Hadoop distributions (CDH, HDP, MapR) and their technology stacks.
Experience designing, architecting, and delivering Big Data solutions in private and public clouds (AWS, GCP, Azure).
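To make the batch/real-time expectation above concrete, here is a minimal sketch of a Spark Structured Streaming job consuming from Kafka. It assumes a hypothetical topic named events, a broker at localhost:9092, and the spark-sql-kafka connector on the classpath; a production job would write to a durable sink rather than the console.

```scala
import org.apache.spark.sql.SparkSession

object KafkaStreamSketch {
  def main(args: Array[String]): Unit = {
    // Local session for illustration only; a real deployment targets a cluster master.
    val spark = SparkSession.builder()
      .appName("kafka-stream-sketch")
      .master("local[*]")
      .getOrCreate()

    // Read a stream from a hypothetical Kafka topic named "events".
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "events")
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    // Console sink for illustration; swap in Cassandra, HBase, etc. for production.
    events.writeStream
      .format("console")
      .outputMode("append")
      .start()
      .awaitTermination()
  }
}
```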
Role & Responsibilities:
Design generic frameworks for distributed data processing (a minimal sketch follows this list).
Anticipate technological evolutions and trends.
Coach and work closely with technical teams on the development of the technical architecture.
Ensure sound technical direction and choices.
Design, architect, and implement solutions for large-scale data processing (GBs to PBs) on NoSQL, Hadoop, and MPP-based products.
Drive architecture and design calls with customers.
Work with the team and provide guidance on implementation details.
Take responsibility for timely, high-quality deliveries.
Fulfill organizational responsibilities: share knowledge and experience with other groups in the organization, and conduct technical sessions and trainings.
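One reading of "design generic frameworks for distributed data processing" above is an extract-transform-load contract that concrete jobs implement; below is a minimal sketch under that assumption, using Spark as the engine. The names DataPipeline and ParquetCopyPipeline are hypothetical.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

// Generic pipeline contract: each job declares how it reads, transforms,
// and writes data; the run sequence is fixed and shared across jobs.
trait DataPipeline {
  def extract(spark: SparkSession): DataFrame
  def transform(input: DataFrame): DataFrame
  def load(output: DataFrame): Unit

  final def run(spark: SparkSession): Unit =
    load(transform(extract(spark)))
}

// Hypothetical concrete pipeline: deduplicate one Parquet path into another.
class ParquetCopyPipeline(inPath: String, outPath: String) extends DataPipeline {
  def extract(spark: SparkSession): DataFrame = spark.read.parquet(inPath)
  def transform(input: DataFrame): DataFrame = input.dropDuplicates()
  def load(output: DataFrame): Unit = output.write.mode("overwrite").parquet(outPath)
}
```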
Experience Guidelines:
Strong hands-on knowledge of the Big Data tech stack.
Strong technical knowledge of processing tools such as Spark and Storm.
Ability to modularize work and distribute it within the team.
Ability to coach junior engineers and bring them up to speed on the technology.