UG :- Not Required
PG :- Not Required
No. of positions :- 1
Posted :- 8th Jun 2022
Responsibilities
Design, document, and implement data lake and data stream processing.
Assist with the testing, deployment, and ongoing support of data processes.
Design and implement support tools for data processes.
Benchmark systems, analyze bottlenecks, and propose solutions to eliminate them.
Communicate data process designs clearly and align partner teams around them.
Enjoy being challenged by and solving complex problems.
Identify and resolve conflicts or ambiguities.
Work within an onshore/offshore model where handoffs between team members will be required to ensure the success of the program.
Required qualifications to be successful in this role
Data Stack experience: Spark, Kafka, HBase, Hive, MongoDB
Proficiency in Scala is required, as Scala is used both in the Data Stack and in standalone modules (Web Services)
Experience architecting and deploying highly scalable distributed systems
Experience across the full software lifecycle; have a DevOps approach
Experience working on Linux systems
Experience using standard SDLC tools such as Jira, Git, and Jenkins
Works well in a team environment
A good fit will:
Enjoy being challenged by and solving complex problems
Have good written and verbal communication skills
Have patience to "bring others along"
Be able to assist in documenting requirements
Be able to identify and resolve conflicts or ambiguities
Desired qualifications / non-essential skills
Experience working with build pipelines
Experience with an iterative approach to coding
Team player and self-driven