Looking for candidates with strong experience in software development, especially in Big Data technologies, including Java/Scala/Python and Spark/Hive/Hadoop.
Qualifications:
- BE/B.Tech/MCA/MS-IT/CS/B.Sc/BCA or another degree in a related field
- Experience working with a Hadoop distribution, with a good understanding of its core concepts and best practices
- Good experience building and tuning Spark pipelines in Scala/Python (see the sketch after this list)
- Good experience writing complex Hive queries to derive business-critical insights
- Good programming experience with Java/Python/Scala
- Understanding of data lake vs. data warehouse concepts
- Experience with AWS; exposure to Lambda/EMR/Kinesis is good to have
- Experience with NoSQL technologies such as MongoDB and DynamoDB
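
For context, the Spark and Hive work described above might look like the following minimal Scala sketch; the table, column, and path names are hypothetical and chosen purely for illustration:

```scala
import org.apache.spark.sql.SparkSession

object SalesPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SalesPipeline")
      .enableHiveSupport()                           // read Hive-managed tables
      .config("spark.sql.shuffle.partitions", "200") // a common tuning knob
      .getOrCreate()

    // A Hive-style aggregation: revenue per region for one day
    // (sales.transactions and its columns are hypothetical).
    val daily = spark.sql(
      """SELECT region, SUM(amount) AS revenue
        |FROM sales.transactions
        |WHERE ds = '2024-01-01'
        |GROUP BY region""".stripMargin)

    // Cache only if the result feeds multiple downstream stages.
    daily.cache()

    // Write back to the lake as Parquet (path is illustrative).
    daily.write.mode("overwrite").parquet("/data/lake/daily_revenue")

    spark.stop()
  }
}
```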
Roles and Responsibilities:
- Design and implement solutions for problems arising from large-scale data processing
- Attend/drive various architectural, design and status calls with multiple stakeholders
- Take end-to-end ownership of all assigned tasks
- Design, build, and maintain efficient, reusable, and reliable code
- Test implementations, and troubleshoot and correct problems
- Work effectively both as an individual contributor and within a team
- Ensure high-quality software development with complete documentation and traceability
- Fulfil organizational responsibilities, such as sharing knowledge and experience with other teams/groups
- Conduct technical trainings/sessions and write whitepapers, case studies, blog posts, etc.