Required Skills

Hive, Spark, Scala, Hadoop, MongoDB, Python, Big Data, Data Processing, Architectural Design, Technical Training

Work Authorization

  • Citizen

Preferred Employment

  • Full Time

Employment Type

  • Direct Hire

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 19th Jul 2022

Job Detail

Looking for candidates with strong experience in software development, especially in Big Data technologies including Java/Scala/Python and Spark/Hive/Hadoop.

Qualifications:

  • BE/B.Tech/MCA/MS-IT/CS/B.Sc/BCA or any other degree in a related field
  • Experience working with a Hadoop distribution, with a good understanding of core concepts and best practices
  • Good experience in building and tuning Spark pipelines in Scala/Python
  • Good experience in writing complex Hive queries to derive business-critical insights
  • Good programming experience with Java/Python/Scala
  • Understanding of Data Lake vs. Data Warehousing concepts
  • Experience with AWS Cloud; exposure to Lambda/EMR/Kinesis is a plus
  • Experience with NoSQL technologies such as MongoDB and DynamoDB

Roles and Responsibilities:

  • Design and implement solutions for problems arising out of large-scale data processing
  • Attend/drive various architectural, design and status calls with multiple stakeholders
  • Take end-to-end ownership of all assigned tasks
  • Design, build, and maintain efficient, reusable, and reliable code
  • Test implementations, and troubleshoot and correct problems
  • Work effectively both as an individual contributor and within a team
  • Ensure high quality software development with complete documentation and traceability
  • Fulfil organizational responsibilities (sharing knowledge and experience with other teams/groups)
  • Conduct technical trainings/sessions and write whitepapers, case studies, blogs, etc.

Company Information