Required Skills

Big Data, PySpark, Python, Java, Spark, AWS

Work Authorization

  • Citizen

Preferred Employment Type

  • Direct Hire

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 29th Apr 2022

JOB DETAIL

Experience working on Hadoop distributions, with a good understanding of core concepts and best practices.
Good experience in building and tuning Spark pipelines in Scala/Python.
Good experience in writing complex Hive queries to derive business-critical insights.
Good programming experience with Java, Python, Scala, PySpark, or SQL.
Understanding of Data Lake vs. Data Warehouse concepts.
Experience in NoSQL technologies such as MongoDB and DynamoDB.
Good to have: experience in AWS Cloud, with exposure to Lambda/EMR/Kinesis.

Skills Required: Spark, Hive, and Hadoop with Scala, Java, Python, PySpark, or SQL.

Roles:
Design and implement solutions for problems arising out of large-scale data processing.
Attend/drive various architectural, design and status calls with multiple stakeholders.
Take end-to-end ownership of all assigned tasks.
Design, build & maintain efficient, reusable & reliable code.
Test implementations, troubleshoot & correct problems.
Work effectively both as an individual contributor and within a team.
Ensure high quality software development with complete documentation and traceability.
Fulfil organizational responsibilities (sharing knowledge & experience with other teams/groups).
Conduct technical trainings/sessions and write whitepapers, case studies, blogs, etc.

Company Information