Required Skills

Big Data Hadoop Hive Python Spark

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 7th Jan 2021

JOB DETAIL

Job Title

Big Data Hadoop Consultant (Hive, Python, Spark)

Location

New York, New York

Duration

6 Months

Job Description
Must Have Skills
• Must have more than 5 years of Hive, Python, and Spark development experience

Nice to Have Skills
• Experience in the Financial/Banking domain or industry is a plus
• Knowledge of Scala and Impala is a plus

Detailed Job Description
• Design and build distributed, scalable, and reliable data pipelines that ingest and process data at scale and in real time.
• Data analysis and data exploration
• Explore new data sources and data from new domains
• Productionize real-time/batch ML models in Python/Spark
• Evaluate big data technologies and prototype solutions to improve our data processing architecture.
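As a rough illustration of the pipeline work described above, the sketch below shows a minimal batch ingest-and-transform step in plain Python. It is standard library only so it stays self-contained; in practice these steps would be expressed as Spark DataFrame transformations, and the record fields here are hypothetical:

```python
import json
from collections import defaultdict

# Hypothetical raw event records, as they might arrive from an ingest
# source; one line is deliberately malformed to show the filter step.
RAW_EVENTS = [
    '{"participant_id": "P1", "channel": "email", "ts": "2021-01-05"}',
    '{"participant_id": "P1", "channel": "call", "ts": "2021-01-06"}',
    '{"participant_id": "P2", "channel": "email", "ts": "2021-01-06"}',
    'not a json line',
]

def parse(line):
    """Parse one JSON event line; return None for malformed records."""
    try:
        return json.loads(line)
    except json.JSONDecodeError:
        return None

def aggregate(events):
    """Count interactions per participant per channel (a groupBy/count step)."""
    counts = defaultdict(lambda: defaultdict(int))
    for event in events:
        counts[event["participant_id"]][event["channel"]] += 1
    return {pid: dict(by_channel) for pid, by_channel in counts.items()}

# Ingest -> clean -> aggregate, the shape of a simple batch pipeline stage.
events = [e for e in map(parse, RAW_EVENTS) if e is not None]
summary = aggregate(events)
```

In a Spark job the same shape would be a read, a filter on parse success, and a `groupBy(...).count()`, with the real-time variant expressed over a streaming source instead of a static list.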


Minimum years of experience
5

Certifications Needed: No

Top 3 responsibilities you would expect the Subcon to shoulder and execute


Interview Process (Is face to face required?)
No

Any additional information you would like to share about the project specs/nature of work

The project involves customer data integration and building propensity models on stock plan participant data using events, call, and email interaction data. The resource should handle end-to-end project delivery, from requirements gathering and design through coding, testing, and deployment.
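To illustrate the kind of propensity scoring the project summary describes, here is a minimal sketch that computes a logistic propensity score from per-participant interaction counts. The feature names, weights, and bias are purely hypothetical; a real model would be fit on labeled participant data (e.g. with Spark MLlib), not hand-coded:

```python
import math

# Hypothetical per-participant interaction features (counts of events,
# calls, and emails), mirroring the data sources named in the summary.
FEATURES = {
    "P1": {"events": 4, "calls": 1, "emails": 6},
    "P2": {"events": 0, "calls": 0, "emails": 1},
}

# Illustrative coefficients only -- not from any fitted model.
WEIGHTS = {"events": 0.4, "calls": 0.8, "emails": 0.2}
BIAS = -2.0

def propensity(features):
    """Logistic propensity score in [0, 1] from interaction counts."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

scores = {pid: propensity(f) for pid, f in FEATURES.items()}
```

The logistic form keeps scores in [0, 1] so they can be ranked or thresholded; batch scoring of all participants like this is the "productionize ML models" step the job description refers to.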

Company Information