Required Skills

Data Engineering, Hadoop, Kudu, StreamSets, MapReduce, Presto

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 5th Sep 2023

JOB DETAIL

Primary Skill Set

  • 10+ years of experience on data engineering teams working with Big Data and Teradata

  • Expert knowledge of SQL, NoSQL, and Python

  • Good experience with the Apache Spark ecosystem and its components; hands-on experience with Hive, Hadoop, HBase, and the Spark framework

  • Exposure to a range of big data integration and ETL methodologies and tools is a plus

  • Preference will be given to candidates who have worked on Big Data lake design and integration

  • Preferably, the candidate has worked on batch, near-real-time, and real-time streaming integrations

  • Expected to be well versed in the various scheduling tools in the Big Data/Hadoop ecosystem

  • Experience with Hadoop, Kudu, StreamSets, MapReduce, Presto, HDFS, ZooKeeper, NoSQL, Hive, Pig, Hue, Solr, Sqoop, AWS, Oozie, Kafka, Spark, HBase, and Linux/UNIX shell scripting

  • Experience with project management, source code management, and issue-tracking tools (JIRA, Git, Slack)

Company Information