Required Skills

Hadoop Developer

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG :- Not Required

  • PG :- Not Required

Other Information

  • No. of positions :- 1

  • Posted :- 2nd Nov 2023

JOB DETAIL

Candidates need 11+ years of experience.

Need more than 3 years of experience in PySpark ETL.
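
As a rough illustration only (not part of the original posting), here is a minimal sketch of a PySpark ETL job of the kind this role describes, assuming hypothetical Hive tables raw_db.claims_raw and curated_db.claims and a healthcare-flavored schema:

```python
# A minimal PySpark ETL sketch: extract from a raw Hive table,
# apply light cleansing, and load into a curated, partitioned table.
# All table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("claims-etl")
    .enableHiveSupport()  # enables spark.table()/saveAsTable() on Hive
    .getOrCreate()
)

# Extract: read the raw landing table
raw = spark.table("raw_db.claims_raw")

# Transform: deduplicate, normalize types, drop bad records
curated = (
    raw.dropDuplicates(["claim_id"])
       .withColumn("service_date", F.to_date("service_date", "yyyy-MM-dd"))
       .filter(F.col("claim_amount") > 0)
)

# Load: overwrite the curated Hive table, partitioned by service date
(
    curated.write
           .mode("overwrite")
           .partitionBy("service_date")
           .saveAsTable("curated_db.claims")
)
```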

Below are the JD and key requirements of tools and technologies for the 2 positions:
  • 5-6 years of development experience with Oracle and the Big Data Hadoop platform on Data Warehousing and/or Data Integration projects in an agile environment.

  • Understanding of business requirements as well as technical aspects.

  • Good knowledge of Big Data, Hadoop, Hive, the Impala database, data security, and dimensional model design.

  • 6-8 years of strong experience in Sqoop, PySpark, Spark, HDFS, Hive, Impala, StreamSets, and Kudu technologies.

  • Strong knowledge of analyzing data in a data warehouse environment with Cloudera Big Data technologies (Hadoop, MapReduce, Sqoop, PySpark, Spark, HDFS, Hive, Impala, StreamSets, Kudu, Oozie, Hue, Kafka, Yarn, Python, Flume, Zookeeper, Sentry, Cloudera Navigator) and Oracle SQL/PL-SQL.

  • Strong knowledge of writing complex SQL queries (Oracle and Hadoop: Hive/Impala, etc.).

  • Knowledge of analyzing log files and error files for data ingestion failures.

  • Experience in writing Python/Impala scripts.

  • Tokenization or data masking knowledge (see the second sketch after this list).

  • Experience working in the Medicaid and healthcare domain is preferred.

  • Participation in team activities, design discussions, stand-ups, sprint planning, and execution meetings with the team.

  • Perform data analysis, data profiling, and data quality assessment in various layers using Big Data/Hadoop/Hive/Impala and Oracle SQL queries (see the first sketch after this list).
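
The first sketch below illustrates the kind of data-profiling and complex-query work listed above, run through Spark SQL against Hive/Impala-backed tables. The database, table, and column names (raw_db.claims_raw, curated_db.claims, member_id, claim_id) are hypothetical placeholders, not taken from the posting.

```python
# Illustrative data-profiling queries run through Spark SQL against
# Hive tables. All database/table/column names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("claims-profiling")
    .enableHiveSupport()
    .getOrCreate()
)

# Compare row counts, null rates, and key cardinality across layers
profile = spark.sql("""
    SELECT 'raw'                    AS layer,
           COUNT(*)                 AS row_count,
           SUM(CASE WHEN member_id IS NULL THEN 1 ELSE 0 END) AS null_member_ids,
           COUNT(DISTINCT claim_id) AS distinct_claims
    FROM   raw_db.claims_raw
    UNION ALL
    SELECT 'curated',
           COUNT(*),
           SUM(CASE WHEN member_id IS NULL THEN 1 ELSE 0 END),
           COUNT(DISTINCT claim_id)
    FROM   curated_db.claims
""")
profile.show()
```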
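
The second sketch shows one simple way a tokenization/data-masking requirement might be met in PySpark: hashing an identifier into a one-way token and partially masking a sensitive field. Again, the table and column names are hypothetical.

```python
# A minimal column-level masking sketch in PySpark.
# Table and column names (curated_db.claims, member_id, ssn) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

df = spark.table("curated_db.claims")

masked = (
    df
    # One-way tokenization: a SHA-256 hash replaces the raw identifier
    .withColumn("member_id", F.sha2(F.col("member_id").cast("string"), 256))
    # Partial masking: keep only the last four digits visible
    .withColumn("ssn", F.concat(F.lit("***-**-"), F.substring("ssn", -4, 4)))
)

masked.write.mode("overwrite").saveAsTable("secure_db.claims_masked")
```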

Company Information