Required Skills

HDFS, Hive, Spark, Scala, Sqoop, Kafka, NiFi

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-to-Corp (C2C)

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 5th May 2022

Job Details

  1. 10+ years of overall experience in architecting and building large-scale, distributed big data solutions in the capacity of software architect, solution architect, or engineering leader.
  2. Proven track record of architecting, designing, developing, and implementing end-to-end Big Data Lake, Business Intelligence (Tableau/Power BI/Qlik), and Data Science projects.
  3. Solid experience in building big data pipelines (batch and near real-time) on the Hadoop ecosystem, including HDFS, Hive, Spark, Scala, Sqoop, Kafka, NiFi, real-time streaming technologies, and the broader big data open-source stack. Well versed in performance tuning of compute-heavy workloads.
  4. Working experience with the Cloudera distribution and AWS/Azure/GCP cloud platforms. Solid understanding of data quality, data governance, and data security.
  5. Experience with big data solutions such as Impala, Oozie, Flume, Sqoop, or ZooKeeper. Proficient in Python/R, Java, Scala, Ruby, or C++.
  6. Experience with one of the large cloud data warehouse solutions such as AWS Redshift, Snowflake, SQL DW, or BigQuery.
  7. Good database experience and hands-on experience with big data technologies such as Hadoop/Hive/Spark/MongoDB; experience with data ingestion, data wrangling, and data virtualization technologies such as StreamSets, Trifacta, or Denodo is a big plus.
  8. Strong analytical skills: the ability to develop an idea into a solution, define features, and perform qualitative and quantitative analysis.
  9. Healthcare/Pharmaceutical industry experience is an added advantage.
  10. Ability to work in a fast-paced agile development environment.

Company Information