Required Skills

Big Data, Hadoop cluster, Hadoop v2, MapReduce, HDFS, Spark Streaming, stream processing, Big Data querying tools (Pig, Hive, Impala), Spark, NoSQL databases (HBase, Cassandra, MongoDB), Kafka, Cloudera, ETL

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG :- Not Required

  • PG :- Not Required

Other Information

  • No. of positions :- 1

  • Posted :- 13th Nov 2020

JOB DETAIL

Strong Big Data lead with 10+ years of experience and an end-to-end understanding of the application.

Management of a Hadoop cluster with all included services, and the ability to resolve any ongoing issues with operating the cluster.

Proficiency with Hadoop v2, MapReduce, HDFS

Experience building stream-processing systems using solutions such as Storm or Spark Streaming

Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala.

Experience with Spark.

Experience with integration of data from multiple data sources

Experience with NoSQL databases, such as HBase, Cassandra, MongoDB

Knowledge of various ETL techniques and frameworks; Apache Spark will be an added advantage.

Experience with various messaging systems, such as Kafka or RabbitMQ

Proficient understanding of distributed computing principles

Experience with Cloudera/MapR/Hortonworks

Company Information