Required Skills

PySpark, Spark, Python, Scala, Hive

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: ( )

  • Posted: 5th Mar 2024

Job Detail

  • Degree in Computer Science, Applied Mathematics, Engineering, or any other technology related field (or equivalent work experience)
  • 5+ years of experience working as a big data engineer (required); must be able to articulate the use cases supported and the outcomes driven.
  • Strong in any cloud platform such as AWS or GCP.
  • Expertise in the Java language.
  • Knowledge of the following (expectation is to demonstrate these skills live during the interview): PySpark, Spark, Python, Scala, Hive, Pig, and MapReduce
  • Experience with SQL.
  • Proven experience designing, building, and operating enterprise-grade data streaming use cases leveraging one of the following: Kafka, Spark Streaming, Storm, and/or Flink
  • Large scale data engineering and business intelligence delivery experience
  • Design of large-scale enterprise level big data platforms
  • Experience working with and performing analysis using large data sets
  • Demonstrated experience working on a mature, self-organized agile team

Company Information