Required Skills

PySpark, Data Warehouse

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H-1B Work Permit

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of Positions: 1

  • Posted: 24th Aug 2022

Job Detail

  • 8+ years implementing data pipelines or data-intensive assets using Python, Java, or Scala
  • 5+ years using distributed data processing engines such as Apache Spark and Hive
  • 2+ years creating modular data transformations using an orchestration engine such as Apache Airflow or an equivalent such as Apache NiFi
  • 4+ years building cloud-native solutions in AWS (especially S3, Glue, Lambda, Step Functions, EMR, and EC2) or Azure
  • Experience creating redistributable and portable data assets using containers or cloud-native services
  • Hands-on experience building decision support systems or advanced analytics solutions
  • Able to architect and own the execution of an end-to-end technical workstream
  • Experience designing and implementing REST APIs is a plus
  • Expert understanding of and familiarity with Continuous Delivery practices
  • Experience delivering solutions through an Agile delivery methodology
  • Ability to understand complex systems and solve challenging analytical problems
  • Comfort with ambiguity and rapid changes common in early-stage product development

Company Information