Required Skills

Spark, PySpark, Java, Scala

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 6th Nov 2023

Job Detail

  • Work closely with data scientists, data architects, ETL developers, other IT counterparts, and business partners to identify, capture, collect, and format data from external sources, internal systems, and the data warehouse in order to extract features of interest
  • Contribute to evaluation, research, and experimentation efforts with batch and streaming data engineering technologies in a lab setting to keep pace with industry innovation
  • Work with data engineering groups to present and showcase the capabilities of emerging technologies, and to enable the adoption of these technologies and their associated techniques

Qualifications

What makes you a dream candidate?

  • Experience ingesting data from various sources and formats, such as JSON, Parquet, SequenceFile, cloud databases, message queues (MQ), and relational databases such as Oracle
  • Experience with cloud technologies (such as Azure, AWS, and GCP) and their native tool sets, such as Azure ARM Templates, HashiCorp Terraform, and AWS CloudFormation
  • Experience with Azure cloud services, including but not limited to Synapse Analytics, Data Factory, Databricks, and Delta Lake
  • Understanding of cloud computing technologies, business drivers, and emerging computing trends
  • Thorough understanding of hybrid cloud computing: virtualization technologies; the Infrastructure as a Service, Platform as a Service, and Software as a Service cloud delivery models; and the current competitive landscape

Experience

  • High School Diploma or equivalent required
  • Bachelor’s Degree in related field or equivalent work experience required
  • 2-4 years of hands-on software engineering experience, including but not limited to Spark, PySpark, Java, Scala, and/or Python, required
  • 2-4 years of hands-on experience with ETL/ELT data pipelines processing Big Data in Data Lake ecosystems, on premises and/or in the cloud, required
  • 2-4 years of hands-on experience with SQL, data modeling, relational databases, and NoSQL databases required


Company Information