Required Skills

Databricks, Apache Spark, Scala programming, Azure

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 14th Nov 2020

JOB DETAIL

Job Title:

Senior Data Engineer

 

JOB DESCRIPTION:

LOCATION: SCHAUMBURG, IL
DURATION: 6 MONTHS - CONTRACT TO HIRE

MOI: PHONE AND VIDEO

WORK AUTH: USC, GC, GC-EAD ONLY

*Must pass a drug test and background check once offered position*

Description:

This person needs to figure out how to use Databricks / Apache Spark to organize the company's large data sets for this initiative. They need strong experience with Big Data, working with large data sets and tables, and must be able to speak about these topics in depth, not just at a surface level.

Projects the candidate will be working on:

  • Create and maintain data pipelines between the on-premises data center, Azure Data Lake Storage, and an Azure Synapse database using Databricks and Apache Spark/Scala (an illustrative sketch follows this list).
  • This role is for a senior data engineer who will join a team responsible for managing a growing cloud-based data ecosystem consisting of a metadata-driven data lake and databases that support real-time analytics, extracts, and reporting.
  • The right candidate will have a solid background in data engineering and should have a few years of experience on a major cloud platform such as Azure.
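
For illustration only, a minimal Spark/Scala sketch of the kind of pipeline described in the first bullet above, reading raw files from Azure Data Lake Storage Gen2 and publishing to Azure Synapse through the Databricks Synapse connector. The storage account, container, database, and table names are hypothetical placeholders, not details of this role.

  // Sketch: load raw data from ADLS Gen2 and publish it to Azure Synapse
  // via the Databricks Synapse (SQL DW) connector. All account, container,
  // and table names here are hypothetical.
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder.getOrCreate()

  // Read raw files landed in Azure Data Lake Storage Gen2
  val raw = spark.read
    .format("parquet")
    .load("abfss://raw@examplestorage.dfs.core.windows.net/sales/")

  // Write the curated result to a Synapse table, staging through ADLS
  raw.write
    .format("com.databricks.spark.sqldw")
    .option("url", "jdbc:sqlserver://example-ws.sql.azuresynapse.net:1433;database=dw")
    .option("forwardSparkAzureStorageCredentials", "true")
    .option("dbTable", "dbo.sales_curated")
    .option("tempDir", "abfss://staging@examplestorage.dfs.core.windows.net/tmp/")
    .mode("overwrite")
    .save()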

Top Responsibilities:

  • Building and maintaining a data processing framework on Azure using Databricks
  • Writing code in Apache Spark/Scala
  • Working with existing Databricks Delta Lake tables to optimize change data capture (CDC) performance (a rough sketch follows this list)
  • Working with existing Databricks Notebooks to optimize or address performance concerns
  • Creating new Databricks Notebooks or stand-alone Apache Spark/Scala code as needed
  • Willingness to learn existing on-premise data management tools as required, such as Ab Initio
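
As a rough sketch of the Delta Lake CDC work mentioned above, assuming hypothetical table, path, and column names, a batch of change records could be applied with a Delta MERGE followed by periodic compaction:

  // Sketch: apply a batch of CDC records to an existing Delta table.
  // Table, path, and column names below are hypothetical.
  import io.delta.tables.DeltaTable
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder.getOrCreate()

  // Incoming change records: one row per key plus an operation flag column
  val changes = spark.read.format("delta")
    .load("abfss://raw@examplestorage.dfs.core.windows.net/customer_changes/")

  DeltaTable.forName(spark, "analytics.customers").as("t")
    .merge(changes.as("c"), "t.customer_id = c.customer_id")
    .whenMatched("c.op = 'DELETE'").delete()
    .whenMatched().updateAll()
    .whenNotMatched().insertAll()
    .execute()

  // Compact the small files produced by frequent CDC batches
  spark.sql("OPTIMIZE analytics.customers")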

Software tools/skills:

  • Databricks
  • Apache Spark
  • Scala programming
  • Azure

Skills/attributes:

  • Data engineering experience - 5 years
  • Cloud platform experience - 2 years
  • Version Control (Git or equivalent) - 2 years

Nice to have:

  • Data Integration Tools (Spark/Databricks or equivalent) - 2 years
  • Version Control (Git or equivalent) - 2 years
  • Scripting (Linux/Unix Shell scripting or equivalent) - 2 years
  • Netezza experience

Interview Process:

  • How many rounds - Maximum 2, possibly 1 depending on interviewer availability
  • Video vs. phone - One of the rounds should be video

 

Best Regards,

Vaibhav Rustagi

SoftSages Technology
Direct: 484-321-8306, Phone: 484-321-8314 x184

Email: vaibhav.rustagi@softsages.com

Address: 17 Mystic Lane, Ste 2A, Malvern, PA 19355
Website: www.softsages.com

Company Information