Required Skills

Databricks, Apache Spark, Azure, Scala, ETL

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 14th Nov 2020

JOB DETAIL

Job Title: Senior Data Engineer
Location: Minneapolis, MN or Remote
Duration: 6 months / Contract

JOB DESCRIPTION

 

Interview Mode: Telephonic and Skype
USC/GC only
Description:

This person needs to figure out how to use Databricks / Apache Spark to organize the client's large data sets for this initiative.
They need strong experience with Big Data, working with large data sets and tables, and must be able to speak about these topics in depth, not just at a surface level.

Projects the candidate will be working on:

  • Create and maintain data pipelines between the on-premise data center, Azure Data Lake Storage, and an Azure Synapse database using Databricks and Apache Spark/Scala (see the illustrative sketch after this list).
  • This role is for a senior data engineer who will join a team responsible for managing a growing cloud-based data ecosystem consisting of a metadata-driven data lake and databases that support real-time analytics, extracts, and reporting.
  • The right candidate will have a solid background in data engineering and should have a few years of experience on a major cloud platform such as Azure.
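
To give a concrete sense of this work, here is a minimal, hypothetical Apache Spark/Scala sketch of such a pipeline. It assumes a Databricks cluster with ADLS access and the Azure Synapse connector already configured; the storage account, paths, columns, and table names are placeholders, not the client's actual environment.

    // Illustrative sketch only; account, container, and table names are hypothetical.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    val spark = SparkSession.builder().appName("OnPremToSynapsePipeline").getOrCreate()

    // Read raw extracts landed in Azure Data Lake Storage Gen2.
    val raw = spark.read
      .format("parquet")
      .load("abfss://raw@exampleaccount.dfs.core.windows.net/sales/")

    // Light transformation: standardize a column name and stamp the load time.
    val curated = raw
      .withColumnRenamed("cust_id", "customer_id")
      .withColumn("load_ts", current_timestamp())

    // Write the curated data to Azure Synapse via the Databricks Synapse connector.
    curated.write
      .format("com.databricks.spark.sqldw")
      .option("url", "jdbc:sqlserver://example-synapse.database.windows.net:1433;database=dw")
      .option("tempDir", "abfss://staging@exampleaccount.dfs.core.windows.net/tmp/")
      .option("forwardSparkAzureStorageCredentials", "true")
      .option("dbTable", "dbo.curated_sales")
      .mode("append")
      .save()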

Top Responsibilities:

  • Building and maintaining a data processing framework on Azure using Databricks
  • Writing code in Apache Spark/Scala
  • Working with existing Databricks Delta Lake tables to optimize CDC (change data capture) performance (see the sketch after this list)
  • Working with existing Databricks Notebooks to optimize or address performance concerns
  • Creating new Databricks Notebooks or stand-alone Apache Spark/Scala code as needed
  • Willingness to learn existing on-premise data management tools as required, such as Ab Initio
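
As a rough illustration of the Delta Lake CDC work mentioned above, the sketch below applies a batch of changes to an existing Delta table with a single MERGE and then compacts it. The table, columns, and the "op" delete-flag convention are assumptions for illustration, not the team's actual schema.

    // Illustrative sketch only; table, path, and column names are placeholders.
    import io.delta.tables.DeltaTable
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("DeltaCdcMerge").getOrCreate()

    // Incoming CDC batch, e.g. change records landed in ADLS as Parquet,
    // with an "op" column flagging deletes ('D').
    val changes = spark.read.parquet("abfss://raw@exampleaccount.dfs.core.windows.net/cdc/customers/")

    // Apply inserts, updates, and deletes to the existing Delta Lake table in one MERGE.
    DeltaTable.forName(spark, "curated.customers").as("t")
      .merge(changes.as("c"), "t.customer_id = c.customer_id")
      .whenMatched("c.op = 'D'").delete()
      .whenMatched().updateAll()
      .whenNotMatched("c.op != 'D'").insertAll()
      .execute()

    // Periodic compaction and Z-ordering (Databricks SQL) helps keep MERGE performance predictable.
    spark.sql("OPTIMIZE curated.customers ZORDER BY (customer_id)")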

Software tools/skills:

  • Databricks
  • Apache Spark
  • Scala programming
  • Azure

Skills/attributes:

  • Data engineering experience - 5 years
  • Cloud platform experience - 2 years
  • Version Control (Git or equivalent) - 2 years

Nice to have:

  • Data Integration Tools (Spark/Databricks or equivalent) - 2 years
  • Version Control (Git or equivalent) - 2 years
  • Scripting (Linux/Unix Shell scripting or equivalent) - 2 years
  • Netezza experience

Interview Process:

  • How many rounds - Maximum 2, possibly 1 depending on interviewer availability
  • Video vs. phone - One of the rounds should be video

I really appreciate your quick response.

Regards,

Hasleen Kaur

Technical Resource Specialist

InTime Infotech Inc | time matters
39962 Cedar Blvd., Ste 185, Newark CA 94560
Contact#: 302-401-6677 x 328
Direct: 302-401-6791
Fax: 510-201-2367

Hangout:Hasleen.intime
