Required Skills

Data Engineer

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 27th Feb 2024

JOB DETAIL

Azure Data Engineer (Remote)

Duration: 9-12 months+

Location: 100% Remote

DL/visa copy and LinkedIn profile required with submission.

 

Heavy hands-on experience with Azure Databricks, Azure DevOps, and SQL databases

• Experienced with the Databricks lakehouse architecture and the Databricks ecosystem of tools, CLIs, and APIs.

• Ability to develop production-quality data pipelines that consume from a variety of sources (databases, web pages, REST APIs, etc.); a rough sketch of such a pipeline follows this list.

• Experience designing solutions with a focus on data quality.

• Knowledgeable about design patterns, ELT patterns, and data modeling patterns.

• Ability to write production-quality code in Python.

• Experienced using an IDE for development and able to debug code with reasonable proficiency.

• Prior work experience supporting data science teams.
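
As an illustration of the pipeline work described above, here is a minimal, hypothetical sketch of ingesting records from a REST API into a Delta table with PySpark on Databricks. The endpoint URL, record layout, and table name are illustrative assumptions, not details from this posting.

```python
# Hypothetical sketch only: the endpoint URL, record layout, and table name
# below are illustrative assumptions, not details from this posting.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("api_ingest_example").getOrCreate()

# Pull one page of JSON records from a (hypothetical) source REST API.
response = requests.get("https://example.com/api/v1/readings", timeout=30)
response.raise_for_status()
records = response.json()  # assumed to be a list of flat JSON objects

# Land the records in a Delta table (schema "bronze" assumed to exist)
# so that downstream ELT jobs can build on them.
df = spark.createDataFrame(records)
(df.write
   .format("delta")
   .mode("append")
   .saveAsTable("bronze.api_readings"))
```

In practice, a production-quality version of this would add pagination, retries, schema enforcement, and data-quality checks before promoting records beyond the bronze layer.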

 

 

PRIMARY PURPOSE OF POSITION 

1) Support and refine Constellation’s data and analytics technology stack with an emphasis on improving reliability, scale, and availability. 

2) Assist in the design and management of enterprise-grade data pipelines and data stores that will be used for developing sophisticated analytics programs, machine learning models, and statistical methods. 

3) Deliver data solutions via Agile methodologies and design CI/CD workflows. 

 

PRIMARY DUTIES AND ACCOUNTABILITIES 

Accountability (%):

1. Create and maintain optimal data pipeline architecture (20%)

2. Assemble large, complex data sets that meet functional/non-functional business requirements (20%)

3. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. (20%)

4. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and big data technologies (20%)

5. Deliver automation and lean processes to ensure high-quality throughput and performance of the entire data and analytics platform (10%)

6. Work with data and analytics experts to strive for greater functionality in our analytics platforms (10%)

 

POSITION SPECIFICATIONS 

Minimum / Preferred: 

• Experience building, operating, and maintaining fault-tolerant and scalable data processing integrations using Azure 

• Experienced using Azure Data Factory or Synapse Analytics 

• Experienced using Databricks and Apache Spark 

• Strong problem-solving skills with an emphasis on optimizing data pipelines 

• Excellent written and verbal communication skills for coordinating across teams 

• A drive to learn and master new technologies and techniques 

• Experienced in DevOps and Agile environments and using CI/CD pipelines 

• Experience using Docker or Kubernetes is a plus 

• Demonstrated capabilities with cloud infrastructures and multi-cloud environments such as Azure, AWS, and IBM Cloud 

• Experience architecting transactional data platforms

• Experience architecting real-time/event-streaming data platforms (IoT); a rough sketch follows below
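
As a rough illustration of the event-streaming requirement in the last bullet, here is a minimal, hypothetical Structured Streaming sketch that reads IoT events from a Kafka topic and appends them to a Delta table. The broker address, topic, checkpoint path, and table name are assumptions, not details from this posting.

```python
# Hypothetical sketch only: broker, topic, checkpoint path, and table name are
# illustrative assumptions; a Kafka connector must be available on the cluster.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("iot_stream_example").getOrCreate()

# Read raw IoT events from a (hypothetical) Kafka topic.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "iot-readings")
          .load()
          .select(col("key").cast("string"),
                  col("value").cast("string"),
                  "timestamp"))

# Append the events to a Delta table, tracking progress via a checkpoint.
query = (events.writeStream
         .format("delta")
         .option("checkpointLocation", "/tmp/checkpoints/iot_readings")
         .outputMode("append")
         .toTable("bronze.iot_readings"))

query.awaitTermination()
```

A production version would typically parse the event payload against an explicit schema and handle late or malformed records before exposing the data to analytics consumers.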

Company Information