Required Skills

Python, SQL, Java, Scala, Zeppelin, RStudio, Spark RDDs and DataFrames

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG :- Not Required

  • PG :- Not Required

Other Information

  • No. of positions :- 1

  • Posted :- 29th Jan 2021

JOB DETAIL

Overview

We are looking for strong Hadoop data engineering talent to be part of the data integration team for the company. We are building a new application integration platform that will allow bi-directional, real-time integration between SAP, MDM, and Salesforce, and will also bring new data into the current Enterprise Data Lake. The role offers the opportunity to be a key part of a challenging, fast-paced environment, build ground-up core data integration offerings, and shape the technology roadmap in our high-growth, knowledge-driven team.

 

Role Description

  • Designs and develops multiple, diversified applications on a big data platform leveraging hybrid clouds. These applications are primarily consumed for parsing, analyzing, discovering, and visualizing potential business insights.

  • Works with critical data stakeholders and technical teams on optimal solutions and service delivery. Provides strategic and tactical direction in the delivery of big data solutions.

  • Works with various cross-functional teams, such as infrastructure, data, and enterprise architects, to build scalable, optimal, self-service analytics solutions, both on-premises and in the cloud.

  • Conducts data profiling, cataloging, and mapping for the technical design and construction of data flows.

 

Qualifications

 

  • Bachelor’s degree in Computer Science, Mathematics, or Engineering

  • 7+ years of experience in Data and Analytics and Data Integration

  • 5+ years of experience in the big data stack and solutions, including cloud technologies

  • 5+ years of designing, implementing, and successfully operationalizing large-scale data lake solutions in production environments using the big data stack (on-prem and Azure)

  • 3+ years of experience architecting and implementing end-to-end Azure cloud big data solutions

  • 3+ years of experience implementing real-time solutions and data integrations

  • Experience with a big data management tool (Zaloni) is a plus.

  • Hands-on experience building and optimizing the data pipeline: CI/CD, integrated build and deployment automation, configuration management, and test automation solutions.

  • Professional training and certifications in various big data solutions (preferred).

  • Solid understanding of the Azure cloud stack, including ADF Data Flows, Event Hub, Databricks, HDInsight, and Azure DevOps

  • Deep hands-on experience with Hadoop, Hive, HBase, Spark, Kafka, Snowflake, Python, R, SQL, Java, Scala, Zeppelin, RStudio, Spark RDDs and DataFrames, Ambari, Ranger, Kerberos, Atlas, Collibra, etc.

  • Informatica MDM, BDQ, BDM, and ETL architecture experience is a plus

 

 

Company Information