Required Skills

Data Engineer

Work Authorization

  • US Citizen

  • Green Card

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 12th Sep 2022

JOB DETAIL

·     Create and maintain optimal data pipeline architecture.
·     Assemble large, complex data sets that meet functional/non-functional business requirements.
·     Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
·     Build the infrastructure required for optimal ingestion, transformation, and publishing of data from a wide variety of data sources using Python/Spark and AWS 'big data' technologies (see the sketch after this list).
·     Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
·     Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
·     Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
·     Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
·     Work with data and analytics experts to strive for greater functionality in our data systems.
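For illustration only (this is a job description, not project documentation), the following is a minimal PySpark sketch of the ingest/transform/publish pattern the responsibilities above describe. The S3 paths, column names, and transformations are hypothetical placeholders.

```python
# Minimal PySpark batch pipeline sketch: ingest raw CSV data from S3,
# apply a simple transformation, and publish partitioned Parquet output.
# Bucket names, paths, and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-ingest-transform-publish").getOrCreate()

# Ingest: read raw source data (hypothetical S3 location)
raw = spark.read.option("header", "true").csv("s3://example-raw-bucket/orders/")

# Transform: type casting, basic cleansing, and a derived partition column
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Publish: write curated data as Parquet, partitioned by date
(
    orders.write.mode("overwrite")
          .partitionBy("order_date")
          .parquet("s3://example-curated-bucket/orders/")
)

spark.stop()
```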
Experience Requirements:
·     Advanced working knowledge of SQL and Python/PySpark, experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.
·     Experience building and optimizing cloud 'big data' pipelines, architectures, and data sets.
·     Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
·     Strong analytic skills related to working with unstructured datasets.
·     Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.
·     A successful history of understanding, processing, and extracting value from large, disconnected datasets.
·     Working knowledge of message queuing, stream processing, and highly scalable 'big data' stores.
·     Understanding of various data sets: structured, semi-structured, and data at rest/in motion.

·     Experience in data modeling.
·     Strong project management and organizational skills.
·     Experience supporting and working with cross-functional teams in a dynamic environment, and knowledge of one or more of the following:
· Experience with big data tools: Hadoop, Spark, Kafka, etc.
· Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
· Experience with data pipeline and workflow management tools: Apache NiFi, AWS Step Functions, Oozie, Azkaban, Luigi, Airflow, etc. (a minimal sketch follows this list).
· Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
· Experience with stream-processing systems: AWS DMS, Kinesis, Spark Streaming, etc.
· Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
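Purely as an illustration of the workflow-management tools listed above, here is a minimal Apache Airflow DAG sketch for a daily ingest, transform, and publish flow. The DAG id, schedule, and task bodies are hypothetical; in practice each task would trigger a Spark job, SQL step, or similar.

```python
# Minimal Airflow 2.x DAG sketch: daily ingest -> transform -> publish.
# Task names and schedule are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest():
    print("ingest raw data from source systems")


def transform():
    print("transform and validate the ingested data")


def publish():
    print("publish curated data for analytics consumers")


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2022, 9, 12),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    publish_task = PythonOperator(task_id="publish", python_callable=publish)

    # Define task ordering: ingest, then transform, then publish
    ingest_task >> transform_task >> publish_task
```

The same ordering could be expressed with any of the listed orchestrators (Step Functions, Luigi, Oozie, etc.); Airflow is used here only because it appears in the list.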

 

--

Company Information