Required Skills

Hadoop, Spark, Kafka

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 20th Sep 2023

JOB DETAIL

 

Looking for data engineers who can build data pipelines from the ground up.

  • Assemble large, complex data sets that meet functional / non-functional business requirements.

  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

  • Build pipelines for the extraction, transformation, and loading of data from a wide variety of data sources using AWS Glue / EMR.

  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.

  • Develop RESTful APIs to integrate with third-party APIs using a framework.

  • Experience building and optimizing big data pipelines, architectures, and data sets.

  • Strong analytic skills related to working with unstructured datasets.

  • Working knowledge of message queuing, stream processing, and highly scalable big data stores.

  • Experience with big data tools: Hadoop, Spark, Kafka, etc.

  • Experience with data pipeline and workflow management tools: Airflow.

  • Experience with AWS cloud services: EC2, EMR, AWS Glue.

  • Experience with stream-processing systems: Storm, Spark Streaming, etc.

  • Experience with programming languages.
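As a rough illustration of the extract / transform / load work this role involves, the pattern can be sketched in plain Python. In production this logic would typically run inside an AWS Glue or Spark job; the record fields and source data below are hypothetical and exist only to show the shape of the pipeline.

```python
import json
from io import StringIO

# Hypothetical raw source: newline-delimited JSON event records,
# standing in for data pulled from S3 or a message queue.
RAW_EVENTS = StringIO("""\
{"user": "a1", "amount": "19.99", "channel": "web"}
{"user": "a2", "amount": "5.00", "channel": "mobile"}
{"user": "a1", "amount": "bad", "channel": "web"}
""")

def extract(stream):
    """Extract: parse each non-blank line into a record."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

def transform(records):
    """Transform: coerce types and drop malformed rows."""
    for rec in records:
        try:
            rec["amount"] = float(rec["amount"])
        except ValueError:
            continue  # drop rows that fail validation
        yield rec

def load(records):
    """Load: aggregate per channel (stand-in for writing to a warehouse)."""
    totals = {}
    for rec in records:
        totals[rec["channel"]] = totals.get(rec["channel"], 0.0) + rec["amount"]
    return totals

totals = load(transform(extract(RAW_EVENTS)))
print(totals)  # the malformed "bad" row is dropped
```

The same three stages map directly onto the tools named above: Glue or Spark for extract/transform at scale, and Airflow to schedule and order the steps.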

Company Information