Required Skills

Data Warehouse, Apache Spark, Scala, Python, DataFrame API, RDD, SQL

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 12th Jan 2021

Job Detail

Data Engineer

• Must come from a Data Warehouse / Big Data background.

• Experience with the advanced Apache Spark processing framework and Spark programming languages such as Scala, Python, or advanced Java, with sound knowledge of shell scripting.

• Experience working with Core Spark, Spark Streaming, the DataFrame API, the Dataset API, the RDD APIs, and Spark SQL, processing terabytes of data. Specifically, this experience must be in writing "Big Data" data engineering jobs for large-scale data integration in AWS (a batch sketch exercising these APIs follows this list).

• Advanced SQL experience using the Hive/Impala framework, including SQL performance tuning.

• Experience writing Spark streaming jobs that integrate with streaming frameworks such as Apache Kafka or AWS Kinesis (see the streaming sketch after this list).

• Create and maintain automated ETL processes with a special focus on data flow, error recovery, and exception handling and reporting (a minimal error-handling sketch follows this list).

• Gather and understand data requirements, and work with the team to achieve high-quality data ingestion, building systems that can process and transform the data.

• Knowledge of using, setting up, and tuning resource management frameworks such as standalone Spark, YARN, or Mesos.

• Experience in physical table design in a Big Data environment (a partitioned-table DDL sketch appears after this list).

• Experience working with external job schedulers such as Autosys, AWS Data Pipeline, Airflow, etc.

• Experience working with key/value data stores such as HBase.

• Experience with AWS services such as EMR, Glue (serverless architecture), S3, Athena, IAM, Lambda, and CloudWatch is required.
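
The three Spark APIs named above can be exercised in one small batch job. Below is a minimal sketch in Scala, assuming an illustrative S3 bucket, input layout, and column names (userId, amount); none of these come from the posting itself.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object BatchExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("batch-example").getOrCreate()
        import spark.implicits._

        // DataFrame API: read columnar data, filter, aggregate
        val events = spark.read.parquet("s3a://example-bucket/events/") // assumed path
        val totals = events
          .filter($"amount" > 0)
          .groupBy($"userId")
          .agg(sum($"amount").as("total"))

        // Spark SQL: the same aggregation expressed against a temp view
        events.createOrReplaceTempView("events")
        val viaSql = spark.sql(
          "SELECT userId, SUM(amount) AS total FROM events WHERE amount > 0 GROUP BY userId")

        // RDD API: drop to the low-level representation when needed
        val perUserCounts = events.rdd
          .map(row => (row.getAs[String]("userId"), 1L))
          .reduceByKey(_ + _)

        totals.write.mode("overwrite").parquet("s3a://example-bucket/totals/")
        spark.stop()
      }
    }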
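
For the streaming requirement, a hedged sketch of a Spark Structured Streaming job consuming Apache Kafka. The broker address, topic name, and output/checkpoint paths are placeholder assumptions, and the spark-sql-kafka connector must be on the classpath.

    import org.apache.spark.sql.SparkSession

    object StreamingExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("kafka-stream-example").getOrCreate()

        // Read the Kafka topic as an unbounded DataFrame
        val stream = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092") // assumed broker
          .option("subscribe", "events")                     // assumed topic
          .load()

        // Kafka delivers key/value as binary; cast before downstream processing
        val lines = stream.selectExpr("CAST(value AS STRING) AS value")

        // Checkpointing gives the job restart/recovery semantics
        val query = lines.writeStream
          .format("parquet")
          .option("path", "s3a://example-bucket/stream-out/") // assumed sink
          .option("checkpointLocation", "s3a://example-bucket/checkpoints/")
          .start()

        query.awaitTermination()
      }
    }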
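
The ETL bullet's error-recovery and exception-reporting point can be illustrated by a parse step that quarantines bad records instead of failing the whole job. The "id,amount" line format and all paths below are assumptions for the sketch.

    import org.apache.spark.sql.SparkSession
    import scala.util.Try

    object EtlExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("etl-example").getOrCreate()
        import spark.implicits._

        val raw = spark.read.textFile("s3a://example-bucket/raw/") // assumed "id,amount" lines

        // Parse defensively: record the offending line instead of throwing
        val parsed = raw.map { line =>
          val parts = line.split(",", -1)
          if (parts.length == 2 && Try(parts(1).toDouble).isSuccess)
            (parts(0), parts(1).toDouble, "")
          else
            ("", 0.0, line) // quarantine the bad record
        }.toDF("id", "amount", "badLine")

        val clean = parsed.filter($"badLine" === "").select($"id", $"amount")
        val bad   = parsed.filter($"badLine" =!= "").select($"badLine".as("error"))

        clean.write.mode("overwrite").parquet("s3a://example-bucket/clean/")
        bad.write.mode("overwrite").text("s3a://example-bucket/quarantine/") // exception report
        spark.stop()
      }
    }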
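
On physical table design, one common pattern is a partitioned, columnar table declared through Spark SQL's Hive-compatible DDL, so queries prune to only the partitions they touch. The table and column names below are assumptions.

    import org.apache.spark.sql.SparkSession

    object TableDesignExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("table-design-example").getOrCreate()

        // ds (ingestion date) is the partition column; Parquet keeps storage columnar
        spark.sql("""
          CREATE TABLE IF NOT EXISTS sales (
            id STRING,
            amount DOUBLE,
            ds STRING
          )
          USING PARQUET
          PARTITIONED BY (ds)
        """)

        spark.stop()
      }
    }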

Company Information