Required Skills

Hadoop, Hive, Spark, Sqoop/Airflow to build ETL

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 11th Jul 2022

JOB DETAIL




Detailed Job Description:
Experience as a Data Analyst, including data mining on large sets of structured and unstructured data, data acquisition, data validation, and data modeling. Adept in statistical programming languages such as Python and Scala, and in Apache Spark and related Big Data technologies such as Hadoop and Hive.
Extensive experience using Hadoop, Hive, Spark, and Sqoop/Airflow to build ETL and data processing systems spanning a variety of data sources, data targets, and data formats.
Good experience writing Spark applications in Python and Scala.
Candidates should possess good PySpark knowledge for ETL operations.


Minimum years of experience:
8-10 years

Company Information