Required Skills

Python, Scala with Spark, Kafka, Hadoop ecosystem (Hive), Looker/Qlik.

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 21st Nov 2020

JOB DETAIL

  • 10 years of recent hands-on experience in data engineering and pipeline development.
  • Programming experience, ideally in Python and Scala with Spark and Kafka, and a willingness to learn new programming languages to meet goals and objectives.
  • Experience in distributed computing and the MapReduce paradigm is a must.
  • Understanding of Hadoop ecosystem components such as Hive is a must.
  • Knowledge of data cleaning, wrangling, visualization, and reporting using tools like Looker/Qlik.
  • Experience processing large amounts of structured and unstructured data, including integrating data from multiple sources.

GOOD TO HAVE

  • Experience using tools like dbt (data build tool) and workflow orchestrators such as Airflow/Prefect for data transformations and pipelines is a plus.
  • Knowledge of data mining, machine learning, natural language processing, or information retrieval is a plus.
  • Experience in production support and troubleshooting is a plus.
  • Strong knowledge of and experience with statistics.

SOFT SKILLS

  • A willingness to explore new alternatives to solve data mining issues, utilizing a combination of industry best practices, data innovations, and your experience to get the job done.
  • You find satisfaction in a job well done and thrive on solving head-scratching problems.
  • A solid track record of data management showing your flawless execution and attention to detail.
  • Bachelor’s degree or higher in Computer Science or a related field.

Company Information