Required Skills

ETL Data Engineer, Python, Scala, Spark, Kafka

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 26th Nov 2020

JOB DETAIL

Data Engineer

Remote position

Immediate interview

 

5-10 years of recent hands-on experience in data engineering and pipeline development.

Programming experience, ideally in Python and Scala with Spark and Kafka, and a willingness to learn new programming languages to meet goals and objectives (a minimal sketch follows this list).

Experience in distributed computing and the MapReduce paradigm is a must.

An understanding of Hadoop ecosystem components such as Hive is a must.

Knowledge of data cleaning, wrangling, visualization, and reporting using tools like Looker/Qlik.

Experience processing large amounts of structured and unstructured data, including integrating data from multiple sources.
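As a rough illustration of the Spark-plus-Kafka pipeline work described above, here is a minimal PySpark Structured Streaming sketch. The topic name, event schema, and S3 paths are hypothetical placeholders, not details from this posting, and the Kafka connector package must be on the Spark classpath.

```python
# Minimal sketch: read events from Kafka, parse JSON, land Parquet.
# Topic, schema, and paths are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

# Hypothetical event schema.
event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("action", StringType()),
    StructField("ts", TimestampType()),
])

# Subscribe to a Kafka topic; the payload arrives in the `value` column.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Decode the JSON payload into typed columns.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Write cleaned records as Parquet, with checkpointing for fault tolerance.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-bucket/events/")  # placeholder path
    .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
    .start()
)
query.awaitTermination()
```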

GOOD TO HAVE SKILLS:

 

Experience with tools like dbt (data build tool) and workflow orchestrators like Airflow/Prefect for data transforms and pipelines is a plus (see the toy DAG after this list).

Knowledge of data mining, machine learning, natural language processing, or information retrieval is a plus.

Experience in production support and troubleshooting is a plus.

Strong knowledge of and experience with statistics.
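As a loose sketch of the Airflow-style orchestration mentioned in the dbt item above, here is a toy Airflow 2.x DAG that runs dbt models and then dbt's tests. The DAG id, schedule, and dbt project path are assumptions, not details from this posting.

```python
# Toy Airflow 2.x DAG: run a dbt transform, then dbt's built-in tests.
# DAG id, schedule, and project path are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_transforms",          # placeholder name
    start_date=datetime(2020, 11, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Build dbt models against the warehouse (project path is hypothetical).
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/project",
    )

    # Validate the models with dbt's tests once they build.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/project",
    )

    dbt_run >> dbt_test
```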

 

 

We need a candidate with Hadoop, Spark, Scala, and PySpark programming experience, along with AWS EMR experience.
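To give a feel for the AWS EMR piece, below is a hedged boto3 sketch that submits a PySpark script as a step on a running EMR cluster. The cluster ID, region, and S3 path are made-up placeholders, not details from this posting.

```python
# Hedged sketch: submit a PySpark job as a step on an existing EMR cluster.
# Cluster ID, region, and S3 script path are made-up placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.add_job_flow_steps(
    JobFlowId="j-XXXXXXXXXXXXX",  # placeholder cluster ID
    Steps=[
        {
            "Name": "nightly-etl",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                # command-runner.jar lets EMR invoke spark-submit directly.
                "Jar": "command-runner.jar",
                "Args": [
                    "spark-submit",
                    "--deploy-mode", "cluster",
                    "s3://example-bucket/jobs/etl_job.py",  # placeholder
                ],
            },
        }
    ],
)
print(response["StepIds"])
```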

 



Company Information

  • From: Veeresh, Adwait Algorithm
  • Email: veeresh.arra@aalgorithm.com
  • Reply to: veeresh.arra@aalgorithm.com
