Required Skills

Scala, Python, Java, AI/ML, ETL modules, NoSQL, Hadoop Ecosystem, Spark, Linux

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 19th Nov 2020

JOB DETAIL


Design and development of Java, Scala, and Spark-based back-end software modules, along with performance improvement and testing of these modules.

Scripting using Python and shell scripts for ETL workflows. Design and development of back-end big data frameworks built on top of Spark, with features such as Spark as a service, workflow and pipeline management, and handling of batch and streaming jobs.

Build a comprehensive big data platform for data science and engineering that can run batch processes and machine learning algorithms reliably.

Design and development of data ingestion services that can ingest tens of terabytes of data.
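As an illustration of the kind of ingestion service described above (not part of the original posting), here is a minimal Spark Structured Streaming sketch in Scala; the Kafka brokers, topic name, and output paths are hypothetical placeholders.

```scala
import org.apache.spark.sql.SparkSession

object IngestionService {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("IngestionService").getOrCreate()

    // Hypothetical Kafka source; broker addresses and topic are placeholders.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "events")
      .load()

    // Land the raw records as Parquet for downstream batch processing.
    val query = events
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")
      .writeStream
      .format("parquet")
      .option("path", "/data/raw/events")
      .option("checkpointLocation", "/checkpoints/ingestion")
      .start()

    query.awaitTermination()
  }
}
```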

Coding of big data applications on clickstream, location, and demographic data for behavior analysis using Spark, Scala, and Java.
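For illustration only, a minimal Spark/Scala sketch of a behavioral aggregation over clickstream data; the schema (userId, pageUrl, eventTime) and file paths are assumptions, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ClickstreamBehavior {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("ClickstreamBehavior").getOrCreate()

    // Hypothetical clickstream input; schema and path are illustrative only.
    val clicks = spark.read.parquet("/data/clickstream/events")

    // Events and distinct pages per user per day -- a simple behavioral aggregate.
    val dailyActivity = clicks
      .withColumn("day", to_date(col("eventTime")))
      .groupBy(col("userId"), col("day"))
      .agg(
        count(lit(1)).as("events"),
        countDistinct(col("pageUrl")).as("distinctPages")
      )

    dailyActivity.write.mode("overwrite").parquet("/data/clickstream/daily_activity")
    spark.stop()
  }
}
```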

Optimization of resource requirements, including the number of executors, cores per executor, and memory for Spark streaming and batch jobs.
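As a hedged sketch of how such resource settings are typically expressed, the executor count, cores, and memory can be set on the SparkSession builder; the values below are illustrative examples, not recommendations from this posting.

```scala
import org.apache.spark.sql.SparkSession

object ResourceTunedJob {
  def main(args: Array[String]): Unit = {
    // Illustrative values only: executor count, cores, and memory are placeholders.
    val spark = SparkSession.builder()
      .appName("ResourceTunedJob")
      .config("spark.executor.instances", "10") // number of executors
      .config("spark.executor.cores", "4")      // cores per executor
      .config("spark.executor.memory", "8g")    // memory per executor
      .getOrCreate()

    // ... batch or streaming logic would go here ...

    spark.stop()
  }
}
```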

Expert-level knowledge of and experience in Scala, Java, distributed computing, Apache Spark, PySpark, Python, HBase, Kafka, REST-based APIs, and machine learning.

Development of AI/ML modules and algorithms for Verizon ML use cases.

Company Information