Required Skills

Scala, Scalding, Spark, Hadoop, Druid, BigQuery, Presto, Zeppelin, Tableau

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H-1B Work Permit

Preferred Employment

  • Corp-to-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 26 Nov 2020

Job Detail

Required:

  • LinkedIn Profile

You’ll use technologies like Scala, Scalding, Spark, Hadoop, Druid, BigQuery, Presto, Zeppelin, Tableau, and Python as you process and aggregate vast amounts of data into traces, metrics, alerts, and visualizations that tell our engineers exactly when and where to find the most important performance bottlenecks.
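
For illustration only (not part of the posting): a minimal Scala/Spark sketch of the kind of aggregation this paragraph describes, rolling raw trace events up into per-endpoint latency metrics. The input path, schema, and column names (service, endpoint, latency_ms) are hypothetical, not taken from this listing.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object TraceLatencyMetrics {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("trace-latency-metrics")
          .getOrCreate()
        import spark.implicits._

        // Hypothetical input: one row per traced request, with a
        // service name, endpoint, and latency in milliseconds.
        val traces = spark.read.parquet("/data/traces")

        // Approximate p50/p99 latency per endpoint; the slowest
        // endpoints are where engineers look first for bottlenecks.
        val metrics = traces
          .groupBy($"service", $"endpoint")
          .agg(
            count(lit(1)).as("requests"),
            percentile_approx($"latency_ms", lit(0.50), lit(10000)).as("p50_ms"),
            percentile_approx($"latency_ms", lit(0.99), lit(10000)).as("p99_ms"))
          .orderBy($"p99_ms".desc)

        metrics.write.mode("overwrite").parquet("/data/latency_metrics")
        spark.stop()
      }
    }

A metrics table like this could then back the Druid, Presto, or Tableau layers the posting mentions.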

  • Working with a multitude of internal customers and stakeholders across the entire SDLC

  • Building platforms and tools for other engineers

  • Working on very large-scale distributed systems

  • Defining and analyzing application and system metrics

Strongly preferred, but not required:

  • Building out data pipelines with Scala, Spark, or Hadoop

  • Data analytics with SQL, BigQuery, Druid, Presto, and Tableau

  • Fundamentals of statistics

  • Performance engineering: tuning, regression detection, and profiling
