Required Skills

Python, Java, or Scala with Spark; ETL frameworks

Work Authorization

  • Green Card

Preferred Employment

  • Corp-Corp

  • W2-Contract

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 8th Nov 2021

JOB DETAIL

Role: Sr. Data Architect with Databricks

Location: Appleton, Wisconsin (remote during COVID)

 

JD:

  • 10+ years of experience in data pipeline engineering for both batch and streaming applications.
  • Experience with data ingestion processes, creating data pipelines, and performance tuning with Snowflake and AWS.
  • Experience implementing SQL query tuning, cache optimization, and parallel execution techniques. Must be capable of hands-on coding in at least one core language (Python, Java, or Scala) with Spark.
  • Expertise in working with distributed data warehouses and cloud services (e.g., Snowflake, Redshift, AWS) via scripted pipelines, leveraging frameworks and orchestration tools such as Airflow as required for ETL pipelines. This role intersects with the big data stack to enable varied analytics, ML, etc., not just data-warehouse-type workloads.
  • Experience handling large and complex sets of XML, JSON, Parquet, and CSV data from various sources and databases.
  • Solid grasp of database engineering and design; ability to identify bottlenecks and bugs in the system.

Company Information