Required Skills

Snowflake, SQL, Python, ADF, Synapse, BigQuery, Redshift, OLTP, OLAP, Dimensions, Facts, Data Modeling

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 8th Oct 2025

JOB DETAIL

  • Build and optimize data pipelines for efficient data ingestion, transformation, and loading from various sources while ensuring data quality and integrity.
  • Design, develop, and deploy Spark programs in the Databricks environment to process and analyze large volumes of data.
  • Experience with Delta Lake, data warehousing (DWH), data integration, cloud platforms, design, and data modeling.
  • Proficiency in developing programs in Python and SQL.
  • Experience with data warehouse dimensional data modeling.
  • Work with event-based/streaming technologies to ingest and process data.
  • Work with structured, semi-structured, and unstructured data.
  • Optimize Databricks jobs for performance and scalability to handle big-data workloads.
  • Monitor and troubleshoot Databricks jobs; identify and resolve issues and bottlenecks.
  • Implement best practices for data management, security, and governance within the Databricks environment.
  • Experience designing and developing Enterprise Data Warehouse solutions.
  • Proficiency in writing SQL queries and programs, including stored procedures, and in reverse-engineering existing processes.
  • Perform code reviews to ensure fit to requirements, optimal execution patterns, and adherence to established standards.
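The data-quality and dimensional-modeling responsibilities above can be sketched in plain Python. Everything here (the `Order` record, the sample data, the function names) is illustrative only, and the role itself calls for Spark/Databricks at scale, but the shape of the work is the same: validate incoming rows, then split them into a fact table and a dimension.

```python
# Illustrative only: a tiny validate -> model flow mirroring the
# pipeline and dimensional-modeling duties described in this posting.
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:  # hypothetical source record
    order_id: int
    customer: str
    amount: float

def validate(rows):
    """Basic data-quality gate: drop rows with missing keys or negative amounts."""
    return [r for r in rows if r.order_id and r.customer and r.amount >= 0]

def to_fact_and_dim(rows):
    """Split validated rows into a fact table and a customer dimension,
    a simplified instance of star-schema dimensional modeling."""
    dim = {name: i + 1 for i, name in enumerate(sorted({r.customer for r in rows}))}
    fact = [(r.order_id, dim[r.customer], r.amount) for r in rows]
    return fact, dim

raw = [Order(1, "acme", 120.0), Order(2, "globex", -5.0), Order(3, "acme", 40.0)]
fact, dim = to_fact_and_dim(validate(raw))  # the globex row fails validation
```

In a Databricks job the same steps would typically be expressed as Spark DataFrame transformations writing to Delta Lake tables rather than in-memory lists.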

Company Information