Required Skills

Data Engineer

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 23rd Jun 2025

JOB DETAIL

  • 10+ years of experience in data engineering or related roles.
  • Tools & Technologies: 
    • Expertise in SnapLogic for ETL/ELT workflows. 
    • Proficiency with dbt for data modeling and transformation. 
    • Strong knowledge of Snowflake, including performance tuning and best practices. 
    • Advanced programming skills in Python and PySpark for data engineering tasks (an illustrative sketch follows this list).
  • Cloud Platforms: Hands-on experience with Azure, including data-related services. 
  • Database Management: Strong SQL skills and experience working with relational and non-relational databases. 
  • Problem Solving: Excellent debugging, problem-solving, and analytical skills. 
  • Communication: Strong verbal and written communication skills to effectively collaborate with cross-functional teams. 
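
To give a sense of what the Python/PySpark item above typically involves, here is a minimal illustrative sketch. The input path, column names, and aggregation are hypothetical examples, not details taken from this posting:

```python
# Minimal PySpark sketch of a typical data engineering transformation.
# File paths, column names, and the aggregation are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-aggregation").getOrCreate()

# Read raw data (path is a placeholder).
raw = spark.read.parquet("/data/raw/transactions.parquet")

# Clean and aggregate: drop rows with missing keys, then sum amounts per day.
daily = (
    raw.dropna(subset=["account_id", "amount"])
       .withColumn("txn_date", F.to_date("txn_timestamp"))
       .groupBy("txn_date", "account_id")
       .agg(F.sum("amount").alias("total_amount"),
            F.count("*").alias("txn_count"))
)

# Write the curated result for downstream modeling (e.g. as a dbt source).
daily.write.mode("overwrite").parquet("/data/curated/daily_transactions")
```

In the stack described above, curated output like this would typically feed dbt models in Snowflake.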



About the Role:
We are seeking a highly skilled Sr. Data Engineer with expertise in SnapLogic, dbt (Data Build Tool), Snowflake, Python/PySpark, and experience working with the Azure ecosystem. The ideal candidate will play a key role in designing, developing, and optimizing scalable data pipelines and architectures to support our business intelligence and analytics needs. Candidates with experience in the finance industry are strongly preferred.

Key Responsibilities: 

  • Data Pipeline Development: Design, develop, and maintain ETL/ELT processes using SnapLogic and dbt to extract, transform, and load data efficiently.
  • Data Warehousing: Manage and optimize Snowflake environments, including schema design, query optimization, and workload management. 
  • Big Data Processing: Implement scalable data processing solutions using Python and PySpark to handle large datasets.  
  • Cloud Integration: Build and maintain data solutions within the Azure ecosystem, leveraging services such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure Blob Storage. 
  • Collaboration: Work closely with data analysts, data scientists, and business stakeholders to ensure data solutions align with business needs. 
  • Monitoring and Optimization: Implement monitoring, logging, and alerting mechanisms to ensure the reliability and performance of data pipelines (a minimal sketch follows this list).
  • Documentation: Maintain comprehensive documentation for data models, pipelines, and processes to ensure knowledge sharing and compliance. 
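
As a rough illustration of the monitoring and alerting responsibility above, here is a minimal Python sketch using only the standard library. The step name, retry policy, and alert hook are hypothetical placeholders, not part of the role description:

```python
# Minimal sketch of pipeline-step monitoring: log duration, retry on
# failure, and surface an alert after the final attempt.
# The step name and retry policy are hypothetical placeholders.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def monitored_step(name, retries=2, backoff_seconds=30):
    """Wrap a pipeline step with timing, retries, and failure alerting."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, retries + 2):
                start = time.monotonic()
                try:
                    result = func(*args, **kwargs)
                    log.info("%s succeeded in %.1fs (attempt %d)",
                             name, time.monotonic() - start, attempt)
                    return result
                except Exception:
                    log.exception("%s failed on attempt %d", name, attempt)
                    if attempt > retries:
                        # Hook for a real alert channel (email, pager, etc.).
                        log.critical("ALERT: %s exhausted retries", name)
                        raise
                    time.sleep(backoff_seconds)
        return wrapper
    return decorator

@monitored_step("load_daily_transactions")
def load_daily_transactions():
    # Placeholder for an actual ETL/ELT step.
    pass
```

A production setup would typically route the final alert to a real notification channel rather than a log line.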

 

Preferred Qualifications: 

  • Experience with other cloud platforms (AWS, GCP) is a plus. 
  • Familiarity with CI/CD pipelines for data workflows. 
  • Certifications in Snowflake, Azure, or other relevant technologies.
  • Knowledge of data governance and security best practices. 

Company Information