Required Skills

Strong Python, PySpark, and SQL with complex coding skills and optimization techniques. A good understanding of Azure Powertools (Logic Apps, CLI, etc.) and Databricks is needed, along with strong programming and problem-solving skills.

Work Authorization

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 15th Feb 2025

JOB DETAIL

  • Data Pipeline Development: Design, build, and maintain scalable, robust data pipelines using Python and Databricks to ingest, process, and transform large datasets from various sources (internal and external).

  • Databricks Integration: Leverage Databricks to create and optimize ETL workflows, manage data lakes, and perform complex data transformations.

  • Automation & Optimization: Automate data ingestion, processing, and transformation tasks to ensure consistency, accuracy, and efficiency in data processing workflows.

  • Collaborate with Data Teams: Work closely with data scientists, analysts, and business intelligence teams to understand data requirements and support data-driven initiatives.

  • Data Modeling & Architecture: Develop and optimize data models and schema designs to ensure efficient storage and querying in cloud environments.

  • Performance Tuning: Monitor the performance of data pipelines and implement improvements to ensure data processing is optimized for speed and scalability.

  • Cloud Infrastructure: Work with cloud technologies (e.g., AWS, Azure, or Google Cloud) to ensure that data pipelines are deployed and managed effectively.
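To illustrate the kind of ingest → transform → aggregate pipeline work described above, here is a minimal, hedged sketch in plain Python. In the role itself this logic would live in PySpark DataFrames on Databricks; the function names, the `region`/`amount` schema, and the sample data below are all illustrative assumptions, not part of the posting.

```python
import csv
import io
from collections import defaultdict

# Simplified stand-in for a data pipeline's three stages.
# A production version would use PySpark on Databricks; this sketch
# only shows the shape of the work (ingest, clean/transform, aggregate).

def ingest(raw_csv: str) -> list[dict]:
    """Ingest stage: parse raw CSV text into row dictionaries."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform stage: normalize fields, cast types, drop bad rows."""
    out = []
    for r in rows:
        if not r.get("amount"):        # drop rows missing a value
            continue
        out.append({
            "region": r["region"].strip().upper(),  # normalize key
            "amount": float(r["amount"]),           # cast to numeric
        })
    return out

def aggregate(rows: list[dict]) -> dict[str, float]:
    """Aggregate stage: total amounts per region, a typical rollup."""
    totals: dict[str, float] = defaultdict(float)
    for r in rows:
        totals[r["region"]] += r["amount"]
    return dict(totals)

# Tiny illustrative dataset: one messy region name, one missing amount.
raw = "region,amount\n east ,10.5\nwest,\neast,4.5\n"
result = aggregate(transform(ingest(raw)))
# → {"EAST": 15.0}  (the "west" row is dropped for its missing amount)
```

In PySpark the same flow would be `spark.read.csv(...)` for ingest, `withColumn`/`filter` for the transform, and `groupBy(...).sum(...)` for the aggregate, with Databricks handling cluster execution.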

Company Information