Required Skills

Azure Databricks, MSSQL, Lake Flow, Python

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 27th Nov 2024

JOB DETAIL

Design, construct, and maintain scalable data management systems using Azure Databricks, ensuring they meet end-user expectations. Supervise the upkeep of existing data infrastructure workflows to ensure continuous service delivery. Create data processing pipelines utilizing Databricks Notebooks, Spark SQL, Python, and other Databricks tools. Oversee and lead the module through planning, estimation, implementation, monitoring, and tracking.

 

· Ability to work independently and multi-task effectively.

· Configure system settings and options and execute unit/integration testing.

· Develop end-user Release Notes and training materials, and deliver training to a broad user base.

· Identify and communicate areas for improvement.

· Demonstrate high attention to detail, the ability to work in a dynamic environment while maintaining high quality standards, a natural aptitude for developing good internal working relationships, and a flexible work ethic.

· Responsible for quality checks and adherence to the agreed Service Level Agreement (SLA) / Turnaround Time (TAT).

 

Over 8+ years of experience in data engineering, with expertise in Azure Databricks, MSSQL, Lake Flow, Python, and supporting Azure technologies.

· Design, build, test, and maintain highly scalable data management systems using Azure Databricks.

· Create data processing pipelines utilizing Databricks Notebooks and Spark SQL.

· Integrate Azure Databricks with other Azure services such as Azure Data Lake Storage and Azure SQL Data Warehouse.

· Design and implement robust ETL pipelines using Databricks, ensuring data quality and integrity.

· Design and implement effective data models, schemas and data governance using the Databricks environment.

· Develop and optimize PySpark/Python code for data processing tasks.

· Assist stakeholders with data-related technical issues and support their data infrastructure needs.

· Develop and maintain documentation for data pipeline architecture, development processes, and data governance.

· Data Warehousing: In-depth knowledge of data warehousing concepts, architecture, and implementation, including experience with various data warehouse platforms.

· Data Quality: Implement data quality rules using Databricks and external platforms such as IDQ (Informatica Data Quality).

· Extremely strong organizational and analytical skills, with meticulous attention to detail.

· Strong track record of excellent results delivered to internal and external clients.

· Excellent problem-solving skills, with the ability to work independently or as part of a team.

· Strong communication and interpersonal skills, with the ability to effectively engage with both technical and non-technical stakeholders.

· Able to work independently without the need for close supervision, and collaboratively as part of cross-team efforts.

Company Information