US Citizen
Green Card
EAD (OPT/CPT/GC/H4)
H1B Work Permit
Corp-Corp
W2-Permanent
W2-Contract
Contract to Hire
Consulting/Contract
UG: Not Required
PG: Not Required
No. of positions: 1
Posted: 30th Aug 2025
Microsoft Fabric Expertise:
• Data Integration: Combining and cleansing data from various sources.
• Data Pipeline Management: Creating, orchestrating, and troubleshooting data pipelines.
• Analytics Reporting: Building and delivering detailed reports and dashboards to derive meaningful insights from large datasets.
• Data Visualization Techniques: Representing data graphically in impactful and informative ways.
• Optimization and Security: Optimizing queries, improving performance, and securing data.
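The data-integration and cleansing skills above can be sketched in plain Python. This is a minimal, hedged illustration of combining and de-duplicating records from two sources; the source names (`crm`, `billing`) and fields are illustrative assumptions, not part of the posting.

```python
# Sketch: combine and cleanse records from two hypothetical sources,
# the kind of step a Fabric data-integration pipeline performs.

def clean(record):
    """Normalize one raw record: trim whitespace, lowercase the email."""
    return {
        "id": record["id"],
        "email": record["email"].strip().lower(),
    }

def integrate(*sources):
    """Merge records from several sources, keeping one row per id."""
    merged = {}
    for source in sources:
        for raw in source:
            rec = clean(raw)
            merged[rec["id"]] = rec  # later sources win on conflict
    return list(merged.values())

# Illustrative inputs (hypothetical systems, not named in the posting)
crm = [{"id": 1, "email": " Alice@Example.com "}]
billing = [{"id": 1, "email": "alice@example.com"},
           {"id": 2, "email": "Bob@Example.com"}]

rows = integrate(crm, billing)
print(len(rows))  # 2 distinct customers after de-duplication
```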
Azure Databricks Experience:
• Apache Spark Proficiency: Utilizing Spark for large-scale data processing and analytics.
• Data Engineering: Building and managing data pipelines, including ETL (Extract, Transform, Load) processes.
• Delta Lake: Implementing Delta Lake for data versioning, ACID transactions, and schema enforcement.
• Data Analysis and Visualization: Using Databricks notebooks for exploratory data analysis (EDA) and creating visualizations.
• Cluster Management: Configuring and managing Databricks clusters for optimized performance. (Ex: autoscaling and automatic termination)
• Integration with Azure Services: Integrating Databricks with other Azure services like Azure Data Lake, Azure SQL Database, and Azure Synapse Analytics.
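The ETL responsibilities above can be sketched with the standard library standing in for a Databricks/Spark job. The CSV payload, table name, and column names are illustrative assumptions; a real pipeline would read from Azure Data Lake and write Delta tables.

```python
# Hedged sketch of a minimal Extract-Transform-Load pass.
import csv
import io
import sqlite3

RAW = "order_id,amount\n1,10.50\n2,not_a_number\n3,4.25\n"

def extract(text):
    """Extract: parse raw CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: cast amounts to float, dropping unparseable rows."""
    out = []
    for r in rows:
        try:
            out.append((int(r["order_id"]), float(r["amount"])))
        except ValueError:
            continue  # filter bad rows, as a Spark job might
    return out

def load(rows, conn):
    """Load: write cleaned rows into the target table."""
    conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 14.75 (the unparseable row was dropped)
```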
Skill | Required / Desired | Amount of Experience
6+ years of experience in data architecture and engineering | Required | 6 Years
2+ years hands-on experience with Azure Databricks and Spark | Required | 2 Years
Recent experience with the Microsoft Fabric platform | Required | 2 Years
Azure Databricks experience | Required | 2 Years
Proficiency in SQL for querying and managing databases, including skills in SELECT statements, JOINs, subqueries, and window functions | Required | 3 Years
Using Python for data manipulation, analysis, and scripting, including libraries like Pandas, NumPy, and PySpark | |
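The SQL skills named above (JOINs, subqueries, window functions) can be sketched against an in-memory SQLite database driven from Python; the schema and data here are illustrative assumptions, not from the posting.

```python
# Hedged sketch: JOIN + subquery + window function in one query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (id INTEGER, name TEXT, dept_id INTEGER, salary REAL);
CREATE TABLE depts (id INTEGER, name TEXT);
INSERT INTO employees VALUES (1,'Ana',1,90),(2,'Bo',1,80),(3,'Cy',2,70);
INSERT INTO depts VALUES (1,'Data'),(2,'Ops');
""")

# RANK() OVER partitions by department, ordering by salary; the WHERE
# clause uses a scalar subquery, and the FROM clause joins two tables.
rows = conn.execute("""
SELECT d.name AS dept,
       e.name,
       RANK() OVER (PARTITION BY e.dept_id ORDER BY e.salary DESC) AS rnk
FROM employees e
JOIN depts d ON d.id = e.dept_id
WHERE e.salary > (SELECT MIN(salary) FROM employees) - 1
ORDER BY dept, rnk
""").fetchall()

print(rows)  # [('Data', 'Ana', 1), ('Data', 'Bo', 2), ('Ops', 'Cy', 1)]
```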