Apply advanced knowledge of Data Engineering principles, methodologies and techniques to design and implement data loading and aggregation frameworks across broad areas of the organization.
Gather and process raw, structured, semi-structured, and unstructured data using batch and real-time data processing frameworks (a streaming ingestion sketch follows this list).
Implement and optimize data solutions in enterprise data warehouses and big data repositories, with a primary focus on migration to the cloud.
Drive new and enhanced capabilities for Enterprise Data Platform partners to meet the needs of product, engineering, and business teams.
Apply experience building enterprise systems, especially with Databricks, Snowflake, and cloud platforms such as Azure, AWS, and GCP.
Leverage strong Python, Spark, and SQL programming skills to construct robust pipelines for efficient data processing and analysis (see the batch pipeline sketch after this list).
Implement CI/CD pipelines that automate build, test, and deployment processes to accelerate the delivery of data solutions (a CI-style unit test sketch follows this list).
Apply data modeling techniques to design and optimize data schemas, ensuring data integrity and performance (see the star-schema sketch after this list).
Drive continuous improvement initiatives to enhance performance, reliability, and scalability of our data infrastructure.
Collaborate with data scientists, analysts, and other stakeholders to understand business requirements and translate them into technical solutions.
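The streaming ingestion sketch referenced above: a minimal PySpark Structured Streaming job that lands semi-structured JSON events in a bronze-layer Delta table. The paths, event schema, and Delta sink are illustrative assumptions, not a prescribed design.

```python
# Minimal sketch: stream semi-structured JSON events into a Delta table.
# Paths and schema are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType, DoubleType

spark = SparkSession.builder.appName("event-ingest").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_ts", TimestampType()),
    StructField("payload", StringType()),   # keep raw payload for downstream parsing
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream
    .format("json")
    .schema(event_schema)
    .load("/landing/events/")               # hypothetical landing path
    .withColumn("ingest_ts", F.current_timestamp())
)

query = (
    events.writeStream
    .format("delta")                        # assumes Delta Lake is installed
    .option("checkpointLocation", "/checkpoints/events/")
    .outputMode("append")
    .start("/bronze/events/")               # hypothetical bronze-layer path
)
```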
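The batch pipeline sketch referenced above: a small PySpark job that cleans an orders dataset and aggregates it with Spark SQL. Table names, columns, and paths are hypothetical.

```python
# Batch aggregation sketch combining PySpark transformations and Spark SQL.
# Input/output paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-sales-agg").getOrCreate()

orders = spark.read.parquet("/bronze/orders/")           # hypothetical input

cleaned = (
    orders
    .dropDuplicates(["order_id"])                        # idempotent reprocessing
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
)
cleaned.createOrReplaceTempView("orders_clean")

daily = spark.sql("""
    SELECT order_date,
           customer_id,
           COUNT(*)    AS order_count,
           SUM(amount) AS total_amount
    FROM orders_clean
    GROUP BY order_date, customer_id
""")

(daily.write
      .mode("overwrite")
      .partitionBy("order_date")                         # partition for query pruning
      .parquet("/gold/daily_sales/"))                    # hypothetical output
```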
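The CI-style test sketch referenced above: the kind of fast, dependency-light unit test a build stage can run before deployment. It assumes pytest as the test runner; normalize_record and its fields are invented for illustration.

```python
# CI-friendly unit test sketch for a pure transformation function.
# Field names and rules are illustrative assumptions.
import pytest


def normalize_record(record: dict) -> dict:
    """Trim strings, lowercase the email, and coerce amount to float."""
    return {
        "customer_id": record["customer_id"].strip(),
        "email": record["email"].strip().lower(),
        "amount": float(record["amount"]),
    }


def test_normalize_record_cleans_fields():
    raw = {"customer_id": " C001 ", "email": " Jane@Example.COM ", "amount": "19.99"}
    assert normalize_record(raw) == {
        "customer_id": "C001",
        "email": "jane@example.com",
        "amount": 19.99,
    }


def test_normalize_record_rejects_bad_amount():
    # float("n/a") raises ValueError, so malformed amounts fail the build early.
    with pytest.raises(ValueError):
        normalize_record({"customer_id": "C002", "email": "a@b.com", "amount": "n/a"})
```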
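The star-schema sketch referenced above: one common data modeling approach, declared here as Spark SQL DDL from Python. Table names, keys, and the Parquet storage format are assumptions for illustration.

```python
# Dimensional-modeling sketch: a simple star schema declared via Spark SQL DDL.
# Table names, surrogate keys, and storage format are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("schema-setup").getOrCreate()

# Dimension: one row per customer, keyed by a surrogate key.
spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_sk   BIGINT,
        customer_id   STRING,
        customer_name STRING,
        region        STRING
    ) USING PARQUET
""")

# Fact: one row per order, referencing the dimension surrogate key and
# partitioned by date so time-bounded queries can prune partitions.
spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_orders (
        order_id    STRING,
        customer_sk BIGINT,
        order_date  DATE,
        quantity    INT,
        amount      DECIMAL(18, 2)
    ) USING PARQUET
    PARTITIONED BY (order_date)
""")
```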