Duties and Responsibilities:
- 10+ years of experience in Data Engineering, with solid experience in the AWS data platform, modern data architecture, and the design and implementation of scalable data ingestion solutions.
- 4-5 years of experience with AWS Glue, Lambda, AppFlow, EventBridge, Python, PySpark, Lake House, S3, Redshift, Postgres, API Gateway, CloudFormation, Kinesis, Athena, KMS, and IAM.
- Experience designing and developing data pipelines from source systems such as SAP Concur, Veeva Vault, Azure Cost, various social media platforms, and other enterprise source systems.
- Expertise in analyzing source data and designing robust, scalable data ingestion frameworks and data pipelines adhering to the client's Enterprise Data Architecture guidelines.
- Proficiency in working with functional teams and client stakeholders to accelerate the addition of data assets to the Enterprise Data Backbone and transform them into valuable data products for consumption.
- Expertise in modern data architecture, design of Lake House, Enterprise Data Lake, Data Warehouse, API interfaces, solution patterns, best practices and optimizing data pipelines.
- Experience in requirements finalization, epic/task estimation, task planning, design document creation, and knowledge transfer to stakeholders.
- Experience with CI/CD automation processes and managing data pipeline services across production and non-production environments.
- Experience working in Agile/Scrum methodologies with coding standards, code reviews, source management (GitHub), JIRA, JIRA Xray, and Confluence.
- Strong analytical and problem-solving skills, excellent written and oral communication, and strong interpersonal skills.
- Bachelor's or Master's degree in Computer Science or a related field.