Understand the overall architecture of the system and ensure that deliverables align with the proposed AWS architecture
Build the AWS data processing solution, focusing on the development of reusable components
Work with AWS services such as S3, Lambda, and Step Functions, and with relational data stores on AWS such as PrestoDB and RDS PostgreSQL
Experience developing AWS Lambda functions in Java and/or Python.
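For illustration, a minimal sketch of a Python Lambda handler in the spirit of this requirement; the event shape, bucket, and key are assumptions, not part of the role description:

```python
import json
import boto3

# Hypothetical example: bucket and key come from the invoking event, not from this posting.
s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Read a JSON object from S3 and return a simple record count."""
    bucket = event["bucket"]   # assumed event field, e.g. "raw-data-bucket"
    key = event["key"]         # assumed event field, e.g. "ingest/2024/01/records.json"
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    records = json.loads(body)
    return {"statusCode": 200, "recordCount": len(records)}
```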
Analyze data processing requirements, source data, and data domain models
Prepare architecture and design briefs that outline the key features and decision points of the application built in the Data Lab
Cloud development experience with AWS services, including API Gateway, ETL data pipeline construction, PrestoDB, and microservices
Build distributed, scalable, and reliable data pipelines that ingest and process data at scale and in real time.
Author extract, transform, and load (ETL) scripts that move and curate data into data sets for storage and use in a data lake, data warehouse, or data mart.
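As an illustration of the kind of ETL script this involves, a minimal Python sketch that curates a raw CSV object from one S3 prefix into another; the bucket names, prefixes, and the customer_id column are hypothetical:

```python
import csv
import io
import boto3

# Illustrative only: bucket names, prefixes, and column names are assumptions.
s3 = boto3.client("s3")
RAW_BUCKET = "datalake-raw"
CURATED_BUCKET = "datalake-curated"

def curate_object(key: str) -> str:
    """Copy one raw CSV object into the curated zone, dropping rows with no customer_id."""
    raw = s3.get_object(Bucket=RAW_BUCKET, Key=key)["Body"].read().decode("utf-8")
    reader = csv.DictReader(io.StringIO(raw))
    kept = [row for row in reader if row.get("customer_id")]

    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    writer.writerows(kept)

    curated_key = key.replace("raw/", "curated/", 1)
    s3.put_object(Bucket=CURATED_BUCKET, Key=curated_key, Body=out.getvalue().encode("utf-8"))
    return curated_key
```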
Develop tools and procedures to monitor and automate system tasks on servers and clusters
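A small sketch of one form such a monitoring tool could take, publishing a custom CloudWatch metric with boto3; the namespace, metric name, and dimension are hypothetical:

```python
import boto3

# Hypothetical monitoring hook; namespace and metric names are placeholders.
cloudwatch = boto3.client("cloudwatch")

def report_pipeline_lag(pipeline_name: str, lag_seconds: float) -> None:
    """Publish a custom CloudWatch metric that an alarm or dashboard can track."""
    cloudwatch.put_metric_data(
        Namespace="DataLab/Pipelines",
        MetricData=[
            {
                "MetricName": "IngestionLagSeconds",
                "Dimensions": [{"Name": "Pipeline", "Value": pipeline_name}],
                "Value": lag_seconds,
                "Unit": "Seconds",
            }
        ],
    )
```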
Collaborate with other teams to design, develop, and deploy data tools that support both operations and product use cases.
Experience with stream-processing systems such as Kafka
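For context, a minimal kafka-python consumer sketch that reads JSON messages for downstream processing; the topic name, broker address, and consumer group are placeholders:

```python
import json
from kafka import KafkaConsumer  # kafka-python client; connection details are assumptions

consumer = KafkaConsumer(
    "orders",                                  # hypothetical topic name
    bootstrap_servers="broker.internal:9092",  # placeholder broker address
    group_id="data-lab-etl",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    order = message.value
    # Downstream, a record like this would be validated and landed in S3 or the warehouse.
    print(order.get("order_id"), order.get("amount"))
```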
Ability to provide leadership to the work group through knowledge in the area of specialization.
Bachelor's degree in computer science, computer engineering, or a related field, or the equivalent combination of education and related experience.
12 years of professional experience as a data software engineer
3 years of experience designing, provisioning, and tuning big data solutions on AWS or another cloud platform.