This role demands strong hands-on skills in different programming languages, especially Python, and knowledge of technologies such as Kafka, AWS Glue, CloudFormation, and ECS
You will spend most of your time facilitating the seamless streaming, tracking, and sharing of huge data sets
This is primarily a back-end role, but it is not limited to that
You will work closely with producers and consumers of the data and build optimal solutions for the organization
We will appreciate a person with plenty of patience and a strong understanding of data
What will you do
Track, process, and manage huge amounts of data (100 million records/day)
Design and build systems to efficiently move data across multiple systems and make it available to teams such as Data Science, Data Analytics, and Product (see the illustrative sketch after this list)
Design, construct, test and maintain data management systems
Understand data and business metrics required by the product and architect the systems to make that data available in a usable/queryable manner
Ensure that all systems meet business and company requirements as well as industry best practices
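For illustration only, a typical day-to-day task might resemble the minimal sketch below: consuming events from Kafka in Python and landing them in S3 for downstream teams. The topic name, bucket, record schema, and batch size here are hypothetical, and the libraries used (kafka-python, boto3) are one possible choice, not a prescribed stack.

    # Minimal, illustrative sketch: consume events from a Kafka topic and
    # batch them to S3 so downstream Glue/Redshift jobs can pick them up.
    # Topic, bucket, and schema are hypothetical.
    import json

    import boto3
    from kafka import KafkaConsumer  # kafka-python

    consumer = KafkaConsumer(
        "click-events",                          # hypothetical topic
        bootstrap_servers=["localhost:9092"],
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
        enable_auto_commit=False,
    )
    s3 = boto3.client("s3")

    BATCH_SIZE = 10_000
    batch = []
    for message in consumer:
        batch.append(message.value)
        if len(batch) >= BATCH_SIZE:
            # Write the batch as newline-delimited JSON, keyed by offset.
            key = f"events/offset={message.offset}.json"
            s3.put_object(
                Bucket="example-data-lake",      # hypothetical bucket
                Key=key,
                Body="\n".join(json.dumps(r) for r in batch).encode("utf-8"),
            )
            consumer.commit()                    # commit offsets only after the write succeeds
            batch.clear()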
What we need
Bachelor's/Master's degree, preferably in Computer Science or a related technical field
3-6 years of relevant experience
Deep knowledge of and working experience with the Kafka ecosystem
Good programming experience, preferably in Python, PHP, Java, or Go, and a willingness to learn more.
Experience working with large data sets and data platforms
Strong knowledge of microservices and of data warehouse and data lake systems in the cloud, especially AWS Redshift, S3, and Glue.
Strong hands-on experience in writing complex and efficient ETL jobs
Experience with version control systems (preferably Git)
Strong analytical thinking and communication skills
Intellectual curiosity to find new and unusual ways to solve data management issues.