Develop high-quality, secure, and scalable data pipelines using Spark with Scala/Python/Java on Hadoop or object storage
Follow Quality Assurance and Quality Control processes
Leverage new technologies and approaches to innovate with increasingly large data sets
Work with the project team to meet scheduled due dates while identifying emerging issues and recommending solutions
Perform assigned tasks and resolve production incidents independently
Contribute ideas to help ensure that required standards and processes are in place, and actively look for opportunities to enhance standards and improve process efficiency
Expectations:
7 to 9 years of experience in Data Warehouse projects in a product- or service-based organization
Expertise in Data Engineering, with multiple end-to-end DW projects implemented in a Big Data environment
Experience building data pipelines with Spark using Scala/Python/Java on Hadoop or object storage
Experience working with databases such as Oracle and Netezza, with strong SQL knowledge
Experience with Apache NiFi is an added advantage
Experience with Unix shell scripting
Experience working in Agile teams
Strong analytical skills for debugging production issues, providing root cause analysis, and implementing mitigation plans
Strong verbal and written communication skills, along with strong relationship-building, collaboration, and organizational skills
Ability to multi-task across multiple projects, interface with external and internal resources, and provide technical leadership to junior team members
High-energy, detail-oriented, and proactive, with the ability to function under pressure and work independently, along with a high degree of initiative and self-motivation to drive results
Ability to quickly learn and implement new technologies, and to perform POCs to identify the best solution for a given problem statement
Flexibility to work as a member of diverse, geographically distributed, matrixed project teams