- Interact with client to understand requirements and provide support
- Fetch data from the data warehouse, analyze it, and produce reports per client requirements
- Provide assistance and resolve issues faced by end users
- Build and maintain a data reconciliation and validation framework
- Create rule-based scripts to check data quality
- Work with Hive, Hadoop, and Unix in a data lake environment whose data pipelines are built with PySpark
- Work with Unix scripting and complex SQL
- Hands-on experience with the ETL tool Informatica
- Ensure data integrity and customer satisfaction by granting appropriate levels of access to requested tools/channels in accordance with business rules
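As a concrete illustration of the rule-based data-quality scripting mentioned above, here is a minimal Python sketch. The column names (`trade_id`, `price`) and the rules themselves are hypothetical examples, not part of the role's actual framework.

```python
# Minimal sketch of a rule-based data-quality check.
# Column names and rules below are hypothetical examples.

def not_null(field):
    # Rule: the field must be present and non-empty.
    return lambda row: row.get(field) not in (None, "")

def in_range(field, lo, hi):
    # Rule: the field must parse as a number within [lo, hi].
    return lambda row: lo <= float(row[field]) <= hi

RULES = {
    "trade_id_present": not_null("trade_id"),
    "price_positive": in_range("price", 0.01, 1e9),
}

def run_checks(rows):
    """Return a list of (row_index, rule_name) for every failed rule."""
    failures = []
    for i, row in enumerate(rows):
        for name, rule in RULES.items():
            try:
                ok = rule(row)
            except (KeyError, ValueError, TypeError):
                ok = False  # a missing or malformed value counts as a failure
            if not ok:
                failures.append((i, name))
    return failures
```

In practice such rules would typically run against extracts from Hive or the warehouse, with failures fed into the reconciliation reports described above.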
Qualifications we seek in you
Minimum qualifications
- Graduate/B Tech/BCA
- Knowledge of/experience in capital markets is an added advantage for this position
Preferred qualifications
- Strong hands-on experience in core Python or Unix scripting
- Experience with data warehouse and data lake environments
- Experience with data modelling concepts, including dimensional models
- Expertise in HQL and writing complex SQL queries
- Knowledge of the Hadoop environment
- Good understanding of ETL with Informatica
- Good interpersonal, problem solving and verbal communication skills
- Good to have: experience with web scraping and transposing data from one format to another
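The "transposing data from one format to another" item above can be sketched in a few lines of Python, here as a conversion between record-oriented and column-oriented tabular data (the field names are illustrative only):

```python
# Illustrative sketch: transpose a list of row dicts (record form)
# into a dict of column lists (columnar form).

def rows_to_columns(rows):
    columns = {}
    for row in rows:
        for key, value in row.items():
            columns.setdefault(key, []).append(value)
    return columns
```

For example, `rows_to_columns([{"a": 1, "b": 2}, {"a": 3, "b": 4}])` yields `{"a": [1, 3], "b": [2, 4]}`.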