Development knowledge in Spark, PySpark, AWS Lambda, Python
Experience in designing and developing ETL jobs, data transformation using SQL
Working experience with Database technologies – SQL Server, Oracle, MySQL, PostgreSQL, MongoDB
Experience in Data Engineering or working on Enterprise Data Warehouses & Business Intelligence environments
Understanding of data modeling, data access, data storage techniques, data structures and algorithms
Experience with at least one ETL tool like Talend, Informatica, etc.
5+ years of strong data warehousing experience using ETL/integration tools: Talend Big Data Platform, Informatica, Ab Initio
Looking for resources who can support the design, development, and implementation of data pipelines for enterprise data products using Talend (candidates with experience in other ETL tools will be considered).
Experience building Standard Jobs and Big Data / Hadoop Jobs in Talend.
Experience with AWS services such as EMR, Redshift, RDS, Lambda, S3, etc.
Experience with Talend DQ and Profiling
Experience with CI/CD pipeline setup using Bamboo.
Must-Have Skillset: Talend DI, Talend Big Data Edition, AWS (S3, EMR, Redshift), Spark
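As context for candidates, here is a minimal sketch of the kind of transformation logic these pipeline roles involve: a hypothetical AWS Lambda handler that cleanses incoming records before loading. The handler name, event shape, and field names (`records`, `customer_id`, `email`) are illustrative assumptions only, not part of this posting.

```python
import json


def handler(event, context=None):
    """Hypothetical AWS Lambda entry point: cleanse and filter raw rows.

    `event` is assumed to carry a list of raw row dicts under "records";
    the field names below are illustrative, not a real schema.
    """
    cleaned = []
    for row in event.get("records", []):
        # Skip rows missing the required key (basic DQ rule)
        if not row.get("customer_id"):
            continue
        cleaned.append({
            # Trim whitespace and force the id to a string
            "customer_id": str(row["customer_id"]).strip(),
            # Normalize an optional email to lowercase; empty becomes None
            "email": (row.get("email") or "").strip().lower() or None,
        })
    # Lambda-style response: status code plus a JSON-encoded body
    return {"statusCode": 200, "body": json.dumps(cleaned)}
```

In a real deployment this logic would typically read from and write to S3 or Redshift via boto3 rather than returning the payload directly; the in-memory version above just shows the cleanse/filter step.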