UG :- Not Required
PG :- Not Required
No. of positions :- 1
Posted :- 20th May 2022
ETL job development using Scala/Big Data/Hadoop/Hive/Spark/Flink, NoSQL (HBase/Cassandra/MongoDB), Oozie workflows, Redis cache, YARN resource manager, shell scripting, Java/Scala programming, distributed messaging systems (Kafka), debugging/troubleshooting of Hadoop, Spark and Oozie jobs, and performance tuning of Hadoop/Spark jobs
Experience working on a cloud platform (Google/Amazon/IBM), data warehousing knowledge, healthcare domain knowledge, knowledge and implementation of Lambda/Kappa architectures, and knowledge of microservices, Docker, Kubernetes, etc.
Good to have :- Teradata-to-Data-Lake migration experience.
Data Lake implementation: off-boarding data from the existing data warehouse (Teradata) to the Data Lake, and migrating the existing ETL jobs to Hadoop/Spark so that they populate the Data Lake instead of the existing data warehouse.
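As an illustration of the off-boarding step described above, a minimal Spark job in Scala (the stack named in this posting) might read a Teradata table over JDBC and land it in the Data Lake as Parquet. This is a sketch only: every host name, table, path, and credential below is a hypothetical placeholder, and a real migration would also need the Teradata JDBC driver on the classpath.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

// Hypothetical example of one off-boarded table; not the actual project code.
object TeradataToDataLake {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("teradata-offboard")
      .getOrCreate()

    // Read the source table from Teradata over JDBC
    // (placeholder URL, table, and credentials).
    val src = spark.read
      .format("jdbc")
      .option("url", "jdbc:teradata://td-host/DATABASE=edw")
      .option("dbtable", "edw.claims")
      .option("user", "etl_user")
      .option("password", sys.env("TD_PASSWORD"))
      .load()

    // Write partitioned Parquet into the Data Lake instead of
    // loading the data back into the warehouse.
    src.write
      .mode(SaveMode.Overwrite)
      .partitionBy("load_date")
      .parquet("hdfs:///datalake/raw/claims")

    spark.stop()
  }
}
```

Such a job would typically be submitted via spark-submit and scheduled from an Oozie workflow, matching the tooling listed in the requirements.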