UG :- Not Required
PG :- Not Required
No. of Positions :- 1
Posted :- 28th Apr 2022
• Engineering degree in Computer Science or related technical field, or equivalent practical experience.
• 3+ years of Big Data experience building data processing applications using Hadoop, Spark, NoSQL databases and Hadoop Streaming.
• Expertise in one or more programming languages such as Java, Scala or Python, and in Unix shell scripting.
• Expertise in query languages such as SQL, HiveQL and Spark SQL, and in data transfer tools such as Sqoop.
• Expertise in storage and processing optimization techniques in Hadoop and Spark.
• Experience in using tools like Jenkins for CI and Git for version control.
• Exposure to Google Cloud Platform (GCP) data components such as Cloud Dataflow, Cloud Dataproc, BigQuery and Bigtable is preferred.
• Strong problem-solving, communication and articulation skills.