BE/MCA with a good academic track record (>60% in 10th/12th/graduation/post-graduation)
Should have 3-6 years of experience
Should have worked with big data technologies for at least 2 years
Ability to read, understand and communicate complex technical information
Ability to express ideas in an organized, articulate and concise manner
Any certification related to big data services will be an added advantage
Experience in big data technologies such as Hadoop, Hive, Spark, and Kafka
Understanding of data ingestion and CI/CD processes
Understanding of RDBMSs (Oracle/SQL Server/MySQL); ability to create users and schemas, populate data, and write simple queries for data retrieval
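As an illustration of the RDBMS tasks above, a minimal sketch using Python's built-in sqlite3 module as a stand-in for Oracle/SQL Server/MySQL; the table name and data are hypothetical, and vendor-specific user/privilege management (CREATE USER, GRANT) is omitted:

```python
import sqlite3

# In-memory database as a stand-in for a production RDBMS.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Create a simple schema (hypothetical "employees" table).
cur.execute("""
    CREATE TABLE employees (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        dept TEXT NOT NULL
    )
""")

# Populate data.
cur.executemany(
    "INSERT INTO employees (id, name, dept) VALUES (?, ?, ?)",
    [(1, "Asha", "DataEng"), (2, "Ravi", "Platform")],
)

# Simple query for data retrieval.
rows = cur.execute(
    "SELECT name FROM employees WHERE dept = ?", ("DataEng",)
).fetchall()
print(rows)  # [('Asha',)]
conn.close()
```

The same schema/insert/select workflow carries over to the listed databases, with syntax differences mainly in type names and user management.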
Hands-on experience debugging MapReduce jobs, Hive queries, and Spark jobs for failures and slowness
Ability to analyze logs for errors and exceptions, and to drill down from an error to its root cause (cluster issues, code issues, etc.)
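A toy sketch of the kind of log drill-down described above: scan a log for ERROR lines and bucket them into cluster-side vs. code-side issues. The log excerpt and the pattern lists are hypothetical; real YARN/Spark logs differ in format, and the classification here is deliberately crude:

```python
# Hypothetical log excerpt; real YARN/Spark logs differ in format.
LOG = """\
2024-05-01 10:02:11 INFO  Starting job stage 3
2024-05-01 10:02:15 ERROR java.net.ConnectException: Connection refused: node07:8042
2024-05-01 10:02:16 ERROR java.lang.OutOfMemoryError: Java heap space
2024-05-01 10:02:17 ERROR java.lang.NullPointerException at MyMapper.map
"""

# Crude triage rules (assumptions, not an authoritative taxonomy):
# connectivity/memory exhaustion -> likely cluster issue,
# NPE/missing class -> likely application code issue.
CLUSTER_PATTERNS = ("ConnectException", "OutOfMemoryError", "NoRouteToHost")
CODE_PATTERNS = ("NullPointerException", "ClassNotFoundException")

buckets = {"cluster": [], "code": [], "other": []}
for line in LOG.splitlines():
    if "ERROR" not in line:
        continue
    if any(p in line for p in CLUSTER_PATTERNS):
        buckets["cluster"].append(line)
    elif any(p in line for p in CODE_PATTERNS):
        buckets["code"].append(line)
    else:
        buckets["other"].append(line)

for kind, lines in buckets.items():
    print(kind, len(lines))  # cluster 2, code 1, other 0
```

In practice this triage would be backed by the resource manager UI and per-container logs rather than string matching alone.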
Should have good knowledge of Solace, Kafka, HBase, Elasticsearch, CB, and Redis
Proficiency in at least one scripting language (Shell, Perl, Python, etc.)
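An example of the small operational scripts this proficiency implies, in Python: check how full a filesystem is and flag it against a threshold. The path and threshold are illustrative assumptions:

```python
import shutil

# Illustrative threshold: warn when the filesystem is over 90% full.
THRESHOLD = 0.90
PATH = "/"  # hypothetical mount point to check

usage = shutil.disk_usage(PATH)
fraction_used = usage.used / usage.total

if fraction_used > THRESHOLD:
    print(f"WARNING: {PATH} is {fraction_used:.0%} full")
else:
    print(f"OK: {PATH} is {fraction_used:.0%} full")
```

The same check is a one-liner in shell (`df -h /`); the point is comfort automating such checks in any of the listed languages.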
General operational exposure: good troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks
Good knowledge of Linux and debugging skills
Strong verbal and written communication skills are mandatory
Excellent analytical and problem-solving skills are mandatory
Solid troubleshooting abilities and the ability to work with a team to resolve major production issues