- Strong object-oriented programming skills; deep expertise and hands-on programming experience in Python and Big Data technologies
- Good understanding of Hadoop and Big Data concepts is a must; experience developing automation tools for building interfaces with Big Data batch and streaming tools
- Should have experience developing interfaces with Big Data batch and streaming tools within the Hadoop ecosystem, such as HDFS, Hive, Impala, Pig, and Spark
- Good experience with PySpark and open-source technologies such as Kafka, Storm, Flume, and HDFS
- Must develop Spark programs using Spark Core and Spark SQL jobs as per requirements
- Work independently and develop automation tool solutions with minimal guidance
- Possess sufficient knowledge and skills to effectively address issues and challenges within the field of specialization and to develop simple applications and solutions
- Strong analytical and problem-solving skills; UNIX/Linux scripting to perform ETL on the Hadoop platform
- Work with other team members to accomplish key development tasks
- Good to have: Scala knowledge
- Teradata knowledge or background would be a plus
Regards,
Karun Yadav
Peritus Inc,
222 West Las Colinas Blvd,
Suite# 745 East, Irving, TX 75039
Phone number (Direct): 972-666-6051
E-mail: karun.y@peritussoft.com