- Strong object-oriented programming skills; deep expertise and hands-on programming experience in Python and Big Data technologies
- Good understanding of Hadoop and Big Data concepts is a must; experience developing automation tools for building interfaces with Big Data batch and streaming tools
- Should have experience developing interfaces with Big Data batch and streaming tools within the Hadoop ecosystem, such as HDFS, Hive, Impala, Pig, Spark, etc.
- Good experience with PySpark and open-source technologies such as Kafka, Storm, Flume, and HDFS
- Must be able to develop Spark programs using Spark Core and Spark SQL as per requirements
- Able to work independently and develop automation tool solutions with minimal guidance
- Possess sufficient knowledge and skills to effectively deal with issues and challenges within the field of specialization and to develop simple application solutions
- Strong analytical and problem-solving skills; UNIX/Linux scripting to perform ETL on the Hadoop platform
- Work with other team members to accomplish key development tasks
- Scala knowledge is good to have
- Teradata knowledge or background would be a plus
Warm Regards,
Abhishek Singh
Sr. Technical Recruiter
Peritus Inc.
Contact: +1 (972) 214-2378
Email ID: abhishek.s@peritussoft.com
LinkedIn: https://www.linkedin.com/in/abhishek-singh-22558ba6/