Develop and deliver code using Big Data and Hadoop; proficient in Python and Java, and in microservices and container technologies such as Docker, Kubernetes, and Fabric.
Understanding of Postgres and databases, with willingness to learn Scala and Spark; responsible for managing technology in projects and providing technical guidance and solutions for work completion.
Candidate should have strong development experience with Big Data, Python, Java, Postgres, Docker/Kubernetes/Fabric, and Hadoop, along with an understanding of Scala/Spark programming.
Additionally, experience in the following: Java and knowledge of design patterns; Python/PySpark; Hadoop (HDFS/MapReduce/YARN); Hive/HBase; SQL (DDL, DML, subqueries, joins, hierarchical queries, analytical functions, views); machine learning algorithms; Apache Kafka.
Should have experience with development and CI/CD tools such as Jira, TeamCity, and Maven; version control systems such as SVN and Git; and unit testing and mocking frameworks.
Should have worked in Agile environments.
Should have good communication and client-interfacing skills.
Maintain personal effectiveness: embrace challenging deadlines, change, and complex problem solving, approaching tasks with motivation and commitment.