Participate in the development, enhancement and maintenance of web applications both as an individual contributor and as a team member.
Lead the identification, isolation, resolution, and communication of problems within the production environment.
Serve as lead developer, applying technical skills in Apache/Confluent Kafka, Big Data technologies, and Spark/PySpark.
Design and recommend the approach best suited for data movement from different sources to HDFS using Apache/Confluent Kafka (a sketch follows this list).
Perform independent functional and technical analysis for major projects supporting several corporate initiatives.
Communicate and work with IT partners and the user community at all levels, from senior management to hands-on developers to business SMEs, for project definition.
Work on multiple platforms and multiple projects concurrently.
Perform coding and unit testing for complex modules and projects.
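As an illustration of the Kafka-to-HDFS data movement above, here is a minimal sketch using Spark Structured Streaming. The broker list, topic name, and HDFS paths are hypothetical placeholders, and the job assumes the matching spark-sql-kafka package is on the classpath.

```python
# Minimal sketch: stream a Kafka topic into HDFS as Parquet.
# Brokers, topic, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-to-hdfs").getOrCreate()

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "orders")            # source topic (placeholder)
    .option("startingOffsets", "earliest")
    .load()
)

# Kafka exposes key/value as binary; cast them before persisting.
messages = raw.select(
    col("key").cast("string"),
    col("value").cast("string"),
    col("timestamp"),
)

query = (
    messages.writeStream
    .format("parquet")
    .option("path", "hdfs:///data/landing/orders")       # target directory
    .option("checkpointLocation", "hdfs:///chk/orders")  # required by the file sink
    .start()
)
query.awaitTermination()
```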
Qualifications
Provide expertise and hands-on experience working on Kafka Connect with Schema Registry in a very high-volume environment (~900 million messages).
Provide expertise in Kafka brokers, ZooKeeper, KSQL, Kafka Streams (KStreams), and Confluent Control Center.
Provide expertise and hands-on experience working with AvroConverter, JsonConverter, and StringConverter.
Provide expertise and hands-on experience working with Kafka connectors such as the MQ, Elasticsearch, JDBC, FileStream, and JMS source connectors, along with the Connect framework concepts of tasks, workers, converters, and transforms (a connector-registration sketch follows this list).
Provide expertise and hands-on experience building custom connectors using Kafka core concepts and the Connect API.
Working knowledge of the Kafka REST Proxy (a REST Proxy sketch follows this list).
Ensure optimal performance, high availability, and stability of solutions.
Create topics, set up cluster redundancy, deploy monitoring tools and alerts, and apply best practices (a topic-provisioning sketch follows this list).
Create stubs for producers, consumers, and consumer groups to help onboard applications from different languages/platforms (a stub sketch follows this list). Leverage Hadoop ecosystem knowledge to design and develop capabilities that deliver our solutions using Spark, Scala, Python, Hive, Kafka, and other tools in the Hadoop ecosystem.
Experience with RDBMSs, particularly Oracle 11g/12c.
Use automation and provisioning tools such as Jenkins and uDeploy.
Ability to perform data-related benchmarking, performance analysis, and tuning.
Strong skills in in-memory applications, database design, and data integration.
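As a concrete illustration of the Kafka Connect, converter, and Schema Registry items above, here is a minimal sketch that registers a Confluent JDBC source connector with Avro values through the Connect worker's REST API. All hostnames, credentials, and table/column names are hypothetical placeholders.

```python
# Sketch: register a JDBC source connector via the Kafka Connect REST API
# (default port 8083). Hosts, credentials, and names are placeholders.
import json
import requests

connector = {
    "name": "jdbc-orders-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "tasks.max": "2",
        "connection.url": "jdbc:oracle:thin:@db-host:1521/ORCL",
        "connection.user": "etl_user",
        "connection.password": "********",
        "mode": "incrementing",
        "incrementing.column.name": "ORDER_ID",
        "table.whitelist": "ORDERS",
        "topic.prefix": "oracle-",
        # Per-connector converter override: Avro backed by Schema Registry.
        "value.converter": "io.confluent.connect.avro.AvroConverter",
        "value.converter.schema.registry.url": "http://schema-registry:8081",
        "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    },
}

resp = requests.post(
    "http://connect-worker:8083/connectors",
    headers={"Content-Type": "application/json"},
    data=json.dumps(connector),
)
resp.raise_for_status()
print(resp.json())
```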
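For the REST Proxy item, a sketch of producing a JSON-encoded record over HTTP using the REST Proxy's v2 API; the proxy host and topic name are placeholders.

```python
# Sketch: produce a JSON record through the Confluent REST Proxy
# (v2 API, default port 8082). Host and topic are placeholders.
import requests

payload = {"records": [{"key": "order-42", "value": {"id": 42, "status": "NEW"}}]}

resp = requests.post(
    "http://rest-proxy:8082/topics/orders",
    headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
    json=payload,
)
resp.raise_for_status()
print(resp.json())  # partition/offset of the written record
```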
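For topic creation on a redundant cluster, a sketch using the confluent-kafka AdminClient; the broker addresses, topic name, and partition/replica sizing are placeholder assumptions.

```python
# Sketch: create a replicated topic with confluent-kafka's AdminClient.
# Brokers, topic name, and sizing are hypothetical placeholders.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "broker1:9092,broker2:9092"})

topic = NewTopic(
    "orders",
    num_partitions=6,
    replication_factor=3,                 # tolerate broker loss
    config={"min.insync.replicas": "2"},  # pair with acks=all producers
)

# create_topics() returns a dict of topic name -> future; wait on each.
for name, future in admin.create_topics([topic]).items():
    try:
        future.result()
        print(f"created topic {name}")
    except Exception as exc:
        print(f"failed to create {name}: {exc}")
```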
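Finally, for the onboarding stubs, a minimal producer and consumer-group pair using the confluent-kafka client; brokers, topic, and group id are placeholders.

```python
# Sketch: minimal producer and consumer-group stubs with confluent-kafka.
# Brokers, topic, and group id are hypothetical placeholders.
from confluent_kafka import Consumer, Producer

BROKERS = "broker1:9092,broker2:9092"
TOPIC = "orders"

# --- producer stub ---
producer = Producer({"bootstrap.servers": BROKERS, "acks": "all"})
producer.produce(TOPIC, key="order-42", value='{"id": 42, "status": "NEW"}')
producer.flush()  # block until the broker acknowledges delivery

# --- consumer stub (joins consumer group "onboarding-demo") ---
consumer = Consumer({
    "bootstrap.servers": BROKERS,
    "group.id": "onboarding-demo",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])
try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        print(f"{msg.key()}: {msg.value().decode('utf-8')}")
finally:
    consumer.close()
```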