Required Skills

MapReduce, YARN, Pig, Hive, HDFS, Oozie, Spark, Storm, Samza, Kafka, Avro, Elasticsearch, HBase, Cassandra, MongoDB, CouchDB

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 2nd Mar 2021

JOB DETAIL

Roles and Responsibilities:

- Responsible for all tasks involved in administering the ETL tool (Ab Initio).

- Maintain access, licensing, and file systems on the ETL server.

- Guide ETL developers on ETL design and integration.

- Manage the Metadata Hub and Operational Console, and troubleshoot environmental issues that affect these components.

- Responsible for technical metadata management.

- Work with the team to maintain data lineage and resolve data lineage issues.

- Design and develop automated ETL processes and architecture.

- Interact with the client on a daily basis to define the scope of different applications.

- Work on break-fix and continuous-development items, and review and inspect production changes.

- Perform code reviews of ETL code developed by the development team and provide guidance to resolve any issues.

- Work with various other groups - DBAs, the server engineering team, the middleware group, the Citrix group, the network group, data transmission, etc. - to resolve performance- and integration-related issues.

 

Required Skills:

This position requires a BA/BS in Computer Science, Information Systems, Information Technology, or a related field with 6+ years of prior experience in software development, data warehousing, and business intelligence, OR equivalent experience.

- Must have an Ab Initio administration/engineering background to support infrastructure-related tasks and procedures

- Administrator experience working with batch processing and tools in the Hadoop technical stack (e.g. MapReduce, YARN, Pig, Hive, HDFS, Oozie)

- Administrator experience working with tools in the stream-processing technical stack (e.g. Spark, Storm, Samza, Kafka, Avro)

- Must have experience with at least one of the following: HBase, Solr, Spark, or Kafka

- Administrator experience with NoSQL stores (e.g. Elasticsearch, HBase, Cassandra, MongoDB, CouchDB)

- Expert knowledge of AD/LDAP security integration with Big Data

- Hands-on experience with at least one major Hadoop distribution, such as Cloudera, Hortonworks, MapR, or IBM BigInsights

- Advanced experience with SQL and at least two major RDBMSs

- Advanced experience as a systems integrator with Linux systems and shell scripting

- Advanced experience with data-related benchmarking, performance analysis and tuning, and troubleshooting

- Excellent verbal and written communication skills

Company Information