Required Skills

AWS, Ab Initio, ETL

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 8th Mar 2024

JOB DETAIL

Required qualifications, capabilities, and skills

•    Formal training or certification on software engineering concepts and 3+ years of applied experience.
•    Hands-on Ab Initio/ETL (Informatica) development experience.
•    Willingness to gain expertise in the critical business processes of the Deposits ecosystem.
•    Hands-on practical experience in system design, application development, testing, and operational stability.
•    In-depth knowledge of Ab Initio/Informatica ETL programming (GDE/EME), Unix Shell Scripting and Control-M / Autosys batch schedulers.
•    In-depth knowledge of developing application and infrastructure architectures.
•    Experience in developing, debugging, and maintaining code in a large corporate environment and in RDBMS/querying languages.
•    Solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security.
•    Demonstrated knowledge of software applications and technical processes within a technical discipline (ETL Processing, Scheduling, Operations).
•    Proficient in scripting with Python for data processing tasks and ETL workflows.
•    Experience writing Splunk or CloudWatch queries and Datadog metrics.

Preferred qualifications, capabilities, and skills

•    Familiarity with Java frameworks, modern front-end technologies, and exposure to cloud technologies.
•    Practical cloud-native experience in AWS (EC2, S3, Glue, AWS Lambda, Athena, RDS, SNS), and proficiency with Python, PySpark, and Machine Learning disciplines.
•    Strong experience with distributed computing frameworks such as Apache Spark, specifically PySpark, and event-driven architecture using Kafka.
•    Experience with distributed databases such as AWS DynamoDB.
•    Working knowledge of AWS Glue services, including experience in designing, building, and maintaining ETL jobs for diverse data sources.
•    Familiarity with AWS Glue DynamicFrames to streamline ETL processes (a minimal illustrative sketch follows this list).
•    Capability to troubleshoot common issues in PySpark and AWS Glue jobs, with the ability to identify and address performance bottlenecks.
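
For illustration only, not an additional requirement: a minimal AWS Glue job sketch in PySpark showing the kind of DynamicFrame-based ETL work described above. The database, table, and S3 path names are hypothetical placeholders.

import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve the job name passed in by the Glue runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table from the Glue Data Catalog as a DynamicFrame
# ("deposits_db" and "daily_transactions" are hypothetical names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="deposits_db",
    table_name="daily_transactions",
)

# Rename and retype columns with ApplyMapping, a common Glue transform.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("acct_id", "string", "account_id", "string"),
        ("txn_amt", "double", "amount", "double"),
        ("txn_dt", "string", "transaction_date", "date"),
    ],
)

# Write the result to S3 as Parquet (the bucket path is a placeholder).
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/transactions/"},
    format="parquet",
)

job.commit()

In practice, a job along these lines would be scheduled through Glue triggers or an enterprise batch scheduler such as Control-M or Autosys, as referenced in the required qualifications.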

Company Information