Required Skills

AWS Data Engineer, ETL/ELT, Data Pipelines, CloudWatch, S3, Lambda, Glue ETL, Python, Shell Scripting, Spark, Snowflake, DBT, Data Warehouse, Big Data, DevOps, CI/CD, IaC

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 31st Oct 2025

JOB DETAIL

  • Design and Develop Data Pipelines (Highly Important):
    • Create robust, scalable data pipelines to ingest, process, and transform data from various sources.
    • Ensure data quality and readiness for analytics and reporting.
  • Implement ETL/ELT Processes (Highly Important):
    • Develop efficient ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) workflows using open-source and AWS tools such as Glue ETL (a minimal Glue job sketch follows this list).
    • Experience with DBT (Data Build Tool) for data transformations on Snowflake is a plus.
    • Proven ability to extract, transform, and load data into Snowflake.
    • Write complex SQL queries that pull data from multiple sources, and design testing scripts to verify data accuracy in target tables (see the row-count check sketched after this list).
  • Adopt DevOps Practices (Important):
    • Apply DevOps methodologies and tools (CI/CD, IaC) to automate and streamline data engineering processes (an IaC sketch follows this list).
  • Design Data Solutions (Important):
    • Leverage your analytical skills to design innovative data solutions that address complex business challenges and drive data-driven decision-making.
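
To make the Glue ETL responsibility concrete, here is a minimal sketch of a Glue job script in Python (PySpark): it ingests raw JSON from S3, applies a light cleanup, and writes partitioned Parquet back out. The bucket paths and column names are hypothetical placeholders, not details from this posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job boilerplate: resolve the job name and set up contexts
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read raw JSON from S3 (placeholder bucket/prefix)
source = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-raw-bucket/orders/"]},
    format="json",
)

# Transform: drop rows with no order ID, normalize a column name
df = (
    source.toDF()
    .dropna(subset=["order_id"])
    .withColumnRenamed("orderDate", "order_date")
)

# Load: write curated Parquet back to S3, partitioned by date
df.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)

job.commit()
```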
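The data-accuracy testing responsibility can start with something as simple as reconciling row counts between staging and target tables after a load. A minimal sketch using snowflake-connector-python, where the connection parameters and table names are placeholders:

```python
import snowflake.connector

# Placeholder credentials; in practice these come from a secrets manager
conn = snowflake.connector.connect(
    account="your_account",
    user="your_user",
    password="your_password",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()

    # Row-count reconciliation between staging and target tables
    cur.execute("SELECT COUNT(*) FROM STG_ORDERS")
    source_count = cur.fetchone()[0]
    cur.execute("SELECT COUNT(*) FROM FCT_ORDERS")
    target_count = cur.fetchone()[0]

    assert source_count == target_count, (
        f"Row count mismatch: staging={source_count}, target={target_count}"
    )
    print(f"Data accuracy check passed: {target_count} rows")
finally:
    conn.close()
```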
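For the IaC side of the DevOps responsibility, a sketch using the AWS CDK v2 Python bindings to declare a curated-data bucket and a Glue job, so the pipeline's infrastructure can be reviewed and deployed through CI/CD. The role ARN, script location, and resource names are placeholders:

```python
import aws_cdk as cdk
from aws_cdk import aws_glue as glue, aws_s3 as s3


class DataPipelineStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # Versioned bucket for curated pipeline output
        s3.Bucket(self, "CuratedBucket", versioned=True)

        # Glue job wired to the ETL script (placeholder role ARN and path)
        glue.CfnJob(
            self,
            "OrdersEtlJob",
            name="orders-etl",
            role="arn:aws:iam::123456789012:role/GlueJobRole",
            command=glue.CfnJob.CommandProperty(
                name="glueetl",
                script_location="s3://example-scripts/orders_etl.py",
                python_version="3",
            ),
            glue_version="4.0",
        )


app = cdk.App()
DataPipelineStack(app, "DataPipelineStack")
app.synth()
```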

Required Skills (Highly Important):

  • AWS Expertise:

    • Strong understanding of core AWS services such as S3, Lambda, Glue ETL, and CloudWatch (a small boto3 sketch follows this list).
    • Familiarity with additional services like Kinesis, EMR, Athena, DynamoDB, SNS, and Step Functions is a plus.
  • Programming Languages:

    • Proven experience with Python (Highly Important).
    • Experience with Java or Scala is a plus.
  • Data Storage Technologies:

    • In-depth knowledge of data warehouse technologies (Redshift, RDS) and the big data ecosystem (Hadoop).
  • Scripting Languages:

    • Familiarity with shell scripting (Important).
  • Data Engineering Tools:

    • Experience with Spark and Jupyter Notebook is a plus.
  • Additional Skills (Important):

    • Experience with GCP concepts (BigQuery, Vertex AI, Cloud Storage, Composer, Pub/Sub) is a plus.
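
To illustrate the core AWS services named above, a small boto3 sketch that lands a file in S3 and publishes a custom CloudWatch metric for pipeline monitoring; the bucket, file, and metric names are placeholders:

```python
import boto3

s3 = boto3.client("s3")
cloudwatch = boto3.client("cloudwatch")

# Land a processed file in S3 (placeholder bucket and key)
s3.upload_file("orders.parquet", "example-curated-bucket", "orders/orders.parquet")

# Publish a custom metric so CloudWatch alarms can watch pipeline health
cloudwatch.put_metric_data(
    Namespace="DataPipeline",
    MetricData=[
        {"MetricName": "RowsProcessed", "Value": 1250.0, "Unit": "Count"}
    ],
)
```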

Company Information