Required Skills

Python, Amazon Redshift, Amazon S3, AWS, Data Modelling, AWS Databases

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 4th Sep 2025

JOB DETAIL

We are seeking an experienced Sr. Data Engineer with 10 to 13 years of experience to join our team.
The ideal candidate will have a strong background in data warehousing, with proven experience in cloud data pipeline development, Python, performance optimization, and SQL.
The Data Engineer will play a crucial role in designing, developing, and optimizing data pipelines for client projects to support our business objectives.

Responsibilities
 

  • Experience in enterprise data engineering and analytics projects, including any cloud data pipeline development platform and enterprise data warehousing/ETL projects.

  • Clear understanding of data warehousing and data lake concepts.

Must Have

Requirement gathering

  • Working with business/product owners to clarify the reporting needs.

  • Working with other stakeholders, such as application teams, to clarify the source systems/data.

  • Performing source data analysis and discovery/reverse engineering of existing models.

Designing
     
  • Creating the data model designs/relations.
     
  • Designing the ETL pipeline.

  • Experience with SQL/stored procedures.

  • Experience with ETL orchestration.

  • Understanding of metadata-driven frameworks for performing ELT.

  • Pros/cons of using ETL tools vs. custom-built frameworks in SQL/Python.

  • High-level understanding of ingestion/collection patterns into cloud object storage (streaming, file-based).

  • Good understanding of storage optimizations (partitioning, archiving unused data) and knowledge of various file formats: CSV/JSON/Parquet/ORC/Avro.

  • Optimizing ETL/SQL for performance, performance troubleshooting, and DB-specific features related to query performance.

  • Intermediate-to-expert knowledge of Python.

  • Exposure to any cloud (AWS/Azure/GCP/OCI).

Build
     
  • Strong experience working with star schema models.

  • Experience in building and maintaining data warehouses.

  • Implementation of the data models using ELT.

  • Development of the ETL pipeline using Redshift SQL/stored procedures.

  • Troubleshooting ETL loads and failures.

  • Orchestration of pipelines using Airflow.

  • Performance optimization of the pipelines.

  • Complete involvement in the SDLC.

Nice to Have
     
  • Data modelling design (ER/dimensional modelling): conceptual/logical/physical.
     
  • Experience with streaming data ingestion/collection using Kafka, etc.

  • Consumption of DWH data in BI platforms like Tableau/Power BI; data source preparation (live/extracts).

  • Data sharing methodologies.

  • Experience with AWS/OCI.

Non-technical
     
  • Communication/listening skills
     
  • Exposure to tools like ServiceNow, Jira, Confluence.
     
  • Work with various SMEs to understand business process flows, functional requirement specifications of existing systems, their current challenges and constraints, and future expectations.
     
  • Address customer issues with speed and efficiency
     
  • Develop and manage relations with key client stakeholders
     
  • Collaborate with solutions delivery architects.

  • Ensure contracts are carried out according to agreed terms.
     
  • Strong customer focus, ownership, urgency, and drive.
     
  • Should be able to drive engineering best practices and mentor junior engineers.
     
  • Proficient in troubleshooting technical problems.
     
  • Excellent client-interfacing and communication skills.

Company Information