Required Skills

Python, Amazon Redshift, Amazon S3, Data Architect, Data Modelling, DB Performance Optimization

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 4th Sep 2025

JOB DETAIL

We are seeking a highly skilled Sr. Architect with 12 to 15 years of experience to join our team.
The ideal candidate will have extensive experience with cloud data pipelines, along with strong architecture and data modelling skills.

Must haves:

  • Experience in AWS and enterprise data warehousing/ETL projects (building ETL pipelines), as well as enterprise data engineering and analytics projects.
  • Data Modelling design (ER/Dimensional Modelling) - Conceptual/Logical/Physical.
  • Clear understanding of data warehousing and data lake concepts.
  • Redshift implementation with hands-on experience in AWS.
  • Understand business requirements and existing system designs, enterprise applications, IT security guidelines, and legal protocols.
  • Should possess data modelling experience and be able to collaborate with other teams within the project/program.
  • Proven experience in data modelling and analysis, data migration strategy, cleansing and migrating large master data sets, data alignment across multiple applications, and data governance.
  • Should be able to assist in making technology choices and decisions in an enterprise architecture scenario.
  • Should possess working experience with different database environments/applications such as OLTP, OLAP, etc.
  • Design, build, and operationalize data solutions and applications using one or more AWS data and analytics services (EMR, Redshift, Kinesis, Glue) in combination with third-party tools.
  • Actively participate in optimization and performance tuning of data ingestion and SQL processes.
  • Knowledge of basic AWS services such as S3, EC2, etc.
  • Experience in any of the following: AWS Athena, Glue (PySpark), EMR, Redshift.
  • Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, Python, or Scala.
  • Design and implement data engineering, ingestion and curation functions on AWS cloud using AWS native or custom programming
  • Analyze, re-architect, and re-platform on-premises data warehouses to data platforms on AWS cloud using AWS or third-party services.
  • Understand and implement security and version control.
  • Support data engineers with design of ETL processes, code reviews, and knowledge sharing
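The ingestion-to-consumption pipeline work described above follows a common extract → transform → load shape. The sketch below is illustrative only: the record fields and function names are invented, and plain in-memory lists stand in for the S3 sources and Redshift target named in this posting.

```python
# Illustrative ETL pipeline shape; in-memory lists stand in for S3 / Redshift.
from typing import Iterable


def extract(rows: Iterable[dict]) -> list[dict]:
    """Stand-in for reading raw records (e.g. from S3 via Glue or Athena)."""
    return [dict(r) for r in rows]  # copy so transforms don't mutate the source


def transform(rows: list[dict]) -> list[dict]:
    """Cleanse and conform records before loading (e.g. into Redshift)."""
    cleaned = []
    for r in rows:
        if r.get("customer_id") is None:  # drop records failing data-quality rules
            continue
        r["name"] = r.get("name", "").strip().title()  # normalize a text field
        cleaned.append(r)
    return cleaned


def load(rows: list[dict], target: list[dict]) -> int:
    """Stand-in for a bulk load (e.g. Redshift COPY); returns rows loaded."""
    target.extend(rows)
    return len(rows)


warehouse: list[dict] = []
raw = [
    {"customer_id": 1, "name": "  ada lovelace "},
    {"customer_id": None, "name": "bad record"},
]
loaded = load(transform(extract(raw)), warehouse)
# loaded == 1; the malformed record was filtered out during transform
```

In a real AWS deployment these stages would typically map to Glue (PySpark) jobs reading from S3 and issuing a bulk COPY into Redshift, with the same separation of stages kept for testability.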

Company Information