Required Skills

PostgreSQL, Redshift, BigQuery, Snowflake, DynamoDB, Neo4j, MongoDB, Cassandra, HBase

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of Positions: 1

  • Posted: 1st Mar 2021

JOB DETAIL

  • Data architecture experience in a large, complex data ecosystem; domain expertise with complex high-tech environments.
  • Experience in Industry 4.0 initiatives and/or other aspects of corporate data management, e.g. Data Governance, Data Security, AI/ML, Data DevOps.
  • Hands-on experience with one or more relational SQL and NoSQL databases (PostgreSQL, Redshift, BigQuery, Snowflake, DynamoDB, Neo4j, MongoDB, Cassandra, HBase, etc.).

Job Description:

  • Develop Entity-Relationship, Dimensional, Canonical, and other data models for the enterprise (see the star-schema sketch after this list).
  • Partner with the Staff Architect to drive data vision, strategy, and execution that meet technology and business needs, including creating a highly scalable Core Data Element architecture comprising operational and product data to build a data fabric foundation for the enterprise.
  • Partner with business, product, engineering, and data science teams to unlock the value of data across Amex.
  • Partner with development teams in the data design of complex solutions and ensure they align with data architecture principles, standards, strategies, and target states.
  • Enforce data architecture standards, procedures, and policies to ensure consistency across program and project implementations.
  • Adopt industry-leading technologies to support best-in-class business capabilities for high-performance computing and data storage solutions.
  • Be a thought leader on all things data, reviewing current and future needs alongside the Executive Team.
  • Experience with any of the following data modeling tools or similar: Archi, ER/Studio, Erwin Data Modeler, Oracle SQL Developer Data Modeler, PowerDesigner, SqlDBM.
  • Experience with the following message queuing, stream processing, and highly scalable ‘big data’ tools and technologies is a plus: Amazon Kinesis / GCP Pub/Sub, Apache Flink, Apache Kafka, Apache Kylin, Apache NiFi, Apache Samza, Apache Spark, Apache Storm, Hadoop, Hive, Zeppelin/Jupyter.
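
As a rough illustration of the kind of dimensional (star-schema) modeling mentioned above, the following sketch defines one fact table joined to two dimension tables in PostgreSQL. All table and column names are hypothetical and chosen only for this example.

    -- Hypothetical star schema: one fact table referencing two dimension tables.
    CREATE TABLE dim_customer (
        customer_key  SERIAL PRIMARY KEY,
        customer_id   TEXT NOT NULL,    -- natural key from the source system
        customer_name TEXT,
        segment       TEXT
    );

    CREATE TABLE dim_date (
        date_key  INTEGER PRIMARY KEY,  -- e.g. 20210301
        full_date DATE NOT NULL,
        year      INTEGER NOT NULL,
        month     INTEGER NOT NULL
    );

    CREATE TABLE fact_transaction (
        transaction_key BIGSERIAL PRIMARY KEY,
        customer_key    INTEGER NOT NULL REFERENCES dim_customer (customer_key),
        date_key        INTEGER NOT NULL REFERENCES dim_date (date_key),
        amount_usd      NUMERIC(12, 2) NOT NULL
    );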


Company Information