Responsibilities:
- Hands-on: at least 5 years in the data architect space
- Build a framework, not just pipelines; act more as a liaison between the data engineers and the architects, and help build data frameworks and reference implementations
- PySpark, Python (must be able to code, not just write scripts); at least 2 years with modern data platforms: Spark, Snowflake (Redshift or data lakes also work in lieu of Snowflake), etc. (see the PySpark sketch after this list)
- ETL/ELT: dbt, Airflow (workflow management), data quality with Great Expectations
- SQL knowledge transferable to DB2: writing SQL and implementing transformation logic
- Documentation and hands-on implementation; the architects socialize a solution with the data engineers, so this role works alongside the data architects and presents solutions to stakeholders
- Database modeling experience
- DevOps: Jenkins and GitLab required
- Adobe Experience Platform: primarily batch ingestion. Must understand the different ingestion patterns (batch ingestion, micro-batch ingestion, etc.) and the different implementations of AEP integrations. In general, needs to know what reference ingestions would look like on AWS.
- Plus: Informatica, SnapLogic, Kafka, newer database platforms such as Postgres
- Team size: fewer than 10 people on the architecture team; this resource will work with 1-2 people on data architecture and the different implementations. Engineers: 8-9 of them will be using the reference implementations.
- Tuesday through Thursday: 2 of these are onsite days; most of the team comes in on 2 of these days
- Some technical knowledge, good interpersonal skills, coding, etc.
- Tech Data Architect specializing in hands-on data engineering work for our technology & business partners. You will be part of the Distribution & Marketing Solutions Architecture team, steer strategic technology direction, define target-state architecture and roadmaps, and help build reference implementations in partnership with our Platform Engineering teams.
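To illustrate the kind of hands-on coding called out in the PySpark/Python note above, here is a minimal sketch of a batch transformation with a lightweight quality gate. It is illustrative only: the paths, table names, and columns are hypothetical, and the inline check merely stands in for a fuller Great Expectations suite.

```python
# Minimal illustration only; paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("reference-ingestion-sketch").getOrCreate()

# Batch-ingest raw events from a (hypothetical) data lake location.
raw = spark.read.parquet("s3://example-bucket/raw/events/")

# Transformation logic: standardize types, derive a date key, deduplicate.
curated = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Lightweight data-quality gate in the spirit of Great Expectations:
# reject the batch if any rows are missing the key.
null_keys = curated.filter(F.col("event_id").isNull()).count()
if null_keys:
    raise ValueError(f"Quality check failed: {null_keys} rows with null event_id")

# Write the curated batch, partitioned by date, for downstream consumers
# (e.g. a warehouse load or an AEP batch ingestion job).
curated.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/events/"
)
```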
Tech Architect Job Responsibilities:
- 5+ years' experience as a data architect
- Experience with Adobe Experience Platform integration patterns
- Experience building frameworks and reference implementations
- Experience in SQL is a must
- Experience coding in Python and PySpark for server-side/data processing
- 2+ years' experience using a modern data stack (Spark, Snowflake) on cloud platforms (AWS)
- Experience building ETL/ELT pipelines for complex data engineering projects (using Airflow, dbt, Great Expectations would be a plus); see the orchestration sketch at the end of this posting
- Experience with database modeling and normalization techniques
- Experience with DevOps tools such as Git, Jenkins, and GitLab CI
- Skills that would be a plus:
- ETL tools (Informatica, SnapLogic, dbt, etc.)
- Experience with Snowflake or other cloud data warehousing products
- Exposure to workflow management tools such as Airflow
- Exposure to messaging platforms such as Kafka
- Exposure to NewSQL platforms such as CockroachDB, Postgres, etc.
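As a rough sketch of the ETL/ELT orchestration referenced above (Airflow coordinating an ingestion step, a dbt run, and a Great Expectations checkpoint), the following Airflow DAG is illustrative only; the DAG id, script paths, project directory, and checkpoint name are hypothetical.

```python
# Illustrative Airflow DAG; all ids, paths, and names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_elt_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Batch ingestion step (e.g. the PySpark job sketched earlier).
    ingest = BashOperator(
        task_id="ingest_batch",
        bash_command="python /opt/pipelines/ingest_events.py",
    )

    # Warehouse transformations managed with dbt.
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )

    # Data-quality validation with a Great Expectations checkpoint.
    validate = BashOperator(
        task_id="quality_checks",
        bash_command="great_expectations checkpoint run events_checkpoint",
    )

    ingest >> transform >> validate
```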