Extract data from core systems to solve analytical problems; ensure development teams have the required data
Process complex data sets of all sizes efficiently and at minimal compute cost
Define and implement data pipelines to support central data assets and digital products
Provide technical guidance to data architects on data architecture, data models and metadata management
Work closely with database teams on data requirements, cleanliness, accuracy, etc.
Interact with the business divisions to understand their data requirements, develop business insights for CRM, and translate them into data structure and data model requirements for IT
Track the business impact of analytics
Background & Competencies
Experience processing large datasets of structured and unstructured data
Expertise in SQL processing
Experience with commonly used data processing languages and frameworks (e.g. Python, Spark, Java, Scala, Go)
Expertise in MPP databases, with 2-3 years of experience on one of the main platforms (Snowflake preferred; Redshift, Synapse)
Expertise in Cloud Platforms, preferably Azure
Experience developing applications in high-volume data staging/ETL environments
Background in software engineering practices, including collaboration via source control and agile development