- Experience building, maintaining, and improving data processing pipelines and data routing in large-scale environments
- Fluency in common query languages, API development, data transformation, and integration of data streams
- Strong experience with large-dataset platforms (e.g. Azure SQL Database, Teradata)
- Experience with Azure Synapse is preferred
- Fluency in multiple programming languages and tools appropriate for large-scale data processing, such as Python, shell scripting, SQL, or Java
- Experience with an entity-relationship (ER) modeling tool
- Experience acquiring data from varied sources, such as APIs, data queues, flat files, and remote databases
- Basic Linux administration skills and familiarity with multiple operating systems (e.g. Microsoft Windows, Linux)
- Experience with data pipelines and data processing on common platforms and environments
- Understanding of traditional data warehouse components (e.g. ETL, business intelligence tools)
- Creativity to go beyond current tools to deliver the best solution to the problem
- 5+ years of experience working in data processing environments