Create and maintain optimal data pipeline architectures (AWS constructs) in the AWS cloud environment
Work closely with the Product Manager and internal team members to ensure all data is optimized for the next step or process
Assemble large and complex data sets that meet both functional and non-functional business needs
Identify, design and implement internal process improvements such as automating manual data processes, optimizing data delivery and scalability
Ensure the long-term data storage, processing, infrastructure, speed and quality of the Promētha data-based solutions in the marketplace
Build and maintain the infrastructure that is required to transfer and hold data from a wide variety of data sources
Create analytics tools that utilize the data pipeline to provide actionable insights, operational efficiencies and other key performance metrics/indicators (KPM/KPI)
Create data tools for analytics and data science team members that will assist them in building and optimizing the Promētha product portfolio
Work with stakeholders to resolve data-related technical issues and support their data needs; maintain a data dictionary of all data elements and their variations based on equipment data tags
Education & Experience:
Bachelor's degree in Computer Science, Information Systems, Applied Mathematics or equivalent work experience
Experience deploying machine learning models in a live environment
Experience designing, building, and configuring the architecture for an IoT program, including successfully completing multiple iterations of system changes and upgrades
Development experience with big data, IoT data, SQL, and AWS, including building productionized solutions
Experience building and optimizing big data pipeline architectures and data sets
3+ years of experience as a data engineer in a modeling environment, including parsing JSON data
3+ years of data and analytics experience in the industrial sector preferred
2+ years of experience working with at least one NoSQL system, such as MongoDB