Design, develop, test, and optimize new or existing data pipeline solutions
Assemble large, complex healthcare data sets (payor and EHR data) that meet functional and non-functional business requirements
Identify, design, and implement process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for greater scalability, etc.
Keep our data secure by following HIPAA standards and HITRUST best practices
Create data tools for Analytics, PopHealth, and Data Science team members that assist them in building and optimizing our product into an innovative industry leader
Build API interfaces for interoperability between EHR, HIE, and OSH systems
What we’re looking for
We’re looking for motivated, experienced developers with:
Bachelor’s degree in Computer Science, Engineering, or related field from an accredited university
Advanced SQL knowledge, including query authoring, with experience working with relational databases and working familiarity with a variety of database technologies
Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets
Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores using Python and Spark
Experience building APIs and following security best practices
Experience with our tech stack: Python, Databricks, Spark, Azure Data Factory, Synapse warehousing, and analytics with Power BI
Experience with DevOps/DataOps best practices and exposure to observability platforms
Knowledge of FHIR, CCDA, ADT, and HL7 standards preferred
Experience working with EMRs (Cerner, Meditech, Epic, etc.) preferred
Working knowledge of interface engines such as Rhapsody and Qvera