- Bachelor’s or Master’s degree in Computer Science or Information Technology
- Deep experience with graph modelling techniques and toolsets
- Experience working with knowledge graph stores (RDFox, AWS Neptune, Stardog, TigerGraph, Ontotext GraphDB, Neo4j) and surrounding semantic technologies (OWL, RDF, SWRL, SPARQL, JSON-LD)
- Comfort with, and ideally substantial experience, operating big data infrastructure in a cloud-based ecosystem (AWS preferred)
- Experience with stream-processing systems (Kafka, Spark Streaming, Apache Beam/Flink, etc.)
- Strong background in algorithms, data structures, and coding, with Java or C# programming experience
- Deep understanding of the theoretical and practical trade-offs of various NoSQL stores (Cassandra, Elasticsearch, DynamoDB, etc.) with respect to different read/write patterns and availability/consistency requirements
Desirable Skills / Preferred Qualifications:
- Experience with cloud platforms (AWS, Google Cloud)
- Experience with Hive, Impala, Sqoop, AWS Lambda, AWS S3, and the FIBO ontology
- Experience in the financial services domain, preferably in Trade Processing or Middle Office applications
- Understanding of OTC and listed derivatives/futures, and of execution and clearing processing