10 to 15 years of proven experience in modern cloud data engineering, with broad exposure to the wider data landscape and solid software engineering experience.
Prior experience architecting and building successful self-service, enterprise-scale data platforms in a greenfield environment with a microservices-based architecture.
Proficiency in building end-to-end data platforms and data services in GCP is a must.
Proficiency in tools and technologies: BigQuery, Cloud Functions, Cloud Run, Dataform, Dataflow, Dataproc, SQL, Python, Airflow, Pub/Sub.
Experience with microservices architectures (Kubernetes, Docker). Our microservices are built on a TypeScript, NestJS, and Node.js stack; candidates with this experience are preferred.
Experience building semantic layers.
Proficiency in architecting, designing, and developing batch and real-time streaming infrastructure and workloads.
Solid experience architecting and implementing metadata management, including data catalogs, data lineage, data quality, and data observability for big data workflows.
Hands-on experience with the GCP ecosystem and data lakehouse architectures.
Strong understanding of data modeling, data architecture, and data governance principles.
Strong experience with DataOps principles and test automation.
Strong experience with observability tooling: Grafana, Datadog.