· Experience with Big Data technologies (e.g., Hadoop, Hive, Spark, NiFi, Impala, Looker, Elasticsearch).
· Experience performing data analysis, data observability, data ingestion, and data integration.
· 5+ years of relevant data engineering, data infrastructure, DataOps, DevOps, SRE, or general systems engineering experience.
· 5+ years of experience running production systems.
· 2+ years of hands-on experience with industry-standard CI/CD tools such as Git/Bitbucket, Jenkins, Maven, Artifactory, and Chef.
· Experience architecting and implementing data governance processes and tooling (such as data catalogs, lineage tools, role-based access control, and PII handling).
· Strong coding ability in Python or another language such as Java, C#, Golang, C, C++, Perl, or Ruby, plus a solid grasp of SQL fundamentals.
· Experience with algorithms, data structures, scripting, pipeline management, and software design.
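As an illustrative sketch of the Python-plus-SQL fundamentals the role calls for (the function name and data are hypothetical, not part of any actual screening exercise), a candidate might be asked to combine both in a small task using only the standard-library sqlite3 module:

```python
import sqlite3

def top_events_by_user(rows, limit=2):
    """Load (user, event_count) rows into an in-memory SQLite table
    and return the users with the highest event counts."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (user TEXT, event_count INTEGER)")
    conn.executemany("INSERT INTO events VALUES (?, ?)", rows)
    cur = conn.execute(
        "SELECT user, event_count FROM events "
        "ORDER BY event_count DESC LIMIT ?",
        (limit,),
    )
    result = cur.fetchall()
    conn.close()
    return result

print(top_events_by_user([("a", 5), ("b", 9), ("c", 1)]))
# → [('b', 9), ('a', 5)]
```

The sketch exercises parameterized queries, basic DDL/DML, and ordering with a limit, which are the kinds of SQL fundamentals listed above.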