3-8 years of professional Big Data (Hadoop) development experience is preferred.
Expertise with Big Data ecosystem services such as Spark (Scala/Python), Hive, Kafka, and Unix, and experience with any cloud stack, preferably GCP (BigQuery, Dataproc) or AWS (Glue, EMR, Redshift).
Object-oriented programming and component-based development with Java.
Experience working with large cloud data lakes.
Experience with large-scale data processing, complex event processing, and stream processing.
Experience working with CI/CD pipelines, source code repositories, and operating environments.
Experience working with both structured and unstructured data, with a high degree of SQL knowledge.
Experience designing and implementing scalable ETL/ELT processes and modeling data for low-latency reporting.
Experience in performance tuning, troubleshooting and diagnostics, process monitoring, and profiling.
Understanding of containerization, virtualization, and cloud computing.