Required Skills

Spark (Scala Spark preferred), S3, Glue, Athena, Airflow

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 5th Apr 2024

JOB DETAIL

  • Build components of a large-scale data platform for real-time and batch processing, and own features of big data applications to fit evolving business needs

  • Build next-gen cloud-based big data infrastructure for batch and streaming data applications, and continuously improve its performance, scalability, and availability

  • Contribute to best engineering practices, including the use of design patterns, CI/CD, code review, and automated testing

  • Contribute to ground-breaking innovation and apply state-of-the-art technologies

  • As a key member of the team, contribute to all aspects of the software lifecycle: design, experimentation, implementation, and testing

  • Collaborate with program managers, product managers, SDETs, and researchers in an open and innovative environment
WHAT TO BRING
  • Bachelor's degree or above in Computer Science or EE

  • 4+ years of professional programming experience in Java, Scala, Python, etc.

  • 3+ years of big data development experience with technical stacks such as Spark, Flink, SingleStore, Kafka, NiFi, and AWS big data technologies

  • Knowledge of system and application design and architecture

  • Experience building industry-grade, highly available, and scalable services

  • Passion for technology and openness to interdisciplinary work

Company Information