Required Skills

Microservices, Apache Hadoop, Apache Spark

Work Authorization

  • US Citizen

  • Green Card

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 26th Jul 2022


4+ years of experience implementing complex ETL pipelines, preferably with the Spark toolset.

4+ years of experience with Java, particularly within the data space.

Technical expertise in data models, database design and development, data mining, and segmentation techniques.

Good experience writing complex SQL and ETL processes.

Excellent coding and design skills, particularly in Java/Scala and Python.

Experience working with large data volumes, including processing, transforming, and transporting large-scale data.

Experience with AWS technologies such as EC2, Redshift, CloudFormation, EMR, S3, and AWS analytics services is required.

Big-data technologies on AWS such as Hive, Presto, and Hadoop are required.

Excellent working knowledge of Apache Hadoop, Apache Spark, Kafka, Scala, and Python.

Strong analytical skills, with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy.

Good understanding and use of algorithms and data structures.

Good experience building reusable frameworks.

Experience working in an Agile Team environment.

AWS certification (Developer, Architect, DevOps, or Big Data) is preferred.


Education Qualification & Work Experience Criteria:

Bachelor's Degree in Engineering (Computer Science, IT, or an equivalent technical discipline).

Excellent communication skills, both verbal and written.
