Required Skills

Hive SQL, Spark SQL, Oracle SQL

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 1st Mar 2024

JOB DETAIL

Role Description:

  1. Design high-quality deliverables that adhere to business requirements and to defined standards, design principles, and patterns.

  2. Develop and maintain highly scalable, high-performance data transformation applications using the Apache Spark framework.

  3. Develop and integrate code adhering to CI/CD practices, using the Spark framework in Scala/Java.

  4. Provide solutions to big data problems involving huge volumes of data, using Spark-based data transformation solutions, Hive, and MPP engines such as Impala.

  5. Create JUnit tests and ensure code coverage meets the agreed standards.

  6. Work effectively with a team that may be geographically distributed; review code modules developed by junior team members.
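To illustrate the kind of Spark-based transformation work described above, here is a minimal sketch in Scala. It is illustrative only: the table and column names (`orders`, `order_status`, `customer_id`, `order_total`) are hypothetical, and it assumes a Spark 3.x dependency on the classpath.

```scala
// Minimal sketch of a Spark data-transformation job in Scala.
// Reads a Hive table, filters and aggregates it, and writes the
// result back as a new Hive table - a typical high-volume pipeline.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrderAggregation {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("OrderAggregation")
      .enableHiveSupport() // required to read/write Hive tables
      .getOrCreate()

    // Hypothetical source table; column names are illustrative.
    val orders = spark.table("orders")

    val summary = orders
      .filter(col("order_status") === "COMPLETED")
      .groupBy(col("customer_id"))
      .agg(sum(col("order_total")).as("total_spend"))

    summary.write.mode("overwrite").saveAsTable("customer_spend_summary")
    spark.stop()
  }
}
```

The same aggregation could equally be expressed in Spark SQL (`spark.sql("SELECT ...")`); the DataFrame API is shown here because the role emphasizes Scala/Java development.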

Competencies:

Digital: Big Data and Hadoop Ecosystems

Experience (Years):

4-6

Essential Skills:

  • Hands-on development experience in programming languages such as Java and Scala, using Maven, Apache Spark frameworks, and Unix shell scripting

  • Comfortable with the Unix file system as well as HDFS commands

  • Experience with query languages such as Oracle SQL, Hive SQL, Spark SQL, and Impala, and with HBase

  • Flexible

  • Good communication and customer management skills

Desirable Skills:

  • Knowledge of big data ingestion tools such as Sqoop and Kafka

  • Awareness of the components of the big data ecosystem

  • Experience building projects using the Eclipse IDE, Tectia Client, and Oracle SQL Developer

Country:

United States

Company Information