Required Skills

Scala, Java, Python

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 23rd Jan 2024

JOB DETAIL

All candidates must come from an investment banking, financial services, or brokerage background.

**This role is onsite, no exceptions.**


  • Title: Senior Big Data Engineers (Java/J2EE, Cassandra, and Hadoop)
  • Project: Large-scale Big Data maintenance project.
  • Work Location: 100% onsite, NYC, NY (Midtown)
  • Duration: 1 year (renewable)

     

The Job: 

Candidates must come from banking, financial services, or investment banking, no exceptions. The two most recent roles must be in this industry.

The consultants will be Sr. Big Data Engineers who help support, maintain, and test the new database for specific business units. These positions are responsible for maintaining complex databases for the Business Technology Group. The consultants will work with minimal supervision and guidance from more seasoned consultants, and may also be expected to provide application and database support.

Screening question (***Please answer this before submitting a resume***): 

How many records do you regularly transform and query across your tables/data sets?

 

  • **We are looking for experience with large data sets.
  • **The answer should be the number of records you have worked with, stated as an actual figure.

Skills Needed: 

  • Must have 6+ years of recent experience working for a major bank or brokerage house in the US.
  • Must have 12+ years of experience maintaining applications utilizing Java, J2EE, the SDLC, and WebSphere.
  • Must have spent the last 6 years working with Cassandra, Hadoop, MongoDB, Apache Spark, HDFS, YARN, MapReduce, Pig & Hive, Flume & Sqoop, and ZooKeeper.
  • Must have 6 years of experience maintaining Tier-1 data-driven apps.
  • Must have experience with 24/7 uptime and strict SLAs.
  • Extensive experience maintaining data pipelines that aggregate and transform raw data from a variety of data sources.
  • Extensive experience optimizing data delivery and helping with redesigns to improve performance, and handling, transforming, and managing Big Data using Big Data frameworks.
  • Extensive experience processing data in parallel on top of distributed Hadoop storage using MapReduce.
  • Must have experience with SOA design principles.
  • Must have 5+ years of programming in Scala, Java, Python, or Go.
  • Must have 5+ years developing on Hadoop/Spark.
  • Must have 6+ years developing on an RDBMS such as Microsoft SQL Server or PostgreSQL.
  • Must have experience with large data sets.
  • Exposure to data hygiene routines and models.
  • Experience in database maintenance.
  • Ability to identify problems and effectively communicate solutions to the team.

 

Company Information