Required Skills

SQL

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of Positions: 1

  • Posted: 12th Feb 2024

JOB DETAIL

Hadoop

Hive

Impala

SPARK

Kafka

Python

Cloud

Employee Value Proposition (EVP)

Salaried contract: receive 6 TEK holidays (holiday pay) and 5 bank holidays (not holiday pay, but a guaranteed 40-hour week of pay during those weeks) on this contract.

Work Environment

You will be working with a team of about 30. The team is based mainly in Charlotte, NC, with some members in other markets, and it is a very collaborative, close-knit group. This role is hybrid, with 3 days onsite and 2 days remote required, in these locations:

Chicago, IL; Denver, CO (other core Bank of America markets will also be considered).

Business Drivers/Customer Impact

Top Skills' Details

1) Spark/Scala experience - will need to develop in these, or in Impala and Hive, to pull data, and will work with Spark Structured Streaming (a minimal sketch follows this list)

• Basic SQL skills – one or more of MySQL, Hive, Impala, Spark SQL

• Data ingestion experience from message queues, file shares, REST APIs, relational databases, etc., and experience with data formats such as JSON, CSV, XML, and Parquet

• Working experience with Spark, Sqoop, Kafka

• Hands-on programming experience in at least one of Scala, Python, PHP, or shell scripting

• Experience and proficiency with the Linux operating system is a must

• Ability to troubleshoot platform problems and connectivity issues

• Experience working with the Cloudera Manager portal and YARN

• Experience with complex resource contention issues in a shared cluster/cloud environment
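
To illustrate the Spark Structured Streaming, Kafka, and data-format requirements listed above, here is a minimal Scala sketch of a pipeline that reads JSON events from a Kafka topic and lands them as Parquet. The broker address, topic name, event schema, and output paths are hypothetical placeholders, not details taken from this posting, and the job would need the spark-sql-kafka connector on its classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types.{StringType, StructType, TimestampType}

object KafkaToParquet {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-parquet-sketch")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical schema for the incoming JSON events.
    val eventSchema = new StructType()
      .add("event_id", StringType)
      .add("source", StringType)
      .add("event_time", TimestampType)

    // Read the raw stream from Kafka (broker and topic are placeholders).
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "security-events")
      .load()

    // Kafka values arrive as bytes; cast to string and parse the JSON payload.
    val events = raw
      .selectExpr("CAST(value AS STRING) AS json")
      .select(from_json($"json", eventSchema).as("event"))
      .select("event.*")

    // Land the parsed records as Parquet files at a placeholder path.
    val query = events.writeStream
      .format("parquet")
      .option("path", "/data/landing/security-events")
      .option("checkpointLocation", "/data/checkpoints/security-events")
      .start()

    query.awaitTermination()
  }
}
```

Checkpointing to a durable location is what makes a streaming query like this restartable after a failure without reprocessing or losing records.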

Job Description

We invite you to join the Data Strategy & Engineering team within the Global Information Security organization at Bank of America as a Big Data Platform Developer/Engineer. We are a tight-knit, supportive community, passionate about delivering the best experience for our customers while remaining sensitive to their unique needs.

In this role, you will help build new data pipelines, identify existing data gaps, and provide automated solutions to deliver advanced analytical capabilities and enriched data to applications supporting the operations team. You will also be responsible for obtaining data from the System of Record and establishing a real-time data feed to provide analysis in an automated fashion.

There are a few drivers. One is enhancing both their JupyterHub tool and their SIEM tool, as well as their TPCA (third-party control and assessments) process: identifying what questions to ask and then building a model off of the questions asked previously. For example, if Home Depot is doing 3 million dollars in transactions, or if TEKsystems is doing a lot of transactions with the bank and is being assessed, does that mean 200 or 300 questions? The goal is to make better use of the data they already have instead of reinventing the wheel every time. Right now their data is scattered across spreadsheets, with no single consolidated way to find this information, and they need to make it more efficient for CSA; Byron's group specifically is the customer/end user here. Allan's team would provide the data and reports to Brent Bohmont's group, which would develop the UI that Byron's teams would actually see and use. The UI is internally built, not off the shelf. His team will build the application for the UI workflow and consolidate it into a queue for them to review, improving efficiency.

 

Company Information