Required Skills

Big Data Engineer

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 14th Nov 2023

JOB DETAIL


Required skills: Java, Hadoop, Spark, and cloud applications such as Azure and/or AWS

Glider Assessment: Big Data Engineer (v2) - Spark, Scala/Java, Hadoop, MySQL/PostgreSQL, and Azure/AWS/Cloudera.


Our team is responsible for building out data pipelines and processes for moving data from source systems to the new data warehouse.

We are looking for someone who can self-research and work autonomously in this position, rather than depending on others to perform their work. They would work with product owners (data scientists), the enterprise architecture team, and data engineers.


As a Senior Software Engineer on the Data Platform & Engineering Services team, you’ll hold a valued role within a rapidly growing team inside one of the world’s most successful organizations, working closely with experienced and passionate engineers to solve customer problems. You will be partnering with the data engineering teams, so the ability to influence and provide operational guidance is key. Initially, the developer’s focus will be on contributing to the development of operational tools and practices that help maintain service availability across hosted and cloud-based infrastructure. You must understand the full stack and how systems are built, and have a grasp of operational best practices.


Role:

As a member of the Unified Data Acquisition and Processing (UDAP) platform team, you will be responsible for building tools and systems that deploy and scale our applications and data in hybrid environments, both cloud and physical. We provide the platform that helps teams across multiple programs build, test, deploy, and host hundreds of data pipelines across several global data centers, along with enterprise logging, monitoring, and vulnerability detection.


All about You/Experience:

  • Experience with data warehouse-related projects in a product- or service-based organization
  • Experience solving for scalability, performance, and stability
  • Experience in a programming language such as Java, Scala, or Python
  • Experience working with SQL and relational databases
  • Operational experience with big data stacks (Spark and the Hadoop ecosystem)
  • Expert knowledge of Linux operating systems, environments, and scripting
  • A deep expertise in your field of Software Engineering
  • Expert at troubleshooting complex system and application stacks
  • Operational experience with Elasticsearch (the ELK stack) would be a plus
  • Operational experience troubleshooting network/server communication is a plus
  • Motivation, creativity, self-direction, and desire to thrive on small project teams
  • Strong written and verbal English communication skills


Company Information