Required Skills

ETL, Scala, Hadoop, Python

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 1st Feb 2024

JOB DETAIL

Responsibilities:

  • Design, implement, and maintain scalable and robust data processing pipelines on Google Cloud Platform (GCP).

  • Collaborate with cross-functional teams to understand data requirements and develop solutions that meet business needs.

  • Apply expertise in Big Data technologies, including Hive, Kafka, Scala, Hadoop, Python, and Apache Airflow, to architect and implement data solutions (a minimal pipeline sketch follows this list).

  • Ensure data quality, integrity, and security across all stages of the data lifecycle.

  • Optimize and troubleshoot performance bottlenecks in data processing workflows.

  • Stay abreast of industry trends and emerging technologies to continuously enhance our data engineering capabilities.
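
For illustration only, below is a minimal sketch of the kind of pipeline this role describes: a daily ETL flow orchestrated with Apache Airflow in Python. The DAG name, task names, and callables are hypothetical placeholders rather than details from this posting; a production pipeline on GCP would typically replace the print stubs with provider operators (for example, BigQuery or Dataproc operators).

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract_events(**context):
        # Placeholder: pull raw events from a source system (e.g., a Kafka topic).
        print("extracting raw events")


    def transform_events(**context):
        # Placeholder: clean and aggregate the extracted events (e.g., with Hive or Scala/Spark jobs).
        print("transforming events")


    def load_to_warehouse(**context):
        # Placeholder: load curated data into the warehouse (e.g., BigQuery on GCP).
        print("loading curated data")


    with DAG(
        dag_id="example_gcp_etl",        # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",      # run the pipeline once per day
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_events)
        transform = PythonOperator(task_id="transform", python_callable=transform_events)
        load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

        extract >> transform >> load     # linear extract -> transform -> load dependency chain
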
Requirements:

  • Bachelor's degree in Computer Science, Information Technology, or a related field.

  • Proven experience as a Big Data Engineer with a focus on Google Cloud Platform.

  • Strong proficiency in Big Data technologies such as Hive, Kafka, Scala, Hadoop, Python, and Apache Airflow.

  • Hands-on experience in designing and implementing scalable and efficient data processing pipelines.

  • Solid understanding of data modeling, ETL processes, and data warehousing concepts.

  • Excellent programming and scripting skills, with a focus on Python and Scala.

  • Familiarity with cloud-native architectures and services on GCP.

  • Ability to work collaboratively in a fast-paced and agile environment.

  • Strong problem-solving and communication skills.

  • Relevant certifications in Google Cloud Platform and Big Data technologies are a plus.
