Required Skills

ETL, SQL, Python, Airflow, Data Pipelines, Kafka, PySpark, Data Warehousing, Data Modeling, Apache Flink, Apache Beam

Work Authorization

  • Citizen

Preferred Employment

  • Full Time

Employment Type

  • Direct Hire

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 5th Jul 2022

Job Details

Roles and Responsibilities

  1. Develop high-performance, scalable solutions on GCP that extract, transform, and load big data.
  2. Design and build production-grade data solutions, from ingestion to consumption, using Java/Python.
  3. Design and optimize data models on GCP using data stores such as BigQuery.
  4. Optimize data pipelines for performance and cost on large-scale data lakes.
  5. Write complex, highly optimized queries across large data sets and build data processing layers.
  6. Interact closely with Data Engineers to identify the right tools to deliver product features by performing POCs.
  7. Collaborate with business stakeholders, BAs, and other Data/ML engineers as a team player.
  8. Research new use cases.

Desired Candidate Profile

  • Bachelor's degree in computer science, software/computer engineering, mathematics, or equivalent practical experience.
  • 1-3 years of working experience with Java/Python and SQL.
  • 1-3 years of working experience with big data technologies such as Apache Flink, Apache Beam, PySpark, or the Spark framework.
  • Experience with data warehouses or BigQuery.
  • Familiarity with messaging technologies (Kafka) and workflow environments (Airflow).
  • Experience with Agile development methodologies.
  • Excellent communication skills, strong critical thinking skills, and the ability to pick up new technologies quickly and deliver.
  • Google Cloud Professional Data Engineer certification would be a plus.

Company Information