Required Skills

Big Data, PySpark, RDBMS, Hadoop, AWS, SQL

Work Authorization

  • Citizen

Preferred Employment

  • Full Time

Employment Type

  • Direct Hire

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 26th Sep 2022

JOB DETAIL

Roles

Required Skills:

  • At least 4 years of experience designing and developing data pipelines for data ingestion or transformation using Scala or Python
  • At least 4 years of experience with Python and Spark.
  • At least 3 years of experience working with AWS technologies.
  • Experience designing, building, and deploying production-level data pipelines using AWS Glue, Lambda, and Kinesis, with databases such as Aurora and Redshift.
  • Experience with Spark programming (PySpark or Scala).
  • Hands-on experience with AWS components (EMR, S3, Redshift, Lambda, API Gateway, Kinesis) in production environments.
  • Strong analytical skills and advanced SQL knowledge, including indexing and query optimization techniques.
  • Experience using ETL tools for data ingestion.
  • Experience with Change Data Capture (CDC) technologies and relational databases such as MS SQL, Oracle and DB
  • Ability to translate data needs into detailed functional and technical designs for development, testing, and implementation.

  • Notice period: 0–15 days
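As an illustrative sketch of the SQL indexing and query-optimization skills listed above (not part of the posting; the `orders` table and its columns are hypothetical, and Python's built-in sqlite3 module stands in for a production RDBMS), the following self-contained example shows how adding an index changes a query plan from a full table scan to an index search:

```python
import sqlite3

# In-memory database with a hypothetical "orders" table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

query = "SELECT total FROM orders WHERE customer_id = ?"

# Without an index on customer_id, SQLite must scan the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()

# Add an index on the filter column; the planner switches to an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()

# The last column of each plan row is a human-readable "detail" string.
print(plan_before[0][-1])  # mentions a SCAN of orders
print(plan_after[0][-1])   # mentions a SEARCH using idx_orders_customer
```

The same habit (inspecting the plan before and after adding an index on the filtered column) carries over to Redshift and Aurora via their `EXPLAIN` statements.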

Company Information