Required Skills

Data Engineer

Work Authorization

  • US Citizen

  • Green Card

Preferred Employment

  • Corp-Corp

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 3rd Feb 2023

JOB DETAIL

  • Evaluate, develop, maintain, and test data engineering solutions for Data Lake and advanced analytics projects.
  • Implement processes and logic to extract, transform, and distribute data across one or more data stores from a wide variety of sources.
  • Distil business requirements and translate them into technical solutions for data systems, including data warehouses, cubes, marts, lakes, ETL integrations, BI tools, and other components.
  • Create and support data pipelines built on AWS technologies including Glue, Redshift, EMR, Kinesis, and Athena.
  • Participate in deep architectural discussions to build confidence and ensure customer success when building new solutions and migrating existing data applications to the AWS platform.
  • Optimize the data integration platform to maintain performance under increasing data volumes.
  • Support the data architecture and data governance functions as they continually expand their capabilities.
  • Experience developing solution architecture for enterprise data lakes (applicable to AM/Manager-level candidates).
  • Should have exposure to client-facing roles.
  • Strong communication, interpersonal, and team management skills.


Requirements:

  • Proficient in an object-oriented or functional scripting language such as Java, Python, or Node.js.
  • Experience using AWS SDKs to build data pipelines for ingestion, processing, and orchestration.
  • Hands-on experience working with big data in an AWS environment, including cleaning, transforming, cataloguing, and mapping data.
  • Good understanding of core AWS components, including storage (S3) and compute (EC2) services.
  • Hands-on experience with AWS managed services (Redshift, Lambda, Athena) and ETL (Glue).
  • Experience migrating data from on-premises sources (e.g. Oracle, API-based, data extracts) into AWS storage (S3).
  • Experience setting up a data warehouse with Amazon Redshift, creating Redshift clusters, and running data analysis queries.
  • Experience in ETL and data modelling with AWS ecosystem components such as AWS Glue, Redshift, and DynamoDB.
  • Experience setting up AWS Glue to prepare data for analysis through automated ETL processes.
  • Familiarity with AWS data migration tools such as AWS DMS, Amazon EMR, and AWS Data Pipeline.
  • Hands-on experience with the AWS CLI, Linux tools, and shell scripts.
  • AWS certifications are a plus.

Company Information