Required Skills

Data Engineer

Work Authorization

  • US Citizen

  • Green Card

Preferred Employment

  • Corp-Corp

  • W2-Contract

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 26th Jan 2023

JOB DETAIL

  • Assesses requirements, use cases and customer needs to make recommendations on Data engineering, data quality and data analytics-related technical tools and services as part of solutions and pilots
  • Serves as a key resource in developing key DataOps and Data engineering pipelines, applications, and programs in support of broader strategy and customer needs
  • Solves complex problems that arise in vetting new technology and develops innovative solutions related to DataOps, CI/CD integration, and automation
  • Focuses on quality of work and continuous improvement for developing scalable code using modern tools, applications, and services in the AWS cloud
  • Provides clear explanations and recommendations to others on complex issues, including working with vendors as applicable on debugging, troubleshooting, or research
  • Adheres to deadlines and delivers on assigned tasks, ensuring transparency about challenges and proactively identifying mitigations and solutions so tasks are completed on time
  • Reviews work performance by others and provides recommendations for improvement
  • Collaborates with organizational leadership to understand strategy and needs to ensure recommendations and results align with expectations
  • Collects input from other team members as well as internal and external stakeholders to fold into implementation or develop new ideas for delivering results

Posting Job Qualifications:

  • 7+ years of data engineering and data operations experience in AWS, and preferably in Azure as well, including building systems that collect, manage, and convert raw data into usable information for data scientists and business analysts to interpret
  • Experience with Spark-based data engineering tools such as Databricks, RStudio, Jupyter, and AWS SageMaker, and with AWS serverless services such as Lambda, AWS Glue, and EMR
  • Experience automating DataOps jobs and data pipelines using orchestration tools such as Airflow, Databricks, or equivalent AWS services
  • Experience managing streaming data with Kafka
  • Experience with cloud-native data warehouses such as Snowflake, Redshift, Amazon RDS, and Delta Lake
  • Experience with Data cataloging tools such as AWS Glue, Unity Catalog, Alation or equivalent
  • Fluency in developing complex data engineering pipelines using languages and tools such as Python, Bash shell, Spark, R, Lambda, Scala, and Hive, and in integrating with AWS services to monitor pipelines
  • Experience working in multiple cloud environments, especially AWS and Azure and familiarity with cloud-native tools and services
  • Experience in operationalizing data pipeline application log management, monitoring, debugging and notification services
  • Experience with data access management, authentication, and authorization concepts and their implementation for data management tools such as Snowflake, Delta Lake, S3, and AWS RDS; knowledge of Privacera and/or open-source Apache Ranger
  • Experience with team collaboration tools such as JIRA and agile methodology
  • Hands-on knowledge of Bash shell scripting and of configuration formats such as YAML and JSON
  • Experience with tools and processes for data profiling

  • Fluency in using REST API-based technologies as data sources

  • Must have working experience using technology and/or developing code to manage data quality
  • Experience with various data formats including Parquet, CSV, TSV, JSON, and PDF

Company Information