Required Skills

Data Engineer

Work Authorization

  • US Citizen

  • Green Card

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 8th Jan 2021

JOB DETAIL

Title: Sr. Data Engineer

Address: 100% Remote (the company has offices in LA and Atlanta)

Type: Contract, 12 months with extension opportunity

Seeking Green Card holders or US Citizens.

Project Details: You will work closely with Data Scientists, the engineering team, business stakeholders, and others to build out the data infrastructure for the company's mission-driven project. They are currently standing up Kafka as their primary data processing engine; because they handle a large volume of bank transactions, this will allow them to scale properly. They are also gradually building out a data lake using the AWS data ecosystem (Glue, Kinesis, S3, Lambda, EMR, Redshift, etc.).
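For a concrete sense of the streaming work described above, here is a minimal, illustrative sketch of landing Kafka events into S3 for downstream AWS processing. The broker address, topic, and bucket names are hypothetical placeholders, not the company's actual configuration.

```python
# Illustrative only: consume transaction events from Kafka and land them in S3
# as newline-delimited JSON batches, where Glue/EMR jobs could pick them up.
# Topic, bucket, and broker address below are hypothetical placeholders.
import json
import time
import uuid

import boto3
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "transactions",                      # hypothetical topic name
    bootstrap_servers="localhost:9092",  # placeholder broker address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)
s3 = boto3.client("s3")

batch, batch_start = [], time.time()
for message in consumer:
    batch.append(message.value)
    # Flush a batch to S3 every 500 records or 60 seconds, whichever comes first.
    if len(batch) >= 500 or time.time() - batch_start > 60:
        key = f"raw/transactions/{uuid.uuid4()}.json"
        s3.put_object(
            Bucket="example-data-lake",  # placeholder bucket name
            Key=key,
            Body="\n".join(json.dumps(r) for r in batch).encode("utf-8"),
        )
        batch, batch_start = [], time.time()
```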

Duties:

  • Work across all phases of the software development lifecycle in a cross-functional, agile development team setting
  • Collaborate with data scientists and analysts to prepare complex data sets that can be used to solve difficult problems
  • Administer, maintain, and improve data infrastructure and data processing pipelines, including ETL jobs, event processing, and job monitoring and alerting
  • Deliver high-quality, well-tested technical solutions that make sense for the problem at hand
  • Fearlessly work across components, services, and concerns to deliver business value
  • Partner with engineers, data scientists, and the CDO to define and refine our data architecture and technology choices
  • Help define, implement, and reinforce data engineering best practices and processes

Tech Stack: AWS (Sagemaker), Databricks, MLFlow, ETL (AWS Glue or Airflow), SQL (Snowflake)

Team: 2-3 analysts, 2-3 engineers, 2-3 scientists, CDO

Requirements

  • Machine learning experience as a Data Engineer within the AWS ecosystem (Glue, Kinesis, S3, Lambda, EMR, Redshift, etc.)
  • Experience with ingesting, processing, and transforming data at scale
  • Experience working alongside a data science team, understanding its technologies, building pipelines, and managing modeling environments
  • Strong communication skills and the ability to work autonomously

Nice to haves:

  • Experience working in startups
  • Experience working with event-driven and/or streaming workflows using Kafka and Spark
  • Bachelor's or Master's degree in Computer Science, or equivalent experience
  • Experience working alongside a data science team and/or an understanding of the technologies and processes of data science, data modeling, and/or machine learning

Regards

Satya
