Required Skills

Data Engineer

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 21st Jan 2023

JOB DETAIL

 

Top Skills: Python, SQL, AWS, Databricks, PySpark, ETL

Preferred: Machine learning tools such as TensorFlow, PyTorch, XGBoost, and scikit-learn; Hadoop/Hive/Spark.

 

Sr. Data Engineer for ADP

 

Contract to Hire, hybrid remote, 3 days onsite in Alpharetta, GA

Basic Skills/Experience

● 8+ years of Data Engineering experience

● Work experience with ETL, Data Modeling, and Data Architecture.

● Experience with Big Data technologies such as Hadoop/Hive/Spark.

● Skilled in writing and optimizing SQL.

● Experience operating very large data warehouses or data lakes.

● Knowledge of Python scripting and PySpark.

Preferred Skills/Experience

● Experience in designing and implementing highly performant data ingestion pipelines from multiple sources using Apache Spark and/or Azure Databricks

● Efficiency in handling data: tracking data lineage, ensuring data quality, and improving the discoverability of data.

● Experience integrating end-to-end data pipelines that move data from source systems to target data repositories while ensuring data quality and consistency are always maintained. Knowledge of engineering and operational excellence using standard methodologies.

● Comfortable using PySpark APIs to perform advanced data transformations.

● Familiarity with implementing classes in Python.

● Hands-on experience with or knowledge of data science modeling.

Summary

Design, build, and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with third-party tools such as Spark, EMR, Redshift, Lambda, and Glue. Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java and Python. Design and implement data engineering, ingestion, and curation functions on AWS cloud using AWS-native services or custom programming.

 

Company Information