Required Skills

ETL, Data, Hadoop Cluster, Big Data technologies

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 15th Sep 2023

JOB DETAIL

  • Build a highly functional and efficient Big Data platform that brings together data from disparate sources and allows FinThrive to design and run complex algorithms providing insights to Healthcare business operations.
  • Build ETL data pipelines in the Azure Cloud with Azure ADF and Databricks, using PySpark and Scala.
  • Migrate ETL data pipelines from an on-premises Hadoop cluster to the Azure Cloud.
  • Build data ingestion pipelines in Azure to pull data from SQL Server.
  • Perform automated and regression testing.
  • Partner with internal business, product and technical teams to analyze complex requirements and deliver solutions.
  • Participate in development, automation and maintenance of application code to ensure consistency, quality, reliability, scalability and system performance.
  • Deliver data and software solutions working on Agile delivery teams.

Requirements:
  • Bachelor's degree in Computer Science or a related discipline
  • 6+ years of data engineering in an enterprise environment
  • 6+ years of experience writing production code in Python, PySpark or Scala
  • Strong knowledge of the Azure platform; should have worked in Azure ADF, deployed ADF and Databricks code to production, and be able to troubleshoot production issues.
  • Experience with SQL.
  • Experience with Big Data technologies in Azure such as Spark, Hive, Sqoop, Databricks, or equivalent components.
  • Experience working with git and CI/CD tools
  • Proven background in Distributed Computing, ETL development, and large-scale data processing
  • Travel: None.

Preferred Skills:

  • Healthcare experience preferred
  • Proficiency in SQL and query optimization
  • Proficiency in Linux and Bash shell scripting
  • Experience with Azure ADF, Azure Databricks, Terraform templates, and automated ADF pipelines.
  • Experience migrating applications from an on-premises Hadoop cluster to the cloud.
  • Experience with SQL Server.
  • Knowledge of and passion for software development, including software architecture and functional and non-functional aspects
  • Any background in ETL tools such as Ab-Initio or Data Stage

Company Information