Required Skills

Spark, Hadoop, Hive, Kafka

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H-1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 5th Feb 2025

Job Detail

  • Design, build, and maintain scalable data pipelines to support business intelligence, analytics, and machine learning initiatives.
  • Develop and optimize ETL/ELT processes using tools such as Apache Spark, Apache Kafka, and Airflow.
  • Work with AWS, GCP, or Azure cloud platforms to manage data storage, processing, and retrieval.
  • Implement real-time and batch data processing solutions to enable high-performance data access.
  • Collaborate with data scientists, analysts, and software engineers to ensure smooth data integration and accessibility.
  • Maintain and improve data quality, governance, and security by implementing best practices and compliance policies.
  • Design and optimize data warehouse solutions using technologies like Snowflake, Redshift, BigQuery, or Databricks.
  • Develop and maintain APIs for data access and integration with third-party applications.

Company Information