- US Citizen
- EAD (OPT/CPT/GC/H4)
- H1B Work Permit
- Corp-Corp
- W2-Permanent
- W2-Contract
- Contract to Hire
- UG: Not Required
- PG: Not Required
- No. of positions: 1
- Posted: 29th Aug 2025
- Design, develop, and maintain robust and scalable ETL workflows and data pipelines using tools like Hive, Spark, and Airflow.
- Implement and manage data storage and processing solutions using Apache Hudi and BigQuery.
- Develop and optimize data pipelines for structured and unstructured data in GCP environments, leveraging GCS for data storage.
- Write clean, maintainable, and efficient code in Scala and Python to process and transform data.
- Ensure data quality, integrity, and consistency by implementing appropriate data validation and monitoring techniques.
- Work with cross-functional teams to understand business requirements and deliver data solutions that drive insights and decision-making.
- Troubleshoot and resolve performance and scalability issues in data processing and pipelines.
- Stay current with developments in big data technologies and tools, incorporating them into the workflow as appropriate.
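To illustrate the data-quality duty above (validating records before they flow downstream), here is a minimal sketch in Python. The function name, field names, and validation rules are hypothetical examples, not part of this role's actual codebase; a real pipeline would typically enforce such checks inside Spark or at the warehouse boundary.

```python
# Hypothetical sketch: a lightweight record-level validation step of the
# kind a pipeline might run before loading data into a warehouse table.
# All names and rules here are illustrative, not from the posting.

def validate_records(records, required_fields):
    """Split records into (valid, rejected) lists.

    A record is rejected if any required field is missing, None, or empty;
    rejected entries carry the list of offending fields for monitoring.
    """
    valid, rejected = [], []
    for rec in records:
        missing = [f for f in required_fields if rec.get(f) in (None, "")]
        if missing:
            rejected.append({"record": rec, "missing": missing})
        else:
            valid.append(rec)
    return valid, rejected

if __name__ == "__main__":
    rows = [
        {"id": 1, "event": "click", "ts": "2025-08-29T10:00:00Z"},
        {"id": 2, "event": None, "ts": "2025-08-29T10:01:00Z"},
    ]
    ok, bad = validate_records(rows, ["id", "event", "ts"])
    print(len(ok), len(bad))  # one valid record, one rejected
```

Routing rejects to a quarantine table rather than dropping them silently is a common choice, since it keeps the pipeline observable without blocking the load.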