Work Authorization: US Citizen, Green Card, EAD (OPT/CPT/GC/H4), H1B
Tax Terms: Corp-to-Corp, W2-Permanent, W2-Contract, Contract to Hire, Consulting/Contract
UG (Undergraduate Degree): Not Required
PG (Postgraduate Degree): Not Required
No. of Positions: 1
Posted: 29th Jan 2025
From: Mohit, Spar (mohit.n@sparinfosys.com)
Reply to: mohit.n@sparinfosys.com
Title: Data Engineer with Snowflake
Location: Remote
Duration: Long Term
Best Rate: $53/hr on C2C
JD:
Data Science Engineer with Snowflake and Production Support Experience
Key Responsibilities:
Data Analysis & Insights Generation:
• Analyze and interpret large datasets from both on-premise and cloud environments (Snowflake, Teradata, SQL Server) to extract meaningful insights for system optimization and business decisions.
• Work with stakeholders to provide actionable recommendations based on data analysis results.
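For a concrete flavor of the analysis work described above, here is a minimal sketch using the snowflake-connector-python client; the credentials, warehouse, and orders table are hypothetical placeholders, not the employer's actual environment:

import snowflake.connector

# Connection details below are illustrative assumptions, not real credentials.
conn = snowflake.connector.connect(
    user="ANALYST",
    password="***",
    account="myorg-myaccount",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # Aggregate server-side so only the summary, not the raw fact table, crosses the wire.
    cur.execute("""
        SELECT region,
               DATE_TRUNC('month', order_ts) AS month,
               SUM(amount) AS revenue,
               COUNT(*)    AS orders
        FROM orders
        GROUP BY region, month
        ORDER BY month
    """)
    df = cur.fetch_pandas_all()  # needs the pandas/pyarrow extras; returns a DataFrame
    print(df.head())
finally:
    conn.close()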
Production System Monitoring & Maintenance:
• Continuously monitor the performance, stability, and data integrity of production systems in both cloud (Snowflake, Kafka) and on-premise environments (SQL Server, Teradata, Hadoop).
• Troubleshoot and resolve system performance issues, data discrepancies, and application errors to ensure seamless operations.
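One hedged example of such a monitoring check, polling Snowflake's ACCOUNT_USAGE.QUERY_HISTORY view for failed or long-running queries; the threshold and the print-based "alert" stand in for a real paging integration:

import snowflake.connector

ELAPSED_MS_THRESHOLD = 10 * 60 * 1000  # flag anything running longer than 10 minutes

# Hypothetical monitoring account; ACCOUNT_USAGE requires appropriate privileges.
conn = snowflake.connector.connect(user="MONITOR", password="***", account="myorg-myaccount")
cur = conn.cursor()
cur.execute("""
    SELECT query_id, user_name, execution_status, total_elapsed_time, error_message
    FROM snowflake.account_usage.query_history
    WHERE start_time >= DATEADD('hour', -1, CURRENT_TIMESTAMP())
      AND (execution_status = 'FAILED' OR total_elapsed_time > %s)
    ORDER BY start_time DESC
""", (ELAPSED_MS_THRESHOLD,))
for query_id, user, status, elapsed_ms, error in cur.fetchall():
    # A production version would page an on-call channel instead of printing.
    print(f"ALERT {query_id}: user={user} status={status} elapsed={elapsed_ms}ms error={error}")
conn.close()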
ETL & Data Pipeline Management:
• Develop, maintain, and optimize ETL processes using Spark, Hadoop, and other big data technologies to ensure efficient and timely data movement across platforms.
• Implement and enhance data processing workflows to support complex data transformations and integrations across multiple systems.
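As a rough illustration of the extract-transform-load pattern these bullets describe, a small PySpark job; the JDBC URL, table names, and S3 path are assumptions for the sketch, not a known pipeline:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: pull the source table from an on-premise SQL Server over JDBC.
orders = (spark.read.format("jdbc")
          .option("url", "jdbc:sqlserver://onprem-host:1433;databaseName=sales")
          .option("dbtable", "dbo.orders")
          .option("user", "etl_user")
          .option("password", "***")
          .load())

# Transform: derive a clean daily aggregate from raw order events.
daily = (orders
         .withColumn("order_date", F.to_date("order_ts"))
         .groupBy("order_date", "region")
         .agg(F.sum("amount").alias("revenue"),
              F.count("*").alias("order_count")))

# Load: land the result in cloud storage for downstream ingestion (e.g., Snowflake COPY).
daily.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_orders/")

spark.stop()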
Application & Service Support:
• Provide production support for enterprise applications including WebSphere, PEGA, and Kafka, ensuring minimal downtime and rapid resolution of service disruptions.
• Collaborate with development teams to resolve issues in application stacks such as .NET, Java, and Angular, maintaining system stability and performance.
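One small, read-only example of Kafka production support: checking consumer-group lag with the kafka-python client. The broker address, topic, group name, and threshold are illustrative assumptions:

from kafka import KafkaConsumer, TopicPartition

TOPIC, GROUP = "payments-events", "payments-app"   # hypothetical topic and consumer group

consumer = KafkaConsumer(
    bootstrap_servers="broker.internal:9092",
    group_id=GROUP,
    enable_auto_commit=False,  # read-only check; never moves the group's offsets
)
partitions = [TopicPartition(TOPIC, p) for p in consumer.partitions_for_topic(TOPIC)]
end_offsets = consumer.end_offsets(partitions)     # latest offset per partition

for tp in partitions:
    committed = consumer.committed(tp) or 0        # last offset the group has committed
    lag = end_offsets[tp] - committed
    if lag > 10_000:                               # arbitrary illustrative threshold
        print(f"WARN {TOPIC}[{tp.partition}]: lag={lag}, consumers may be stuck")
consumer.close()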
Performance Optimization & Query Tuning:
• Optimize queries and improve performance for large-scale data processing in Teradata, Snowflake, and SQL Server.
• Enhance the efficiency of distributed data tasks and computation within Spark and Hadoop environments.
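A small sketch of one common tuning step in Snowflake: comparing the plan of a candidate rewrite with EXPLAIN before promoting it. The queries and connection details are hypothetical; the point is that a sargable date range allows partition pruning where a function applied to the column does not:

import snowflake.connector

SLOW = """
SELECT c.region, SUM(o.amount)
FROM orders o JOIN customers c ON o.customer_id = c.id
WHERE TO_CHAR(o.order_ts, 'YYYY-MM') = '2025-01'  -- function on the column defeats pruning
GROUP BY c.region
"""
FAST = """
SELECT c.region, SUM(o.amount)
FROM orders o JOIN customers c ON o.customer_id = c.id
WHERE o.order_ts >= '2025-01-01' AND o.order_ts < '2025-02-01'  -- range predicate prunes partitions
GROUP BY c.region
"""

conn = snowflake.connector.connect(user="TUNER", password="***", account="myorg-myaccount")
cur = conn.cursor()
for label, sql in (("slow", SLOW), ("fast", FAST)):
    cur.execute("EXPLAIN " + sql)  # returns the logical plan without executing the query
    print(f"--- {label} plan ---")
    for row in cur.fetchall():
        print(row)
conn.close()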
Data Integration & Automation:
• Manage and automate data integration tasks between different environments (on-premise and cloud) using tools like Kafka and FTP.
• Ensure smooth data transfers, monitor batch jobs, and implement automation for data processing and system alerts.
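A hedged sketch of the integration pattern above: reading changed rows from an on-premise SQL Server and publishing them to Kafka for cloud-side consumers. The table, topic, watermark, and connection strings are all illustrative assumptions:

import json
import pyodbc                       # on-premise SQL Server access
from kafka import KafkaProducer

last_watermark = "2025-01-28T00:00:00"  # hypothetical high-water mark from the previous run

producer = KafkaProducer(
    bootstrap_servers="broker.internal:9092",
    value_serializer=lambda v: json.dumps(v, default=str).encode("utf-8"),
    acks="all",                     # wait for full replication before confirming a send
)

src = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                     "SERVER=onprem-host;DATABASE=sales;UID=etl;PWD=***")
cur = src.cursor()
cur.execute("SELECT id, status, updated_at FROM dbo.orders WHERE updated_at > ?",
            last_watermark)

for row in cur.fetchall():
    record = {"id": row.id, "status": row.status, "updated_at": row.updated_at}
    # Key by order id so updates to the same order stay in one partition, in order.
    producer.send("orders-changes", value=record, key=str(row.id).encode())

producer.flush()                    # block until every queued message is acknowledged
src.close()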
Security & Compliance:
• Ensure data handling, transfer protocols, and storage meet organizational security standards and compliance regulations, including secure file transfer (e.g., SFTP in place of plain FTP) and secure communication channels.
• Apply best practices in data governance and privacy in both cloud and on-prem environments.
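Where plain FTP appears in a pipeline, one conventional hardening step is to move the transfer to SFTP. A minimal paramiko sketch, with the host, key path, and file names as placeholder assumptions:

import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()                               # verify the server's identity
client.set_missing_host_key_policy(paramiko.RejectPolicy())  # fail closed on unknown hosts
client.connect("sftp.partner.example.com", port=22,
               username="feeds",
               key_filename="/etc/keys/feeds_ed25519")       # key-based auth, no password on the wire

sftp = client.open_sftp()
sftp.put("/data/outbound/daily_orders.csv.gz", "/inbound/daily_orders.csv.gz")
sftp.close()
client.close()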
Documentation & Reporting:
• Document data processes, system configurations, and troubleshooting steps to create a knowledge repository.
• Provide detailed reports on system performance, issue resolution, and recommendations for future enhancements.
Collaboration & Stakeholder Communication:
• Work closely with cross-functional teams, including DevOps, engineering, and business analysts, to ensure data solutions align with overall system requirements.
• Communicate technical findings clearly to non-technical stakeholders to support informed decision-making.