HDFS, Hive, Oozie, Sqoop, Impala, Presto and Spark; AWS, EMR, S3, Athena; Snowflake; Python, SQL and Shell Scripting; MySQL, Teradata and SQL Server; Oozie, Airflow, CTRL-M; Maven, Jenkins, IntelliJ, GIT
Work Authorization
US Citizen
Green Card
EAD (OPT/CPT/GC/H4)
H1B Work Permit
Preferred Employment
Corp-Corp
W2-Permanent
W2-Contract
Contract to Hire
Employment Type
Consulting/Contract
Education Qualification
UG :- Not Required
PG :- Not Required
Other Information
No. of positions :- 1
Posted :- 10th Sep 2025
JOB DETAIL
Expertise in all components of the Big Data ecosystem: Spark, Hive, Sqoop, Oozie, Impala and Hue.
Extensive experience with AWS services, ensuring efficient data management and analysis.
Successfully led the migration of legacy systems to data warehouse and cloud environments, utilizing Informatica, Snowflake and AWS technologies, resulting in improved scalability, performance, and cost-effectiveness.
Extensively worked with Lambda, S3, CloudWatch, SQS, SNS, Step Functions and IAM roles among AWS services.
Extensive knowledge of data serialization formats such as Avro, SequenceFile, Parquet, JSON and ORC.
Experienced in handling databases: MySQL, SQL Server, Teradata.
Expertise working across all phases of the SDLC: requirements gathering, system design, development, enhancement, maintenance, testing, deployment, production support, and documentation.
Working experience in creating complex data ingestion pipelines, data transformations, data management and data governance in a centralized enterprise data hub (a minimal illustrative sketch follows this list).
Knowledge of Kubernetes ecosystem.
Data analysis and interpretation using Databricks to perform data transformations, run queries, and build visualizations, working with diverse teams to deliver insights on an analytics platform (good to have).
Strong problem-solving skills.
Strong team player with good communication, analytical, presentation and interpersonal skills.
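For context, the sketch below shows the kind of ingestion and transformation pipeline referred to in the requirements, written in PySpark. The S3 paths, table name and column names are hypothetical assumptions for illustration, not the employer's actual pipeline.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical job: land raw CSV from S3, cleanse it, and publish Parquet
# that Hive, Impala, Presto or Athena can query directly.
spark = SparkSession.builder.appName("ingest-orders").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("s3://example-data-lake/raw/orders/"))   # hypothetical bucket/prefix

curated = (raw
           .dropDuplicates(["order_id"])                       # hypothetical key column
           .withColumn("order_date", F.to_date("order_date"))  # normalize the date column
           .withColumn("ingest_ts", F.current_timestamp()))    # add an ingestion timestamp

(curated.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-data-lake/curated/orders/"))    # hypothetical curated zone

Writing partitioned Parquet like this is a common choice because the curated zone stays queryable from Hive, Impala, Presto or Athena without further conversion.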