Required Skills

PySpark, Python, AWS, Snowflake

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 5th Jan 2024

JOB DETAIL

10+ years of relevant experience delivering data solutions on a variety of data warehousing, big data, and cloud data platforms.
Experience implementing slowly changing dimension (SCD) techniques (Type 1, Type 2, and Type 3).
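As a sketch of the SCD Type 2 pattern referenced above, the following pure-Python illustration (with hypothetical column names such as `cust_id` and `city`) expires the current dimension row and inserts a new version when a tracked attribute changes; a production implementation would typically use a `MERGE` statement in Snowflake or a Spark job instead:

```python
from datetime import date

def scd2_upsert(dim_rows, incoming, key, tracked, today):
    """Apply an SCD Type 2 change: expire the current row and insert a new version."""
    out = []
    matched = False
    for row in dim_rows:
        if row[key] == incoming[key] and row["is_current"]:
            matched = True
            if any(row[c] != incoming[c] for c in tracked):
                # Close out the old version, then add the new current version.
                out.append(dict(row, is_current=False, end_date=today))
                out.append(dict(incoming, is_current=True,
                                start_date=today, end_date=None))
            else:
                out.append(row)  # no change in tracked columns
        else:
            out.append(row)
    if not matched:
        # New key: insert as the first (current) version.
        out.append(dict(incoming, is_current=True, start_date=today, end_date=None))
    return out

dim = [{"cust_id": 1, "city": "Austin", "is_current": True,
        "start_date": date(2023, 1, 1), "end_date": None}]
dim = scd2_upsert(dim, {"cust_id": 1, "city": "Dallas"},
                  "cust_id", ["city"], date(2024, 1, 5))
```

After the call, the dimension holds two versions of customer 1: the expired Austin row and the current Dallas row.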
Experience working with distributed data technologies (e.g., Spark, Kafka) to build efficient, large-scale big data pipelines.
Strong software engineering experience with proficiency in at least one of the following programming languages: PySpark, Python, Scala, or equivalent.
Experience working with AWS S3 and the Snowflake cloud data warehouse.
Experience transforming and integrating data in Redshift/Snowflake.
Experience handling large and complex data sets (JSON, ORC, Parquet, CSV) from sources such as AWS S3.
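A minimal sketch of normalizing the file formats mentioned above into a common row shape, using only the Python standard library (JSON-lines and CSV; ORC and Parquet would need a third-party library such as pyarrow and are out of scope here; all payloads are made-up samples):

```python
import csv
import io
import json

def parse_records(payload, fmt):
    """Normalize a JSON-lines or CSV text payload into a list of dicts."""
    if fmt == "json":
        # One JSON object per line, skipping blanks.
        return [json.loads(line) for line in payload.splitlines() if line.strip()]
    if fmt == "csv":
        # First row is treated as the header.
        return list(csv.DictReader(io.StringIO(payload)))
    raise ValueError(f"unsupported format: {fmt}")

json_rows = parse_records('{"id": 1}\n{"id": 2}', "json")
csv_rows = parse_records("id,city\n1,Austin", "csv")
```

Both calls yield lists of dicts keyed by column name, so downstream transformation code can treat the sources uniformly.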
Good exposure to Snowflake cloud architecture, SnowSQL, and Snowpipe for continuous data ingestion.
Hands-on experience bulk loading and unloading data into Snowflake tables.
Experience writing complex SQL scripts using statistical aggregate and analytical (window) functions to support ETL in the Snowflake cloud data warehouse.
Experience using COPY/INSERT, PUT, and GET commands to load data into Snowflake tables from internal and external stages.
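The PUT/COPY stage-loading workflow can be sketched as command construction; the stage and table names (`my_stage`, `raw_events`) are placeholders, and actually executing these statements would require a live Snowflake connection (e.g., via snowflake-connector-python), which is omitted here:

```python
def put_command(local_path, stage, auto_compress=True):
    # PUT uploads a local file into a Snowflake internal stage.
    return (f"PUT file://{local_path} @{stage} "
            f"AUTO_COMPRESS={'TRUE' if auto_compress else 'FALSE'}")

def copy_into(table, stage, file_format="(TYPE = CSV SKIP_HEADER = 1)"):
    # COPY INTO bulk-loads staged files into the target table.
    return f"COPY INTO {table} FROM @{stage} FILE_FORMAT = {file_format}"

put_sql = put_command("/tmp/events.csv", "my_stage")
copy_sql = copy_into("raw_events", "my_stage")
```

In practice these strings would be passed to a cursor's `execute()` call; a GET command follows the same pattern in the opposite direction, downloading staged files locally.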
Experience with performance tuning of the Snowflake data warehouse using the Query Profile, caching, and virtual warehouse scaling.
Extensive ETL experience covering data sourcing, mapping, transformation, conversion, and loading.
Hands-on experience loading data into Snowflake.
Knowledge of Redshift (good to have).
Knowledge of AWS EC2 and Lambda.
Strong data warehousing experience.

Company Information