- US Citizen
- Green Card
- EAD (OPT/CPT/GC/H4)
- H1B Work Permit
- Corp-Corp
- W2-Permanent
- W2-Contract
- Contract to Hire
- UG: Not Required
- PG: Not Required
- No. of positions: 1
- Posted: 18th Jul 2024
Responsibilities:
- Design and develop ETL processes.
- Write SQL, Python, and PySpark programs (a brief sketch of this kind of work follows this list).
- Create simple and complex pipelines using ADF.
- Work with other Azure stack services such as Azure Data Lake and SQL DW.
- Must be well versed in handling large volumes of data.
- Understand the business requirements for data flow processes.
- Understand requirements as well as functional and technical specification documents.
- Develop source-to-target mapping documents and transformation business rules per the scope and requirements.
- Responsible for continuous formal and informal communication on project status.
- Good understanding of the JIRA story process for SQL development activities.
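
To make the responsibilities above more concrete, here is a minimal PySpark sketch of the kind of ETL flow described: read raw data from a lake location, apply source-to-target business rules, and write the result out for downstream use. The paths, container names, and column names are hypothetical placeholders, not details from this posting.

```python
# Minimal ETL sketch (hypothetical paths and columns, not from this posting).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

# Read raw source data (assumed to live in an Azure Data Lake container).
raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/orders/")

# Apply example transformation rules from a source-to-target mapping.
curated = (
    raw
    .filter(F.col("order_status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("customer_id", "order_date")
    .agg(F.sum("order_amount").alias("daily_total"))
)

# Write the curated output for downstream consumption (e.g., SQL DW / Synapse).
curated.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/daily_order_totals/"
)
```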
Required Skills:
- Overall 4+ years of development experience with SQL and Python with Spark (PySpark).
- Experience with Azure Data Factory, datasets, DataFrames, Azure Blob Storage, and Storage Explorer.
- Experience implementing data ingestion pipelines from multiple data sources using ADF and Azure Databricks (ADB).
- Experience creating Data Factory pipelines, custom Azure development and deployment, and troubleshooting data loads/extractions using ADF.
- Extensive experience with SQL, Python, and PySpark in Azure Databricks.
- Able to write Python code in the PySpark framework using DataFrames (see the sketch after this list).
- Good understanding of Agile/Scrum methodologies.
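
As a rough illustration of the PySpark/DataFrame skills listed above, the following self-contained example runs locally (no Azure dependencies); the schema and sample rows are invented purely for demonstration.

```python
# Small local PySpark DataFrame example; data and column names are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.master("local[*]").appName("skills_demo").getOrCreate()

events = spark.createDataFrame(
    [("u1", "click", 3), ("u1", "view", 7), ("u2", "click", 5)],
    ["user_id", "event_type", "count"],
)

# Pivot event counts per user and keep only users with at least one click.
summary = (
    events.groupBy("user_id")
    .pivot("event_type", ["click", "view"])
    .sum("count")
    .na.fill(0)
    .filter(F.col("click") > 0)
)

summary.show()
```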