Required Skills

Data pipelines, ‘big data’, PySpark, Agile, Python, metadata, Redshift, Glue

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 21st Nov 2020

JOB DETAIL

  • Collaborate daily with the product team, including pairing for all aspects of software delivery.
  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies (see the sketch below).
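
For context, the sketch below illustrates the kind of PySpark ETL step described above: reading raw data, applying a SQL transformation, and writing a warehouse-friendly output. It is a minimal, illustrative example only; the bucket paths, table name, and column names are placeholders, not part of this posting.

    # Minimal PySpark ETL sketch (illustrative; paths, table and column
    # names are hypothetical placeholders, not taken from the posting).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Extract: read raw JSON events from an S3 landing zone (placeholder path).
    raw = spark.read.json("s3://example-landing-zone/orders/")

    # Transform: express the business logic in SQL, as the posting emphasizes.
    raw.createOrReplaceTempView("orders_raw")
    daily_totals = spark.sql("""
        SELECT order_date,
               customer_id,
               SUM(order_amount) AS total_amount
        FROM orders_raw
        GROUP BY order_date, customer_id
    """)

    # Load: write partitioned Parquet to a curated zone for downstream use
    # (e.g., loading into Redshift via COPY or querying through Spectrum).
    (daily_totals
        .withColumn("load_ts", F.current_timestamp())
        .write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-zone/orders_daily/"))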

Minimum Qualifications:

  • Bachelor's Degree in Computer Science or job-related discipline or equivalent experience
  • 5 years of experience with software delivery
  • Experience delivering products using Agile/Scrum methodologies
  • Experience with PySpark
  • Advanced working knowledge of SQL, including query authoring, experience with relational databases, and working familiarity with a variety of database systems.
  • Experience with ETL flows using Python
  • Experience building and optimizing ‘big data’ pipelines, architectures, and data sets.
  • Strong analytic skills related to working with unstructured datasets.
  • Experience building processes supporting data transformation, data structures, metadata, dependency management, and workload management.
  • Proficiency with the following tools, enabling the candidate to contribute autonomously: Glue, Kafka, Redshift (with a focus on infrastructure-as-code), and Python (see the sketch after this list).
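
The infrastructure-as-code emphasis above typically means pipeline resources are provisioned from scripts rather than the console. The hedged boto3 sketch below shows one way a Glue job for a PySpark script might be defined programmatically; the job name, IAM role ARN, and script location are assumptions for illustration only.

    # Hedged example: defining an AWS Glue job in code with boto3 rather than
    # through the console. Job name, role ARN, and S3 paths are placeholders.
    import boto3

    glue = boto3.client("glue", region_name="us-east-1")

    glue.create_job(
        Name="orders-daily-etl",                             # hypothetical job name
        Role="arn:aws:iam::123456789012:role/GlueETLRole",   # placeholder role ARN
        Command={
            "Name": "glueetl",                               # Spark-based Glue job type
            "ScriptLocation": "s3://example-artifacts/jobs/orders_etl.py",
            "PythonVersion": "3",
        },
        GlueVersion="3.0",
        WorkerType="G.1X",
        NumberOfWorkers=2,
        DefaultArguments={
            "--TempDir": "s3://example-artifacts/tmp/",
            "--job-language": "python",
        },
    )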

Desired Qualifications

 

  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • AWS DevOps skills
  • Humble – is open to being coached, has high Emotional Quotient (EQ) and is self-aware
  • Hungry – desires to get things done while honoring people, seeks better ways to do the job, and is highly motivated by the significant impact this work will have
  • Collaborative – has strong interpersonal skills; demonstrates empathy with teammates and stakeholders, cares about and works well with teammates
  • Willingness to make an impact beyond the defined role
  • Experience with data & analytics product development

 

Please share a copy of your driver's license and visa along with your updated resume.

Candidate Details:

Full Name (as per Social Security Card):

 

Current location

 

Are you willing to relocate

 

Primary Contact number

 

Secondary contact number (If any)

 

Total experience (number of years)

 

Hourly Rate/Salary ($/HR on W2/C2C/1099)

 

Email address

 

Skype ID

 

LinkedIn

 

Visa status

 

Passport Number (If required for client submission)

 

Last 4 digits of SSN (if required for client submission)

 

Expiry date of visa status

 

Employer details

Name of the company:

Name of the Recruiter:

Contact number:

Email ID:

Currently working (Yes/No)? If yes, reason for the job change:

 

Availability to start if the client offers the job (Immediate / 2 weeks upon confirmation)

 

Availability for Telephone Interview and lead time needed:

 

Availability for In-Person Interview and lead time needed:

 

Availability to start:

 

Any other interviews lined up or offers in hand

 

Educational Details:

 

3 Professional References

1-Name:

   Title:

   Company:

   Phone number:

   Email Address:

2-Name:

   Title:

   Company:

   Phone number:

   Email Address:

3-Name:

   Title:

   Company:

   Phone number:

   Email Address:

Company Information