Required Skills

Data Engineer

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 7th Jan 2026

Job Details

  • Looking for a Senior Cloud/Data Engineer with expertise in Python, Spark, PySpark and SQL.
  • Need someone who is an expert in Snowflake (mainly administration).
  • Need someone with expertise in AWS Services such as Lambda, S3, EC2, CloudWatch, SSM, and EMR.
  • Candidates with previous mortgage/financial industry experience will be preferred.

• Develop data filtering, transformation, and loading requirements

• Define and execute ETLs using Apache Spark on Hadoop, among other data technologies

• Determine appropriate translations and validations between source data and target databases

• Implement business logic to cleanse and transform data

• Design and implement appropriate error handling procedures

• Develop project, documentation, and storage standards in conjunction with data architects

• Monitor performance, troubleshoot, and tune ETL processes as appropriate using tools in the AWS ecosystem

• Create and automate ETL mappings to consume loan-level data from source applications into target applications

• Execute the end-to-end implementation of the underlying data ingestion workflow

Qualifications

• 10+ years of overall experience, including 5+ years developing in Python and SQL (Postgres/Snowflake preferred). Strong SQL experience is preferred.

• Bachelor’s degree or equivalent work experience in computer science, data science, or a related field.

• Experience working with different databases and an understanding of data concepts (including data warehousing, data lake patterns, and structured and unstructured data)

• 3+ years’ experience implementing data storage/big data platforms, with a preference for hands-on experience implementing and performance-tuning Hadoop/Spark.

• Implementation and tuning experience, specifically using Amazon Elastic MapReduce (EMR).

• Experience implementing AWS services in a variety of distributed computing and enterprise environments.

• Experience writing automated unit, integration, regression, performance and acceptance tests. 

• Solid understanding of software design principles

Top Personal Competencies to Possess

• Seek and Embrace Change – Continuously improve work processes rather than accepting the status quo

• Growth and Development – Know or learn what is needed to deliver results and successfully compete

Preferred Skills

• Strong SQL skills with a solid understanding of Snowflake.

• Understanding of Apache Hadoop and the Hadoop ecosystem. Experience with one or more relevant tools (Sqoop, Flume, Kafka, Oozie, Hue, ZooKeeper, HCatalog, Solr, Avro).

• Deep knowledge of Extract, Transform, Load (ETL) and distributed processing techniques such as MapReduce

• Experience with columnar databases such as Snowflake and Redshift

• Experience in building and deploying applications in AWS (EC2, S3, Hive, Glue, EMR, RDS, ELB, Lambda, etc.)

• Experience with building production web services

• Experience with cloud computing and storage services

• Knowledge of Mortgage industry

Company Information