Required Skills

Lambda, AWS Glue, Matillion

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 12th Oct 2023

Job Detail

  • Bachelor’s degree in Computer Science, Engineering, Information Systems, Business, or an equivalent field of study is required.
  • 7+ years of experience working with data solutions.
  • 3+ years of experience coding in Python, Scala, or a similar scripting language.
  • 3+ years of experience developing data pipelines at scale on the AWS Cloud Platform (preferred), Azure, or Snowflake.
  • 2+ years of experience designing and implementing data ingestion with real-time data streaming tools such as Kafka, Kinesis, or similar. SAP/Salesforce or other cloud integrations are preferred.
  • 3+ years of experience working with MPP databases such as Snowflake (preferred) or Redshift.
  • 2+ years of experience working with serverless ETL processes (Lambda, AWS Glue, Matillion, or similar).
  • 1+ years of experience with big data technologies such as EMR, Hadoop, Spark, Cassandra, MongoDB, or other open-source big data tools.
  • Knowledge of professional software engineering best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations.
  • Experience designing, documenting, and defending designs for key components in large distributed computing systems.
  • Demonstrated ability to learn new technologies quickly and independently.
  • Demonstrated ability to achieve stretch goals in a very innovative and fast-paced environment.
  • Ability to handle multiple competing priorities in a fast-paced environment.
  • Excellent verbal and written communication skills, especially in technical communications.
  • Strong interpersonal skills and a desire to work collaboratively.

  • Experience participating in an Agile software development team, e.g. Scrum.

 

Job Responsibilities:

  • Responsible for building, deploying, and maintaining critical, scalable data pipelines that assemble large, complex sets of data and meet functional and non-functional business requirements.
  • Work closely with SMEs, data modelers, architects, analysts, and other team members on requirements to build scalable real-time, near-real-time, and batch data solutions.
  • Contribute design, code, configurations, and documentation for components that manage data ingestion, real-time streaming, batch processing, and data extraction, transformation, and loading into a data lake, cloud data warehouse, or MPP platform (Snowflake, Redshift, or similar technologies).
  • Own one or more key components of the infrastructure and work to continually improve them, identifying gaps and improving the platform’s quality, robustness, maintainability, and speed.
  • Cross-train other team members on technologies being developed, while continuously learning new technologies from other team members.
  • Interact with technical teams across the organization and ensure that solutions meet customer requirements in terms of functionality, performance, availability, scalability, and reliability.
  • Perform development, QA, and DevOps roles as needed to ensure end-to-end responsibility for solutions.
  • Keep up with current trends in big data and analytics, evaluate new tools, and stay ready to innovate.

  • Mentor junior engineers and create necessary documentation and runbooks while still delivering on goals.

Company Information