Required Skills

AWS Cloud

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 2nd Jun 2022

JOB DETAIL


The Work Itself
  • Understand and peer-review cloud-centric pipelines built through Infrastructure as Code components

  • Implement open-source, vendor, and cloud-native pipelines via a GitOps model

  • Support Data Science and Analytics teams using Python, R, and Scala code

  • Integrate data product governance tools using APIs and DevOps practices

  • Develop auto-scaling, self-healing, and self-service offerings in the analytics space on AWS

  • Collaborate and partner with counterparts from Security, Enterprise Architecture, and CIO application teams to enable developer agility while ensuring appropriate controls are in place

  • Help facilitate a collaborative development approach that encourages and accepts contributions and emphasizes transparency

  • Lead teams and provide technical thought leadership in all of the areas above
The Skills You Bring
  • 5+ years of technical leadership (tech lead, architect, or manager) in a DevSecOps / cloud environment

  • 3+ years working in Agile, with the ability to provide senior leadership with metrics on the team's work and on the capacity available to take on new work within sprint commitments

  • 5+ years of cloud experience and the ability to articulate the benefits of the cloud using concrete examples of past work

  • 3+ years of background in Hadoop administration, DevOps, or development, with emphasis on Spark, Hive, NiFi, Impala, and Kafka

  • 3+ years of experience with the AWS big data stack, including EMR, Lambda, SageMaker, Glue, Kinesis, SMS, and other related technologies

  • Demonstrated experience supporting Data Science teams using technologies such as Jupyter Notebooks, Anaconda, DataRobot, and other platforms used to run ML models

  • Proficiency working with containers (Kubernetes, Docker, etc.) and the ability to move workloads across these types of deployments

  • Demonstrated experience optimizing cloud-centric workloads using monitoring solutions such as ELK, Prometheus, Grafana, Splunk, and other APM tools

  • Familiarity with running big data pipelines in an automated manner using Jenkins, Terraform, and similar cloud-based tools

Company Information