Citizenship: Citizen
Employment Type: Full Time
Hiring: Direct Hire
UG: Not Required
PG: Not Required
No. of positions: 1
Posted: 26th Sep 2022
Hope you are doing well.
I wish to speak with you regarding a Data Scientist (ML/Ops) role based in Gurgaon.
Role : Data Scientist - ML / Ops
Location : Gurgaon
Experience : 7.5 to 14 Years
The first 3 months will be work from home.
Must-have/primary skills for Data Scientist: Machine Learning, Python, R, DevOps, NLP, Apache Hadoop, Hive, AWS, Spark, PyTorch, TensorFlow, predictive modeling, recommendation models, time-series analysis
Interview process: 3 rounds, conducted virtually (call/video call). The telephonic/Zoom interview will happen this week.
If interested, please send your updated CV in Word format with the following details (mandatory for shortlisting):
(Please mention your experience in years against each skill-set item below.)
a) Total experience & relevant experience?
i. Machine Learning:
ii. Python, R, Python ML libraries:
iii. NLP, predictive modeling, recommendation models:
iv. Apache Hadoop, Hive:
v. PyTorch, TensorFlow:
vi. DevOps, AWS:
b)Notice Period (official / needs to serve)?
c)Current location and contact details?
d)Current Salary (Fixed and Variable)?
e)Any offer? CTC(Fixed+Var)?
f)Expected Salary?
g)Open to work in Gurgaon?
h)Reason for Change?
Kindly go through the job details. Not only is this a prerequisite from the client, but it will also help you understand the client's vision and requirements, which will in turn help you in the interview process.
Required Knowledge and Key Skills:
BASIC QUALIFICATIONS
Bachelor's or Master's degree in Computer Science, IT, or a related technical field
6+ years of professional software development experience
3+ years of experience with programming languages such as Python and R, and open-source technologies (Apache Hadoop, Spark, PyTorch, TensorFlow)
PREFERRED
Proficiency in Python, R.
Machine learning knowledge and experience.
Experience building tools for data scientists and developers. Must have experience with AWS SageMaker and AWS SageMaker Studio
Experience with IDE/notebook software (Jupyter, nteract, RStudio, VS Code, PyCharm, etc.)
Experience building data pipelines in the cloud using cloud technologies (S3, SQS, SNS, Kinesis, Spark, Kafka, Glue, etc.)
Good to have: experience with data visualization tools such as SAS, Tableau, QlikView
Key Responsibilities:
Build tools and data pipelines for Data Scientists and Developers using Cloud technologies.
Create an intake process for setting up Jupyter notebooks and data pipelines. Should be able to size infrastructure needs and suggest a cost-optimal setup.
Good understanding of Python ML libraries. Should be able to prototype and evaluate new libraries and newly available features.
Experience communicating with data science, cloud infrastructure, and development teams to collect requirements and describe software product features and technical designs.
Ability and willingness to multi-task and learn new technologies quickly
Stakeholder management, with good written and verbal technical communication skills and the ability to present complex technical information clearly and concisely to a variety of audiences.
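To give a flavor of the "prototype and evaluate new libraries" responsibility above, here is a minimal, purely illustrative Python sketch (not part of the job requirements; the dataset, model, and scikit-learn itself are arbitrary example choices):

```python
# Illustrative sketch only: a quick cross-validated sanity check of a
# candidate model/library, the kind of lightweight prototyping the
# responsibilities above describe. Assumes scikit-learn is installed.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Small built-in dataset stands in for real project data
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0)

# 5-fold cross-validated accuracy as a first evaluation pass
scores = cross_val_score(model, X, y, cv=5)
print(f"Mean CV accuracy: {scores.mean():.3f}")
```

In practice this kind of throwaway evaluation would run inside the Jupyter/SageMaker Studio environments mentioned above before a library or feature is adopted.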