US Citizen
Green Card
EAD (OPT/CPT/GC/H4)
Corp-Corp
Consulting/Contract
UG :-
PG :-
No. of positions :- ( 1 )
Posted :- 18th Dec 2020
Job Role: Spark Hadoop Developer with AWS Experience
Location: Santa Clara, CA or Tempe, AZ
Duration: 12+ Months
Job Description:
· Demonstrated experience in building data pipelines in data analytics implementations such as Data Lake and Data Warehouse
· At least 2 instances of end-to-end implementation of a data processing pipeline
· Experience configuring or developing custom code components for data ingestion, data processing, and data provisioning, using big data and distributed computing platforms such as Hadoop and Spark, and cloud platforms such as AWS or Azure
· Hands-on experience developing enterprise solutions, including designing and building frameworks, enterprise patterns, and database design and development, in 2 or more of the following areas:
· End-to-end implementation of a cloud data engineering solution on AWS (EC2, S3, EMR, Spectrum, DynamoDB, RDS, Redshift, Glue, Kinesis)
· End-to-end implementation of a big data solution on the Cloudera/Hortonworks/MapR ecosystem
· Frameworks, reusable components, accelerators, CI/CD automation
· Languages (Python, Scala)
· Proficiency in data modeling, for both structured and unstructured data, for various layers of storage