Required Skills

Teradata, SAP HANA.

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 30th Jul 2022

JOB DETAIL

The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. Tapping into our massive product usage data sets, you will architect, build, and optimize analytics platforms and pipelines that harness our data and derive actionable insights to guide the business. You must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing the data architecture to support our next generation of products and data initiatives.

What you’ll do

• Architect, build, and maintain scalable, automated data pipelines from the ground up. Be an expert at stitching and calibrating data across various data sources.
• Work with Adobe’s data ingestion, data platform and product teams to understand and validate instrumentation 
and data flow.
• Develop data set processes for data modeling, mining and production.
• Integrate new data management technologies and software engineering tools into existing structures.
• Support regular ad-hoc data querying and analysis to better understand customer behaviors.
• Understand, monitor, QA, and translate data, and collaborate with business teams to ensure ongoing data quality.

What you need to succeed

• Bachelor’s degree in Computer Science, Information Systems, or a related field is required; master’s degree preferred.
• 5-7 years of experience building and maintaining big data pipelines and/or analytical or reporting systems at scale.
• Expert-level skills working with Apache Hadoop and its related technology stack (Pig, Hive, Oozie, etc.).
• A strong proficiency in querying, manipulating and analyzing large data sets using SQL and/or SQL-like languages.
• Approaches data organization challenges with a clear eye on what is important, employing the right methods to make the best use of time and people.
• Strong attention to detail and the ability to stitch together and QA multiple data sources; explores new territory and finds creative, unconventional ways to solve data management problems.
• A self-starter.
• Good interpersonal skills.

Preferred (but not required) Skills:

• Experience working with at least one other big data platform, such as Apache Spark, Redshift, Teradata, or SAP HANA.
• Familiarity with streaming platforms like Apache Kafka, Amazon Kinesis etc.
• Knowledge of Data Science, Machine Learning and Statistical Models is desirable.
• Knowledge of Adobe Analytics, Salesforce, or Marketo is a plus.

Company Information