Required Skills

Data Warehouse, Hive, Hadoop, Data Lake, Databricks, JIRA

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 9th Mar 2021

JOB DETAIL

Responsibilities

 

  • Build data expertise and own data quality for the pipelines you build
  • Architect, build and launch new data models and data marts that provide intuitive analytics to your customers
  • Design, build and launch extremely efficient & reliable data pipelines to move data (both large and small amounts) into and out of the Data Warehouse
  • Design and develop new systems and tools to enable folks to consume and understand data faster
  • Use your coding skills across a number of languages, including Python, Shell Scripting, PL/SQL, and AI PDL
  • Have a clear understanding of the reports/analyses/insights to be driven by data and build data solutions to optimally support the analytics needs
  • Integrate third party data to enrich our data environment and enable new analytic perspectives
  • Work across multiple teams in high visibility roles and own solutions end-to-end
  • Work with program managers, business partners and other engineers to develop and prioritize project plans
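The pipeline-building responsibilities above follow the classic extract–transform–load pattern. A minimal sketch in Python, using SQLite as a stand-in for the warehouse and hypothetical table and column names (`orders`, `amount`, `region` are illustrative, not from the posting):

```python
import sqlite3

# Extract: in a real pipeline this would read from a source system or API;
# here we use a small in-memory sample of hypothetical order records.
def extract():
    return [
        {"order_id": 1, "amount": "19.99", "region": "US"},
        {"order_id": 2, "amount": "5.00", "region": "EU"},
        {"order_id": 3, "amount": "12.50", "region": "US"},
    ]

# Transform: cast types and derive a flag column.
def transform(rows):
    out = []
    for r in rows:
        amount = float(r["amount"])
        out.append((r["order_id"], amount, r["region"], int(amount >= 10.0)))
    return out

# Load: write into a warehouse-style table (SQLite stands in for the warehouse).
def load(conn, rows):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER PRIMARY KEY, amount REAL, region TEXT, is_large INTEGER)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?, ?)", rows)
    conn.commit()

def run_pipeline(conn):
    load(conn, transform(extract()))

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    run_pipeline(conn)
    total = conn.execute("SELECT ROUND(SUM(amount), 2) FROM orders").fetchone()[0]
    print(total)  # 37.49
```

In production the same three stages would typically be scheduled by an orchestrator (e.g. Airflow) and validated with data-quality checks, per the "own data quality" responsibility above.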

 

Must-Have Skills

 

  • Experience in building, maintaining, and automating reliable and efficient ETL jobs
  • Hands-on experience with Python, SQL, and Apache Spark, including manipulating data with SQL and Python
  • Experience with cloud data warehouses and with AWS (IAM roles, Glue, EC2, S3, Redshift), Terraform, and CloudFormation
  • Hands-on experience with ETL tools: Informatica PowerCenter 9/10.x, Ab Initio 3.x GDE, AI Control Center, and Shell Scripting
  • Strong CS fundamentals and experience developing with object-oriented programming (Python, Java)
  • Expertise with dimensional warehouse data models (star and snowflake schemas)
  • Experience with multi-terabyte MPP relational databases such as MySQL, Oracle, and Teradata
  • Understanding of streaming technologies and concepts used with data warehouses is preferred
  • Understanding of automation and orchestration platforms such as Control-M, Automic, or Airflow
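The dimensional modeling requirement above centers on star schemas: a central fact table of measures joined to descriptive dimension tables. A small sketch using SQLite, with hypothetical table and column names (`fact_sales`, `dim_product`, etc. are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: one row per product, holding descriptive attributes.
cur.execute(
    "CREATE TABLE dim_product "
    "(product_key INTEGER PRIMARY KEY, name TEXT, category TEXT)"
)
# Fact table: numeric measures plus a foreign key into the dimension.
cur.execute(
    "CREATE TABLE fact_sales "
    "(sale_id INTEGER PRIMARY KEY, "
    " product_key INTEGER REFERENCES dim_product(product_key), "
    " quantity INTEGER, revenue REAL)"
)

cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)", [
    (1, "Widget", "Hardware"),
    (2, "Gadget", "Hardware"),
    (3, "License", "Software"),
])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)", [
    (1, 1, 2, 40.0),
    (2, 2, 1, 25.0),
    (3, 3, 5, 500.0),
    (4, 1, 1, 20.0),
])
conn.commit()

# A typical star-schema query: join facts to the dimension
# and aggregate a measure by a dimension attribute.
rows = cur.execute("""
    SELECT p.category, SUM(f.revenue) AS total_revenue
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY p.category
    ORDER BY p.category
""").fetchall()
print(rows)  # [('Hardware', 85.0), ('Software', 500.0)]
```

A snowflake schema differs only in that the dimensions themselves are further normalized (e.g. `dim_product` referencing a separate `dim_category` table).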

Good-to-Have Skills – experience with other cloud data warehouses, Hive, Hadoop, Data Lake, Databricks, and JIRA

Company Information