Required Skills

Hive, Scala, Hadoop, Big Data, Data Engineer, Spark, Python

Work Authorization

  • Citizen

Preferred Employment

  • Full Time

Employment Type

  • Direct Hire

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 13th Aug 2022

JOB DETAIL

The Data Engineer identifies business problems and translates them into data services and engineering outcomes. You will deliver data solutions that empower better decision making and that scale to respond to broader business questions.

Key Responsibilities

As a Data Engineer, you are a full-stack data engineer who loves solving business problems. You work with business leads, analysts, and data scientists to understand the business domain, and you engage with fellow engineers to build data products that empower better decision making. You are passionate about the data quality of our business metrics and about building flexible solutions that scale to respond to broader business questions. If you love solving problems with your skills, come join Team Searce. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.

  • Consistently strive to acquire new skills on Cloud, DevOps, Big Data, AI and ML

  • Understand the business problem and translate it into data services and engineering outcomes

  • Explore new technologies and learn new techniques to solve business problems creatively

  • Think big and drive the strategy for better data quality for customers

  • Collaborate with many teams, both engineering and business, to build better data products

Preferred Qualifications

  • 1-2 years of experience with:

  • Hands-on experience with at least one programming language (Python, Java, Scala)

  • A solid understanding of SQL is a must

  • Big data (Hadoop, Hive, Yarn, Sqoop)

  • MPP platforms (Spark, Pig, Presto)

  • Data-pipeline scheduler tools (Oozie, Airflow, NiFi)

  • Streaming engines (Kafka, Storm, Spark Streaming)

  • Experience with any relational database or data warehouse

  • Experience with any ETL tool

  • Hands-on experience in pipeline design, ETL, and application development

  • Good communication skills

  • Ability to work independently and strong analytical skills

  • Dependable and a good team player

  • Desire to learn and work with new technologies

  • Automation in your blood

Company Information