Required Skills

Data modeling, RDBMS, Agile, Data structures, JSON, Scrum, Informatica, Stored procedures, SQL, Python

Work Authorization

  • Citizen

Preferred Employment

  • Full Time

Employment Type

  • Direct Hire

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 19th Jul 2022


As a Senior Data Engineer in Enquero’s Data Analytics unit, you will be part of a fast-paced team designing, developing, testing, integrating, and supporting technically innovative solutions for our Fortune 500 customers. You will draw on your broad experience, well-developed professional concepts, and understanding of industry, customer, and company objectives to resolve complex issues in creative ways. This role will start your journey as a leader within the organization.


We expect a passion for technology and the ability to work on issues where analysis of situations or data requires review of relevant factors. You will be responsible for leading the development and testing of applications, which may include end-to-end ownership of the software stack. Your demonstrated ability to do this consistently, while building and leading effective teams and broadening your knowledge beyond your own programs and assignments, will define success in this role.

  • Ability to work individually or mentor a small group in an Agile development environment.
  • Communicate effectively with global customers and collaborate well within a team environment to drive results.
  • Embrace new technologies and work with various tools and technologies to achieve desired functionality.
  • Work on problems of diverse scope, develop solutions to technology challenges, and deliver on requirements ahead of deadlines.
  • Follow standard practices and procedures in analyzing situations or data from which answers can readily be obtained.
  • Contribute to your BU/Practice by:
    • Documenting learnings from your current work and engaging with the external tech community by writing blogs, contributing on GitHub and Stack Overflow, and attending meet-ups/conferences
    • Staying current on the latest technologies through training and certifications
    • Actively participating in organization-level activities and events related to learning, formal training, interviewing, special projects, etc.



  • Bachelor’s/Master’s in Computer Science or a related discipline
  • 5 years of relevant experience
  • Expertise in Enterprise Data Warehouse Design, Metadata, Data Quality, Master Data Management and Data Governance
  • Clear understanding of Snowflake’s architecture and data sharing concepts
  • End to end implementation of at least one Snowflake project is a must.
  • Expertise in Snowflake Data Modeling, ELT using Snowflake SQL, implementing stored procedures and standard DWH ETL concepts
  • Clear understanding of Snowflake’s advanced concepts: virtual warehouses, query performance using micro-partitions and pruning, zero-copy clone, time travel, resource monitors, and security control methods (network-, data-, and role-based). Expertise in planning and implementing Snowflake virtual warehouse sizing, resource monitoring, and security and role-based access control strategies
  • Experience with SnowPipe and SnowSQL Development
  • Experience working with the data services of at least one cloud platform: AWS, Azure, or Google Cloud
  • Proficient in building data platforms (architecture, storage, management, monitoring)
  • Experience working with different file formats like Parquet, ORC, Avro, JSON etc.
  • Expertise with code versioning tools such as Git, Perforce, SVN, etc.
  • Knowledge of data structures and algorithms
  • Strong knowledge of RDBMS and NoSQL databases, with the ability to implement them from scratch
  • Strong expertise in building and optimizing data pipelines, architectures, and data sets
  • Experience building ETL pipelines using tools like Informatica, Talend, Pentaho, etc.
  • Knowledge of using orchestration frameworks like Airflow, Oozie, Luigi, etc.
  • Familiarity with big data infrastructure inclusive of MapReduce, Hive, HDFS, YARN, HBase, Oozie, etc.
  • Knowledge of Spark and of building Spark jobs using Python/Scala/Java
  • Knowledge of building stream-processing platforms using Spark Streaming, Storm, etc.; knowledge of Kafka, Flink, or Beam would be a plus
  • Knowledge of building REST API endpoints for data consumption
  • Knowledge of implementing CI/CD in the pipelines is a plus
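For candidates gauging the ETL experience described above, the extract-transform-load pattern can be sketched in a few lines of Python. This is a minimal illustrative sketch only: the table name, fields, and the use of SQLite (standing in for a warehouse such as Snowflake) are assumptions, not part of the role's actual stack.

```python
import json
import sqlite3

# Extract: parse raw JSON records (an inline sample standing in for a file or API).
raw = '[{"id": 1, "amount": "10.5"}, {"id": 2, "amount": "7.25"}]'
records = json.loads(raw)

# Transform: cast string amounts to floats.
rows = [(r["id"], float(r["amount"])) for r in records]

# Load: write into a warehouse table (SQLite used here for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 17.75
```

In production these stages would typically be orchestrated by a framework such as Airflow, with each step as a separate task.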


  • Bachelor’s/Master’s in Computer Science or a related discipline
  • 10 years of relevant experience
  • Experience building self-service tools for analytics
  • Knowledge of Containerization (Docker/Kubernetes)
  • Excellent oral and written communication skills
  • Well versed with Agile methodologies and experience in working with scrum teams
  • Ability to understand business requirements and translate them into technical requirements
  • A knack for benchmarking and optimization

Company Information