Required Skills

Data Engineer

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 15th Oct 2022

JOB DETAIL

At Cerberus Technology Solutions, we pride ourselves on being technology practitioners, operating as the technology subsidiary of Cerberus Capital and advising & transforming Cerberus’ portfolio companies. Since 1992, we have believed that change is the only constant in our industry. Our team relies on clean datasets, accessible via the latest tools, to answer questions that are often neither simple nor clear-cut. You will create solutions that derive value from our rich datasets and solve tough problems impacting consumers across geographies & industries (healthcare, retail/consumer, telecom, industrial), driving technology transformation.

 

Responsibilities:

  • Design & build batch and real-time data pipelines using some of the latest technologies available on Microsoft Azure, Google Cloud Platform and AWS
  • Implement large-scale data platforms to meet the analytical & operational needs
  • Build products & frameworks that can be re-used across different use cases to increase coding efficiency and agility in implementing solutions
  • Build streaming ingestion processes to efficiently read, process, analyze & publish data for the real-time needs of applications and data science models
  • Perform analyses of large structured and unstructured data to solve multiple & complex business problems
  • Investigate and prototype different task dependency frameworks to assess & advise on the most appropriate design for a given use case
  • Understand business use cases to design engineering routines that achieve the desired outcomes
  • Review & assess data frameworks & technology platforms with the goal of suggesting & implementing improvements to the existing frameworks & platforms
  • Understand quality of data used in existing use cases to suggest process improvements & implement data quality routines

 

Requirements:

  • An Engineer interested in working in both streaming and batch processing environments
  • A tech-enthusiast excited to work with Cloud Based Technologies like GCP, Azure, Kubernetes
  • A doer who loves producing meaningful analytic insights for innovative, data-intensive products
  • Someone always curious about analytics frameworks and well-versed in the advantages and limitations of various big data architectures and technologies
  • A technologist who loves studying software platforms with an eye toward modernizing the architecture
  • A believer in transparency & communication

The Tools We Use

To give you a flavor of our current tools:

Languages: Python, Scala, Java, SQL, PySpark

Streaming: Spark Streaming, Pub/Sub, Kafka

Database software: BigQuery, Snowflake, Synapse

Cloud Technologies: GCP, Azure, AWS
