Required Skills

Java, C#, Kafka, NoSQL, Cassandra, Elasticsearch, Redis

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 8th Dec 2023

JOB DETAIL

Lead the development of scalable cloud platforms, foster an agile team environment, and implement best practices.
Manage and mentor a team of 5+ members, with the potential to oversee up to 10 team members.
Leverage your strong software architectural background to drive innovation.
Oversee Big Data processing and data science teams.
Develop Cloud applications utilizing Micro-service APIs.
Demonstrate exceptional knowledge of modern cloud technologies, such as AWS, Azure, and GCP.
Recruit, mentor, lead, and empower emerging talent within the team.
Bring at least 10 years of experience in software product development to the table.
Exhibit abstract thinking skills, enabling you to perform detailed data modeling, ontology description, and data synthesis for high-performing APIs.
Manage multiple high-priority technology initiatives effectively.
Evaluate, onboard, and operationalize modern data lake technologies.
Possess a minimum of 5 years of hands-on experience with various Data Lake technologies.
Select and integrate Big Data tools and frameworks.
Oversee data migration from legacy systems to modern solutions.
Monitor performance and recommend necessary infrastructure adjustments.
Prepare comprehensive database design and architecture reports.
Define data retention policies.

Qualifications:

A minimum of 5 years of experience in Big Data and analytics.
Experience managing and mentoring a team of at least 5 members.
Candidates should remain technically hands-on and up to date with new technologies.
Expertise in Big Data processing using Spark and Databricks.
Ability to conceptualize, architect, and design Big Data pipelines and implement ETL processing.
Expertise in deploying large distributed Big Data applications.
Familiarity with Data Lake technologies, including Delta Lake.
Mastery of data modeling and architecture principles.
Proficiency in AWS, Azure, or Google Cloud technology stacks.
Strong command of programming languages such as Java and C#, and of technologies such as Kafka, NoSQL stores (e.g., Cassandra), Elasticsearch, and Redis.
Extensive experience with RDBMSs such as PostgreSQL and Oracle.
Competence in Micro-service APIs.
Experience in automated testing frameworks and CI/CD pipeline implementations.
Strong understanding of data security and privacy.
Knowledge of Enterprise BI and analytics.
Proficiency in data analysis and management.
Familiarity with database structure systems and data mining.
Exceptional analytical and problem-solving skills.

Company Information