Required Skills

Python, SQL, and PySpark

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 26th Jan 2024

JOB DETAIL

RESPONSIBILITIES

  • Work closely with cross-functional teams, including product managers, data scientists, and engineers, to understand project requirements and objectives, ensuring alignment with overall business goals.
  • Build a data ingestion framework and data pipelines to ingest unstructured and structured data from various sources, such as SharePoint, Confluence, chatbots, Jira, and external sites, into our existing OneData platform.
  • Design a scalable target-state architecture for data processing based on document content (data types may include, but are not limited to, XML, HTML, DOC, PDF, XLS, JPEG, TIFF, and PPT), including PII/CII handling, policy-based hierarchy rules, and metadata tagging.
  • Design, develop, and deploy optimal data pipelines, including an incremental data ingestion strategy, taking advantage of leading-edge technologies through experimentation and iterative refinement.
  • Design and implement vector databases to efficiently store and retrieve high-dimensional vectors.
  • Conduct research to stay up to date with the latest advancements in generative AI services and identify opportunities to integrate them into our products and services.
  • Implement data quality and validation checks to ensure accuracy and consistency of data.
  • Build automation that effectively and repeatably ensures quality, security, integrity, and maintainability of our solutions.
  • Monitor and troubleshoot data pipeline performance, identifying and resolving bottlenecks and issues.
  • Define, implement, and maintain data access policies and security measures for cloud storage buckets and vector databases.


QUALIFICATIONS REQUIRED

  • Bachelor’s degree in Engineering, Computer Science or a related field; Master’s degree is a plus.
  • 10+ years of relevant industry and functional experience in database and cloud-based technologies.
  • Experience working with machine learning and AI concepts related to RAG architecture, LLMs, embeddings, and data insertion into a vector data store.
  • Experience building data ingestion pipelines for structured and unstructured data, both for storage and optimal retrieval.
  • Experience working with cloud data stores and NoSQL, graph, and vector databases.
  • Proficiency with languages such as Python, SQL, and PySpark.
  • Experience working with Databricks and Snowflake technologies.
  • Experience with relevant code repository and project tools such as GitHub, Jira, and Confluence.
  • Working experience with Continuous Integration & Continuous Deployment (CI/CD), with hands-on expertise in Jenkins, Terraform, Splunk, and Dynatrace.
  • Highly innovative, with an aptitude for foresight, systems thinking, and design thinking, and a bias towards simplifying processes.
  • Detail-oriented individual with strong analytical, problem-solving, and organizational skills.
  • Ability to clearly communicate to both technical and business teams.


Company Information