Required Skills

Pandas, NumPy, MySQL, PostgreSQL, SQL Server, AWS, Azure, GCP, Git

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 21st Aug 2025

Job Detail

  • Design, implement, and maintain scalable data pipelines to collect, process, and store data from various sources.

  • Develop and optimize Python scripts for data extraction, transformation, and loading (ETL).

  • Collaborate with data scientists, analysts, and other stakeholders to ensure data is accessible and in the proper format for analysis and reporting.

  • Work with large datasets, ensuring their integrity, consistency, and availability.

  • Leverage UNIX/Linux tools and scripting (e.g., Shell, Bash) for data processing and automation tasks.

  • Create and maintain efficient database architectures and data storage solutions (SQL and NoSQL).

  • Monitor, troubleshoot, and optimize data pipeline performance to ensure data availability and quality.

  • Perform data validation, ensure the accuracy of datasets, and implement data cleaning processes.

  • Develop and implement data models for reporting and analytics purposes.

  • Build and maintain APIs for internal and external data access.

  • Ensure the security and compliance of data handling processes.

  • Automate repetitive tasks using Python and shell scripting to streamline operations and improve efficiency.

  • Collaborate with cross-functional teams to support business intelligence (BI), reporting, and analytics needs.

  • Participate in code reviews and ensure adherence to best practices and data governance standards.
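The extract–transform–load and data-validation duties listed above can be sketched as a minimal pipeline. This is an illustrative example only, using just the Python standard library (`csv` and `sqlite3`); the sample data, table name, and column names are assumptions for demonstration and are not part of this posting.

```python
import csv
import io
import sqlite3

# Hypothetical sample input; in a real pipeline this would come from a
# file, API, or database source.
RAW_CSV = """id,name,amount
1,alice,10.5
2,bob,
3,carol,7.25
"""

def extract(text):
    """Extract: parse CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: validate rows (drop incomplete ones) and cast types."""
    clean = []
    for row in rows:
        if not row["amount"]:
            continue  # basic data-validation step: skip rows with missing values
        clean.append((int(row["id"]), row["name"], float(row["amount"])))
    return clean

def load(rows, conn):
    """Load: write the validated rows into a SQLite table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER, name TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    load(transform(extract(RAW_CSV)), conn)
    # One row (bob's) is dropped during validation; two rows are loaded.
    print(conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone())
```

In practice each stage would be monitored and scheduled (e.g., via cron or an orchestrator), and the SQLite target would be replaced by one of the databases named in the skills list, but the extract/transform/load separation shown here is the core shape of the work described.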
 

Company Information