Required Skills

System Administrator

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-to-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 5th Jun 2025

JOB DETAIL

Cloud Platforms: Google/AWS/Azure public cloud, PySpark, BigQuery, Airflow on Google Cloud

  • Shift Support: Participate in rotational 24x7x365 shift support and operations for the SAP environment
  • Team Lead Responsibilities: Maintain the upstream Big Data environment; manage financial data flow; streamline and tune Big Data systems and pipelines; ensure efficient and cost-effective operations
  • Operations Management: Manage the operations team; make changes to underlying systems; provide day-to-day support; enhance platform functionality through DevOps practices; collaborate with application development teams
  • Data Warehouse Solutions: Architect and optimize data warehouse solutions using BigQuery, ensure efficient data storage and retrieval
  • Big Data Applications: Install, build, patch, upgrade, and configure big data applications
  • BigQuery Management: Manage and configure BigQuery environments, datasets, and tables; ensure data integrity, accessibility, and security; implement partitioning and clustering for efficient querying; define and enforce access policies; implement query usage caps and alerts
  • Linux Systems: Troubleshoot Linux-based systems; proficiency with the Linux command line
  • Dashboards and Reports: Create and maintain dashboards and reports to track key metrics like cost and performance
  • GCP Integration: Integrate BigQuery with other GCP services like Dataflow, Pub/Sub, and Cloud Storage
  • Data Quality: Implement data quality checks and validation processes
  • Data Pipelines: Manage and monitor data pipelines using Airflow and CI/CD tools (e.g., Jenkins, Screwdriver)
  • Collaboration: Work with data analysts and data scientists to understand data requirements and translate them into technical solutions; provide consultation and support to application development teams
  • Scripting and Automation: Proficiency in Unix/Linux OS fundamentals, shell/Perl/Python scripting, and Ansible for automation
  • Disaster Recovery & High Availability: Plan and coordinate disaster recovery, including backup/restore operations; experience with geo-redundant databases and Red Hat clustering
  • Service Delivery: Ensure delivery within defined SLAs and agreed milestones; follow best practices and processes for continuous service improvement
  • Support Collaboration: Work closely with other support organizations (DB, Google, PySpark data engineering, and infrastructure teams)
  • Incident, Change, Release, and Problem Management

Company Information