Required Skills

Big Data

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

  • W2-Permanent

  • W2-Contract

  • Contract to Hire

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of Positions: 1

  • Posted: 23rd Feb 2024

JOB DETAIL

Responsibilities:

  • Design, develop, and implement Big Data analytic solutions on a Hadoop-based platform.

  • Refine data processing pipelines focused on unstructured and semi-structured data.

  • Create custom analytic and data mining algorithms to extract knowledge and meaning from vast data stores.

  • Configure data flows from different sources (relational databases, XML, JSON) and orchestrate them using Apache NiFi.

  • Develop Spark frameworks using PySpark and Java for the raw and analytical layers of the Big Data platform.

  • Utilize Jenkins for Continuous Integration and Git for Version Control.

  • Write shell scripts and job-management scripts to invoke and manage data ingestion steps.

  • Design Hive tables for better performance and apply partitions where needed.

  • Review HDFS data organization and provide mechanisms to support multi-tenant features.

  • Work on AWS services such as S3, EMR, Lambda, Glue jobs, and Athena as part of the Open Data initiative.

  • Build and maintain QlikView dashboards using data from various sources.

  • Collaborate with cross-functional teams and stakeholders to gather requirements and provide technical expertise.

  • Mentor and guide junior team members.

 

Preferred Skills:

  • Familiarity with Apache Solr and Elasticsearch for data indexing and search.

  • Experience with Kafka for data streaming.

  • Knowledge of Data Governance and Metadata Management.

  • Previous exposure to Informatica PowerCenter or similar ETL tools.

  • Strong analytical and problem-solving skills.

  • Excellent communication and interpersonal skills.

 

Company Information