Required Skills

Big Data Python

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 7th Sep 2022

Job Detail

• Hands-on architecture/development of ETL pipelines using our internal framework written in Java (a minimal batch sketch follows this list)
• Hands-on architecture of real-time REST APIs or other solutions for streaming data from Graph using Spark
• Interpret data, analyze results using statistical techniques and provide ongoing reports
• Develop and implement databases, data collection systems, data analytics and other strategies that optimize statistical efficiency and quality
• Acquire data from primary or secondary data sources and maintain databases/data systems
• Identify, analyze, and interpret trends or patterns in complex data sets
• Filter and clean data by reviewing reports and performance indicators to locate and correct problems
• Work with management to prioritize business and information needs
• Locate and define new process improvement opportunities
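
For orientation only, here is a minimal batch ETL sketch using Spark's Java API, in the spirit of the pipeline work above. Every name in it (paths, columns, app name) is an illustrative assumption; it stands in for, rather than uses, the internal Java framework mentioned in the first bullet.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.to_date;

    public class EtlPipelineSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("etl-pipeline-sketch")
                    .getOrCreate();

            // Extract: raw events from a CSV drop zone (hypothetical path).
            Dataset<Row> raw = spark.read()
                    .option("header", "true")
                    .csv("s3://example-bucket/raw/events/");

            // Transform: drop incomplete rows, keep the columns consumers
            // need, and derive a date column to partition on.
            Dataset<Row> cleaned = raw
                    .filter(col("event_id").isNotNull())
                    .select(col("event_id"), col("user_id"), col("event_ts"))
                    .withColumn("event_date", to_date(col("event_ts")));

            // Load: partitioned Parquet for downstream analytics.
            cleaned.write()
                    .mode("overwrite")
                    .partitionBy("event_date")
                    .parquet("s3://example-bucket/curated/events/");

            spark.stop();
        }
    }
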
Skills/Requirements
• 8+ years of experience architecting and implementing complex ETL pipelines, preferably with the Spark toolset.
• 4+ years of experience with Java, particularly within the data space
• Technical expertise in data models, database design and development, data mining, and segmentation techniques
• Good experience writing complex SQL and building ETL processes
• Excellent coding and design skills, particularly in Java, Scala, and/or Python.
• Experience working with large data volumes, including processing, transforming and transporting large-scale data
• Experience with AWS technologies such as EC2, Redshift, CloudFormation, EMR, S3, and AWS analytics services required.
• Big-data technologies on AWS such as Hive, Presto, and Hadoop required.
• AWS certification is preferable: AWS Developer/Architect/DevOps/Big Data
• Excellent working knowledge of Apache Hadoop, Apache Spark, Kafka, Scala, Python, etc. (a streaming sketch follows this list)
• Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
• Good understanding and use of algorithms and data structures
• Good experience building reusable frameworks.
• Experience working in an Agile Team environment.
• Excellent communication skills, both verbal and written
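
Likewise, a hedged sketch of the real-time side: reading a Kafka topic with Spark Structured Streaming and landing the payload as Parquet. The broker address, topic, and paths are assumptions, and a real deployment also needs the spark-sql-kafka connector on the classpath.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.streaming.StreamingQuery;

    public class KafkaStreamSketch {
        public static void main(String[] args) throws Exception {
            SparkSession spark = SparkSession.builder()
                    .appName("kafka-stream-sketch")
                    .getOrCreate();

            // Source: subscribe to a Kafka topic (broker and topic are assumptions).
            Dataset<Row> stream = spark.readStream()
                    .format("kafka")
                    .option("kafka.bootstrap.servers", "broker:9092")
                    .option("subscribe", "graph-events")
                    .load();

            // Kafka rows expose binary key/value columns; cast the payload to text.
            Dataset<Row> payload = stream.selectExpr("CAST(value AS STRING) AS json");

            // Sink: append to Parquet with a checkpoint for fault-tolerant recovery.
            StreamingQuery query = payload.writeStream()
                    .format("parquet")
                    .option("path", "/data/stream/graph_events/")
                    .option("checkpointLocation", "/data/checkpoints/graph_events/")
                    .start();

            query.awaitTermination();
        }
    }
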
• Skilled in designing and developing data warehousing components
• Experience using known ETL tools, with knowledge of extracting, transforming, and copying data from one or more sources into a destination system
• Data warehousing analysis
• Experience loading data from Oracle and loading from files (a warehouse-load sketch follows this list)
• Good knowledge of SQL
• Knowledge of Apache NiFi is a nice-to-have
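
Finally, a sketch of the warehouse-load pattern just described: pull a dimension table from Oracle over JDBC, read fact records from flat files, join them with plain SQL, and write to a destination area. Connection details, table names, and paths are all hypothetical.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class WarehouseLoadSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("warehouse-load-sketch")
                    .getOrCreate();

            // Source 1: an Oracle dimension table via the JDBC data source
            // (URL, schema, and credentials are hypothetical).
            Dataset<Row> customers = spark.read()
                    .format("jdbc")
                    .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB")
                    .option("dbtable", "SALES.CUSTOMERS")
                    .option("user", "etl_user")
                    .option("password", System.getenv("ORACLE_PWD"))
                    .load();

            // Source 2: fact records delivered as flat files.
            Dataset<Row> orders = spark.read()
                    .option("header", "true")
                    .csv("/data/incoming/orders/");

            // Transform with plain SQL, then load the destination area.
            customers.createOrReplaceTempView("customers");
            orders.createOrReplaceTempView("orders");
            Dataset<Row> enriched = spark.sql(
                    "SELECT o.order_id, o.amount, c.customer_name "
                  + "FROM orders o JOIN customers c ON o.customer_id = c.customer_id");

            enriched.write().mode("append").parquet("/warehouse/orders_enriched/");
            spark.stop();
        }
    }
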

Company Information