Required Skills

Big Data

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 9th Jan 2021

JOB DETAIL

Position           : Big Data Technical Engineer/Architect

Location           : San Jose or Costa Mesa (Remote + Onsite) - locals preferred

Duration           : 12+ Months

 

Description:

BS degree in Computer Science, Computer Engineering, or equivalent

  • Overall 8-10 years, with 5-6 years of experience delivering enterprise software solutions for developers
  • Proficient in Java, Spark, Kafka, Python, AWS Cloud technologies
  • Must have active, current experience with Scala, Java, Python, Oracle, Cassandra, HBase, Hive
  • 3+ years of experience across multiple Hadoop/Spark technologies such as Hadoop, MapReduce, HDFS, Cassandra, HBase, Hive, Flume, Sqoop, Spark, Kafka, and Scala
  • Familiarity with AWS scripting and automation
  • Flair for data, schemas, and data models, and for bringing efficiency to the big data life cycle
  • Must be able to quickly understand technical and business requirements and translate them into technical implementations
  • Experience with Agile Development methodologies
  • Experience with data ingestion and transformation
  • Solid understanding of secure application development methodologies
  • Experience developing microservices using the Spring Framework is a plus
  • Understanding of automated QA needs related to big data
  • Strong object-oriented design and analysis skills
  • Excellent written and verbal communication skills

Responsibilities

  • Utilize your software engineering skills, including Java, Spark, Python, and Scala, to analyze disparate, complex systems and collaboratively design new products and services
  • Integrate new data sources and tools
  • Implement scalable and reliable distributed data replication strategies
  • Ability to mentor and provide direction in architecture and design to onsite/offshore developers
  • Collaborate with other teams to design, develop, and deploy data tools that support both operations and product use cases
  • Perform analysis of large data sets using components from the Hadoop ecosystem
  • Own product features from development and testing through to production deployment
  • Evaluate big data technologies and prototype solutions to improve our data processing architecture
  • Automate everything

Best Regards

Venkatesh B  | Technical Recruiter

Email: Venkatesh.b@axiustek.com

https://www.linkedin.com/in/venkatbudarapu
