Citizen
Full Time
Direct Hire
UG :- Not Required
PG :- Not Required
No. of positions :- 1
Posted :- 28th May 2022
Roles and Responsibilities
Immediate joiners are preferred.
Basic skills: Hadoop, Hive, Python, SQL, Oracle, and other DB technologies. Knowledge of Spark and Scala.
Develops software that processes, stores, and serves data for use by others.
Develops large-scale data structures and pipelines to organize, collect, and standardize data that helps generate insights and addresses reporting needs.
Writes ETL (Extract / Transform / Load) processes, designs database systems, and develops tools for real-time and offline analytic processing.
Ensures that data pipelines are scalable, repeatable, and secure.
Troubleshoots software and processes for data consistency and integrity.
Integrates data from a variety of sources, ensuring that they adhere to data quality and accessibility standards.
Has in-depth knowledge of large-scale search applications and building high-volume data pipelines.
In-depth knowledge of Java, Hadoop, Hive, Cassandra, Pig, MySQL, or NoSQL or similar.
General requirements: