Python, Azure Databricks, Spark, BigQuery, Airflow, Azure Data Factory
Work Authorization
Citizen
Preferred Employment
Full Time
Employment Type
Direct Hire
Education Qualification
UG: Not Required
PG: Not Required
Other Information
No. of positions: 1
Posted: 2nd Jul 2022
JOB DETAIL
Establish data model designs based on business requirements. Should be hands-on.
Develop end-to-end streaming and batch data analytics pipelines, covering data ingestion, processing, storage, analysis, and visualization (see the sketch after this list).
Understand big-data principles and best practices
Perform code reviews, ensure code quality and encourage a culture of excellence.
Coach team members on database solution design, optimization, and schema design.
Troubleshoot performance issues and optimize database design.
Represent the company in front of customers and prospects.
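As a rough illustration of the end-to-end pipeline work described above, below is a minimal PySpark sketch, assuming hypothetical paths, column names, and a Kafka topic; it is a sketch of the pattern, not a prescribed implementation.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Batch side: ingest raw CSV drops, clean them, and persist curated Parquet.
raw = spark.read.option("header", True).option("inferSchema", True).csv("/landing/orders/")  # hypothetical path
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)
clean.write.mode("overwrite").partitionBy("order_date").parquet("/curated/orders/")

# Streaming side: read click events from Kafka, aggregate per minute,
# and continuously append the counts for downstream analysis and visualization.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
         .option("subscribe", "clickstream")                # hypothetical topic
         .load()
)
per_minute = (
    events.withWatermark("timestamp", "10 minutes")
          .groupBy(F.window("timestamp", "1 minute"))
          .count()
)
(per_minute.writeStream.outputMode("append")
           .format("parquet")
           .option("path", "/curated/clickstream_counts/")
           .option("checkpointLocation", "/checkpoints/clickstream_counts/")
           .start())

The batch half lands cleaned, partitioned data for analysis, while the streaming half maintains a continuously updated aggregate; a BI tool or notebook would sit on top of the curated outputs for visualization.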
Required Skills
1+ years of strong technology experience with transactional data and analytics systems
Should understand and be able to lead architecture design for transactional and analytics systems.
Strong SQL skills and experience with SQL technologies (SQL Server, Oracle, Redshift, and BigQuery)
Proficient in building and optimizing big data pipelines, architectures, and datasets using technologies such as Spark, Hive, AWS Data Pipeline, Azure Data Factory, BigQuery, and Apache Airflow
Experience in building reusable, metadata-driven components for data ingestion, transformation, and delivery (see the sketch after this list)
Cloud experience: should have hands-on experience with cloud data products on GCP, AWS, or Azure (BigQuery, Dataflow, Azure Databricks, Glue, Redshift, etc.)
Experience in Agile Methodologies
Familiarity with source repositories (Git, BitBucket etc.)
Excellent communication skills
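As a rough illustration of a reusable, metadata-driven ingestion component, below is a minimal Apache Airflow sketch; the table list, DAG id, and ingestion callable are hypothetical placeholders (the table list would normally come from a metadata store rather than being hard-coded).

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical metadata: in practice this would come from a config table or catalog.
TABLES = ["customers", "orders", "payments"]

def ingest_table(table_name: str, **context):
    # Placeholder: extract the table from the source system and land it in cloud storage.
    print(f"Ingesting {table_name} for run {context['ds']}")

with DAG(
    dag_id="metadata_driven_ingestion",
    start_date=datetime(2022, 7, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    for table in TABLES:
        PythonOperator(
            task_id=f"ingest_{table}",
            python_callable=ingest_table,
            op_kwargs={"table_name": table},
        )

Driving the task list from metadata keeps the DAG definition generic, so adding a new source table becomes a configuration change rather than new pipeline code.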
Desired Certification:
Google Cloud Certified Professional Data Engineer or a relevant certification in AWS/Azure