Required Skills

Advanced Analytics, Diversity and Inclusion, Scala, Agile, Data Analytics, Business Intelligence

Work Authorization

  • Citizen

Preferred Employment

  • Full Time

Employment Type

  • Direct Hire

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 28th Jul 2022

JOB DETAIL

  • Develop, test, and implement data solutions based on functional / non-functional business requirements
  • Code daily in Python/PySpark on cloud as well as on-prem infrastructure
  • Develop and maintain data lake and data warehouse schematics, layouts, architectures, and relational/non-relational databases for data access and advanced analytics
  • Maintain, deploy, cleanse, organize, improve, and protect data sets
  • Build data models that store data in the most optimized manner
  • Identify, design, and implement process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Implement the ETL process and optimal data pipeline architecture
  • Monitor performance and advise on any necessary infrastructure changes
  • Create data tools for analytics and data science team members that help them build and optimize our product into an innovative industry leader
  • Work with data and analytics experts to strive for greater functionality in our data systems
  • Proactively identify potential production issues and recommend and implement solutions
  • Must be able to write quality code and build secure, highly available systems
  • Create design documents that describe functionality, capacity, architecture, and process
  • Review peers' code and pipelines before deploying to production, checking for optimization issues and adherence to code standards

 

Skill Sets:

 

  • Should have over 5 years of experience in data engineering, of which at least 4 years are relevant
  • A good understanding of end-to-end Business Intelligence and data warehousing architecture
  • Hands-on experience with complex data warehouse implementations using Azure SQL Database, Azure Data Factory, and Azure Data Lake / Blob Storage
  • Experience developing data pipelines to transform, aggregate, or process data using Azure Databricks
  • Experience with SQL Server
  • Expert in Python/PySpark/Scala programming for data engineering / ETL purposes
  • Experience integrating data from multiple data sources
  • Knowledge of build/release management in Azure DevOps
  • Knowledge of optimization techniques (performance, scalability, monitoring, etc.)
  • Experience developing pipelines using Azure Data Factory
  • Experience creating DAGs for data engineering
  • Beginner-level knowledge of, or willingness to learn, Spotfire
  • Familiarity with CI/CD or Agile
  • Strong English skills; able to communicate verbally and in writing at a collegiate level
  • Polished; able to conduct themselves professionally and articulate value and purpose to business leaders
  • Able to manage their own time; self-motivated and driven to meet or exceed targets
  • Socially aware; committed to diversity and inclusion


 

Company Information