UG :- Not Required
PG :- Not Required
No. of positions :- 1
Posted :- 28th Jul 2022
Minimum 10 years of hands-on development experience using parallel processing databases like Teradata.
Mandatory: 3 years' experience in cloud technologies like AWS; preferably BigQuery and Dataproc on Google Cloud Platform, using GCP-native ETL solutions or building custom ETL/ELT solutions in Python.
Preferably expert-level hands-on development experience using Informatica IICS.
Experience in data streaming technologies like Kafka.
Experience with all aspects of data systems, including database design, ETL, aggregation strategy, and performance optimization.
Experience setting best practices for building and designing ETL code, and strong SQL experience to develop, tune, and debug complex SQL applications.
Expertise in schema design and developing data models, and a proven ability to work with complex data.
Hands-on experience with a programming language such as Java, Python, or Spark.
Hands-on experience with Linux and shell scripting.
Hands-on experience with CI/CD tools like Bamboo, Jenkins, Bitbucket, etc.
Do you want to be part of an enterprise data solutions team managing over 4 petabytes of data and building the next-generation analytics platform for a leading financial firm with over a trillion in assets under management? The Data & Rep Technology (DaRT) organization owns the strategy, implementation, delivery, and support of the enterprise data warehouse and emerging data platforms.
We are looking for someone who has a passion for data and comes with a data engineering background. Someone who has experience designing and coding batch and real-time ETL (and ELT) and wants to be part of the Dev Engineering team that is actively designing and implementing the Enterprise Data solution frameworks. Someone who wants to be challenged every day, has a passion for keeping up to date on new technologies in the data engineering space, sets new standards for hundreds of ETL developers, and collaborates with team members along the way.
What you’ll do:
You will be a Sr. Data Engineer in a horizontal Dev Engineering team that includes onshore and offshore developers using best-in-class Google Cloud, Big Data, and relational data warehouse technologies, including Informatica, BigQuery, IICS, Talend, Teradata, Python, etc.
You will be prototyping data solutions to enable faster access to data for analytics use case developers.
You will be developing re-usable data solution patterns to enable quick-to-market data assets.
You will be analyzing & profiling business data on relational and Big Data environments.
You'll have the opportunity to grow in responsibility, work on exciting and challenging projects, train on emerging technologies, and help set the future of the Data Solutions Delivery teams.
Manage day-to-day re-usable framework development activities for new data solutions, and troubleshoot existing solutions.
Partner with product owners and directors to lead technical discussions and resolve technical issues
Apply best practices of data integration for data quality and automation
Partner with the product vendors to identify and manage open product issues
Solve complex data integration problems
Work with project development teams and technology partners to develop high-level designs and cost and effort estimates for new framework development efforts.
Architect, design, and develop solutions and provide supporting documentation.
Develop and maintain code for data ingestion and curation using Informatica IICS, Talend, Spark, Kafka, etc.