Required Skills

Hadoop, Spark, MongoDB

Work Authorization

  • US Citizen

  • Green Card

  • EAD (OPT/CPT/GC/H4)

  • H1B Work Permit

Preferred Employment

  • Corp-Corp

Employment Type

  • Consulting/Contract

Education Qualification

  • UG: Not Required

  • PG: Not Required

Other Information

  • No. of positions: 1

  • Posted: 17th Sep 2022

JOB DETAIL

Data Engineer (MUST BE SENIOR)
Location: Austin, TX – two days onsite per week. Local to Austin or nearby only.

MOI: Phone or Video



Send resume and GC or passport copy (personal details may be covered, but the name and date must be visible). If the copy is dated later than 2019, also send the older copy of whatever document they previously held. The older one does not need to be a GC, only the current one; the older document shows they had a work visa during the jobs on the resume, while the newer one shows they still have one.
Send proof of TX address (e.g., a driver's license or utility bill).

•We are looking for someone who is passionate about data, thrives in an evolving environment, brings an enthusiastic and collaborative attitude, and delights in making a difference
•As a successful candidate, you must have a deep background in data engineering
•You are a self-starter, embody a growth mindset, and can collaborate effectively across multiple business and IT teams
•Graduation from an accredited four-year college or university with major coursework in computer information systems, computer science, data management, information systems, information science, mathematics, or a related field
•A high school diploma or equivalent plus additional directly related experience may substitute for the required education on a year-for-year basis

•Three (3) to six (6) years of experience in:
•Data-related fields of work: data (ETL) pipeline design and build, data management platform management and support, data mining, data analytics, data warehousing (including dimensional modeling), ETL tools, and cloud platforms
•Data pipeline builds and implementation
•Utilizing ETL tools or creating ETL scripts

•Data warehouse solution implementations (design, implementation, and ongoing support/maintenance), including interaction with business users
•Working closely with business analytics teams and data analysts
•Working with various development and deployment environments on-premises, in the cloud, and across multi-cloud platforms
•Required Registration, Certification, or Licensure
•Data management disciplines: data architecture, data warehousing, data integration and interoperability, data modeling (including dimensional), and data storage and operations
•Process management and metrics management
•Emerging data and analytics technologies (e.g., Hadoop, Spark, MongoDB, Azure, Data Lake, AWS, MapReduce, etc.)

•Highly complex problem solving and critical thinking, and operating computers and applicable computer software
•Planning, organizing, and coordinating work assignments to effectively meet frequent and/or multiple deadlines, handling multiple tasks simultaneously, and managing conflicting priorities and demands
•Project management and system development life cycle concepts
•Client/user interaction to determine system requirements
•Strong written and oral communication skills
•Strong presentation and interpersonal skills
•Strong technical zeal with a passion for solving complex problems
•Present ideas in user-friendly language
•Establish and maintain harmonious working relationships with co-workers, agency staff, and external contacts
•Work effectively in a professional team environment
•Work in an Agile development environment
•Work in a cross-functional team environment, including utilization of business and technical resources
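To give a concrete (and entirely illustrative) picture of the ETL scripting experience listed above, here is a minimal extract-transform-load sketch in plain Python; the table, columns, and data are invented for the example and are not part of this posting:

```python
import csv, io, sqlite3

# Extract: read raw records (an in-memory CSV standing in for a source file)
raw = io.StringIO("id,amount\n1,10.5\n2,not_a_number\n3,7.25\n")
rows = list(csv.DictReader(raw))

# Transform: coerce types and drop records that fail validation
def clean(row):
    try:
        return (int(row["id"]), float(row["amount"]))
    except ValueError:
        return None  # reject malformed records

records = [r for r in (clean(row) for row in rows) if r is not None]

# Load: write the cleaned records into a target table
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount REAL)")
db.executemany("INSERT INTO sales VALUES (?, ?)", records)
total = db.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(total)  # 17.75
```

A production pipeline would use an orchestration or ETL tool rather than a one-off script, but the extract, transform, and load stages follow the same shape.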

Responsibilities:
•You will work with highly experienced engineers to design and build data integrations and ETL orchestrations, and help develop the evolving analytics-focused data stores
•You will partner with business users, analysts, engineers, and product managers in an Agile environment to design, build, support, and evolve products and services
•May supervise the work of others
•Works under minimal supervision with extensive latitude for the use of initiative and independent judgement
•Engineering and automation of data management platforms to support data pipeline management and efficient flow of data
•Establishes and documents standards for the metadata management framework, data catalog framework, and data pipeline control
•Leads review of pipeline code and metadata framework changes
•Responsible for the data pipeline continuous integration and continuous delivery (CI/CD) processes
•Manages data pipeline jobs throughout their lifecycle
•Designs, builds, manages, and operationalizes data (ETL) pipelines that extract data from distributed sources and load it into a data warehouse, data store, or other system to support data and analytics use cases
•Leads the curation of datasets and data pipelines created by nontechnical users, data scientists, or IT resources, and operationalizes data delivery for production-level deployments
•Designs, builds, and provides ongoing operational support of TRS enterprise data warehouses and other data stores, continued development and enhancement of the enterprise data warehouses/data stores, and automation of daily data extracts and external system feeds
•Creates logical and physical data models (including dimensional patterns) using a modern ER diagramming tool, such as erwin or IDERA ER/Studio
•Uses innovative and modern tools, techniques, and architectures to automate data preparation and integration tasks in order to minimize manual processes and improve productivity
•Ensures data warehouse/data store implementations meet business expectations
•Ensures compliance with data governance and data security requirements
•Ensures that controls to verify the accuracy and consistency of data are implemented and monitored
•Acts as the IT knowledge leader on architecting distributed systems, data platforms, data pipelines, and data stores
•Ensures the customer can exploit the data warehouse solutions and helps identify additional possible uses of information; anticipates future needs and opportunities
•Assists in the identification and integration of potential new data sources
•Supports the task of deploying analytics and data science outputs into existing business processes and applications
•Supports the development of new dashboards and the enhancement of current ones
•Performs related work as assigned
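The dimensional modeling and data warehouse responsibilities above can be pictured with a toy star schema, sketched here using Python's built-in sqlite3; all table and column names are hypothetical and chosen only for illustration:

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Dimension table: descriptive attributes
db.execute("CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT)")
# Fact table: measures plus a foreign key to the dimension
db.execute("""CREATE TABLE fact_sales (
    product_key INTEGER REFERENCES dim_product(product_key),
    qty INTEGER)""")
db.executemany("INSERT INTO dim_product VALUES (?, ?)", [(1, "widget"), (2, "gadget")])
db.executemany("INSERT INTO fact_sales VALUES (?, ?)", [(1, 3), (1, 2), (2, 4)])

# A typical analytic query joins the fact table to its dimension and aggregates
result = db.execute("""
    SELECT p.name, SUM(f.qty)
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.name ORDER BY p.name
""").fetchall()
print(result)  # [('gadget', 4), ('widget', 5)]
```

The same fact/dimension split underlies real dimensional models; enterprise warehouses simply have many more dimensions and conformed keys.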
Web services (REST, SOAP, XML, WSDL, JSON)
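Of the web-service formats named above, JSON handling can be sketched with the standard library alone; the payload below is made up for illustration and no live REST endpoint is called:

```python
import json

# A response body such as a REST API might return (hypothetical payload)
payload = '{"employee": {"name": "Ada", "skills": ["Hadoop", "Spark", "MongoDB"]}}'

data = json.loads(payload)        # parse JSON text into Python objects
skills = data["employee"]["skills"]
print(", ".join(skills))          # Hadoop, Spark, MongoDB

# Serializing back to JSON, e.g. for a request body
body = json.dumps({"query": skills[0]})
print(body)  # {"query": "Hadoop"}
```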

Company Information