Location: Remote, North America

About Our Team:
The KAR Global Data-as-a-Service (DaaS) team is looking to expand our data team as we continue to grow our data platform in support of our mission of digital transformation in automotive wholesale markets. The data engineering team is responsible for the ingestion and persistence of data behind an array of data products supporting KAR Global's automotive wholesale business. As part of a small, passionate, and accomplished team of experts, you will work across the full spectrum of Master Data Management (MDM). This is a high-impact, high-visibility team responsible for ensuring all master data is accurate, complete, and consistent across the entire enterprise. The team supports current new-product development projects and provides ongoing guidance throughout the master data lifecycle.

About Our Candidate:
Our ideal candidate is a self-starter who is interested in learning new systems and environments and is passionate about developing quality, supportable data service solutions for internal and external customers. We highly value natural curiosity about data and technology that drives results through quality, repeatable, and long-term sustainable database and code development. The candidate should be highly adaptable and excited by opportunities to learn many different products and data domains and how they drive business outcomes and value for our customers.

What You Will Be Doing:
Members of the data engineering team participate daily in Agile sprint ceremonies to help design, plan, build, test, develop, and support KAR Global MDM data products and platforms, consisting of Python ETL pipelines, Informatica applications, and Postgres, Redshift, DynamoDB, Oracle, and Snowflake databases.
Our team works in a shared-services delivery model supporting seven lines of business, including front-end customer-facing products, B2B portals, mobile applications, business analytics, and data science initiatives.

Responsibilities include:
- Ensure master data integrity, quality, compliance, and consistency across systems and shared folders (public shared drives and/or SharePoint)
- Guide client business and technology teams to envision, design, and develop strategies for Master Data Management implementation, governance, rollouts, etc.
- Actively manage master data governance activities such as daily maintenance requests and data clean-up efforts
- Coordinate master data setup, validation, and maintenance in accordance with business practices
- Ensure metadata is consistently defined across all work streams, with no redundancy and with clarity for use
- Develop strategies and initiatives to improve overall data quality, improve processing times, and anticipate future needs
- Work with product, data science, analytics, and engineering teams to learn project data needs and define project scope
- Drive data lifecycle management and monitor related master data usage and activities
- Collaborate with source-system data stewards, data owners, and technical personnel on data governance, and resolve any data quality or technical issues related to data ingestion

What You Need to Be Successful:
- Bachelor's degree in Business, Computer Science, Management Information Systems, Data Engineering, or related experience is a must
- Can distinguish between Master Data Management, Data Quality, Data Governance, and Data Integration, and the needs for each in a mature data environment
- Experience with Informatica MDM Hub configurations (data modeling and data mappings for landing, staging, and base objects; data validation; match and merge rules; ActiveVOS; the SIF framework; and MDM user exits) is a plus
- Proven success with Java development; keeps up to date with the latest tools and technologies
- Must have the ability to build or interpret detailed project specifications to develop program logic and code
- Working knowledge of Informatica in an AWS Cloud environment is a plus
- Experience using GitHub / Jenkins (CI/CD) or comparable delivery stacks (required)
- Experience planning and designing maintainable data schemas (required)
- Experience with AWS Redshift, MPP databases, or DynamoDB (preferred)
- Experience with Kinesis/Kafka (preferred)
- Experience working with large enterprise data lakes (preferred)