Data Architect I - REMOTE

Posted 9 days ago · United States · Salary undisclosed

Job Description

Core Group Resources is America's leading recruitment company, founded by a service academy graduate with offshore experience. Its expertise is unmatched in the marine offshore market and in the finance, IT, renewables, and non-profit sectors for executive search, staffing, and expertise identification. For more information, contact us. We are currently in the market for the following:





Job Summary:



This position is an individual-contributor role on a global Data Architecture and Modeling team pursuing a vision of analytics-driven mining. Your expertise in data architecture, data modeling, and ETL will enable and empower the organization to maintain a strong and trusted enterprise data warehouse. You will also collaborate closely with subject matter experts (SMEs), data engineers, data scientists, business intelligence analysts, and software engineers to develop advanced, highly automated data products.









Responsibilities:

- Develop data requirements through data modeling techniques and structured working sessions; express requirements as third-normal-form (3NF) logical data models by reviewing source-system documentation and system features and by leading workshop sessions

- Create physical database designs for the Snowflake data warehouse and other database technologies in partnership with other technical resources

- Develop data-lineage documentation and data dictionaries to create broad awareness of the enterprise data model and its applications

- Actively apply DataOps best practices (version control, PR-based development, schema change control, CI/CD, deployment automation, test automation, shift-left security, loosely coupled architectures, monitoring, proactive notifications)

- Develop real-time and bulk data pipelines from a variety of sources (streaming data, APIs, data warehouses, messages, images, video, etc.)

- Partner with key business SMEs to build and manage the workgroup database view library by building relevant data shapes in SQL

- Use modern cloud technologies, follow established design patterns, and apply DevOps/DataOps best practices to produce enterprise-quality production Python and SQL code with minimal errors

- Participate in regular code-review sessions, collaboratively discuss opportunities for continuous improvement, and diligently implement feedback in all solutions

- Seek out new work and training opportunities to broaden experience

- Independently research the latest technologies and openly discuss their applications within the department

Requirements:

- Bachelor's degree in Engineering, Computer Science, or an analytical field (Statistics, Mathematics, etc.) and 3 years of relevant work experience, OR

- Master's or Ph.D. in Engineering, Computer Science, or an analytical field (Statistics, Mathematics, etc.) and 1 year of relevant work experience

- Knowledge of data modeling using IDEF1X or similar methodologies

- Knowledge of data modeling tools such as ER/Studio or ERwin

- Proficiency in SQL development

- Experience leading joint design sessions and working in groups

Preferred:

- Proficiency in Python development

- Working knowledge of Agile, Scrum, and Kanban

- Working knowledge of parallel processing environments such as Snowflake or Spark SQL

- Working knowledge of software engineering and object-oriented programming principles