Role Overview
ID.me is looking for a Database Reliability Engineer V to join our team. This position is critical to maintaining the robustness of our data systems. You'll apply your skills in both database engineering and DevOps to build and maintain our databases, working closely with application developers, data engineers, and platform engineers to create database infrastructure that is both efficient and dependable. This is a senior individual contributor role with opportunities to lead and mentor within the Engineering department.
Responsibilities
- Lead efforts to maintain the reliability and stability of our database systems, ensuring high availability and performance.
- Develop and maintain scripts and tools that automate database provisioning, monitoring, and maintenance tasks.
- Continuously monitor and optimize database performance, including query optimization, index performance, and resource utilization.
- Develop and implement database engineering best practices, including testing, documentation, and code reviews.
- Explore and evaluate new technologies and tools to improve the efficiency and effectiveness of the engineering team.
- Mentor and guide engineers, helping them develop their skills and grow within the organization; ultimately, help developers write better queries and implement simpler database designs.
- Implement and enforce data security best practices, access controls, and encryption mechanisms to protect sensitive data in transit and at rest.
Ideal Qualifications
- 7+ years of professional experience in database administration, with a focus on PostgreSQL or similar database technologies, including in-depth knowledge of database security best practices and hands-on experience with security implementations.
- Strong expertise in cloud-based database solutions such as Amazon RDS, Azure SQL, or Google Cloud SQL, and extensive experience with cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
- Proven experience in database performance tuning and optimization for OLTP and OLAP workloads.
- Knowledge of engineering practices, ORMs, caching, data domain design, and the CAP theorem.
- 5+ years of experience in data engineering, software engineering, or a related field.
- 1+ years of experience with data warehousing technologies (Snowflake, BigQuery, or Redshift).
- Strong knowledge of data storage technologies, including SQL and NoSQL, and data processing platforms such as Hadoop and Spark.
- Expertise in data streaming technologies such as Apache Kafka or Google Cloud Pub/Sub.
- Experience working in a CI/CD framework and familiarity with data architecture and design.
- Superb time management, task prioritization, and the ability to meet deadlines with little supervision.
- Bachelor's degree in a technical field (Computer Science, Engineering, Math, Physics, Information Technology, or a related discipline). A master's degree is a plus.
The ideal candidate will thrive in the following culture:
- Must have an obsession with building quality products
- Flexibility when priorities change and gears shift
- Strong oral and written communication skills
- Must be a team player with a strong, self-managing work ethic
- Must be a self-starter with a passion for software engineering, learning, and continuous improvement
Note that candidates must be located in the continental U.S.
#LI-JS
#LI-REMOTE