DataOps Engineer (Open to Remote)

Posted 3 days ago · Location: United States · Salary undisclosed

Job Description


Olive's AI workforce is built to fix our broken healthcare system by addressing healthcare's most burdensome issues -- delivering hospitals and health systems increased revenue, reduced costs, and increased capacity. People feel lost in the system today and healthcare employees are essentially working in the dark due to outdated technology that creates a lack of shared knowledge and siloed data. Olive is designed to drive connections, shining a new light on the broken healthcare processes that stand between providers and patient care. She uses AI to reveal life-changing insights that make healthcare more efficient, affordable and effective. Olive's vision is to unleash a trillion dollars of hidden potential within healthcare by connecting its disconnected systems. Olive is improving healthcare operations today, so everyone can benefit from a healthier industry tomorrow.

Our Infrastructure team is looking to add a DataOps Engineer to continue advancing the cloud capabilities, services, and systems for our internal engineering teams. As part of our engineering team, you'll be responsible for ensuring Olive's data runs smoothly through our architecture. You'll help keep our data infrastructure up to date, and use new and existing tools to solve technical problems. At Olive, automation, reliability, and efficiency are part of everything we do.

  • Support customers, engineering efforts, and internal departments in a service-oriented architecture (SOA) environment.
  • Architect and build high-scale infrastructure for rapidly growing web applications.
  • Foster proactive and cooperative relationships within the project team.
  • Exercise independent judgment in selecting methods and techniques for obtaining solutions.
  • Build specialized data-layer services for data-intensive parts of the system.
  • Analyze applications and make the necessary changes to optimize performance.
  • Diagnose and resolve issues promptly and in accordance with maintainability goals.
  • Work with a variety of technical and non-technical people.
  • Embrace changing requirements.
  • Create and maintain efficient, reliable infrastructure as code.
  • Drive automation using popular cloud orchestration, configuration management, and CI/CD systems.
  • Design and implement:
    • Solutions to support data lake and data warehousing
    • Data quality check frameworks
    • Alerting and monitoring for overall data stack
    • Scalable data pipelines
  • Preferred programming and scripting languages include SQL, Python, Java, and Bash.

  • 4+ years of Data Engineering experience
  • A strong understanding of operating systems, networking, and software engineering fundamentals
  • Experience using AWS or other virtualized infrastructure
  • Experience managing a container-based microservice architecture, including orchestration, service-discovery, monitoring, and debugging
  • Proficient in a scripting language (Bash, Python, Ruby, Perl, PowerShell, etc.)
  • Experience orchestrating infrastructure using CloudFormation, Terraform, or similar tooling
  • Experience building Linux and Windows systems (AWS Linux 2, Ubuntu, CentOS, ContainerLinux, etc.)
  • Strong experience with SQL and NoSQL databases (MySQL, PostgreSQL, Oracle, SQL Server, MongoDB)
  • Data warehousing and data engineering experience
  • Experience with data lakes (e.g., AWS Lake Formation, Snowflake)
  • Experience with big data solutions such as Spark and Hadoop
  • Experience deploying or managing infrastructure across AWS Availability Zones and regions