Lead Data Engineer 100% Remote

Posted 6 days ago United States Salary undisclosed

Job Description

Tanu Infotech employs diverse IT professionals with strong technology skills and business knowledge. Our mature methodologies and cost-efficient delivery model enable us to effectively handle software projects of any scale and complexity.

At Tanu Infotech we believe in delivering high-quality, reliable solutions to the complex business problems of clients globally through our innovative and highly professional methodologies. With our continued dedication, we aim to become a one-stop shop for all of our customers' IT needs. Tanu Infotech Software Solutions is one of the leading offshore software development service providers, offering an array of IT-related services to clients across the globe. With sound domain knowledge, we aim to deliver value to our customers through innovative software solutions and services.

We are looking for a Lead Data Engineer based in Denver, CO (100% Remote).

Job Summary

As a Data Engineer at the Analytics Centre of Excellence, you will take a very hands-on role and be responsible for building and maintaining a scalable and robust data platform, including the development of complex data processing pipelines, ETL, and data integration. You will work closely with our architect team to design and develop the Enterprise Data Platform, and you will advocate and instill best engineering practices in its development. The Data Engineer is responsible for developing big data analytics transformation flows on a large-scale service analytics platform for various in-house customers. Responsibilities include delivering high-quality, use-case-ready data pipelines for use and deployment by customers, meeting the business requirements and aligning with the solution vision and strategy. The Data Engineer will have a strong influence on the technical design and is expected to have a deep understanding of the business and solution development context.

Key Responsibilities

Create and maintain optimal data pipeline architecture for both stream processing and batch processing.
Assemble large, complex data sets that meet functional / non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Work with stakeholders including the Business, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Work with data and analytics experts to strive for greater functionality in our data systems.

Required Knowledge/Skills/Abilities:

Must Haves

Data Engineer with 5 to 10 years of experience in batch and stream processing
Experience in Apache Spark and Apache Flink (Highly Desirable)
Experience in working with Apache Kafka, Kafka Connect
Very strong analytical SQL skills
Experience developing large scale and high-volume data pipelines is a must
Experience in Java/Scala and/or Python
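To illustrate the kind of stream-processing fundamentals the must-haves above refer to, here is a toy, pure-Python sketch of a tumbling-window aggregation (the core idea behind windowed counts in engines like Apache Flink or Spark Structured Streaming). All names and the event format are hypothetical; real engines add watermarks, state backends, and fault tolerance on top of this.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Count events per key per fixed (tumbling) time window.

    `events` is an iterable of (epoch_seconds, key) tuples.
    Each event is assigned to the window that starts at the
    largest multiple of `window_seconds` not exceeding its timestamp.
    """
    windows = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # align to window boundary
        windows[(window_start, key)] += 1
    return dict(windows)

events = [(0, "a"), (30, "a"), (61, "a"), (75, "b")]
print(tumbling_window_counts(events))
# {(0, 'a'): 2, (60, 'a'): 1, (60, 'b'): 1}
```

The same grouping logic carries over directly to Flink's `TumblingEventTimeWindows` or Spark's `window()` function, which is why interviews for roles like this often start from exactly this kind of exercise.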

Highly Desirable

Experience with ingestion, rollups, and real-time analytics using Apache Druid
Experience in Stateful Stream Processing in Apache Flink
Experience with the AWS data analytics stack: AWS EMR, AWS Lake Formation, AWS S3, AWS Glue
Experience in Apache Airflow
Experience in Data Quality