This is a 100% Remote position.
Our client is a well-funded major player in the consumer e-commerce space, based in Austin, TX. The company recently raised funding at a $1 billion valuation and is on a hyper-growth trajectory.
How many people can say they set up a data stack from scratch? You will have that opportunity here, including choosing which ETL tool the team adopts. You will lead the transformation into a data-driven organization and enable effective analysis of data across the business. At the moment, an overwhelming volume of data requests is coming from the business, so if you have a passion for data and AWS/cloud technologies, and a drive to design an effective data pipeline, look no further.
What You'll Do:
- Lead the design and build of our new data architecture and platform
- Build and maintain ETL pipelines that are reliable and scalable
- Ensure that our data infrastructure and architecture support the evolving requirements of the business.
- Work closely with business stakeholders, Data Analytics, and application engineers to develop a strategy for our long-term Data Platform architecture.
- Identify gaps in data processes, and drive improvements while mentoring and coaching other team members.
- Explore and evaluate new technologies and make recommendations where necessary
- Develop, test, and maintain existing architecture
- Recommend ways to improve the reliability, efficiency, and data quality of the data platform, and optimize it for performance, scalability, and cost
- Work with ELT tools to sync data to/from 3rd party services
- Collaborate with the Data Analytics team to build the correct datasets for further consumption by various visualization tools
- Design data models that support business needs
What You'll Bring:
- 5+ years of experience with SQL, data warehouse development, and ETL
- Hands-on experience with a cloud-based data warehouse (AWS preferred), e.g. Snowflake or Redshift
- Expertise in PySpark and Pandas
- Experience with standard warehousing concepts like Data Marts and Dimensional Modeling
- Experience with at least one data modeling tool
- Programming experience (Python, Shell, or similar) and a demonstrated interest in statistical analysis and business intelligence
Nice to Have:
- Hands-on experience managing and performance-tuning PostgreSQL
- Experience with ETL tools like Stitch, Fivetran, Pentaho, etc.
- Experience with data warehouse schema design and architecture
- Experience with Big Data solutions such as Snowflake or Redshift
- Experience managing Amazon RDS
- Experience with Data Science Notebooks
- Experience with NoSQL databases
We are unable to provide H1B sponsorship at this time.
- provided by Dice