What You'll Do
- Build scalable data infrastructure: Design data models, ETL processes, and data integrations to ensure a scalable and robust data infrastructure; build new data pipelines and support existing ones.
- Ensure data quality: Collaborate with data analysts and software engineers to ensure data quality, accuracy, and consistency
- Improve data performance: Monitor and troubleshoot the performance of the data infrastructure and make recommendations for improvements
What You'll Love
- Mission: This work is incredibly humbling. Every day we hear amazing stories and we get the pleasure of working on something that’s impacting lives. One of our favorite user quotes: “I hate quoting an old, overused cliche, but I’ve been very lost. And I think for the first time in a while I may be found.”
- Growth: You’ll get to journey with a VC-backed Silicon Valley startup from the beginning. When the company is this small, every employee is a core team member who is expected to help build the company as a whole as we settle into a culture and continue to expand the team.
- Flexibility: HQ will be in Chicago, and we'd love to have the full team there. That said, we're super flexible with location and hours; we don't care when or where you work, just that the work gets done.
- Comp: We pay competitive market rates in equity, cash, and benefits.
What We're Looking For
- Passion: We’re looking for someone excited about our mission and passionate about data engineering.
- Experience: A foundation of 3+ years of experience in data engineering; strong knowledge of SQL and object-oriented programming in Python or another programming language; experience with Snowflake or Redshift, Airflow, and dbt strongly preferred
- Grit & Attention to Detail: A start-up is tough, and we really care about what we’re doing. You'll be jumping into a brand-new role on a brand-new team, so perseverance and attention to detail are important.