Data Ops Senior Data Engineer
2024-11-14
USA
Fetch Rewards
What we’re building and why we’re building it.
Every month, millions of people use America’s Rewards App, earning rewards for buying brands they love – and a whole lot more. Whether shopping in the grocery aisle, grabbing a bite at the drive-through or playing a favorite mobile game, Fetch empowers consumers to live rewarded throughout their day. To date, we’ve delivered more than $1 billion in rewards and earned more than 5 million five-star reviews from happy users.
It’s not just our users who believe in Fetch: with investments from SoftBank, Univision, and Hamilton Lane, and partnerships ranging from challenger brands to Fortune 500 companies, Fetch is reshaping how brands and consumers connect in the marketplace. When you work at Fetch, you play a vital role in a platform that drives brand loyalty and creates lifelong consumers with the power of Fetch points. User and partner success are at the heart of everything we do, and we extend that same commitment to our employees.
Ranked as one of America’s Best Startup Employers by Forbes for two years in a row, Fetch fosters a people-first culture rooted in trust, accountability, and innovation. We encourage our employees to challenge ideas, think bigger, and always bring the fun to Fetch.
Fetch is an equal employment opportunity employer.
The Role:
Fetch’s next step in evolving our business requires a DataOps Senior Data Engineer to join the data team and play a pivotal role in designing and building scalable, efficient data pipelines and data transformation systems that process terabytes of data each day to support Fetch’s business. The ideal candidate will drive initiatives to create a robust data governance structure, collaborating with cross-functional teams to ensure that data is governed efficiently, securely, and in compliance with regulatory standards. Data engineers drive and take ownership of projects that enable all stakeholders to access and use vast amounts of data, working closely with other engineers and teams across the organization. With the goal of world-class availability for terabytes of daily data, DataOps and data engineering are critical to Fetch’s success.
Scope of Responsibilities:
Design and implement both real-time and batch data processing pipelines, leveraging technologies like Apache Kafka, Apache Flink, or managed cloud streaming services to ensure scalability and resilience.
Create data pipelines that efficiently process terabytes of data daily, leveraging data lakes and data warehouses within the AWS cloud. Must be proficient with technologies like Apache Spark to handle large-scale data processing.
Implement robust schema management practices and lay the groundwork for future data contracts. Ensure pipeline integrity by establishing and enforcing data quality checks, improving overall data reliability and consistency.
Design, implement, and maintain data governance frameworks and best practices to ensure data quality, security, compliance, and accessibility across the organization.
Develop tools to support the rapid development of data products and establish recommended patterns for data pipeline deployments. Mentor and guide junior engineers, fostering their growth in best practices and efficient development processes.
Collaborate with the DevOps team to integrate data needs into DevOps tooling.
Champion DataOps practices within the organization, promoting a culture of collaboration, automation, and continuous improvement in data engineering processes.
Stay abreast of emerging technologies, tools, and trends in data processing and analytics, and evaluate their potential impact and relevance to Fetch’s strategy.
The ideal candidate:
A self-starter who can take a project from architecture to adoption.
Experience with Infrastructure as Code tools such as Terraform or CloudFormation. Ability to automate the deployment and management of data infrastructure.
Familiarity with Continuous Integration and Continuous Deployment (CI/CD) processes. Experience setting up and maintaining CI/CD pipelines for data applications.
Proficiency in the software development lifecycle: release fast and improve incrementally.
Experience with tools and frameworks for ensuring data quality, such as data validation, anomaly detection, and monitoring. Ability to design systems to track and enforce data quality standards.
Proven experience in designing, building, and maintaining scalable data pipelines capable of processing terabytes of data daily using modern data processing frameworks (e.g., Apache Spark, Apache Kafka, Flink, Open Table Formats, modern OLAP databases).
Strong foundation in data architecture principles and the ability to evaluate emerging technologies.
Proficient in at least one modern programming language (Go, Python, Java, Rust) and SQL.
Comfortable presenting and challenging technical decisions in a peer-review environment.
Undergraduate or graduate degree in a relevant field such as Computer Science, Data Science, or Business Analytics.
At Fetch, we'll give you the tools to feel healthy, happy, and secure through:
Equity for everyone
401(k) Match: Dollar-for-dollar match up to 4%.
Benefits for humans and pets: We offer comprehensive medical, dental, and vision plans for everyone, including your pets.
Continuing Education: Fetch provides $10,000 per year in education reimbursement.
Employee Resource Groups: Take part in employee-led groups that are centered around fostering a diverse and inclusive workplace through events, dialogue and advocacy. The ERGs participate in our Inclusion Council with members of executive leadership.
Paid Time Off: On top of our flexible PTO, Fetch observes 9 paid holidays, including Juneteenth and Indigenous People’s Day, as well as our year-end week-long break.
Robust Leave Policies: 20 weeks of paid parental leave for primary caregivers, 14 weeks for secondary caregivers, and a flexible return-to-work schedule. $2,000 baby bonus.