Reward Redemption Friction Discovery
Co-led a cross-department discovery effort to identify areas of friction in the reward redemption process.
Responsibilities
Product Discovery, Workshop Design & Facilitation, Quantitative Analytics, Qualitative Analytics, Opportunity Mapping, A/B Test Design
Team
Product Manager
Membership Program Leaders
Voice of Customer Program Manager
Analysts
Product managers and designers from related product areas (online checkout, in-store checkout, and customer accounts teams)
Background
REI Co-op Members have a whole suite of benefits, but the one that is consistently highest in awareness and usage is the Co-op Member Reward. Each year, members get a reward that is typically 10% of their purchases from the previous year. It’s an important driver of member satisfaction, demand, and loyalty – why buy the same product from another retailer when, if I buy from REI, I’ll get 10% back?
Since 2019, though, reward redemption metrics had been slipping. Setting aside 2020 and 2021 – deemed outliers due to pandemic factors – redemption was down in 2022 compared with 2019. But why?
My product manager and I, representing the Member Engagement product team, kicked off a discovery effort to diagnose potential root causes, and propose and test solutions. Our goal was to increase member reward redemption, with the hypothesis that this would decrease member churn over time.
Identifying and mapping pain points
Step 1: Gathering inputs
To start, my product manager Maggie identified a team of collaborators and stakeholders: Membership Program leaders, Voice of Customer Program Manager, analysts, as well as product managers and designers from related product areas (online checkout, in-store checkout, and customer accounts teams). Maggie created a spreadsheet for all of these team members to dump any context they had on reward redemption – more metrics, qualitative customer tickets, known friction points, and more.
Step 2: Identifying themes
Next, I took this list of raw inputs, and ported them into FigJam as individual cards. This FigJam became our living, breathing whiteboard.
I planned and led a workshop with all of the collaborators to group the cards based on themes. Doing this collaboratively allowed us to dig into the pain points to get further context from the submitters and other team members, and surface how different pain points might be related.
The themes the group identified were:
Lack of Awareness: Member doesn’t know they have rewards, or they don’t know how rewards work (earning/redeeming).
Account Friction: Member is unable to view their rewards online when their membership is not linked, unintentionally unlinked due to a name or address change, or is linked to multiple accounts.
Checkout Friction: Member experiences friction in checkout when trying to use a gift or bonus card along with rewards.
Returns/Cancellation Friction: Member experiences friction when a reward purchase is returned or cancelled – reward is not returned according to member’s expectations.
Step 3: Beginning the opportunity map
Using these themes and the added context, I arranged the pain points into a connected map. Some pain points were siblings; some had a parent-child relationship. And, at the top of the map, I broke the pain points into:
Things that occur before the moment of redemption, like understanding how rewards are earned
Things that occur during the moment of redemption, like understanding how to redeem, and encountering friction with your account or in checkout
Things that occur after the moment of redemption, like encountering friction while processing a return or cancellation
In the map below, I have high-level friction groups (or themes of themes), friction themes, and the sources of friction (or pain points). Where we had quantitative data available, I annotated the pain point with a blue note to add context and inform prioritization and sizing.
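For the structurally minded, the map is essentially a shallow tree: friction groups contain themes, themes contain pain points, and pain points carry optional quantitative annotations. Below is a minimal sketch of that shape in Python; the names and notes are purely illustrative, not actual REI data or findings.

```python
from dataclasses import dataclass, field

@dataclass
class PainPoint:
    name: str
    quant_note: str | None = None  # the blue-note annotation, when data existed

@dataclass
class Theme:
    name: str
    pain_points: list[PainPoint] = field(default_factory=list)

@dataclass
class FrictionGroup:  # a "theme of themes": before / during / after redemption
    name: str
    themes: list[Theme] = field(default_factory=list)

# Illustrative slice of the map -- structure only, not actual findings
opportunity_map = [
    FrictionGroup("During redemption", themes=[
        Theme("Lack of Awareness", pain_points=[
            PainPoint("Member doesn't know they have rewards",
                      quant_note="hypothetical: support ticket volume"),
            PainPoint("Member doesn't know how to redeem"),
        ]),
        Theme("Checkout Friction", pain_points=[
            PainPoint("Friction combining a gift card with a reward"),
        ]),
    ]),
]
```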
While we acknowledged that all of these elements together create an ecosystem of confidence in the reward benefit, we decided to start by focusing our efforts on the second area – during redemption – since it seemed like the lowest-hanging fruit and the closest to the action we wanted to increase.
Additionally, some problems were already being addressed on other teams, and could be deprioritized in our effort. For example, the area of account friction was already a priority initiative for the Customer Accounts team.
I represented pain points that were either outside of our area of focus, or already being addressed by other teams, with unfilled cards. This allowed us to maintain visibility of the whole problem while focusing on the untapped areas of highest potential impact. Those themes were:
Member doesn’t know they have available rewards
Member doesn’t know how to redeem their rewards
Member doesn’t know when their rewards expire
Member doesn’t have autonomy over how to redeem with other payment options (checkout friction)
While Account Friction was identified as a top-level group early on, discussion and mapping revealed that those pain points were really a root cause spanning multiple themes, so we represented that web of friction in a way that tied it into key points of the map.
Along the way, I shared the map asynchronously with the team and walked through it in our weekly check-ins, using comments in the FigJam and conversations in meetings to refine and further flesh it out.
Step 4: Identifying root causes
The next layer to add below the pain points was the root causes. If there was a limitation in our tech stack, or a requirement in our co-op bylaws that made something work the way it did, we represented it here.
Brainstorming solutions
Now that we had a good representation of the problem, we gathered the group to brainstorm solutions. We framed the brainstorm around the three themes identified within our focus area, the moment of redemption:
Member doesn’t know they have rewards
Member doesn’t know how to redeem their rewards
Member doesn’t know when their rewards expire
We left off the checkout optimization bucket, as the checkout team had already begun running with the discovery insights.
I led the brainstorm, reminding the group of the pain points and root causes within each theme, and then posing the question: “How might we help members know when they have rewards?”, and so on. We brainstormed in Miro – it was a tool the larger group was very comfortable with, and we didn’t want the brainstorming to be slowed down by trying to fit ideas into the map as we went. We wanted blue-sky ideas.
The group added cards in a way that resembled the far right of the image below – some duplicates, scattered about, some mixed in with screenshots of inspiration they’d seen elsewhere.
Next, my product manager Maggie and I experimented with ways to group the solutions: Area of ownership (left)? Point of the reward redemption journey, like our map (center)?
Then we ported the solutions into the map, slotting them in under the pain points or root causes they would address. I created tags for each card to identify the team we’d need to partner most closely with to complete the work.
Prioritizing solutions
In an ideal world, we would have been able to size the pain points using quantitative data – relative volume of support tickets, number of accounts impacted by an issue, members who completed a purchase during their reward eligibility period without using it, and more. Unfortunately, our customer insights team was going through a huge platform overhaul and was not taking any new requests. Our analytics team members could get some information, but couldn’t get the scope we really needed.
So, I tried using some alternative factors to size our opportunities. It started as a personal exercise to work through some of my thoughts and questions, and ended up informing future conversations with Maggie and other partners.
Tactic 1: Plotting urgency and presumed feasibility
I created a four-quadrant graph using two axes: urgency and feasibility.
Factors that informed urgency:
Confidence in impact based on related learnings
Page traffic
Number of pain points affected
Member Reward timelines:
Rewards disbursement was coming up in March 2023, which meant we were approaching a once-annual opportunity to impact pain points related to reward distribution
No member rewards would expire until January 2024, meaning pain points about surfacing expiration dates as they neared wouldn’t be impactful for another year
Factors that informed feasibility:
Involvement of bylaws or business practices
Presumed tech constraints or complexity
I took a stab at plotting these on my own and, after reviewing with Maggie, shared it with the team at our next meeting, asking them to leave comments on any cards they thought should be moved, and why. After all, they knew more about these aspects than I did! I just gave them a starting point.
Using this graph, we began to focus on opportunities in the two “Urgent” quadrants – interrogating the lower-feasibility ones and making plans to test the higher-feasibility ones.
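Mechanically, the plotting is simple: each opportunity gets a rough urgency score and a rough feasibility score rolled up from the factors above, and the two scores place it in a quadrant. The sketch below only illustrates that mechanic; the scores, midpoint, and opportunity scores are invented for the example, not the values we actually used.

```python
# Hypothetical scoring sketch: each opportunity gets a 1-5 urgency score and a
# 1-5 feasibility score rolled up from the factors listed above.
def quadrant(urgency: float, feasibility: float, midpoint: float = 3.0) -> str:
    vertical = "Urgent" if urgency >= midpoint else "Less urgent"
    horizontal = "higher feasibility" if feasibility >= midpoint else "lower feasibility"
    return f"{vertical} / {horizontal}"

# Invented opportunities and scores, purely to show the mechanic
opportunities = {
    "Reward balance in account nav": (4.5, 4.0),
    "Surface expiration dates": (2.0, 3.5),  # low urgency: nothing expires until 2024
}
for name, (urgency, feasibility) in opportunities.items():
    print(f"{name}: {quadrant(urgency, feasibility)}")
```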
Tactic 2: Quantifying impact of solution bundles
Another tactic that I used to size our options for solutions was:
Grouping related solutions into “bundles” that I thought would make sense to tackle in concert
Highlighting the pain points that we hypothesized would be impacted by the solution
Totaling up the pain points that would be impacted for each solution bundle
Now, this wasn’t perfect. We already knew that these pain points were not created equal, and that we didn’t have a way to accurately weight them. So this didn’t feel like a very sturdy way to make decisions, but it helped add color to the conversation.
Below is an example of one of these analyses, for a bundle I titled “Checkout Optimization”, which included multiple solutions addressing shared pain points across REI.com’s checkout, in-store checkout, and Re/Supply’s unique checkout.
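The tallying itself is trivial, which is part of why it was only directional. As a sketch, with made-up pain point names standing in for the real ones, a bundle’s “size” is just the count of distinct pain points we hypothesized it would touch:

```python
# Hypothetical tally: each bundle maps to the pain points we believed it would
# address; a bundle's "size" is the count of distinct pain points it hits.
bundles = {
    "Checkout Optimization": {
        "Friction combining a gift card with a reward",
        "Reward hard to apply at in-store checkout",
        "Re/Supply checkout doesn't surface rewards",
    },
    "Reward Awareness": {
        "Member doesn't know they have rewards",
        "Member doesn't know how to redeem",
    },
}
sizes = {name: len(pain_points) for name, pain_points in bundles.items()}
print(sorted(sizes.items(), key=lambda item: item[1], reverse=True))
```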
Tactic 3: Mapping potential data conclusions to solutions
Despite the unavailability of the quantitative data we needed, my mind kept coming back to a few key questions. It felt like there were corners of the problem space that wouldn’t be understood without answers. But, I also didn’t want to get sucked into “admiring the problem”, when we could move forward to test some hypotheses to gain the data we didn’t have. I took a beat: What would I actually change about our path forward with the data I so craved?
I started mapping out hypotheticals. Say we did get the answer to this question. If it were x, what does that imply? If it were y, what would that imply?
Looking at these resulting hypotheses and solutions, I saw some overlap. So I created a map, connecting solutions to the hypothesized problems they might solve. Some solutions could cover for multiple potential data conclusions. Could that imply a higher potential impact, or problem coverage?
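In spreadsheet-or-code terms, this was a many-to-many mapping from potential data conclusions to candidate solutions, with “coverage” as a rough count of how many conclusions a solution could address. A small sketch with hypothetical conclusions and solution names, not the ones from our actual map:

```python
from collections import Counter

# Hypothetical mapping: each possible answer to an open data question points to
# the solution(s) that would address it. A solution appearing under many
# potential conclusions has broader "problem coverage".
conclusion_to_solutions = {
    "Members forget they have a reward by mid-year": ["Balance in nav", "Reminder email"],
    "Members don't realize rewards work in store": ["Rewards info page", "Balance in nav"],
    "Members hit account-linking errors": ["Account-linking pathway"],
}
coverage = Counter(
    solution
    for solutions in conclusion_to_solutions.values()
    for solution in solutions
)
print(coverage.most_common())  # "Balance in nav" covers two potential conclusions
```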
Ultimately, this just became a personal exercise to work through my own thinking, the nagging worry that we were missing something, and straight-up curiosity about the problem. Many of my solutions and problems did overlap with existing ones – I didn’t come up with anything glaringly missing from our map – so it made sense to move forward with some strong bets, gather data on those bets, and continue to fill in the blanks.
Testing prioritized solutions
Using mostly the urgency-feasibility analysis, Maggie and I worked with partners and the owning teams to kick some of the prioritized opportunities into action. Many were related to our upcoming reward disbursement – which we knew would bring increased member traffic and demand to the site.
Because my product team, Member Engagement, was a horizontal team covering experiences that spanned other product areas, many of our solutions fell under other teams’ ownership. As Maggie worked with those teams’ product managers to cross-prioritize the work, I worked closely with their designers to create the designs.
For example, one opportunity was surfacing reward balance in the account navigation menu, right under the user’s name when signed in (pictured below). This required collaboration with the customer accounts team, and with the navigation team, to align with their systems and philosophies.
Maggie and I also worked closely with our team’s analyst, who helped us plan how to sequence and measure the success of these tests.
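For a sense of what “measuring success” could look like in practice, the sketch below shows one common way to evaluate a test like this: a two-proportion z-test on redemption rate, control versus variant. The counts are hypothetical, and the actual metrics and test plan were defined with our analyst, not by this example.

```python
from math import sqrt
from statistics import NormalDist

# Two-proportion z-test on redemption rate, control (a) vs. variant (b).
# All counts below are made up for illustration.
def redemption_lift(redeemed_a, members_a, redeemed_b, members_b):
    p_a, p_b = redeemed_a / members_a, redeemed_b / members_b
    p_pool = (redeemed_a + redeemed_b) / (members_a + members_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / members_a + 1 / members_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return p_b - p_a, p_value

lift, p_value = redemption_lift(4_100, 50_000, 4_350, 50_000)
print(f"lift = {lift:.3%}, p = {p_value:.3f}")
```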
Right around this time, REI announced a large restructuring that eliminated the whole Member Engagement team, folding it in under another Membership product area. At the time of my departure, the following solutions were in development and slated to be tested:
Rewards balance under name in navigation (Problem: Member doesn’t know they have rewards)
New page for rewards information, with personalized and dynamic content (Problem: Member doesn’t know how to redeem rewards)
Checkout optimizations in cases of member-account issues, and using rewards alongside gift cards (Problems: Member doesn’t have autonomy over how to redeem with other payment options; Member doesn’t know how to redeem rewards)
Increased granularity in the levels of personalization for members, adding pathways for signed-in non-members to link an existing membership in addition to purchasing (Problem: Member doesn’t know they have rewards; Member doesn’t know how to redeem rewards [due to account friction])
I can hardly wait to hear how these tests perform.