What do you learn in 10 years of applying behavioral science at top companies? A lot, it turns out! This month marks 10 years of Irrational Labs. During this time, we’ve worked across tech, finance, and health to help make people happier, healthier, and wealthier—and we’ve learned a thing (or 10!) along the way.
To celebrate, we’re reflecting on our top 10 takeaways from 10 years of applying behavioral science to product—and sharing our favorite actionable insights.
Ready to dive in? 10, 9, 8 (it’s our birthday—indulge us!).
1. Credit Karma: Can’t Resolve Friction? Change the Choice Architecture
Friction is often the enemy in behavioral design. We’re quick to minimize or eliminate it, but can we also treat it as an opportunity? Read on and find out.
We took this alternative approach when partnering with Credit Karma to improve member and product outcomes for their new checking account, Credit Karma Money Spend, using behavioral design. We discovered a key timing-related friction: funds transferred to Credit Karma from another bank could take days to become available. This meant members might not have access to needed funds during that time—especially for last-minute transfers.
To address this, we used the moment members first encountered the friction to encourage them to set up automatic funding of their CK Money Spend accounts. We developed an experiment comparing different conditions that prompted various types of automated transfers.
The most successful condition made recurring transfers the default choice with a direct deposit alternative. This drove an 18% rise in recurring deposit setups and a small rise in direct deposit setups—and as a bonus? It built the mental model that direct deposits are an effective way to fund the account.
The ‘Aha!’ 💡
We realized that the key friction (bank transfer processing times) was needed to prevent fraud, so we changed the choice architecture at the moment of friction instead. This worked—while reinforcing the value of testing alternative solutions. On the money? We think so.
2. TikTok: The Timing of a Message Is as Important as the Message Itself
When it comes to product interventions, the message always matters. But timing deserves a hero’s welcome, too—and our work with TikTok to develop an intervention reducing the spread of misinformation shows why.
We designed a pair of prompts for TikTok that were displayed on videos with ‘unsubstantiated content’ (think: potentially-misleading information that fact-checkers can’t verify).
The first prompt reminded people to consider the content’s accuracy: ‘Caution: Video flagged for unverified content.’ It was shown to some users at the start of the video and to others at the 3-second mark.
The second prompt appeared when people went to actually share the video: ‘Are you sure you want to share this video? This video was flagged for unverified content.’ They could then choose to ‘cancel’ or ‘share anyway.’
Our prompts successfully reduced shares of flagged content by 24% when compared to a control group. And as an added bonus: the intervention also reduced likes of flagged videos by 7% and views by 5%.
The ‘Aha!’ 💡
We experience ‘hot states’ when we’re emotional and ‘cold states’ when we’re more rational. On TikTok, things get heated quickly—and slowing people down helps to cool those emotions. The takeaway? Because our prompts caught people at a key ‘hot’ moment just before they went to share a flagged video, the timing mattered as much as the message itself.
3. One Medical: Want Something to Work? Simplify the Onboarding
Staying healthy can be complicated, but accessing a primary care provider shouldn’t be. We partnered with One Medical to design a more seamless digital onboarding experience—making it easier for new members to connect with a provider at sign-up and increasing preventive care in the process.
Our experiment sought to evaluate the effect of changing the onboarding flow for their members.
The new onboarding process offered One Medical members an immediate opportunity to connect with a provider. It suggested appointment times when a recommended provider could meet with them—and if the member wasn’t available at those times, they could see more available times and choose a future appointment date instead.
Small changes to the One Medical onboarding experience drove a 20% increase in bookings relative to the control, signifying an interest in and intention to get medical care. Other revelations? Members who chose a time slot from the available appointment list were 2.5x likelier to continue with the booking process. By contrast, when they didn’t see an appointment that met their needs at signup, it took them 4x longer to book.
Additionally, more than 50% of new members booked a Remote Visit option for a same-day video visit—a 67% lift compared to the control. What does this tell us? Where and how appointments are offered matters—and ease of access can increase our engagement with healthcare.
The ‘Aha!’ 💡
This sounds obvious—but when things are easy to do, we’re more likely to do them! This means that simplifying digital experiences can maximize their efficacy—and the onboarding process is no exception.
4. Study with Common Cents Lab: Information Doesn’t Change Behavior
We all know we should eat better, exercise more, and stick to a budget—but our good intentions often don’t translate into action. Why? It’s complicated—but one big reason is that information alone doesn’t change behavior.
We partnered with Duke University’s Common Cents Lab and a popular fintech app to explore whether traditional budgeting—a largely information-based strategy for money management—is effective at changing financial behaviors. And background work for our experiment quickly found that budgeting is, well … hard. Just thinking through a budget takes significant self-control and time—and sticking with it means combatting information aversion, inattention, and forgetting.
We developed and tested 3 approaches to budgeting, randomizing participants into 3 conditions:
- Informational Control: presented a sum of overall weekly spending, broken down into transactions by category
- One-Number Budgeting: an ‘overall budget-setting’ condition guiding people to set up a one-number budget for the week
- Category Budgeting: a ‘category-by-category’ budget-setting condition prompting people to set up an overall weekly budget number, and then select specific categories of expenses to set goals for
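Random assignment to arms like these can be sketched in a few lines. A minimal, illustrative version (condition names and the hashing approach below are our own, not the study’s actual implementation):

```python
import random

CONDITIONS = ["informational_control", "one_number_budgeting", "category_budgeting"]

def assign_condition(user_id: int, seed: int = 42) -> str:
    """Deterministically assign a user to one of the three budgeting arms.

    Seeding with a string derived from the user ID keeps assignment stable
    across sessions. Names and method are illustrative, not the study's own.
    """
    rng = random.Random(f"{seed}:{user_id}")
    return rng.choice(CONDITIONS)

# Same user always lands in the same arm; arms are equally likely overall.
assert assign_condition(7) == assign_condition(7)
assert assign_condition(7) in CONDITIONS
```

Deterministic, ID-based assignment matters for experiments like this: a participant who opens the app twice must see the same budgeting flow both times.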
What did we find? Despite increased engagement with the budgeting feature, the control group and the budgeting conditions spent about the same amount. Budgeters generally overspent their budgets, and there was no evidence of reduced spending compared to historic spending patterns.
The ‘Aha!’ 💡
Information in the form of traditional budgeting doesn’t change spending behaviors—and we should develop alternative approaches to money management instead.
5. Kristen Berman: Don’t Listen to Your Customers
Product and design leaders often count on their customers’ words when building product road maps. They collect qualitative data through customer surveys and interviews—and this informs their product design and development. Because what better way to understand how your customers make decisions than to ask them—right?
But this model assumes people are always rational—which (newsflash!) we aren’t. So to truly understand user motivation, we need to go beyond interviews and survey questions—and dig deep into observation and environment.
Why? Well, psychological research has shown that generally we don’t know what drives our decisions. We think we decide things based on our attitudes and beliefs—when in fact, it’s mostly about our context and environment.
Here’s how this plays out:
- Your customers tend to make up stories to explain their behavior. This is our inner ‘Press Secretary’ at work: because ‘I don’t know’ feels unacceptable, we fill in the blanks with both fact and fiction to give a plausible answer.
- These stories tend to attribute behavior to attitudes, beliefs, and preferences—as in, ‘I’m saving for retirement because I care about my future.’
- The stories tend to discount the choice environment’s influence on behavior. Have you ever heard someone say ‘the default made me do it’? Didn’t think so.
- We’re not aware that any of this is going on. If anything, we’re actually overconfident in our ability to reflect accurately on our behavior.
The ‘Aha!’ 💡
Want to really understand customer behavior and decision-making? Study what people actually do, not just what they say they’ll do. Because at the end of the day, the story our actions tell is truer to life than our words.
6. NeuGen: How to Get People to Try Something New? Test Incentives
Our status quo bias means we tend to stick with what we know—and for many of us, telehealth is uncharted territory. So when NeuGen looked to us to help them drive telehealth adoption, we knew our core challenge would be convincing people to try something new. In other words? We needed to find the right way to incentivize them.
We partnered with NeuGen to design an experiment testing various incentive-based behavioral hypotheses. Members were randomized into 1 of 5 conditions, all of which received a postcard and a mailer with supplementary materials (along with an email and follow-up mailer 6 weeks later):
1. Regret Lottery: Members were automatically entered into a lottery to win $100—but they could only win if they had signed up for the telehealth service. We hypothesized that the desire to avoid the regret of winning the lottery and not being able to collect the prize, combined with the endowment effect of a personal, tangible lottery ticket, would motivate telehealth sign-up.
2. Unconditional Reward: Members were sent $2 as a ‘thanks in advance’ for signing up—whether or not they actually went on to do it. We anticipated that reciprocity for this small, salient reward would motivate people to sign up for telehealth.
3. Conditional Reward: Members were offered $10 to sign up for telehealth. While the benefits of telehealth are future-based, we believed the immediate benefit of a salient $10 reward would motivate people to sign up in the present (hello, present bias!).
4. Incompletion Bias: Members were told their profile was incomplete—but that they could complete it by signing up for telehealth. We anticipated that an internal motivation to resolve incomplete tasks would result in signups.
5. Control: Members were told about the benefits of the telehealth service—access to urgent care from home.
Our behaviorally-informed postcard intervention increased telehealth signups, with all 4 treatment conditions significantly outperforming the control. The 2 conditions with the greatest impact were Conditional Reward (168% more signups) and Incompletion Bias (43% more signups).
The ‘Aha!’ 💡
Testing various incentives enabled us to overcome status quo bias and encourage people to try something new (in this case: telehealth). By strategically designing multiple incentive-based conditions and measuring each one’s impact, we learned what would truly motivate NeuGen’s members to sign up for their telehealth program—and highlighted the power of incentives to change behavior.
7. TytoCare: To Increase Telehealth Adoption, You Need a Clear Mental Model
TytoCare is a virtual medical care kit that lets physicians examine patients thoroughly and accurately—remotely. The device is easy to use. So why weren’t people using it?
One issue we’ve discovered in working with TytoCare is that telehealth adoption requires people to change their existing mental model of healthcare.
A mental model is a conception of how something works—and in the US, the mental model of healthcare looks something like this: ‘If I’m sick, I’ll visit my primary care physician, and if I have an emergency, I’ll go to the ER or urgent care.’ Here’s the problem: this existing mental model doesn’t incorporate telehealth or a device like TytoCare—telling us that digital healthcare may not be a salient part of people’s choice sets.
How to mitigate this? We designed an intervention in the form of an eligibility quiz at the outset of the TytoCare experience. The quiz asked questions about people’s comfort with technology, frequency of medical visits, and time spent waiting to receive care.
Anecdotally, TytoCare found that people who completed the quiz were more likely to purchase a TytoCare unit—53% vs. 37%. The eligibility quiz intervention was directly incorporated into the TytoCare user journey, along with several other key recommendations and interventions. These led to a 120% increase in devices sold and a 65% increase in completed medical visits.
The ‘Aha!’ 💡
When we designed our eligibility quiz intervention, we were careful to use inclusive questions instead of exclusive ones—because our goal wasn’t to disqualify people from using the product, but rather to build a new mental model of who the product is for. Telehealth has the potential to improve health equity in the US and across the world, but this can only work if everyone has access to this new mental model.
8. Livongo: Want People to Make Healthy Choices? Leverage the Endowment Effect
The Livongo for Diabetes Program provides new members with a collection of resources to help manage their diabetes. The problem? Many eligible people fail to sign up—forfeiting potentially life-saving resources. This failure to act in our own best interest seems surprising—when you assume that we act rationally. But here’s the thing: we often don’t.
When Livongo asked us to help drive enrollment in their health program, we knew we needed to motivate people to take ownership over their health—literally.
Because email is one of Livongo’s main ways of notifying members of program eligibility, we homed in on the offer’s framing—and hypothesized that a shift in email messaging from ‘Join the Program’ to ‘Claim Your Welcome Kit’ would be more effective in driving signup.
We collaborated with Livongo on an email experiment as part of a seasonal campaign sent to all eligible individuals. The control group received the original ‘Join the Program’ message. The experimental group received a message with ‘Claim Your Welcome Kit’ framing in the subject line, headline, CTA button, and email body.
Livongo emailed more than 15,000 people with diabetes who were eligible for their program. The ‘Claim Your Welcome Kit’ condition increased open rates by 25% and click-through rates by 88%. And most importantly, the experiment increased the program’s registration rate by more than 120% within just 1 week of the email outreach.
This translates to people in the treatment condition signing up for the program at more than double the rate of the control group—and ultimately increasing the chances of successfully managing their health condition.
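To make the arithmetic concrete: a 120% increase means the treatment group’s registration rate is 2.2x the control’s, i.e. “more than double.” A quick sketch (the baseline rate below is hypothetical; the actual rates aren’t reported here):

```python
def relative_lift_pct(control_rate: float, treatment_rate: float) -> float:
    """Percent increase of the treatment rate over the control rate."""
    return (treatment_rate - control_rate) / control_rate * 100

control = 0.05             # hypothetical 5% baseline registration rate
treatment = control * 2.2  # 11%: more than double the control

# A 2.2x rate is exactly a 120% relative lift.
assert round(relative_lift_pct(control, treatment)) == 120
```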
The ‘Aha!’ 💡
When we changed the framing of Livongo’s email messaging from ‘Join the Program’ to ‘Claim Your Welcome Kit,’ we tapped into the endowment effect. This created a sense of ownership over the resources being offered—and encouraged members to take ownership of their health in the process.
9. Lisa Zaval & Kristen Berman: We Are What We Measure
There’s no one shortcut to behavior change. But you CAN greatly increase your odds of success—by measuring the right thing at the right time.
Not sure how to do this? Here are some key ideas:
- Measure the most important thing. Airbnb, Uber, and Lyft all use measurement to drive customer outcomes. Airbnb measures hosts on their response time and response rate—behaviors that likely correlate with guest satisfaction and increased bookings. Uber and Lyft measure drivers based on their acceptance/cancellation rates, helping to prevent cancellations while maintaining customer trust. By incentivizing behaviors that benefit customers, these companies strengthen their bottom line and ensure a positive user experience.
- Make feedback immediate. Immediate, relevant feedback is crucial for driving behavior change across different contexts. Some examples: healthcare workers improving hand hygiene and people reducing their water usage after getting real-time feedback. Similarly, UberConference uses call summaries to draw attention to (and ideally change) user behavior. Pro tip: nudge right before the desired behavior for even greater effectiveness.
- Make feedback public. We are what we measure—and when we know we’re being measured, we tend to overperform. Research has shown that public recognition, such as giving thank you cards to top performers, can improve overall performance—even for those not directly recognized. Likewise, real-time feedback on driving speed can lead drivers to slow down—due to the social desirability of behaving responsibly. This type of public feedback is effective because after all, who wants to be perceived negatively?
- Make feedback actionable. If you give people feedback that they can’t control, they won’t be able to make changes. And if you give them feedback without an optimal way to address it, they may waste time trying different methods and ultimately fail. One effective method is the light switch feedback model, where the behavior and the feedback are directly correlated. For example, clinicians who receive feedback on hand-washing can easily understand and improve their behavior.
The ‘Aha!’ 💡
For our best chance at changing behavior, we need to measure with intention—and make feedback immediate, public, and actionable. After all: we are what we measure.
10. Charlie: One-Time Behaviors Can Yield Long-Term Benefits
Charlie aims to help people better manage and improve their finances—and when they asked for help in finding the most effective approach to reduce users’ overdraft fees, we jumped at the chance.
There are multiple ways to avoid overdraft fees. We zeroed in on two options, both of which are one-time decisions with long-term effects on reducing total fees:
- Changing banks. Switch to a bank that doesn’t charge overdraft fees.
- Opting out of overdraft protection. When you opt out of overdraft protections, your overdrafting transactions will be declined.
Ultimately, we prioritized ‘opting out’ as the recommended path for most Charlie users because it’s a simpler action than changing banks. In collaboration with Charlie and Common Cents Lab, we tested 2 overdraft opt-out messages against each other and against a no-message control. In the treatment conditions, Charlie users received a message anytime they overdrafted.
The experimental opt-out conditions drew on 2 frameworks:
- Fairness and injunctive social norms: ‘Overdraft fees aren’t fair! You should get rid of overdrafts altogether…’
- Descriptive social norms: ‘Most Charlie users (like you!) get rid of overdrafts altogether…’
We sent both an initial message and a reminder message using these conditions.
The experimental conditions resulted in a net overdraft reduction of ~$23 per person. That equates to a 9% reduction in total annual overdraft fees. While both experimental conditions reduced overdrafts, there was no statistically significant difference in the success of the two messages.
This intervention saved users ~$75,000 in overdraft fees over the 2-month experimental period alone! If rolled out at scale across the US, it could save Americans $594,294,460.
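The aggregate figures follow from simple per-person extrapolation. A back-of-the-envelope sketch (the participant count below is hypothetical, chosen only to show how ~$23/person scales toward the reported totals):

```python
def total_savings(per_person: float, n_people: int) -> float:
    """Scale a per-person fee reduction to a population (simple multiplication)."""
    return per_person * n_people

# ~$23 saved per person; a hypothetical ~3,300 participants would land
# near the reported ~$75,000 over the 2-month experiment window.
print(f"${total_savings(23, 3_300):,.0f}")  # prints $75,900
```

The same multiplication, applied to the much larger population of Americans who pay overdraft fees, is what turns a modest per-person saving into a nine-figure national number.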
The ‘Aha!’ 💡
One-time behaviors yield long-term benefits! In this case, strategic behavioral interventions targeting one-time decisions can substantially increase savings and financial well-being. What’s not to love about that?
It’s been a fascinating 10 years at Irrational Labs. Here’s to another 10—and to continuing on our mission to make people happier, healthier, and wealthier using behavioral science. Will your company make our next 10-year list? We’re curious to find out.