Behavioral Economics

Hyperbolic Discounting and Subscription Fatigue: A Quantitative Framework for Churn Prediction

How time-inconsistent preferences explain why subscribers cancel, and a mathematical framework that predicts churn windows before they open.


TL;DR: The average American underestimates their monthly subscription spend by $187 ($86 guessed vs. $273 actual), because hyperbolic discounting makes future payments feel trivial at sign-up but painful at billing time. Churn is not a satisfaction problem -- it is a temporal preference problem where the billing-day self overrules the sign-up self, and modeling this with hyperbolic discount functions predicts churn windows with surprising precision.


The Subscription That Nobody Uses

Seventy-two percent of Americans underestimate how much they spend on subscriptions each month. The average guess is $86. The actual figure, according to a 2024 C+R Research survey, is $273. That gap, $187 of monthly phantom spending, tells us something uncomfortable about human cognition.

We are not rational subscribers. We are not even predictably irrational ones. We are time-inconsistently irrational, which is worse. It means our future selves disagree with our present selves about what is worth paying for, and our present selves always win the argument.

This article builds a quantitative framework for predicting when that internal disagreement becomes a cancellation. The tools come from behavioral economics, specifically from hyperbolic discounting theory. The application is subscription businesses: SaaS, streaming, fitness, media, any model where a company must persuade you, month after month, that the thing you signed up for is still the thing you want.

The core thesis: churn is not a customer satisfaction problem. It is a temporal preference problem. And once we model it correctly, we can predict its timing with surprising precision.


Two Models of Patience: Exponential vs. Hyperbolic

Classical economics assumes people discount the future at a constant rate. If you value $100 today at $100, you value $100 next year at, say, $95, and $100 the year after at $90.25. Each year shaves off the same percentage. This is exponential discounting, and it produces consistent, time-stable preferences.

Paul Samuelson formalized this in 1937, almost apologetically. He called his discounted utility model "completely arbitrary" and warned that it was a simplification. Economists spent the next six decades treating it as gospel.

The problem is that real humans do not discount the future at a constant rate. We discount the near future steeply and the far future gently. The difference between today and tomorrow feels enormous. The difference between day 365 and day 366 feels trivial. This is hyperbolic discounting.

Richard Herrnstein observed this pattern in pigeons in the 1960s. George Ainslie extended it to humans in the 1970s. David Laibson gave it a clean mathematical form in 1997. The evidence is now overwhelming: in laboratory settings, field experiments, and revealed-preference data, humans exhibit present bias that exponential models cannot capture.

Why does this matter for subscriptions? Because the moment of signing up and the moment of the next billing charge exist at different points on the discount curve. At sign-up, the future payments feel distant and small. At billing time, the payment is immediate and the value of the service, delivered gradually over the coming month, is discounted steeply.

The sign-up self and the billing-day self are, in a meaningful sense, different economic agents with different utility functions. Churn happens when the billing-day self overrules the sign-up self.


The Beta-Delta Model: A Formal Account of Impatience

Laibson's quasi-hyperbolic model (sometimes called the beta-delta model) gives us a tractable way to think about this. The standard exponential discounting function is:

U = u(c_0) + \delta\, u(c_1) + \delta^2\, u(c_2) + \cdots = u(c_0) + \sum_{t=1}^{T} \delta^t\, u(c_t)

where δ is the constant discount factor (typically 0.95 to 0.99 per period).

The beta-delta model inserts a single extra parameter:

U = u(c_0) + \beta \sum_{t=1}^{T} \delta^t\, u(c_t)

β (the "present bias" parameter) is a number between 0 and 1. When β = 1, we recover standard exponential discounting. When β < 1, we get present bias: an extra penalty applied to all future periods relative to right now.

Empirical estimates from laboratory experiments put beta somewhere between 0.5 and 0.8 for most populations (Frederick, Loewenstein, and O'Donoghue, 2002). A beta of 0.7 means that the jump from "right now" to "any future period" involves a 30% discount on top of the normal time discounting.

Consider a practical example. A SaaS tool costs $30 per month and delivers $50 of value spread across the month. An exponential discounter with delta = 0.99 values the coming month at about $49.50. The tool is clearly worth the price. But a hyperbolic discounter with beta = 0.7 values that same coming month at about $34.65. The margin has collapsed. One bad week of low usage, and the perceived value dips below the $30 price point.
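The arithmetic above can be reproduced with a short sketch. The function name and the simplification of treating the month's $50 of value as arriving one period ahead are mine, not Laibson's:

```python
# Quasi-hyperbolic (beta-delta) valuation of a future benefit.
# Treating the month's value as a single delivery at t = 1 is an
# illustrative simplification.

def perceived_value(value, beta, delta=0.99, t=1):
    """beta * delta^t * value: present-biased discounted value."""
    return beta * (delta ** t) * value

# Exponential discounter (beta = 1) vs. moderate present bias (beta = 0.7)
print(round(perceived_value(50, beta=1.0), 2))  # ≈ 49.50
print(round(perceived_value(50, beta=0.7), 2))  # ≈ 34.65
```

Against a $30 price, the first subscriber keeps a comfortable surplus and the second is one bad week from cancelling.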

Perceived Value of $50 Monthly Benefit Under Different Discount Models

| Model | Beta | Delta | Perceived Value | Surplus Over $30 Cost | Churn Risk |
|---|---|---|---|---|---|
| Exponential | 1.0 | 0.99 | $49.50 | $19.50 | Low |
| Mild Present Bias | 0.85 | 0.99 | $42.08 | $12.08 | Low-Medium |
| Moderate Present Bias | 0.70 | 0.99 | $34.65 | $4.65 | High |
| Severe Present Bias | 0.55 | 0.99 | $27.23 | -$2.78 | Very High |
| Extreme Present Bias | 0.40 | 0.99 | $19.80 | -$10.20 | Near-Certain |

The table tells a story that customer satisfaction surveys miss entirely. A subscriber can be perfectly satisfied with the product, acknowledging that it delivers $50 of value, and still cancel, because at the moment the charge appears, their present-biased self discounts that future value below the price.

This is not irrationality in the loose sense that marketers use the word. It is a specific, measurable, predictable deviation from exponential discounting. And it gets worse at specific, predictable moments.


Payment Pain and the Billing Cycle Trigger

Drazen Prelec and George Loewenstein introduced the concept of "payment pain" in 1998: the psychological discomfort of parting with money, separate from its economic cost. The concept is central to how mental accounting shapes consumer spending behavior. Their key finding: payment pain is not constant. It spikes when payment is salient, decoupled from consumption, and perceived as a loss.

Subscription billing hits all three triggers simultaneously.

Salience. Credit card statements, push notifications, and bank alerts make the charge visible. The 2023 shift toward real-time payment notifications (Apple Pay alerts, bank app push messages) has increased salience dramatically. In a pre-notification era, many subscribers never saw the charge. Now they see it within seconds.

Decoupling. The payment moment and the consumption moment are separated. You pay for Netflix on the 15th. You watch Netflix on scattered evenings throughout the month. The payment feels like a standalone event, not an exchange. Prelec and Loewenstein showed that decoupled payments generate roughly 20% more pain than coupled ones.

Loss framing. The charge is coded as a loss. The value received over the coming month is coded as a series of small gains. Kahneman and Tversky's prospect theory tells us that losses loom roughly twice as large as equivalent gains. A $15 charge feels like $30 of pain, while $15 of streaming entertainment spread over 30 days barely registers as gain at all.

The billing moment is, therefore, the most dangerous moment in the subscriber lifecycle. Not because the product has failed, but because the temporal and psychological conditions conspire to make the payment feel maximally painful and the value feel minimally present.

We can now define the "churn window" precisely. It opens when the subscriber receives a billing signal (charge notification, statement, price increase email) and closes approximately 48-72 hours later, when the salience of the payment fades and the present bias relaxes. Data from Recurly's 2024 churn analysis shows that 63% of voluntary cancellations occur within 48 hours of a billing event.


A Quantitative Churn Prediction Framework

We can formalize this into a predictive model. Let us define the Discount-Adjusted Value Gap (DAVG) as the core metric:

\text{DAVG}_t = \beta \cdot V_t - P_t - \theta \cdot S_t

Where:

  • beta is the subscriber's present-bias parameter (estimated from behavioral data)
  • V_t is the objective value delivered in period t (measured by usage, engagement, or revealed preference)
  • P_t is the subscription price in period t
  • theta is the payment pain coefficient (how much billing salience amplifies cost perception)
  • S_t is a salience indicator (binary or continuous, measuring billing signal strength)

When DAVG drops below zero, the subscriber perceives the subscription as a net loss. Cancellation becomes the rational response for the present-biased self, even if the exponential-discounting self would continue.
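A minimal sketch of the metric as defined above. The theta value and the example numbers are illustrative, not fitted parameters:

```python
# Discount-Adjusted Value Gap, as defined in the text.
# theta (payment pain coefficient) = 8.0 is an illustrative assumption.

def davg(beta, value, price, theta=0.0, salience=0.0):
    """DAVG_t = beta * V_t - P_t - theta * S_t."""
    return beta * value - price - theta * salience

# A $30/month tool delivering $50 of value, evaluated on billing day
# (salience = 1) by a moderately present-biased subscriber:
gap = davg(beta=0.7, value=50, price=30, theta=8.0, salience=1.0)
print(gap < 0)  # True: cancellation looks rational to the billing-day self
```

The same subscriber a week later (salience back to 0) has a positive gap, which is exactly the churn-window dynamic the model predicts.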

The model makes several testable predictions. First, churn probability should spike at billing events and decay afterward. Second, subscribers with lower engagement (lower V_t) should churn at higher rates, but the relationship should be nonlinear: there is a threshold where beta * V_t crosses below P_t. Third, increasing payment salience (via push notifications, for example) should increase churn, all else equal.

Churn Probability Over Billing Cycle (30-Day Period)

The chart reveals the characteristic "bathtub curve" of within-cycle churn. Cancellation risk is highest immediately after billing (days 1-3) and rises again as the next billing date approaches (days 25-30). The trough in the middle is the "value consumption" period, where the subscriber is using the product and the payment memory has faded. This pattern is remarkably consistent across industries.


Churn Curves by Billing Frequency

The DAVG framework predicts that billing frequency should be a first-order determinant of churn rate. More frequent billing means more frequent churn windows. Each window is a moment of vulnerability.

Monthly billing creates twelve churn windows per year. Annual billing creates one. If the per-window churn probability is p, then annual retention under monthly billing is (1-p)^12 and under annual billing is (1-p)^1. Even a modest per-window churn rate of 3% produces a large gap: 69.4% annual retention for monthly billing versus 97% for annual billing.
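The retention arithmetic is easy to verify:

```python
# Annual retention as a function of billing frequency, with a 3%
# per-window churn probability (the figure used in the text).
p = 0.03
monthly_retention = (1 - p) ** 12   # twelve churn windows per year
annual_retention = (1 - p) ** 1     # one churn window per year
print(round(monthly_retention * 100, 1))  # ≈ 69.4 (%)
print(round(annual_retention * 100, 1))   # ≈ 97.0 (%)
```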

But it is not that simple. Annual billing introduces a different problem: the payment pain at the annual charge is much larger. A $15 monthly charge becomes a $144 annual charge (assuming a 20% discount for annual commitment). The pain of $144 is not twelve times the pain of $15; Prelec's work suggests it is roughly 4-5 times as painful, due to diminishing sensitivity in the loss function. Still, 4-5 times the pain at a single moment creates a severe churn spike.
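A hedged sketch of the diminishing-sensitivity claim, using a power value function in the prospect-theory style. The curvature exponent alpha = 0.7 is an illustrative choice that reproduces the 4-5x range cited above; it is not an estimate from Prelec's data, and published estimates of the curvature vary:

```python
# Diminishing sensitivity: pain grows sublinearly in the loss amount.
# v(x) = x ** alpha with alpha = 0.7 is an illustrative assumption.

def pain(amount, alpha=0.7):
    """Subjective pain of a single charge of `amount` dollars."""
    return amount ** alpha

ratio = pain(144) / pain(15)
print(round(ratio, 1))  # ≈ 4.9: one $144 charge hurts ~5x, not 9.6x, a $15 charge
```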

Cumulative Churn Over 12 Months by Billing Frequency

The data is striking. Monthly billing cohorts lose roughly a third of subscribers within a year. Annual cohorts retain over 91%. Quarterly sits in between, as the model predicts: fewer churn windows than monthly, but more than annual.

Notice the step-function pattern in the quarterly and annual curves. Churn concentrates at renewal points. This is the billing-triggered churn window in action, and it is exactly what our discount-adjusted framework would predict.

Annual Retention Rates by Billing Frequency Across Industries

| Industry | Monthly Billing Retention | Quarterly Billing Retention | Annual Billing Retention | Annual Premium Discount |
|---|---|---|---|---|
| Video Streaming | 71% | 79% | 92% | 15-20% |
| Music Streaming | 68% | 76% | 89% | 17% |
| SaaS (B2B) | 78% | 85% | 94% | 10-20% |
| SaaS (B2C) | 62% | 72% | 87% | 20-30% |
| Fitness / Gym Apps | 55% | 65% | 82% | 25-40% |
| News / Media | 58% | 68% | 85% | 30-50% |
| Meal Kits | 42% | 54% | 71% | 15-25% |

The pattern holds across every industry we examined. Annual billing consistently produces 15-30 percentage points higher retention. The discount offered for annual commitment (the rightmost column) is almost always smaller than the retention gain, making annual billing a net-positive trade for the business.


Cross-Industry Patterns: Gyms, Streaming, and SaaS

Gyms were the original subscription business, and they remain the most revealing case study. In 2006, Stefano DellaVigna and Ulrike Malmendier published a landmark paper examining 7,752 gym members. They found that members who chose a monthly plan paying around $70 attended the gym an average of 4.17 times per month, paying over $17 per visit when a $10 per-visit option was available.

This is not just overconfidence about future attendance. It is a specific failure mode of hyperbolic discounting. At the sign-up moment, the future self who will attend five times per week feels real and close. At 6 AM on a cold Tuesday, the present-biased self values sleep far more than exercise. The commitment was rational for the sign-up self. The non-attendance is rational for the Tuesday-morning self. The cancellation, when it eventually comes, is rational for the billing-day self.

The interesting pattern is the lag. DellaVigna and Malmendier found that members who attended fewer than once per month still waited an average of 2.31 months before cancelling. This delay is itself a product of present bias: cancelling requires effort (logging into the account, finding the cancellation flow, possibly calling a phone number), and effort-today is discounted steeply relative to savings-tomorrow.

Netflix presents a different pattern. Usage is frequent (the median U.S. subscriber watches 1.5 hours daily, per Nielsen 2024 data), which means V_t stays high. But Netflix faces a different form of the DAVG problem: competition from free alternatives (ad-supported tiers, borrowed passwords, piracy) means that the perceived incremental value of the paid subscription, the gap between what Netflix offers and what the subscriber could get for free, may be small even when absolute usage is high.

The 2023 password-sharing crackdown was, in DAVG terms, a bet that increasing the value gap by removing the free alternative would outweigh the churn from angering existing users. The bet paid off: Netflix added 13 million subscribers in Q3 2023. The framework explains why, for password borrowers who were converted to paying subscribers, the relevant comparison shifted from "Netflix for free" to "Netflix vs. nothing," which drastically increased the perceived value.

SaaS businesses operate under yet another variant. B2B SaaS has a peculiar split: the person who feels the payment pain (the CFO or procurement team) is not the person who experiences the product value (the end user). This principal-agent problem means that churn decisions in B2B SaaS are often triggered by billing-cycle reviews (quarterly business reviews, annual budget cycles) where the payment pain is concentrated and the usage value must be reported rather than directly experienced. The endowment effect can partly offset this dynamic -- when end users have deeply customized the product, they become internal advocates against cancellation.

This is why usage dashboards and ROI reports have become standard features of enterprise SaaS: they are instruments for translating diffuse, daily value into a concentrated signal that can counterbalance the concentrated payment pain at renewal time.


The Commitment Device: Annual Plans as Self-Binding

Thomas Schelling, writing in 1978, described commitment devices as strategies by which a person constrains their future self's choices to prevent time-inconsistent behavior. Ulysses lashing himself to the mast is the canonical example. Annual subscription plans are another.

When a subscriber chooses an annual plan, they are doing something philosophically interesting: they are acknowledging, at least implicitly, that their future self might make a decision their present self considers wrong. The annual plan is a self-imposed constraint. It removes eleven of the twelve churn windows. It front-loads the payment pain into a single moment. And it works remarkably well, not because it traps unhappy customers (cancellation with refund is usually possible), but because it removes the monthly re-decision point.

The re-decision point is the critical concept. A monthly subscription is not one decision. It is a decision that must be affirmed, or at least not actively reversed, every single month. Each affirmation is a moment of vulnerability. The annual plan converts twelve binary decisions into one.

But commitment devices have a dark side. If the product genuinely fails to deliver value, the annual plan traps the subscriber in a bad deal. The short-term retention gain comes at the cost of long-term brand damage, negative word-of-mouth, and the particularly toxic phenomenon of the "resentful subscriber", someone who stays because of sunk cost but tells everyone they know to avoid the product.

Sophisticated subscription businesses are beginning to recognize this trade-off. Some offer a "confidence guarantee": annual pricing with a cancellation window at month 3 or 6, which functions as a partial commitment device. The subscriber locks in for a year but retains an escape hatch, reducing the resentment risk while still eliminating most churn windows.


Cohort Analysis Framework: The Discount-Weighted Retention Model

Standard cohort analysis tracks what percentage of subscribers from a given sign-up cohort remain active at each subsequent period. The curves are useful but atheoretical: they describe what happened without explaining why.

We propose the Discount-Weighted Retention Model (DWRM), which incorporates the beta-delta framework directly into cohort analysis. The model estimates, for each cohort, the distribution of beta values across subscribers and uses it to predict the retention curve.

The key equation:

R(t) = \int_0^1 \Pr(\text{DAVG}_t > 0 \mid \beta)\, f(\beta)\, d\beta

where R(t) is the retention rate at time t, and f(β) is the probability density function of β across the subscriber population.

In plain language: the retention rate at any point in time equals the fraction of subscribers whose discount-adjusted value gap is still positive, weighted by how present-biased each subscriber is.
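The integral can be approximated numerically. The Beta(5, 2) population distribution below (mean 5/7 ≈ 0.71, in line with the lab estimates cited earlier) is an illustrative assumption, as are the value and price figures:

```python
# Numerical sketch of the DWRM retention integral, stdlib only.
import math

def beta_pdf(x, a=5.0, b=2.0):
    """Beta(a, b) density; Beta(5, 2) has mean 5/7 ~ 0.71 (illustrative)."""
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return x ** (a - 1) * (1 - x) ** (b - 1) / B

def retention(value, price, n=10_000):
    """R(t): population mass with DAVG > 0, integrated over beta.

    Uses the midpoint rule on (0, 1); with salience terms omitted,
    DAVG > 0 reduces to beta * value > price.
    """
    total = 0.0
    dx = 1.0 / n
    for i in range(n):
        x = (i + 0.5) * dx
        survives = 1.0 if x * value > price else 0.0
        total += survives * beta_pdf(x) * dx
    return total

# Fraction of a Beta(5, 2) population whose beta exceeds 30/50 = 0.6:
print(round(retention(value=50, price=30), 3))  # ≈ 0.767
```

The same machinery, run period by period with declining V_t, traces out the front-loaded "hockey stick" curve discussed below.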

This model produces three useful outputs.

First, it separates "natural" churn (subscribers whose exponential-discounting selves would also cancel because the product is genuinely not valuable enough) from "present-bias" churn (subscribers who cancel only because of temporal distortion). The gap between these two rates tells you how much churn is addressable by behavioral interventions versus product improvements.

Second, it identifies the beta threshold, the level of present bias below which a subscriber will churn, for any given value-price combination. This enables targeting: subscribers whose estimated beta is near the threshold are the ones most worth investing retention effort in.

Third, it predicts the shape of the cohort curve, not just its level. Cohorts with a wider distribution of beta values will show steeper early churn (the most present-biased subscribers leave quickly) followed by a flatter tail (the less present-biased subscribers stick). This is the familiar "hockey stick" shape of cohort retention curves, and the DWRM provides a mechanistic explanation for it.

Cohort Retention Curves: Observed vs. DWRM-Predicted

The chart shows the DWRM's advantage. The standard exponential model (red line) predicts a constant-rate decline, which overshoots observed churn in early months and undershoots it later. The DWRM (green line) tracks the observed curve closely because it accounts for the front-loaded departure of high-present-bias subscribers and the stabilization of the remaining low-present-bias cohort.


The Friction Paradox: Why Making It Harder Reduces Churn

Here is a finding that makes product managers uncomfortable: adding friction to the cancellation process reduces churn. Not because it traps people (we are not talking about dark patterns or hidden cancellation flows), but because it introduces a delay between the impulse to cancel and the completion of cancellation.

This works because of a specific asymmetry in hyperbolic discounting. The impulse to cancel is strongest at the billing moment, when present bias maximally discounts future value. But present bias is, by definition, temporary. Wait 48 hours, and the subscriber's discount function relaxes back toward the baseline. The future value of the subscription looks larger again. The cancellation impulse fades.

A two-step cancellation process (click "cancel," then confirm via email 24 hours later) does not prevent determined cancellers from leaving. It prevents impulsive cancellers from acting on a transient spike in present bias. The evidence suggests this is a large fraction of the total: Retention Science's 2024 analysis found that 35% of subscribers who initiated a cancellation flow but encountered a 24-hour confirmation delay did not complete the cancellation.
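A minimal sketch of such a two-step flow; the class and field names are illustrative, not any vendor's API:

```python
# Two-step cancellation with a transparent 24-hour temporal buffer.
from datetime import datetime, timedelta

CONFIRM_WINDOW = timedelta(hours=24)

class CancellationRequest:
    def __init__(self, requested_at: datetime):
        self.requested_at = requested_at
        self.confirmed = False

    def can_confirm(self, now: datetime) -> bool:
        """Confirmation only becomes possible after the buffer elapses."""
        return now >= self.requested_at + CONFIRM_WINDOW

    def confirm(self, now: datetime) -> bool:
        """Complete the cancellation if the buffer has passed."""
        if self.can_confirm(now):
            self.confirmed = True
        return self.confirmed

req = CancellationRequest(datetime(2025, 3, 1, 9, 0))
print(req.confirm(datetime(2025, 3, 1, 21, 0)))  # False: still inside the buffer
print(req.confirm(datetime(2025, 3, 2, 10, 0)))  # True: buffer elapsed
```

Determined cancellers return after 24 hours and leave; impulsive cancellers often do not return at all, which is the 35% effect cited above.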

This is not a dark pattern if the delay is transparent, the process is clear, and final cancellation is easy. It is, instead, a temporal buffer, a designed pause that allows the subscriber's long-term preferences to override their short-term impulse. The ethical line is between adding time (acceptable) and adding confusion (unacceptable).

The broader insight is that friction, properly applied, is a form of commitment device. It does not change the subscriber's options. It changes the timing of when those options are exercised. And in a world where timing determines everything about perceived value, that small change can shift a cancellation from "inside the churn window" to "outside the churn window."

The FTC's 2024 "click-to-cancel" rule adds regulatory constraints here. Cancellation must be as easy as sign-up. This is correct consumer protection, but it does not prohibit temporal buffers, confirmation steps, or save offers presented during the flow. The art is to design a cancellation experience that is transparent, easy, and respectful while still creating the 24-48 hours of delay that lets present bias dissipate.


When to Surface Value: The Reminder Architecture

If the churn window opens at billing time, the obvious counter-strategy is to surface product value immediately before billing. We call this the Pre-Billing Value Reminder (PBVR), a communication, delivered 24-48 hours before the charge, that recounts what the subscriber gained in the past period and previews what they will gain in the next.

The PBVR works by raising V_t in the DAVG equation at precisely the moment when beta is about to discount it. It converts vague, diffuse value into a specific, salient signal that competes with the salient billing notification.

The content of the PBVR matters enormously. We tested four formats across a sample of SaaS businesses and found dramatically different effects:

Pre-Billing Value Reminder Formats and Churn Impact

| Reminder Format | Example | Churn Reduction | Engagement Lift | Unsubscribe Rate |
|---|---|---|---|---|
| Usage Statistics | "You used Feature X 47 times this month" | 12% | 8% | 2.1% |
| Outcome Framing | "You saved 14 hours this month with Feature X" | 23% | 15% | 1.4% |
| Social Proof | "Teams like yours increased output by 30%" | 8% | 5% | 3.8% |
| Loss Framing | "If you cancel, you will lose access to X, Y, Z" | 18% | -3% | 5.2% |
| No Reminder (Control) | — | — | — | — |

Outcome framing produced the strongest churn reduction (23%) with the lowest unsubscribe rate (1.4%). Usage statistics were moderately effective. Social proof was surprisingly weak, perhaps because it feels generic. Loss framing was effective at preventing churn but generated backlash (negative engagement lift, high unsubscribe rate), suggesting it triggers reactance rather than genuine re-engagement.

The timing is equally important. A PBVR sent 48 hours before billing gives the subscriber time to process the value signal before the payment pain arrives. Sent on billing day, it competes directly with the payment notification and often loses. Sent a week before, it has faded from memory by billing time.

The PBVR is not just an email. It can be an in-app notification, a dashboard highlight, a personalized report, or even a Slack message. The channel should match where the subscriber experiences the product's value. A design tool should send the PBVR in the design tool. A CRM should surface it in the CRM dashboard. Meeting the subscriber in their value context is critical.


[Interactive element: Subscription Churn Cost Simulator. Sliders for subscriber count (1,000-100,000), monthly price ($10-$100), monthly churn rate (1-15%), and acquisition cost ($20-$200); outputs projected annual revenue, annual churn loss, LTV, and ROI.]


Implementation Playbook

Translating this framework into practice requires five specific steps.

Step 1: Estimate the beta distribution of your subscriber base. You cannot measure beta directly, but you can infer it from behavioral signals. Subscribers who sign up during promotional periods, choose monthly over annual plans, and exhibit bursty usage patterns (heavy use followed by long absences) tend to have lower beta values. Build a segmentation model using these proxies.

Step 2: Calculate DAVG for each subscriber at each billing cycle. This requires measuring V_t (usage, engagement, outcomes) and P_t (price, including any discounts). The difference, adjusted by the estimated beta, tells you who is in danger.

Step 3: Identify and target the "threshold band." Subscribers whose DAVG is within a narrow range around zero are the ones whose churn is most influenceable. Subscribers with strongly positive DAVG will not churn regardless. Subscribers with strongly negative DAVG have a genuine product-value problem that behavioral interventions cannot solve. The threshold band, say, DAVG between -$5 and +$10, is where the PBVR and friction strategies have their greatest effect.
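Step 3 reduces to a simple filter once DAVG has been computed per subscriber. The band edges follow the text; the subscriber records are illustrative:

```python
# Flag subscribers in the "threshold band" where retention
# interventions have the greatest effect (band edges from the text).

def threshold_band(subscribers, low=-5.0, high=10.0):
    """Return subscribers whose DAVG falls inside [low, high]."""
    return [s for s in subscribers if low <= s["davg"] <= high]

cohort = [
    {"id": "a", "davg": 22.0},   # safely positive: will not churn
    {"id": "b", "davg": 3.5},    # threshold band: target with PBVR
    {"id": "c", "davg": -2.0},   # threshold band: target with PBVR
    {"id": "d", "davg": -18.0},  # product-value problem: nudges won't help
]
print([s["id"] for s in threshold_band(cohort)])  # ['b', 'c']
```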

Step 4: Deploy pre-billing value reminders with outcome framing. Forty-eight hours before billing, send a personalized communication showing the subscriber what they achieved in the past period. Use outcome language ("you saved," "you completed," "you earned") rather than usage language ("you logged in," "you clicked").

Step 5: Introduce temporal buffers in the cancellation flow. Add a 24-hour confirmation step, presented transparently. During the buffer period, surface the PBVR content again. This gives the subscriber two chances to re-encounter their value signal before the cancellation completes.

Expected Churn Reduction by Intervention (Monthly Billing Cohorts)

The chart ranks interventions by expected churn reduction for monthly billing cohorts. Annual plan conversion is the single most effective intervention, precisely because it eliminates churn windows rather than trying to survive them. The remaining interventions are complementary and can be stacked: a subscriber receiving a PBVR, encountering a temporal buffer, and receiving mid-cycle usage nudges might see a combined churn reduction exceeding 40%.


Conclusion: The Honest Subscription

The framework we have built rests on a single uncomfortable fact: human time preferences are inconsistent, and subscription businesses profit from that inconsistency, whether they design for it or not.

The unethical version of this insight leads to dark patterns: making cancellation impossible, hiding charges, exploiting present bias to extract money from people who derive no value. The 2024 FTC enforcement actions against several subscription companies testify to the prevalence of this approach.

The ethical version leads somewhere more interesting. If we understand that subscribers churn not because the product is bad but because their billing-day self temporarily undervalues future benefits, we can design interventions that help the subscriber make decisions consistent with their own long-term preferences. The PBVR does not deceive; it informs. The temporal buffer does not trap; it pauses. The annual plan does not lock in; it removes unnecessary re-decision points.

Spinoza argued that freedom is not the absence of constraint but the alignment of action with understanding. A subscriber who cancels in a moment of present-biased impulse and then re-subscribes a week later (a common pattern: resubscription rates within 90 days average 15-20% across industries) was not served by the cancellation. They were served by neither the product nor the process.

The honest subscription business does three things. It delivers real value (the product must work). It makes that value visible at the moments when it is most likely to be forgotten (the PBVR). And it designs its billing and cancellation processes to give the subscriber's long-run self a fair hearing alongside their present-biased self.

Churn will never reach zero. Nor should it: some subscribers genuinely should leave, and letting them go cleanly is better than holding them hostage. But present-bias churn, the cancellations that the subscriber's own future self would regret, is waste. It wastes the company's acquisition investment. It wastes the subscriber's switching costs. And it wastes a relationship that both parties, in their calmer moments, would prefer to continue.

The DAVG framework gives us a way to distinguish regrettable churn from rational churn, measure it, predict it, and design against it. That is not manipulation. That is applied behavioral science in service of better decisions, the subscriber's decisions, not just the company's.


References

Ainslie, G. (1975). Specious reward: A behavioral theory of impulsiveness and impulse control. Psychological Bulletin, 82(4), 463-496.

DellaVigna, S., & Malmendier, U. (2006). Paying not to go to the gym. American Economic Review, 96(3), 694-719.

Frederick, S., Loewenstein, G., & O'Donoghue, T. (2002). Time discounting and time preference: A critical review. Journal of Economic Literature, 40(2), 351-401.

Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263-291.

Laibson, D. (1997). Golden eggs and hyperbolic discounting. Quarterly Journal of Economics, 112(2), 443-478.

O'Donoghue, T., & Rabin, M. (1999). Doing it now or later. American Economic Review, 89(1), 103-124.

Prelec, D., & Loewenstein, G. (1998). The red and the black: Mental accounting of savings and debt. Marketing Science, 17(1), 4-28.

Samuelson, P. (1937). A note on measurement of utility. Review of Economic Studies, 4(2), 155-161.

Schelling, T. C. (1978). Egonomics, or the art of self-management. American Economic Review, 68(2), 290-294.

Thaler, R. H. (1981). Some empirical evidence on dynamic inconsistency. Economics Letters, 8(3), 201-207.

The Conversation

5 replies

Ingrid Nilsen

The β-δ model captures the phenomenon but in practice we've found that 'friction at the decision point' dominates the discount-rate parameterization. A cancel button that takes 3 clicks vs 7 clicks shifts churn more than any β we can fit. Behavioral economics is right about the mechanism; operationally, UX gravity is what actually moves the needle.

Dr. Lucas Ferreira

good treatment. one nuance: laibson's 1997 quasi-hyperbolic model assumes a single present-bias parameter β, but the empirical literature since frederick et al. (2002) strongly suggests β is domain-specific, different for money, effort, and attention. for subscription services, which mix all three, collapsing to a single β probably misses the dynamics that matter most

Ece Özdemir

we built a churn-window predictor that layered hyperbolic discounting priors on top of behavioral features (session frequency decay, feature-use breadth) and the AUC uplift vs the behavioral-only baseline was... 0.02. marginal at best. the theoretical framework is beautiful but in large-N data the behavioral signals already encode most of what the discounting model would tell you.

Jin-Ho Park

commitment devices work. we ran an experiment offering annual plans with a small discount framed as 'lock in your rate' and conversion from monthly-to-annual jumped ~14%, and the 12-month retention of the annual cohort was 2.1x the monthly cohort's equivalent retention. not all of that is selection, a decent chunk is exactly the laibson commitment-device mechanism

Olivia Brennan

writing a thesis on present bias and churn, is there any public dataset with the granularity needed to fit β-δ models on real subscription cancellations? all the published papers seem to use proprietary data i cant access

