Glossary
Every concept, defined.
62 canonical definitions — one per concept — that anchor the essays across the archive. Each entry links to the essays that cover it in depth.
Hyperbolic Discounting
also: quasi-hyperbolic discounting · beta-delta model · present bias
Hyperbolic discounting is a time-inconsistent preference pattern where people place disproportionate weight on immediate rewards and steeply discount near-term future payoffs, then flatten their discount curve for more distant periods. It explains why subscribers sign up enthusiastically but churn when the first renewal charge arrives.
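The beta-delta form named in the aliases can be sketched in a few lines; the β and δ values below are illustrative defaults, not estimates.

```python
def beta_delta_value(reward, t, beta=0.7, delta=0.95):
    """Quasi-hyperbolic (beta-delta) discounted value of a reward t periods
    away. t == 0 is 'now'; every future period pays a one-off beta penalty,
    producing the present bias that pure exponential discounting lacks."""
    return float(reward) if t == 0 else beta * (delta ** t) * reward

# The curve is steep between now and the next period,
# then nearly flat between adjacent future periods:
drop_now = beta_delta_value(100, 1) / beta_delta_value(100, 0)      # 0.665
drop_later = beta_delta_value(100, 11) / beta_delta_value(100, 10)  # 0.95
```

The gap between `drop_now` and `drop_later` is exactly the renewal-churn pattern: the first charge sits on the steep part of the curve.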
Prospect Theory
also: cumulative prospect theory · CPT
Prospect theory, developed by Kahneman and Tversky, describes how people choose under risk: outcomes are evaluated as gains or losses relative to a reference point, losses weigh roughly 2.25× as much as equivalent gains, and both gains and losses exhibit diminishing sensitivity.
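The canonical value function is compact enough to state directly; the α, β, λ defaults below are the Tversky-Kahneman (1992) median estimates.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function relative to a reference point at 0:
    concave over gains, convex over losses, with losses scaled by lambda."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

# Loss aversion: a 100-unit loss hurts more than a 100-unit gain pleases.
# Diminishing sensitivity: the second 100 units add less value than the first.
```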
Loss Aversion
also: loss aversion coefficient · lambda parameter
Loss aversion is the empirical finding that the psychological impact of a loss is roughly twice the impact of an equivalent gain — the loss aversion coefficient λ is typically estimated at 2.0 to 2.5 in controlled settings but varies with stakes, platform investment, and framing context.
Endowment Effect
The endowment effect is the finding that people demand significantly more to give up an object than they would pay to acquire it. In digital products, activated accounts and populated workspaces create psychological ownership that makes downgrade and cancellation substantially harder than the symmetric purchase decision.
Sunk Cost Fallacy
The sunk cost fallacy is the tendency to continue investing in a project, product, or relationship because of cumulative prior investment, regardless of whether additional investment produces positive expected value. In product adoption, it creates both productive lock-in (customization investment) and destructive persistence (continuing to use a failing tool).
Choice Architecture
Choice architecture is the deliberate design of the context in which decisions are made — the ordering of options, the default selection, the framing of trade-offs, and the number of alternatives presented. In digital products, it determines which outcome occurs when users operate on low cognitive engagement.
Decoy Effect
also: asymmetric dominance effect · attraction effect
The decoy effect is the phenomenon where adding a third, asymmetrically dominated option to a choice set shifts preference toward the target option. In dynamic pricing it is operationalized by introducing a decoy tier designed to be chosen against, increasing conversion to the intended price point by 20–40% in controlled experiments.
Mental Accounting
Mental accounting, formalized by Richard Thaler, is the tendency to segregate money into non-fungible categories based on source, intended use, or currency, and to apply different decision rules to each. In multi-currency e-commerce it explains why identical purchases feel costlier in a native currency than in a secondary one.
Temporal Construal Theory
also: construal level theory · CLT · psychological distance
Temporal Construal Theory (Trope and Liberman, 2003) holds that people mentally represent distant events in abstract, high-level terms and near events in concrete, low-level terms. Landing-page copy that matches the reader's psychological distance — abstract for cold traffic, concrete for hot — converts systematically better than generic copy.
IKEA Effect
also: effort justification · labor-induced valuation
The IKEA Effect (Norton, Mochon, Ariely, 2012) is the finding that people place disproportionately high value on objects they have partially created themselves — a 63% WTP premium over identical pre-assembled items. In product adoption it converts early configuration effort into durable retention via effort justification.
Network Effects
also: network effect · direct network effects · indirect network effects
A network effect exists when the value of a product to each user increases with the number of other users. Direct network effects (messaging apps) work within a single user side; indirect or cross-side network effects (marketplaces) operate across distinct user groups. Strong network effects create winner-take-most markets.
Switching Costs
also: switching cost engineering · lock-in
Switching costs are the total economic, procedural, and psychological burden a customer incurs when moving from one vendor to another. Klemperer's taxonomy decomposes them into transaction, learning, contractual, uncertainty, and psychological components, each with different decay rates and defensive strategies.
Zero Marginal Cost
Zero marginal cost describes the economics of digital goods, where serving one additional user imposes essentially no incremental production cost. This property inverts classical pricing theory: optimal pricing becomes a discovery problem over willingness-to-pay distributions, not a cost-plus calculation.
Platform Dynamics
also: two-sided market · multi-sided platform · platform economics
Platform dynamics are the economic mechanics of businesses that connect distinct user groups and extract value from the interactions between them. Platforms face chicken-and-egg launch problems, pricing asymmetries across sides, and governance choices around cannibalization and competitive entry.
Multi-Homing
Multi-homing describes users or suppliers that participate on multiple competing platforms simultaneously. High multi-homing on one or both sides of a two-sided market erodes the winner-take-most dynamic traditionally assumed for platforms, and is a critical predictor of vertical-SaaS competitive outcomes.
Platform Cannibalization
Platform cannibalization is the strategic decision by a platform to compete directly with its own complementors, entering adjacent product categories the platform's complementors originally filled. The decision follows a predictable envelope pattern once a category exceeds a threshold share of platform traffic.
Information Goods Pricing
also: API pricing · zero-marginal-cost pricing · value-based pricing
Information goods pricing is the pricing theory for products with near-zero marginal cost: APIs, SaaS, content. Optimal pricing departs from cost-plus and instead discovers willingness-to-pay distributions via bundling, tiering, two-part tariffs, and usage-based metering that align price to the value delivered per customer segment.
Data Moats
also: data network effects · data feedback loop
Data moats are defensive advantages that compound as a product accumulates proprietary usage data: each incremental interaction improves the model, which improves the product, which attracts more users, which generates more data. Unlike network effects they operate without direct user-to-user interaction.
Multi-Touch Attribution
also: MTA · touchpoint attribution
Multi-touch attribution assigns credit for a conversion across all marketing touchpoints in the customer journey, using rules (linear, time-decay, U-shaped) or data-driven models. Most MTA implementations confuse correlation with causation and systematically overvalue bottom-funnel channels that intercept already-decided buyers.
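The rule-based schemes named above are simple to state; this sketch assumes unique touchpoint names and is, like all rule-based MTA, a heuristic rather than a causal estimate.

```python
def attribute(touchpoints, rule="linear"):
    """Rule-based multi-touch attribution over an ordered list of unique
    touchpoint names. Returns credit shares that sum to 1."""
    n = len(touchpoints)
    if rule == "linear":                 # equal credit to every touch
        w = [1.0] * n
    elif rule == "time_decay":           # later touches earn more credit
        w = [2.0 ** i for i in range(n)]
    elif rule == "u_shaped":             # 40/40 to first and last, 20 spread
        w = [0.4] + [0.2 / (n - 2)] * (n - 2) + [0.4] if n > 2 else [0.5] * n
    total = sum(w)
    return {t: wi / total for t, wi in zip(touchpoints, w)}
```

Note how the U-shaped rule hard-codes the bottom-funnel bias the entry describes: the last touch always receives 40% regardless of whether it caused anything.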
Marketing Mix Modeling
also: MMM · media mix modeling
Marketing mix modeling is a top-down econometric approach that estimates the causal contribution of each marketing channel to sales using aggregate historical data and controls for non-marketing drivers (price, distribution, seasonality, competition). Privacy-first MMM has returned as deterministic user-level tracking has eroded.
Causal Inference
also: causal graph · DAG · directed acyclic graph
Causal inference is the statistical machinery for estimating causal effects from data rather than just describing correlations. In marketing it involves directed acyclic graphs (DAGs) for identifying confounders, instrumental variables for unobserved confounding, and quasi-experimental methods like difference-in-differences and synthetic control.
Incrementality Testing
also: geo-lift · conversion lift test
Incrementality testing measures the causal lift produced by a marketing intervention by comparing treated units to untreated control units, typically using geographic randomization (geo-lift), user holdouts, or ghost bids. It is the empirical gold standard for answering 'what did this spend actually cause?'
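At its core the readout is a difference in means between treated and held-out units; a minimal sketch (real designs add variance estimates and randomization checks):

```python
def incremental_lift(treated, control):
    """Absolute and relative lift from a holdout test.
    treated / control: per-unit outcomes (conversions, revenue, ...)."""
    t = sum(treated) / len(treated)
    c = sum(control) / len(control)
    return t - c, (t - c) / c

# Hypothetical per-market conversion counts:
abs_lift, rel_lift = incremental_lift([12, 15, 11, 14], [10, 9, 11, 10])
```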
Causal Discovery
also: PC algorithm · constraint-based causal discovery
Causal discovery is the family of algorithms that infer directed causal structure from observational data alone, using conditional-independence tests (PC, FCI) or score-based search (GES). Applied to business data it produces testable DAGs that replace ad-hoc causal intuitions with falsifiable hypotheses.
CausalImpact (Bayesian Structural Time Series)
also: BSTS · Bayesian structural time series · synthetic control
CausalImpact is Google's open-source implementation of Bayesian structural time series for measuring the causal effect of a known intervention on a time-series outcome. It constructs a counterfactual from correlated control series and reports the posterior distribution of the cumulative effect with calibrated uncertainty.
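The core idea, minus the Bayesian machinery, can be illustrated with a plain least-squares counterfactual. This is a stand-in sketch, not the CausalImpact API, and unlike the real library it reports no uncertainty.

```python
def cumulative_effect(y, x, pre_end):
    """Fit the outcome y on a correlated, unaffected control series x over
    the pre-period [0, pre_end), project the counterfactual forward, and
    sum the post-period gap between actual and counterfactual."""
    xp, yp = x[:pre_end], y[:pre_end]
    mx = sum(xp) / pre_end
    my = sum(yp) / pre_end
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(xp, yp)) / \
        sum((xi - mx) ** 2 for xi in xp)
    a = my - b * mx
    # Post-period: actual minus what the pre-period relationship predicts.
    return sum(yi - (a + b * xi) for xi, yi in zip(x[pre_end:], y[pre_end:]))
```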
Directed Acyclic Graph (Causal DAG)
also: DAG · Bayesian network · structural causal model
A Directed Acyclic Graph is the formal representation of a causal structure: nodes are variables, directed edges are direct causal effects, and no cycles are permitted. DAGs encode the identification assumptions needed to estimate causal effects from observational data via the back-door and front-door criteria.
Unified Measurement Architecture
also: UMM · triangulation · MMM plus MTA plus experimentation
A Unified Measurement Architecture combines Marketing Mix Modeling, Multi-Touch Attribution, and randomized incrementality experiments into a single decision stack, with each method calibrated against the others. MMM supplies top-down causal anchors, MTA supplies in-flight optimization, experiments supply the ground truth.
Geo-Lift Testing
also: geographic split test · geo experiment · market holdout
Geo-lift testing is an experimental design that randomizes marketing treatment across geographic markets (DMAs, metros, countries) to measure incremental impact when user-level randomization is infeasible. Synthetic-control methods and open-source libraries like GeoLift construct matched control markets from pre-period covariates.
Bayesian Inference
also: Bayesian statistics · Bayes' theorem
Bayesian inference updates prior beliefs about a parameter using observed data via Bayes' theorem to produce a posterior distribution. In A/B testing it directly answers 'what is the probability that B beats A?' — the question product teams actually ask — unlike the indirect counterfactual framing of frequentist p-values.
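The "probability B beats A" readout is a few lines of Monte Carlo over Beta posteriors; uniform Beta(1, 1) priors are assumed here.

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=20000, seed=0):
    """Monte Carlo estimate of P(rate_B > rate_A) under independent
    Beta(1, 1) priors updated with observed conversions."""
    rng = random.Random(seed)
    wins = sum(
        rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        > rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        for _ in range(draws)
    )
    return wins / draws
```

A call like `prob_b_beats_a(80, 1000, 120, 1000)` returns the direct answer product teams want, with no p-value translation step.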
Survival Analysis
also: Cox proportional hazards · hazard model · time-to-event
Survival analysis models time-to-event data — how long until a customer churns, a subscription renews, a machine fails — accounting for censored observations where the event has not yet occurred. Cox proportional hazards is the standard semi-parametric model; deep recurrent survival models handle non-proportional hazards.
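The censoring bookkeeping is the part worth seeing; a minimal Kaplan-Meier estimator (Cox regression adds covariates on top of this logic):

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier survival estimate handling right-censored records.
    durations: time to churn or last observation; observed: 1 if churn
    was seen, 0 if the subscription is still alive (censored)."""
    times = sorted(set(t for t, e in zip(durations, observed) if e))
    s, curve = 1.0, []
    for t in times:
        at_risk = sum(1 for d in durations if d >= t)       # still in play at t
        events = sum(1 for d, e in zip(durations, observed) if d == t and e)
        s *= 1 - events / at_risk
        curve.append((t, s))
    return curve
```

Censored records still contribute to the at-risk denominator until they drop out, which is exactly the information a naive churn-rate calculation throws away.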
Cohort Analysis
also: cohort economics · retention curve
Cohort analysis groups users by a shared origin event (acquisition month, first purchase, signup source) and tracks behavior over time for each group. It separates true retention from the compositional distortion caused by new-user dilution, and is the foundational unit of analysis for subscription economics.
Anomaly Detection
also: outlier detection · isolation forest
Anomaly detection identifies observations that deviate meaningfully from expected behavior, accounting for trend, seasonality, and variance. In revenue data it separates true incidents (payment outages, pricing bugs) from normal fluctuation. Isolation forests and Prophet-based decomposition are the practical workhorses.
Product-Market Fit
also: PMF
Product-market fit is the empirical condition where a cohort's retention curve flattens above zero — a group of users has found sufficient value to make the product a persistent part of their behavior. It is not a feeling; it is a quantifiable property of retention, NPS decomposition, and usage depth.
Analytics Engineering
also: dbt · modern data stack · ELT
Analytics engineering is the discipline of building reliable, tested, version-controlled transformations on top of a cloud warehouse, bridging data engineering and analysis. Tools like dbt, Dagster, and Airbyte formalize a software-engineering workflow for SQL transformations with tests, documentation, and lineage.
Metric Ontology
also: semantic layer · metric definition framework
A metric ontology is a versioned, centrally governed definition of every metric an organization uses, specifying the grain, filters, time window, and source tables so that the same metric produces identical values regardless of tool, dashboard, or analyst. It prevents the drift that silently corrupts data-driven decisions.
Unit Economics
also: CAC · LTV · LTV to CAC ratio · payback period
Unit economics is the financial performance of a single customer (or transaction) decomposed into acquisition cost, gross margin per period, retention, and payback period. Cohort-level unit economics — computed per acquisition cohort rather than rolled up — is the only form that survives growth-driven distortion.
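A cohort-level sketch under stated simplifications: undiscounted margins, survival-weighted, with payback as the first period in which cumulative margin covers CAC.

```python
def cohort_unit_economics(cac, margin_per_period, survival):
    """survival[i]: fraction of the cohort still active in period i
    (survival[0] == 1.0). Returns (LTV, LTV/CAC, payback period or None).
    Undiscounted sketch; real models discount and vary margin by period."""
    ltv = sum(margin_per_period * s for s in survival)
    cum, payback = 0.0, None
    for i, s in enumerate(survival):
        cum += margin_per_period * s
        if payback is None and cum >= cac:
            payback = i + 1
    return ltv, ltv / cac, payback

ltv, ratio, payback = cohort_unit_economics(100, 30, [1.0, 0.9, 0.8, 0.7])
```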
Peeking Problem
also: optional stopping · sequential testing
The peeking problem is the inflation of false-positive rates that occurs when a frequentist A/B test is repeatedly evaluated before reaching its pre-registered sample size. A nominal 5% false-positive rate can become 20–30% under daily peeking. Sequential-analysis methods and Bayesian testing with pre-committed decision rules are designed to tolerate continuous monitoring.
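The inflation is easy to reproduce by simulation: run A/A tests (identical arms, no true effect) and check significance after every day. Parameters below are illustrative.

```python
import random

def peeking_false_positive_rate(n_sims=400, days=20, n_per_day=50,
                                base_rate=0.1, z_crit=1.96, seed=7):
    """Share of A/A tests that cross nominal p < .05 at *any* daily look,
    using a two-proportion z-test at each look."""
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(n_sims):
        a = b = n = 0
        for _ in range(days):
            a += sum(rng.random() < base_rate for _ in range(n_per_day))
            b += sum(rng.random() < base_rate for _ in range(n_per_day))
            n += n_per_day
            pooled = (a + b) / (2 * n)
            se = (2 * pooled * (1 - pooled) / n) ** 0.5
            if se > 0 and abs(a - b) / n / se > z_crit:
                false_positives += 1   # declared a winner that doesn't exist
                break
    return false_positives / n_sims
```

With 20 daily looks the simulated rate lands far above the nominal 5%, matching the 20–30% range quoted above.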
Cox Proportional Hazards Model
also: Cox PH · Cox regression
The Cox proportional hazards model is the semi-parametric workhorse of survival analysis: it estimates how covariates multiply the baseline hazard rate without requiring a parametric form for the baseline. It yields interpretable hazard ratios under the assumption that the ratio is constant over time.
Isolation Forest
also: iForest · tree-based anomaly detection
Isolation Forest is a tree-based anomaly detection algorithm that scores observations by how easily they can be isolated via random recursive partitioning. Anomalies are isolated in few splits; normal points require many. The algorithm handles mixed feature types without density estimation or distance calculations.
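The isolation intuition fits in a toy one-dimensional version (production use would reach for scikit-learn's `IsolationForest`, which builds many randomized trees over multivariate data):

```python
import random

def isolation_depth(x, data, rng, max_depth=50):
    """Random splits needed to isolate x from the rest: a 1-D toy version
    of one isolation-tree path. Outliers separate in few splits."""
    pts, depth = list(data), 0
    while len(pts) > 1 and depth < max_depth:
        lo, hi = min(pts), max(pts)
        if lo == hi:
            break
        cut = rng.uniform(lo, hi)
        pts = [p for p in pts if (p <= cut) == (x <= cut)]  # keep x's side
        depth += 1
    return depth

rng = random.Random(0)
data = [rng.gauss(0, 1) for _ in range(200)] + [12.0]   # one blatant outlier

def avg_depth(x, trials=50):
    return sum(isolation_depth(x, data, rng) for _ in range(trials)) / trials
```

Averaged over many random trees, the outlier at 12.0 isolates in a couple of splits while a point inside the dense cluster takes many.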
Dashboards-to-Decisions Gap
also: decision systems · automated analytics
The dashboards-to-decisions gap is the structural failure of analytics investment: teams produce more dashboards but decisions don't get better or faster. Closing the gap requires moving from descriptive reports to decision systems — pre-specified trigger thresholds, automated action routing, and outcome logging for calibration.
Contextual Bandits
Contextual bandits are online learning algorithms that choose an action (a price, a layout, a recommendation) given a context (user features), observe a reward, and update their policy to balance exploration and exploitation. They are the modern foundation of real-time personalization and dynamic pricing.
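The explore/exploit loop can be shown with the simplest policy, epsilon-greedy over per-context reward estimates; production systems use LinUCB or Thompson sampling instead.

```python
import random

class EpsilonGreedyBandit:
    """Minimal contextual bandit: a running mean reward per (context, action),
    with epsilon-greedy exploration."""

    def __init__(self, actions, epsilon=0.1, seed=0):
        self.actions = actions
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.n = {}     # (context, action) -> number of pulls
        self.mean = {}  # (context, action) -> mean observed reward

    def choose(self, context):
        if self.rng.random() < self.epsilon:          # explore
            return self.rng.choice(self.actions)
        return max(self.actions,                      # exploit current best
                   key=lambda a: self.mean.get((context, a), 0.0))

    def update(self, context, action, reward):
        k = (context, action)
        self.n[k] = self.n.get(k, 0) + 1
        m = self.mean.get(k, 0.0)
        self.mean[k] = m + (reward - m) / self.n[k]   # incremental mean
```

The same loop (choose, observe reward, update) applies whether the action is a price point, a layout variant, or a recommendation slot.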
Product Embeddings
also: item embeddings · transformer embeddings
Product embeddings are dense vector representations of items in a learned semantic space, such that geometrically close items are similar in the behavioral or content sense. Transformer-based embeddings trained on session sequences capture nuanced substitute/complement relationships that simple collaborative filtering misses.
Cold Start Problem
The cold-start problem describes recommendation and ranking systems' inability to serve new users or new items with no interaction history. Few-shot learning, meta-learning (MAML), and prototypical networks address it by learning initializations that adapt quickly from sparse signals.
Dynamic Pricing
Dynamic pricing is the practice of adjusting prices in real time based on demand, inventory, user context, competition, or time. Machine-learned pricing uses contextual bandits and demand models, but introduces fairness, perception, and regulatory considerations that static pricing avoids.
Conformal Prediction
Conformal prediction is a model-agnostic framework for producing calibrated prediction intervals with finite-sample coverage guarantees. Applied to demand forecasting it replaces opaque point predictions with intervals that contain the true demand at the specified coverage rate, averaged over forecasts.
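The split-conformal recipe is short enough to show in full: widen any point forecast by a quantile of held-out absolute residuals.

```python
import math

def split_conformal_interval(calibration_residuals, point_forecast, alpha=0.1):
    """Split conformal prediction: the (1 - alpha) empirical quantile of
    held-out absolute residuals widens the point forecast into an interval
    with finite-sample marginal coverage >= 1 - alpha (exchangeable data)."""
    scores = sorted(abs(r) for r in calibration_residuals)
    n = len(scores)
    # Conformal quantile index: ceil((n + 1) * (1 - alpha)), 1-indexed.
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    q = scores[k]
    return point_forecast - q, point_forecast + q
```

The guarantee holds for any underlying forecasting model, which is what makes the wrapper model-agnostic.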
Uplift Modeling
also: heterogeneous treatment effects · HTE · incremental response modeling
Uplift modeling estimates the heterogeneous causal effect of a treatment — a promotion, a feature, a message — on each individual. Unlike propensity or response models, it explicitly targets the difference in outcome between the treated and untreated counterfactual, enabling promotion budgets to be spent only on the persuadable segment.
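The two-model ("T-learner") approach makes the counterfactual difference explicit; the base learner here is a toy 1-nearest-neighbour on a single feature, where real systems use gradient boosting or dedicated meta-learners.

```python
def t_learner_uplift(treated, control, x):
    """T-learner sketch: fit one outcome model on the treated group and one
    on the control group, predict both at x, and return the difference.
    treated / control: lists of (feature, outcome) pairs."""
    def nn(data):
        return min(data, key=lambda fo: abs(fo[0] - x))[1]
    return nn(treated) - nn(control)

# Hypothetical (feature, conversion-rate) pairs:
treated = [(1.0, 0.9), (5.0, 0.25)]
control = [(1.0, 0.3), (5.0, 0.2)]
persuadable = t_learner_uplift(treated, control, 1.0)  # large uplift
sure_thing = t_learner_uplift(treated, control, 5.0)   # near-zero uplift
```

Spending promotion budget on the `sure_thing` segment is pure waste, which is the targeting error uplift models exist to prevent.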
Learning to Rank
also: LTR · L2R · search ranking
Learning to Rank is the class of supervised machine learning algorithms that optimize the ordering of a result set — search results, recommendations, product rankings — for revenue, engagement, or relevance. Pairwise (RankNet) and listwise (LambdaMART, ListNet) objectives are the dominant training paradigms.
Graph Neural Networks
also: GNN · GraphSAGE · graph convolution
Graph Neural Networks learn representations over graph-structured data by message-passing between nodes. In e-commerce cross-sell, GNNs ingest the user-item-category graph and produce recommendations that respect product hierarchy, co-purchase relationships, and session structure — outperforming flat collaborative filtering by 15–25% on business metrics.
Real-Time Personalization
also: session-based recommendation · in-session personalization
Real-time personalization adapts product recommendations, content, and pricing within a session based on immediate behavior signals — dwell time, scroll depth, added items, search queries. Contextual-bandit systems with streaming feature stores enable policy updates in milliseconds, producing 10–25% lift over batch-trained models.
Real-Time Fraud Detection
also: transaction fraud scoring · payment fraud ML
Real-time fraud detection scores checkout transactions within latency budgets of 50–200 ms to decide allow, challenge, or block. Production systems combine gradient boosting (feature-rich), graph features (linked-device, shared-card), and autoencoder-based anomaly scoring under extreme class imbalance and adversarial adaptation.
LLM-Powered Catalog Enrichment
also: LLM catalog enrichment · catalog enrichment · AI catalog enrichment
LLM-powered catalog enrichment uses large language models to generate product descriptions, attributes, categorization, and structured data from sparse inputs (SKU name, supplier feed) at scale. It eliminates the manual-curation bottleneck that has historically limited catalog coverage in marketplace and retail businesses.
Mental Availability
also: brand salience · Ehrenberg-Bass
Mental availability is the propensity of a brand to be thought of in buying situations. It is a network property across the category's entry points, not a single recall score, and is the dominant predictor of market share according to Ehrenberg-Bass research.
Category Entry Points
also: CEPs
Category Entry Points are the specific cues — occasions, needs, moods, locations, social contexts — that trigger a consumer to think about a category. CEPs are the measurable unit of mental availability. A brand's CEP portfolio breadth is a stronger driver of share than the depth of association with any single CEP.
Brand Equity
also: brand value · brand strength
Brand equity is the premium — in sales, pricing power, and LTV — attributable to the brand identity beyond functional product attributes. It is an asset that compounds with consistent investment and decays with over-optimization toward short-term performance marketing, making brand-vs-performance allocation a portfolio problem.
Jobs-to-be-Done
also: JTBD
Jobs-to-be-Done is a segmentation framework that groups customers by the underlying 'job' they hire a product to do, rather than by demographic or firmographic attributes. Modern implementations use NLP on customer interviews and reviews to derive job spaces at scale.
Attention Economics
also: cost per attention second · CPAS · attention-based media buying
Attention economics prices media against the attention it actually produces, not impressions or clicks. A display banner costing 2.50 CPM may generate zero attention seconds, making its true cost per attention second far higher than a connected-TV ad priced at 25 CPM. Correcting for attention reshuffles media-plan ROI rankings.
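The arithmetic behind the entry's example is a one-liner; the attention-seconds figures below are assumed for illustration, not measured values.

```python
def cost_per_attention_second(cpm, attention_seconds_per_impression):
    """CPAS: price of 1000 impressions divided by the attention seconds
    those 1000 impressions actually generate."""
    return cpm / (1000 * attention_seconds_per_impression)

# Assumed attention values: a cheap banner earning ~0.2s per impression
# vs. a connected-TV spot earning ~10s per impression.
banner_cpas = cost_per_attention_second(2.50, 0.2)   # 0.0125 per second
ctv_cpas = cost_per_attention_second(25.0, 10.0)     # 0.0025 per second
```

Under these assumptions the banner, ten times cheaper per impression, is five times more expensive per attention second, which is how attention pricing reshuffles media-plan rankings.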
Creative Fatigue
also: ad fatigue · entropy-based creative decay
Creative fatigue is the decay of advertising performance as the same creative is repeatedly shown to the same audience. Entropy-based detection metrics identify the inflection point earlier than CTR decline — typically 40–60% through the CTR curve — enabling proactive rotation before wasted spend accumulates.
Content Moats (Topical Authority)
also: topical authority · compounding content · SEO moat
A content moat is a defensive advantage built from a deep, interconnected portfolio of articles on a topic such that search engines treat the publisher as the topical authority. Returns compound: each new article benefits from the authority of the existing set, while competitors cannot catch up without a comparable investment.
Brand-Performance Portfolio Optimization
also: brand-vs-performance allocation · Markowitz for marketing
Brand-performance portfolio optimization applies Markowitz mean-variance portfolio theory to the allocation of marketing budget between brand-building (long-duration, uncertain ROI) and performance (short-duration, tightly measurable). The efficient frontier reveals allocations that dominate the common 60/40 heuristic in both expected return and variance.
Strategy-Execution Gap
also: OKR cascade · goal decomposition
The strategy-execution gap is the systematic distance between stated strategy and observed team behavior. The reason most strategic plans fail is not bad strategy but poor decomposition into measurable objectives at the team and individual level. Quantitative goal cascades (OKRs done well) close the gap.
Market Sensing with LLMs
also: competitive intelligence · AI-driven competitor monitoring
Market sensing with LLMs is the practice of using large language models to continuously monitor competitive messaging, product changes, and strategic positioning across unstructured signal sources (news, SEC filings, job listings, changelogs). It replaces quarterly competitive-intelligence reports with continuous, structured signal feeds.
Full-Funnel Simulation
also: agent-based marketing simulation · funnel model · full funnel simulation · simulation model · marketplace simulation
Full-funnel simulation models the connected dynamics of awareness, consideration, conversion, and retention using agent-based or system-dynamics methods. It reveals nonlinear budget allocation effects — that doubling top-of-funnel spend may not double conversions if mid-funnel conversion rates are the binding constraint.
Hidden Cost of Over-Optimization
also: Goodhart's Law · brand erosion from performance
The hidden cost of over-optimization is Goodhart's Law applied to marketing: when a bidding algorithm targets short-term conversion KPIs exclusively, it systematically degrades brand equity and long-term customer value. The damage is invisible in the optimization metric but visible in downstream retention and organic traffic.