Bayesian Inference
Bayesian inference updates prior beliefs about a parameter using observed data via Bayes' theorem to produce a posterior distribution. In A/B testing it directly answers 'what is the probability that B beats A?' — the question product teams actually ask — unlike the indirect counterfactual framing of frequentist p-values.
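Under a Beta prior the posterior for a conversion rate is conjugate, so "probability that B beats A" reduces to comparing posterior draws. A minimal stdlib-only sketch, assuming Beta(1, 1) priors and invented conversion counts:

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=100_000, seed=0):
    """Monte Carlo estimate of P(rate_B > rate_A) under Beta(1, 1) priors."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        # Posterior for each arm is Beta(1 + conversions, 1 + failures).
        a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += b > a
    return wins / draws

# 120/1000 vs 150/1000: B is ahead, and the posterior quantifies by how much.
print(round(prob_b_beats_a(120, 1000, 150, 1000), 2))
```

The returned number is the direct answer product teams want, with no reference to hypothetical repeated experiments.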
Survival Analysis
Survival analysis models time-to-event data — how long until a customer churns, a subscription renews, a machine fails — accounting for censored observations where the event has not yet occurred. Cox proportional hazards is the standard semi-parametric model; deep recurrent survival models handle non-proportional hazards.
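Before fitting a Cox model, the nonparametric Kaplan-Meier estimator is the usual first look at a censored retention curve. A stdlib-only sketch with invented durations (months until churn, with still-active customers censored):

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier survival estimate; observed=False marks censored rows."""
    S, curve = 1.0, []
    at_risk = len(durations)
    # Walk distinct times in order; at each event time, multiply survival
    # by (1 - deaths / number-at-risk).
    for t in sorted(set(durations)):
        deaths = sum(1 for d, o in zip(durations, observed) if d == t and o)
        if deaths:
            S *= 1 - deaths / at_risk
            curve.append((t, S))
        at_risk -= sum(1 for d in durations if d == t)
    return curve

# 5 customers churned at months 1, 2, 2, 4, 6; 3 censored at 3, 6, 7.
durations = [1, 2, 2, 3, 4, 6, 6, 7]
observed  = [True, True, True, False, True, True, False, False]
print(kaplan_meier(durations, observed))
```

Dropping the censored rows instead of handling them this way would bias the curve downward, which is the core reason survival methods exist.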
Cohort Analysis
Cohort analysis groups users by a shared origin event (acquisition month, first purchase, signup source) and tracks behavior over time for each group. It separates true retention from the compositional distortion caused by new-user dilution, and is the foundational unit of analysis for subscription economics.
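The mechanics reduce to indexing activity by cohort and by age since the origin event. A hypothetical sketch with made-up events and months encoded as integers:

```python
from collections import defaultdict

# (user_id, signup_month, active_month) events; values are illustrative.
events = [
    ("u1", 0, 0), ("u1", 0, 1), ("u1", 0, 2),
    ("u2", 0, 0), ("u2", 0, 1),
    ("u3", 1, 1), ("u3", 1, 2),
    ("u4", 1, 1),
]

# cohort -> age-in-months -> set of active users
active = defaultdict(lambda: defaultdict(set))
for user, cohort, month in events:
    active[cohort][month - cohort].add(user)

for cohort in sorted(active):
    size = len(active[cohort][0])
    curve = [len(active[cohort][age]) / size for age in sorted(active[cohort])]
    print(cohort, [round(r, 2) for r in curve])
```

Each printed row is one cohort's retention curve; comparing rows at the same age is what removes new-user dilution from the picture.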
Anomaly Detection
Anomaly detection identifies observations that deviate meaningfully from expected behavior, accounting for trend, seasonality, and variance. In revenue data it separates true incidents (payment outages, pricing bugs) from normal fluctuation. Isolation forests and Prophet-based decomposition are the practical workhorses.
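A Prophet-style pipeline decomposes the series and then flags extreme residuals; the same idea can be sketched with a weekly seasonal median and a robust (MAD-based) z-score using only the standard library. The revenue figures below are invented:

```python
import statistics

def flag_anomalies(series, period=7, threshold=3.5):
    """Flag points whose residual from the seasonal median is extreme.
    Uses a robust z-score (median / MAD) so anomalies don't inflate the scale."""
    seasonal = [statistics.median(series[i::period]) for i in range(period)]
    residuals = [x - seasonal[i % period] for i, x in enumerate(series)]
    med = statistics.median(residuals)
    mad = statistics.median(abs(r - med) for r in residuals) or 1e-9
    # 0.6745 rescales MAD so the score is comparable to a standard deviation.
    return [i for i, r in enumerate(residuals)
            if abs(0.6745 * (r - med) / mad) > threshold]

# Three weeks of daily revenue with a weekend dip, plus one outage day.
revenue = [100, 102, 98, 101, 99, 60, 62,
           103, 100, 97, 102, 15, 61, 63,
           99, 101, 100, 100, 98, 59, 60]
print(flag_anomalies(revenue))
```

The seasonal step is what keeps the normal weekend dip from being flagged; only the outage day stands out after decomposition.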
Product-Market Fit
Product-market fit is the empirical condition where a cohort's retention curve flattens above zero — a group of users has found sufficient value to make the product a persistent part of their behavior. It is not a feeling; it is a quantifiable property of retention, NPS decomposition, and usage depth.
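The flattening criterion can be made operational as a simple heuristic on the retention curve; the tolerance and floor below are illustrative choices, not standard constants:

```python
def retention_flattens(curve, tail=3, slope_tol=0.01, floor=0.05):
    """Heuristic PMF check: does the retention curve level off above a floor?
    `curve` is retention by period, starting at 1.0 for period zero."""
    tail_points = curve[-tail:]
    avg_drop = (tail_points[0] - tail_points[-1]) / (tail - 1)
    return avg_drop <= slope_tol and tail_points[-1] >= floor

print(retention_flattens([1.0, 0.55, 0.42, 0.40, 0.39, 0.39]))  # plateaus near 39%
print(retention_flattens([1.0, 0.50, 0.30, 0.18, 0.10, 0.05]))  # decays toward zero
```

The first curve plateaus well above zero and passes; the second decays toward zero and fails, however impressive its early periods look.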
Analytics Engineering
Analytics engineering is the discipline of building reliable, tested, version-controlled transformations on top of a cloud warehouse, bridging data engineering and analysis. Tools like dbt, Dagster, and Airbyte formalize a software-engineering workflow for SQL transformations with tests, documentation, and lineage.
Metric Ontology
A metric ontology is a versioned, centrally-governed definition of every metric an organization uses, specifying the grain, filters, time window, and source tables so that the same metric produces identical values regardless of tool, dashboard, or analyst. It prevents the drift that silently corrupts data-driven decisions.
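One way to make such definitions concrete is a typed registry entry. The fields and the example metric below are hypothetical, sketching what a governed definition pins down:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    """One governed metric: grain, filters, window, and source are explicit,
    so every tool computing it computes the same thing. Illustrative schema."""
    name: str
    version: int
    grain: str              # e.g. "user_id, week"
    source_table: str
    filters: tuple = ()     # predicates applied before aggregation
    aggregation: str = "sum"
    time_window: str = "28d"

# A hypothetical registry entry; the metric name and table are invented.
WEEKLY_ACTIVE = MetricDefinition(
    name="weekly_active_users", version=3,
    grain="user_id, week",
    source_table="analytics.fct_events",
    filters=("event_type != 'internal'",),
    aggregation="count_distinct(user_id)",
    time_window="7d",
)
print(WEEKLY_ACTIVE.version, WEEKLY_ACTIVE.grain)
```

Freezing the dataclass and versioning the entry are the point: a definition change is a new version, not a silent edit.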
Unit Economics
Unit economics is the financial performance of a single customer (or transaction) decomposed into acquisition cost, gross margin per period, retention, and payback period. Cohort-level unit economics — computed per acquisition cohort rather than rolled up — is the only form that survives growth-driven distortion.
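The payback computation follows directly from the decomposition: weight each period's margin by cumulative retention and find when the running total covers CAC. A sketch with illustrative numbers:

```python
def payback_period(cac, margin_per_period, retention_rate, max_periods=48):
    """Periods until cumulative retained gross margin recovers CAC.
    Assumes a constant per-period retention rate; None if never recovered."""
    cumulative, survival = 0.0, 1.0
    for period in range(1, max_periods + 1):
        cumulative += margin_per_period * survival
        if cumulative >= cac:
            return period
        survival *= retention_rate
    return None

# CAC $120, $15 gross margin per month, 90% monthly retention.
print(payback_period(cac=120, margin_per_period=15, retention_rate=0.90))  # → 16
```

Note how retention dominates: at 90% monthly retention payback takes 16 months, while naive CAC / margin arithmetic would promise 8.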
Peeking Problem
The peeking problem is the inflation of false-positive rates that occurs when a frequentist A/B test is repeatedly evaluated before reaching its pre-registered sample size. A nominal 5% false-positive rate can become 20–30% under daily peeking. Sequential-analysis methods (group-sequential designs, always-valid confidence sequences) are designed to control the error rate under continuous monitoring; Bayesian testing reframes the question, though it is not automatically immune to optional stopping.
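The inflation is easy to demonstrate by simulating A/A tests (no true effect) and declaring success if any interim look crosses the nominal threshold. A stdlib-only Monte Carlo sketch with illustrative sample sizes:

```python
import math
import random

def peeking_false_positive_rate(n_tests=400, peeks=10, n_per_peek=200,
                                p=0.3, alpha_z=1.96, seed=1):
    """Simulate A/A tests evaluated after every batch of data arrives.
    A test counts as 'significant' if ANY peek crosses the z threshold."""
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(n_tests):
        ca = cb = na = nb = 0
        for _ in range(peeks):
            ca += sum(rng.random() < p for _ in range(n_per_peek))
            na += n_per_peek
            cb += sum(rng.random() < p for _ in range(n_per_peek))
            nb += n_per_peek
            pa, pb = ca / na, cb / nb
            pool = (ca + cb) / (na + nb)
            se = math.sqrt(pool * (1 - pool) * (1 / na + 1 / nb))
            if se and abs(pa - pb) / se > alpha_z:
                false_positives += 1
                break  # the experimenter stops at the first 'win'
    return false_positives / n_tests

print(peeking_false_positive_rate())  # well above the nominal 5%
```

Since both arms are identical, every rejection is a false positive; stopping at the first crossing is exactly what daily dashboard-watching does.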
Cox Proportional Hazards Model
The Cox proportional hazards model is the semi-parametric workhorse of survival analysis: it estimates how covariates multiply the baseline hazard rate without requiring a parametric form for the baseline. It yields interpretable hazard ratios under the proportional-hazards assumption: each covariate's hazard ratio is constant over time.
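For a single binary covariate, the Breslow partial log-likelihood can be maximized by brute-force grid search, which keeps the sketch dependency-free. The toy data are invented; real fits should use a proper solver such as lifelines:

```python
import math

def cox_hazard_ratio(times, events, x, grid=None):
    """Fit a one-covariate Cox model by maximizing the Breslow partial
    log-likelihood over a grid of beta values; returns (beta, exp(beta))."""
    if grid is None:
        grid = [b / 100 for b in range(-300, 301)]
    def pll(beta):
        total = 0.0
        for i, (t, e) in enumerate(zip(times, events)):
            if not e:
                continue  # censored rows contribute only through risk sets
            risk = [math.exp(beta * x[j]) for j, tj in enumerate(times) if tj >= t]
            total += beta * x[i] - math.log(sum(risk))
        return total
    beta = max(grid, key=pll)
    return beta, math.exp(beta)

# Toy data: x=1 is the treated group, which tends to survive longer.
times  = [2, 4, 5, 6, 7, 9, 11, 14]
events = [1, 1, 1, 1, 1, 1, 0, 1]
x      = [0, 1, 0, 0, 1, 0, 1, 1]
beta, hr = cox_hazard_ratio(times, events, x)
print(round(hr, 2))  # hazard ratio below 1: treatment lowers the hazard
```

Note that the baseline hazard never appears: each event's contribution compares the subject who failed against everyone still at risk, which is what makes the model semi-parametric.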
Isolation Forest
Isolation Forest is a tree-based anomaly detection algorithm that scores observations by how easily they can be isolated via random recursive partitioning. Anomalies are isolated in few splits; normal points require many. The algorithm handles mixed feature types without density estimation or distance calculations.
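The isolation idea fits in a few dozen lines: repeatedly partition a random subsample with random axis-aligned splits and record how quickly a point ends up alone. A toy sketch, where c(n) is the standard average-path-length normalization:

```python
import math
import random

def isolation_depth(point, data, rng, depth=0, max_depth=8):
    """Depth at which `point` is isolated by random axis-aligned splits."""
    if depth >= max_depth or len(data) <= 1:
        return depth
    dim = rng.randrange(len(point))
    lo = min(row[dim] for row in data)
    hi = max(row[dim] for row in data)
    if lo == hi:
        return depth
    split = rng.uniform(lo, hi)
    # Keep only the rows on the same side of the split as our point.
    side = [row for row in data if (row[dim] < split) == (point[dim] < split)]
    return isolation_depth(point, side, rng, depth + 1, max_depth)

def anomaly_score(point, data, n_trees=100, sample=64, seed=0):
    """Average isolation depth over many random trees, mapped to (0, 1);
    scores near 1 mean 'easily isolated', i.e. anomalous."""
    rng = random.Random(seed)
    depths = [isolation_depth(point, rng.sample(data, min(sample, len(data))), rng)
              for _ in range(n_trees)]
    avg = sum(depths) / n_trees
    n = min(sample, len(data))
    # c(n): expected path length in a random binary search tree of n nodes.
    c = 2 * (math.log(n - 1) + 0.5772156649) - 2 * (n - 1) / n
    return 2 ** (-avg / c)

random.seed(42)
cluster = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(256)]
outlier = (8.0, 8.0)
print(anomaly_score(outlier, cluster) > anomaly_score(cluster[0], cluster))
```

The far point is separated in a handful of splits while cluster members keep hitting the depth cap, so its score is distinctly higher; no distances or densities were computed anywhere.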
Dashboards-to-Decisions Gap
The dashboards-to-decisions gap is the structural failure of analytics investment: teams produce more dashboards but decisions don't get better or faster. Closing the gap requires moving from descriptive reports to decision systems — pre-specified trigger thresholds, automated action routing, and outcome logging for calibration.
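The three ingredients named above — trigger thresholds, action routing, and outcome logging — can be sketched as a minimal decision rule. All names here are illustrative, not a real alerting API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DecisionRule:
    """A pre-specified trigger: metric, threshold, and a routed action.
    Every evaluation is logged so the rule can be calibrated later."""
    metric: str
    threshold: float
    action: Callable[[float], str]

log = []

def evaluate(rule: DecisionRule, value: float) -> None:
    triggered = value < rule.threshold
    outcome = rule.action(value) if triggered else "no_action"
    # Outcome logging: record every evaluation, not just the alerts,
    # so hit rates and false-alarm rates can be measured afterwards.
    log.append({"metric": rule.metric, "value": value,
                "triggered": triggered, "outcome": outcome})

rule = DecisionRule(metric="day7_retention", threshold=0.25,
                    action=lambda v: f"page_growth_oncall(value={v})")
evaluate(rule, 0.31)
evaluate(rule, 0.19)
print([entry["triggered"] for entry in log])  # [False, True]
```

The contrast with a dashboard is that the threshold and the action were committed to before the data arrived, and the log makes the rule itself auditable.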