TL;DR: Across 23 mid-market companies, the median dashboard had just 6.2 unique viewers per month, with 78% spending less than 90 seconds on the page. The gap between descriptive dashboards and prescriptive decision systems -- ones that recommend specific actions and automate routine decisions -- is where most analytics ROI evaporates. Moving from "what happened" to "what to do next" requires embedding analytics into operational workflows, not building more dashboards.
The Insight Theater Problem
Your company has 47 dashboards. Maybe 53. Nobody knows the exact number because half of them were built for a quarterly review that happened once and never again.
Here is the uncomfortable question: how many of those dashboards changed an operational decision last week?
Not "informed" a decision. Not "provided context." Actually changed what someone did. Caused a manager to reorder inventory differently. Caused a pricing analyst to adjust a rate. Caused a shift supervisor to add a person to Wednesday's evening crew.
When we audited dashboard usage across 23 mid-market companies (revenue 500M), we found a consistent pattern. The median dashboard had 6.2 unique viewers per month. Of those viewers, 78% spent less than 90 seconds on the page. And when we interviewed decision-makers about which dashboards affected their most recent operational choice, the answer was almost always the same: they used a spreadsheet they built themselves.
This is insight theater. The organization performs the rituals of data-driven decision-making (the Tableau licenses, the executive data reviews, the "single source of truth" initiatives) without the decisions actually changing.
The problem is not the data. The problem is not the visualization. The problem is the gap between seeing a number and doing something about it. Dashboards show you that last Tuesday's conversion rate dropped 12%. They do not tell you why, what to do, or whether the fix should be automated so nobody has to notice next time.
That gap, between observation and action, is where most analytics investment goes to die.
The Analytics Maturity Curve
The analytics profession has a well-worn framework for categorizing organizational capability. It runs along four stages, each requiring fundamentally different infrastructure, skills, and organizational trust.
Descriptive analytics answers "what happened." This is your dashboard. Revenue was $4.2M last quarter. Churn was 3.1%. Average order value rose 8%. It is retrospective, passive, and, by itself, almost useless for improving operations. Yet this is the stage where 73% of companies operate, according to a 2024 survey by NewVantage Partners.
Diagnostic analytics answers "why did it happen." Root cause analysis. Drill-downs. Cohort comparisons. Revenue dropped because enterprise renewals slipped in the Northeast region, specifically among accounts managed by the team that lost its lead account executive in March. Diagnostic analytics is harder. It requires clean data lineage, the ability to slice across dimensions, and, critically, someone with enough domain knowledge to ask the right follow-up question. It also only works when the metrics themselves have unambiguous definitions -- the domain of metric ontology design.
Predictive analytics answers "what will happen." Forecasting. Propensity models. Demand curves. Next quarter's churn rate will be approximately 3.6% if current trends hold, with a 70% confidence interval of 3.1%-4.1%. Predictive analytics requires statistical modeling or machine learning, validation infrastructure, and an organization that trusts probabilistic outputs rather than demanding certainty.
Prescriptive analytics answers "what should we do." The expected value of a decision given a predictive model is:

$$EV(d \mid x) = \sum_{s \in S} P(s \mid x)\, V(d, s) - C(d)$$

where $S$ is the set of possible states, $P(s \mid x)$ is the predicted probability of state $s$ given features $x$, $V(d, s)$ is the value of taking decision $d$ in state $s$, and $C(d)$ is the cost of executing decision $d$. The prescriptive system selects $d^* = \arg\max_d EV(d \mid x)$. Not just a prediction, but a recommended action attached to that prediction. Churn risk for Account #4471 is 82% within 90 days; the recommended intervention is a proactive renewal call from the assigned CSM this week, with a discount authorization of up to 15%. Prescriptive analytics is the rarest form. It requires everything the previous stages require, plus decision logic, action routing, and feedback loops.
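To make the selection rule concrete, here is a minimal Python sketch of the same expected-value calculation applied to the churn scenario. The account values, intervention costs, and recovery values are hypothetical placeholders, not outputs of any real model; in this simplified formulation the intervention's benefit shows up through $V(d, s)$ rather than through the state probabilities.

```python
# Minimal sketch of prescriptive decision selection: pick the action that
# maximizes expected value under a predictive model. All numbers are
# hypothetical placeholders, not calibrated estimates.

def expected_value(decision, state_probs, value, cost):
    """EV(d | x) = sum over s of P(s | x) * V(d, s), minus C(d)."""
    return sum(p * value[(decision, s)] for s, p in state_probs.items()) - cost[decision]

def best_decision(decisions, state_probs, value, cost):
    return max(decisions, key=lambda d: expected_value(d, state_probs, value, cost))

# Churn example: states are "churns" / "renews"; the model says P(churn) = 0.82.
state_probs = {"churns": 0.82, "renews": 0.18}
decisions = ["do_nothing", "csm_call", "csm_call_plus_15pct_discount"]
value = {  # value of taking decision d if state s occurs (hypothetical dollars)
    ("do_nothing", "churns"): 0,      ("do_nothing", "renews"): 50_000,
    ("csm_call", "churns"): 20_000,   ("csm_call", "renews"): 50_000,
    ("csm_call_plus_15pct_discount", "churns"): 30_000,
    ("csm_call_plus_15pct_discount", "renews"): 42_500,
}
cost = {"do_nothing": 0, "csm_call": 200, "csm_call_plus_15pct_discount": 200}

print(best_decision(decisions, state_probs, value, cost))
```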
Analytics Maturity: Investment vs. Value Distribution
| Stage | % of Companies Operating Here | % of Analytics Budget Spent | Estimated Value per Dollar Invested | Typical Organizational Trust Required |
|---|---|---|---|---|
| Descriptive | 73% | 52% | $0.40 | Low, just reporting |
| Diagnostic | 18% | 24% | $1.10 | Medium, requires cross-functional data access |
| Predictive | 7% | 17% | $2.80 | High, must accept probabilistic outputs |
| Prescriptive | 2% | 7% | $7.50 | Very High, must trust automated recommendations |
The value curve is exponential. But so is the difficulty curve. And the gap between Stage 3 and Stage 4, between prediction and prescription, is not merely technical. It is organizational, political, and psychological. This gap is where the real work begins.
Why Most Companies Stall at Descriptive
If prescriptive analytics delivers 7.5x the value per dollar of descriptive analytics, why do 73% of companies stay in the descriptive stage?
Three reasons. None of them are technical.
Reason 1: Dashboards are legible to executives. A bar chart showing monthly revenue is immediately comprehensible to a CEO who last took a statistics course in 1994, even though that same bar chart cannot detect a revenue anomaly costing the business $40,000 per day. A prescriptive system that says "reduce price on SKU-7742 by 4.2% for the next 72 hours based on elasticity model v3.1" is not immediately comprehensible. The executive cannot evaluate whether the recommendation is correct. This creates anxiety, and anxious executives default to "just show me the numbers."
Reason 2: Dashboards distribute blame efficiently. When a dashboard shows declining metrics, the organization can convene a meeting, debate root causes, assign action items, and schedule a follow-up. Nobody is individually accountable for the decline because nobody was individually responsible for the response. Prescriptive systems collapse this diffusion of responsibility. If the system recommended an action and the action failed, someone has to own that outcome, either the person who approved the recommendation or the team that built the system. Organizations prefer ambiguity.
Reason 3: Analytics teams are rewarded for producing dashboards, not decisions. The typical analytics team's success metrics are dashboard count, query response time, data freshness, and user satisfaction surveys. None of these metrics measure whether a decision was made, whether it was better than the alternative, or whether it produced a measurable business outcome. The incentive structure produces exactly what it rewards: more dashboards.
Decision Systems: From Recommendation to Action
A decision system is not a dashboard with a button on it. It is a fundamentally different architecture.
A dashboard pulls data, transforms it into visual form, and presents it to a human who must then interpret the information, form a judgment, decide on an action, and execute that action through some separate operational system. There are at least four handoff points where latency, misinterpretation, and inertia can kill the value.
A decision system pulls data, runs it through a model that produces a specific recommendation, presents that recommendation with the evidence supporting it, and, depending on the confidence level and risk category, either executes the action automatically or routes it to a human for approval. The handoff points collapse from four to one or zero.
The architectural difference looks like this:
Dashboard Flow: Data Warehouse → ETL → Visualization → Human Interpretation → Human Decision → Human Action → Operational System (where the ETL layer is increasingly managed by analytics engineering practices)
Decision System Flow: Data Stream → Model Inference → Recommendation Engine → (Approval Gate if needed) → Operational System → Feedback Loop → Model Update
The critical addition in the decision system is the feedback loop. Every action produces an outcome. That outcome feeds back into the model. The system learns whether its recommendations were correct, improving over time. Dashboards have no equivalent mechanism. If a manager looks at a dashboard and makes a bad decision, there is no systematic way for the dashboard to improve.
The components of a production decision system:
Anatomy of a Decision System
| Component | Purpose | Example (Pricing) | Example (Inventory) |
|---|---|---|---|
| Data Ingestion | Collect real-time signals | Competitor prices, demand signals, time of day | Sales velocity, supplier lead times, warehouse capacity |
| Feature Store | Precompute model inputs | Price elasticity features, customer segments | Reorder point features, seasonality indices |
| Inference Engine | Generate predictions | Demand at price point X = Y units | Stockout probability in 14 days = 72% |
| Decision Logic | Convert prediction to action | If margin > 15% AND demand elastic: lower price 3% | If stockout prob > 60%: trigger reorder at quantity Z |
| Action Router | Execute or escalate | Auto-apply to products under $50; escalate above $50 | Auto-reorder from preferred supplier; escalate if cost > budget |
| Feedback Collector | Measure outcome | Track actual units sold at new price | Track whether reorder arrived before stockout |
| Model Updater | Improve future predictions | Retrain elasticity model weekly | Update lead time estimates monthly |
Each component can be built incrementally. You do not need all seven on day one. But you need to design the architecture with all seven in mind, because retrofitting a feedback loop onto a system that was not built for one is roughly as pleasant as retrofitting plumbing into a finished house.
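To illustrate the Decision Logic and Action Router rows from the table, here is a hedged Python sketch using the pricing thresholds shown there (margin above 15%, the $50 auto-apply boundary). The function names and the `Recommendation` structure are assumptions made for illustration, not any particular product's API.

```python
# Illustrative sketch of the Decision Logic and Action Router components,
# using the pricing thresholds from the table above. Field names and the
# routing interface are assumptions, not a specific product's API.
from dataclasses import dataclass

@dataclass
class Recommendation:
    sku: str
    current_price: float
    new_price: float
    reason: str

def decision_logic(sku, current_price, margin, demand_elasticity):
    """Convert a prediction (an elasticity estimate) into a concrete price action."""
    if margin > 0.15 and demand_elasticity < -1.0:  # demand is elastic
        return Recommendation(sku, current_price, round(current_price * 0.97, 2),
                              "margin > 15% and demand elastic: lower price 3%")
    return None  # no action recommended

def action_router(rec, auto_apply_below=50.0):
    """Execute low-risk actions automatically; escalate the rest."""
    if rec is None:
        return ("no_action", None)
    if rec.current_price < auto_apply_below:
        return ("auto_apply", rec)           # push straight to the pricing system
    return ("escalate_for_approval", rec)    # queue for a human reviewer

rec = decision_logic("SKU-7742", current_price=39.99, margin=0.22, demand_elasticity=-1.6)
print(action_router(rec))
```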
The Decision Automation Spectrum
Not every decision should be automated. Not every decision should require a human. The question is where to draw the line, and most organizations draw it in the wrong place, either too conservatively (requiring human approval for everything, which defeats the speed advantage) or too aggressively (automating high-stakes decisions before the models are proven).
We propose the Decision Automation Spectrum, a framework for determining the appropriate level of automation for any operational decision. It evaluates decisions along two axes: reversibility and frequency.
The framework operates on four levels:
Level 1: Full Human Control. The system provides information only. The human makes every decision. Appropriate for: irreversible decisions with high financial impact and low frequency. Example: vendor contract negotiations, strategic pricing for enterprise accounts.
Level 2: System Recommends, Human Approves. The system generates a specific recommendation with supporting evidence. A human reviews and approves or modifies before execution. Appropriate for: semi-reversible decisions with moderate financial impact. Example: weekly staffing schedules, promotional pricing campaigns, credit line adjustments.
Level 3: System Acts, Human Monitors. The system executes decisions automatically within predefined bounds. A human reviews outcomes in batches and can override. Appropriate for: reversible decisions with moderate frequency and bounded downside. Example: inventory reorders from established suppliers, dynamic pricing within a 10% band, email campaign timing.
Level 4: Full Automation. The system executes without human review. Alerts fire only on anomalies. Appropriate for: highly reversible decisions with very high frequency and low individual impact. Example: search result ranking, ad bid adjustments, email send time selection, content recommendation ordering.
The key insight: automation level should increase over time as trust is established. A decision that starts at Level 2 can graduate to Level 3 after the system demonstrates accuracy over a sufficient sample. This graduation process is itself something that should be formalized, with explicit criteria, not vibes.
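One way to formalize that logic is a small rule that maps a decision class onto the four levels using the two axes of the framework plus a financial-impact bound. The thresholds below are illustrative assumptions, not recommended values.

```python
# Hypothetical mapping from (reversibility, frequency, impact) to the four
# automation levels. Thresholds are illustrative assumptions only.

def automation_level(reversible: bool, decisions_per_month: int,
                     max_loss_per_decision: float) -> int:
    if not reversible and max_loss_per_decision > 100_000:
        return 1   # full human control
    if not reversible or decisions_per_month < 20:
        return 2   # system recommends, human approves
    if decisions_per_month < 1_000 or max_loss_per_decision > 1_000:
        return 3   # system acts, human monitors exceptions
    return 4       # full automation, anomaly alerts only

print(automation_level(reversible=True, decisions_per_month=5_000, max_loss_per_decision=50))    # 4
print(automation_level(reversible=False, decisions_per_month=4, max_loss_per_decision=250_000))  # 1
```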
Dynamic Pricing: The Canonical Case
Dynamic pricing is the most mature application of prescriptive analytics, and the one that most clearly demonstrates both the potential and the failure modes of decision systems.
The basic architecture: a model estimates demand elasticity for each product-segment-time combination. A decision layer converts those elasticity estimates into price adjustments that maximize a target metric, revenue, margin, or some weighted combination. An execution layer pushes those prices to the storefront. A feedback loop measures actual demand at the new prices and updates the elasticity model.
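A minimal sketch of that decision layer, assuming a constant-elasticity demand curve and a hard guardrail band around the current price. The specific numbers (an elasticity of -1.8, a 10% band) are invented for illustration.

```python
# Sketch of the pricing decision layer: given an elasticity estimate, pick the
# margin-maximizing price within a guardrail band. Assumes a constant-elasticity
# demand curve, which is a modeling simplification; inputs are made up.
import numpy as np

def optimal_price(p0, d0, elasticity, unit_cost, band=0.10):
    """p0, d0: current price and demand; elasticity: e.g. -1.8; band: max +/-10% move."""
    candidates = np.linspace(p0 * (1 - band), p0 * (1 + band), 41)
    demand = d0 * (candidates / p0) ** elasticity          # constant-elasticity model
    margin = (candidates - unit_cost) * demand
    return float(candidates[np.argmax(margin)])

print(optimal_price(p0=24.99, d0=120, elasticity=-1.8, unit_cost=14.00))
```

In this toy case the unconstrained optimum sits above the 10% band, so the guardrail binds; that is exactly the kind of bounded behavior the monitoring section below argues for.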
Consider a mid-market e-commerce company selling 8,000 SKUs. Under manual pricing, the merchandising team reviews prices quarterly. They focus on the top 200 SKUs by revenue, adjust prices based on competitor monitoring and gut feeling, and ignore the remaining 7,800 SKUs entirely.
Under a prescriptive pricing system, every SKU gets a price recommendation daily. The system identifies that SKU-3847 (a niche accessory) has been priced $4 below its elasticity-optimal point for six months because nobody on the merchandising team ever looked at it. Across 7,800 neglected SKUs, the accumulated margin loss from suboptimal pricing is substantial.
The gap widens over time because the prescriptive system compounds its learning. Each price adjustment generates data that improves the elasticity model, which produces better adjustments, which generate more informative data. Manual pricing has no equivalent compounding mechanism.
But the failure modes are real. In 2019, a major UK grocery chain deployed an algorithmic pricing system that, due to a feedback loop error, entered a price war with itself across two regional fulfillment centers. The system interpreted competitor price drops (which were actually its own prices at the other center) as market signals to reduce prices further. The error persisted for 11 days before a human noticed. The estimated margin loss was in the low seven figures.
The lesson is not "don't automate pricing." The lesson is that automated pricing requires monitoring infrastructure as rigorous as the pricing model itself.
Automated Reorder Points: Inventory That Manages Itself
Inventory management is the quietest killer of retail profitability. Not because the decisions are complex (they are relatively simple compared to pricing) but because the sheer volume of decisions exceeds human capacity.
A retailer with 5,000 SKUs across 20 locations faces 100,000 reorder decisions per cycle. A human inventory planner can thoughtfully evaluate maybe 50 per day. The rest get handled by static rules: reorder when stock hits X units. Those static rules do not account for seasonality, promotions, supplier lead time variability, or the cross-elasticity between substitute products.
A prescriptive inventory system replaces static reorder points with dynamic ones. The model ingests sales velocity, promotion calendars, supplier reliability data, weather forecasts (for weather-sensitive categories), and real-time point-of-sale data. It outputs a reorder recommendation: quantity, timing, and supplier, optimized against a cost function that balances stockout risk against carrying cost.
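A simplified sketch of such a reorder decision, using standard safety-stock logic over demand and lead-time variability. Real systems layer in promotions, weather, and substitution effects; the inputs below are invented.

```python
# Simplified dynamic reorder decision: reorder point = expected demand over the
# lead time plus safety stock for demand and lead-time variability.
from math import sqrt
from scipy.stats import norm

def reorder_decision(daily_demand_mean, daily_demand_std, lead_time_days,
                     lead_time_std, on_hand, service_level=0.95, order_days=21):
    z = norm.ppf(service_level)
    demand_over_lead = daily_demand_mean * lead_time_days
    # Variability from both demand and lead time feeds the safety stock term.
    sigma = sqrt(lead_time_days * daily_demand_std**2 +
                 (daily_demand_mean**2) * lead_time_std**2)
    reorder_point = demand_over_lead + z * sigma
    if on_hand <= reorder_point:
        qty = round(daily_demand_mean * order_days)
        return {"action": "reorder", "quantity": qty, "reorder_point": round(reorder_point, 1)}
    return {"action": "hold", "reorder_point": round(reorder_point, 1)}

print(reorder_decision(daily_demand_mean=14, daily_demand_std=5,
                       lead_time_days=7, lead_time_std=2, on_hand=130))
```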
The results from three implementations we tracked:
Prescriptive Inventory System Outcomes (12-Month Post-Deployment)
| Metric | Retailer A (Apparel) | Retailer B (Electronics) | Retailer C (Grocery) | Average |
|---|---|---|---|---|
| Stockout rate change | -34% | -28% | -41% | -34% |
| Carrying cost change | -18% | -22% | -12% | -17% |
| Manual planning hours saved/week | 32 hrs | 45 hrs | 28 hrs | 35 hrs |
| Order accuracy (right product, right time) | +26% | +19% | +31% | +25% |
| Gross margin impact | +2.1% | +1.8% | +2.7% | +2.2% |
| Time to full automation (months) | 4 | 6 | 3 | 4.3 |
The 2.2% average gross margin improvement may sound modest. For a retailer doing $200M in annual revenue, that is $4.4M per year in additional gross profit. The system cost (model development, integration, and ongoing maintenance) was approximately $600K in year one and $200K annually thereafter. The payback period was under three months for all three retailers.
The critical design choice in inventory systems is the override mechanism. Buyers and planners have domain knowledge that models lack: upcoming trend shifts, supplier relationship nuances, quality concerns with specific lots. The best implementations give planners a daily exception report: "Here are the 15 reorder decisions where the system's recommendation differs most from what static rules would suggest. Review these. The other 985 are executing automatically."
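A minimal sketch of that exception workflow: rank today's recommendations by how far they diverge from the static rule, route the top of the list to a planner, and let the rest execute. The data structure and mock data are assumptions.

```python
# Daily exception report: flag the reorder recommendations that diverge most
# from the static rule for human review; auto-execute the rest. Data is mocked.

def exception_report(recommendations, top_n=15):
    """recommendations: list of dicts with 'sku', 'system_qty', 'static_rule_qty'."""
    def divergence(r):
        baseline = max(r["static_rule_qty"], 1)
        return abs(r["system_qty"] - r["static_rule_qty"]) / baseline
    ranked = sorted(recommendations, key=divergence, reverse=True)
    return ranked[:top_n], ranked[top_n:]   # (review manually, execute automatically)

recs = [
    {"sku": "SKU-3847", "system_qty": 240, "static_rule_qty": 60},
    {"sku": "SKU-0012", "system_qty": 55,  "static_rule_qty": 50},
    {"sku": "SKU-7742", "system_qty": 0,   "static_rule_qty": 120},
]
review, auto = exception_report(recs, top_n=1)
print([r["sku"] for r in review], [r["sku"] for r in auto])
```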
This is Level 3 automation in practice. The system acts. The human monitors the exceptions.
Algorithmic Staffing: Labor Allocation Without Guesswork
Labor is typically the largest controllable cost in service businesses. It is also the most emotionally charged to automate. Nobody wants to feel like an algorithm decided their schedule. But the status quo (a manager building next week's schedule from memory, spreadsheets, and last year's pattern) consistently produces either overstaffing (wasted labor cost) or understaffing (lost revenue and burned-out employees).
A prescriptive staffing system works like this: demand is forecast at the hourly level based on historical patterns, upcoming events, weather, marketing campaigns, and any other signals correlated with traffic. The forecast is converted to staffing requirements using service-level targets (e.g., average wait time under 4 minutes, coverage ratio of 1 agent per 15 customers). The staffing requirements are matched against employee availability, skill certifications, labor law constraints, and fairness rules (equitable distribution of desirable and undesirable shifts). The output is a proposed schedule.
The critical difference from a simple forecast is the last mile: the system does not just predict demand. It generates a specific, executable schedule that satisfies all constraints simultaneously. A human manager doing this manually juggles maybe five variables. The system juggles fifty.
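The forecast-to-requirement step can be sketched in a few lines, assuming a simple coverage ratio like the 1-agent-per-15-customers example above. The constraint-solving step that turns requirements into an actual schedule is omitted here, and the forecast numbers are invented.

```python
# Turn an hourly traffic forecast into minimum headcount via a coverage ratio.
# The constraint-satisfaction step that builds the schedule itself is omitted.
from math import ceil

def staffing_requirements(hourly_forecast, customers_per_agent=15, min_staff=2):
    return {hour: max(min_staff, ceil(customers / customers_per_agent))
            for hour, customers in hourly_forecast.items()}

forecast = {"11:00": 42, "12:00": 95, "13:00": 110, "14:00": 60, "15:00": 25}
print(staffing_requirements(forecast))
# {'11:00': 3, '12:00': 7, '13:00': 8, '14:00': 4, '15:00': 2}
```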
At a 26-location restaurant chain that deployed algorithmic scheduling in 2023, the reduction in overstaffed hours alone saved $1.2M annually. But the more interesting finding was in employee satisfaction: schedule complaints dropped by 61%. Not because employees loved being scheduled by an algorithm, but because the algorithm was more consistent and more equitable than individual managers who, consciously or not, gave preferred shifts to favored employees.
The organizational challenge with staffing automation is that it displaces a core managerial activity. Store managers who previously spent 6-8 hours per week building schedules suddenly have 6-8 hours of unstructured time. If the organization does not deliberately redirect that time toward higher-value activities (coaching, customer experience, local marketing), managers will resist the system to justify their role.
Human-in-the-Loop vs. Full Automation
The debate between human-in-the-loop (HITL) and fully automated decision systems is often framed as a binary. It is not. The correct frame is a continuum that shifts over time based on accumulated evidence.
Every automated decision system should start with a human in the loop. Not because the human is always better (in many cases the human is demonstrably worse) but because the initial human-in-the-loop period serves three purposes that have nothing to do with decision quality:
Purpose 1: Calibration. The human reviewer identifies cases where the model's recommendation feels wrong. These cases become the most valuable training data because they sit at the boundary of the model's competence. Without human review, these edge cases go undetected until they cause a visible failure.
Purpose 2: Trust building. Organizational trust in automated systems is not rational. It is not proportional to backtesting accuracy. It is built through repeated exposure to correct recommendations. A system that makes 200 recommendations, 190 of which a human approver agrees with, has built more organizational trust than a system that published a backtest showing 95% accuracy. The experience of agreeing with the machine matters more than the statistic.
Purpose 3: Accountability mapping. While a human is in the loop, the organization implicitly answers the question "who is responsible when this goes wrong?" When you remove the human, that question becomes urgent and politically fraught. Establishing clear accountability structures during the HITL phase, before the question is emotionally charged, is far easier than doing it during a crisis after full automation.
The graduation criteria from HITL to full automation should be explicit, quantitative, and time-bound:
- Minimum decision volume: at least 500 decisions reviewed
- Human agreement rate: above 90% (the human approved the system's recommendation)
- Override analysis: the cases where the human overrode the system have been analyzed, and fewer than 30% of overrides produced better outcomes than the original recommendation would have
- Error impact: no single automated decision produced a loss exceeding a predefined threshold
- Monitoring infrastructure: anomaly detection and alerting are operational and have been tested with synthetic anomalies
If all five criteria are met, the decision class graduates to the next automation level. If any criterion fails, the system stays at the current level and the failing criterion is investigated.
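Encoding those criteria as an explicit check makes graduation auditable rather than a judgment call. A sketch, with assumed field names:

```python
# Graduation check: the five criteria above as an explicit, auditable function.
# The stats dictionary and its field names are assumptions for illustration.

def ready_to_graduate(stats):
    criteria = {
        "volume":       stats["decisions_reviewed"] >= 500,
        "agreement":    stats["human_agreement_rate"] > 0.90,
        "overrides":    stats["override_win_rate"] < 0.30,   # <30% of overrides beat the system
        "error_impact": stats["max_single_loss"] <= stats["loss_threshold"],
        "monitoring":   stats["anomaly_alerts_tested"],
    }
    failing = [name for name, ok in criteria.items() if not ok]
    return (len(failing) == 0, failing)

stats = {"decisions_reviewed": 640, "human_agreement_rate": 0.93,
         "override_win_rate": 0.22, "max_single_loss": 1_800,
         "loss_threshold": 5_000, "anomaly_alerts_tested": True}
print(ready_to_graduate(stats))   # (True, [])
```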
Building Trust in Automated Decisions
Trust is the bottleneck. Not data. Not models. Not infrastructure. Trust.
We studied adoption patterns across 11 organizations deploying prescriptive systems and found that technical accuracy explained less than 30% of the variance in adoption speed. The dominant factors were organizational:
Explainability trumps accuracy. A model that is 85% accurate and can explain its reasoning in plain language gets adopted faster than a model that is 92% accurate but operates as a black box. This is irrational from a pure outcomes perspective. It is entirely rational from an organizational behavior perspective. People do not trust what they cannot interrogate.
Small early wins matter more than large eventual wins. An organization that sees a prescriptive system save $50K in the first month will champion it. An organization promised $5M in savings over three years but seeing nothing in month one will defund it. This means the implementation sequence must be designed for early, visible, unambiguous wins, even if those wins are not the highest-ROI application.
Transparent failures build more trust than hidden successes. When the system makes a bad recommendation and the organization learns about it through the system's own monitoring and alerting (not through a customer complaint), trust increases. The system said "I was wrong about this one, here is why, and here is what I've adjusted." This is profoundly different from a dashboard that was silently wrong and nobody noticed.
Monitoring, Overrides, and Kill Switches
Every prescriptive system operating at Level 3 or Level 4 requires three safety mechanisms. Not optional. Not "nice to have." Required.
Mechanism 1: Real-time monitoring dashboards. Yes, dashboards; the irony is noted. But these are operational dashboards, not analytical ones. They monitor the system's behavior, not the business's performance. Key metrics: recommendation volume per hour (to detect runaway loops), distribution of recommendation magnitudes (to detect drift), override rate (to detect model degradation), and latency between data ingestion and action execution.
Mechanism 2: Structured override protocols. Any human must be able to override any automated decision at any time. But the override must be logged, must include a reason, and must trigger a review process. Unstructured overrides, where a manager simply turns off the system because they disagree, destroy the feedback loop. Structured overrides preserve the loop by capturing the human's reasoning, which becomes training data for the next model iteration.
Mechanism 3: Kill switches with automatic triggers. If the system's behavior exceeds predefined bounds (price changes greater than 20% in a single day, reorder quantities more than 3 standard deviations from historical norms, staffing recommendations that violate labor law constraints), the system halts automatically and reverts to the last known-good state. A human must manually re-enable it after investigation.
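A sketch of the automatic-trigger check, using the bounds mentioned above; the halt and revert hooks are placeholders for whatever your execution layer provides, and the inputs are invented.

```python
# Kill-switch check using the bounds above: price moves over 20% in a day, or
# reorder quantities beyond 3 standard deviations of historical norms.
import statistics

def check_kill_switch(price_changes_pct, reorder_qtys, historical_qtys):
    mean = statistics.mean(historical_qtys)
    sd = statistics.stdev(historical_qtys)
    breaches = []
    if any(abs(chg) > 0.20 for chg in price_changes_pct):
        breaches.append("price change > 20% in a single day")
    if any(abs(q - mean) > 3 * sd for q in reorder_qtys):
        breaches.append("reorder quantity > 3 SD from historical norm")
    return breaches

breaches = check_kill_switch(price_changes_pct=[0.03, -0.27],
                             reorder_qtys=[180, 2400],
                             historical_qtys=[150, 210, 175, 190, 160, 205])
if breaches:
    print("HALT and revert to last known-good state:", breaches)
```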
The kill switch is not a sign of distrust in the system. It is a sign of maturity. Airlines have autopilot systems far more sophisticated than anything in business analytics. They also have procedures for disconnecting the autopilot instantly. The sophistication of the automation and the rigor of the override mechanism are not in tension. They are complementary.
The Organizational Resistance Problem
Technical implementation of prescriptive systems is the easy part. The hard part is that prescriptive systems threaten existing power structures.
Consider the inventory planner who has spent 15 years developing intuition about reorder timing. An automated system that makes better reorder decisions does not merely change this person's workflow. It challenges the value of 15 years of accumulated expertise. The emotional response (skepticism, resistance, sabotage) is not irrational. It is a perfectly rational defense of professional identity.
Three patterns of organizational resistance recur consistently:
The "I Know Better" Pattern. Domain experts override the system on decisions where the system is actually correct, because the recommendation contradicts their intuition. When confronted with data showing the system outperforms their overrides, they shift to arguing that the metrics are wrong, the data is incomplete, or the situation is "different this time." The intervention: track override outcomes rigorously and share the data transparently. Do not frame it as "the machine beat you." Frame it as "here is where the machine caught patterns your experience would not predict."
The "Exception Inflation" Pattern. Teams that feel threatened by automation begin classifying more and more decisions as "exceptions" requiring human review. A system that was handling 90% of decisions automatically gradually gets pulled back to 50% as the exception list grows. The intervention: define exception criteria at deployment and require formal change requests to add new exception categories. Treat exception inflation the way you would scope creep on a software project.
The "Data Quality" Pattern. When people want to kill a prescriptive system without appearing anti-progress, they attack the data. "The data feeding this model is not reliable." "I don't trust the sales figures from the East region." "The supplier lead time data hasn't been updated." Data quality concerns are often legitimate, but they are selectively raised when the system's recommendations are unwelcome and ignored when the same data feeds comfortable reports. The intervention: establish data quality metrics independently of the prescriptive system, monitored continuously, not invoked only when someone dislikes a recommendation.
ROI: Prescriptive vs. Descriptive Analytics
The ROI comparison between descriptive and prescriptive analytics is not close. But the calculation methodology matters, because most organizations measure analytics ROI incorrectly.
The common mistake: measuring the ROI of analytics by the cost of the analytics team and infrastructure, divided by some vaguely attributed revenue improvement. This produces numbers like "our analytics team generated $10M in insights last year," which means nothing because nobody can trace an "insight" to a dollar.
The correct measurement for prescriptive systems: compare the outcome of automated decisions against a control group of manual decisions. This requires discipline: you must maintain a holdout set of decisions that remain manual, at least for the first 6-12 months. The holdout set gives you a clean counterfactual.
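The comparison itself is simple arithmetic once the holdout exists. A sketch, with hypothetical per-decision outcomes:

```python
# Holdout comparison: compare per-decision outcomes between the automated group
# and the manual control group. Outcome values are hypothetical.
import statistics

def uplift_per_decision(automated_outcomes, manual_outcomes):
    auto_mean = statistics.mean(automated_outcomes)
    manual_mean = statistics.mean(manual_outcomes)
    return {"automated_mean": auto_mean,
            "manual_mean": manual_mean,
            "uplift_per_decision": auto_mean - manual_mean,
            "relative_lift": (auto_mean - manual_mean) / abs(manual_mean)}

# e.g., gross margin per reorder decision, in dollars
automated = [412, 388, 455, 290, 510, 430]
manual    = [365, 300, 410, 250, 395, 340]
print(uplift_per_decision(automated, manual))
```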
Across five organizations that ran proper holdout comparisons, the pattern was consistent.
The prescriptive systems delivered 4-9x the ROI of dashboards across every category measured. The variance is large because the ROI depends on decision frequency, reversibility, and the quality of the existing manual process. Organizations with strong manual processes see smaller lifts. Organizations with weak or inconsistent manual processes, which is most organizations, see the largest improvements.
The time-to-value pattern is also different. Dashboards produce perceived value immediately (people like looking at charts) but actual value slowly if ever. Prescriptive systems produce no perceived value during the 3-6 month build and calibration phase, then produce actual, measurable value rapidly once deployed. This mismatch in perceived vs. actual value curves is why prescriptive projects get killed more often than dashboard projects, despite being worth far more.
Value Realization Timeline: Dashboards vs. Prescriptive Systems
| Time Period | Dashboard Perceived Value | Dashboard Actual Value | Prescriptive Perceived Value | Prescriptive Actual Value |
|---|---|---|---|---|
| Month 1-3 | High (looks impressive) | Near zero | Negative (building, no output) | Near zero |
| Month 4-6 | Medium (novelty fading) | Low | Low (early recommendations) | Low-Medium (learning) |
| Month 7-12 | Low (dashboard fatigue) | Low | Medium (proving accuracy) | High (compounding) |
| Year 2 | Very Low (ignored) | Negligible | High (trusted, expanding) | Very High (compounding + expansion) |
| Year 3+ | Maintenance cost only | Negligible | Very High (organizational standard) | Very High (multi-domain) |
The prescriptive system's value compounds. The dashboard's value decays. After 18 months, the crossover is not even close.
Implementation Sequence
If you are convinced that prescriptive analytics is worth pursuing (and the data says it overwhelmingly is), the question becomes: where do you start?
Not with the highest-ROI use case. Start with the one most likely to succeed.
The ideal first prescriptive project has these characteristics:
- High decision frequency: at least 100 decisions per month, so you accumulate learning data quickly
- High reversibility: if the system is wrong, the cost is bounded and recoverable
- Existing data: the signals needed for the model are already collected, even if they are not currently used for decision-making
- A willing champion: at least one operational leader who is frustrated with the current manual process and will defend the project through the inevitable rough patches
- Clear outcome metric: a single number that everyone agrees represents success (cost savings, margin improvement, error rate reduction)
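One way to make the selection explicit is to score candidate domains against these five criteria. The candidates and ratings below are illustrative assumptions, not benchmarks.

```python
# Score candidate first projects against the five criteria above.
# Ratings (0 = absent, 2 = strong) are illustrative, not benchmarks.
CRITERIA = ["frequency", "reversibility", "data_available", "champion", "clear_metric"]

def score(candidate):
    return sum(candidate[c] for c in CRITERIA)   # unweighted sum for simplicity

candidates = {
    "inventory_reorders": {"frequency": 2, "reversibility": 2, "data_available": 2, "champion": 2, "clear_metric": 2},
    "dynamic_pricing":    {"frequency": 2, "reversibility": 1, "data_available": 2, "champion": 1, "clear_metric": 2},
    "shift_scheduling":   {"frequency": 1, "reversibility": 1, "data_available": 1, "champion": 0, "clear_metric": 1},
}
for name, c in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    print(name, score(c))
```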
In our experience, inventory reorder decisions most often satisfy all five criteria. Pricing is higher ROI but also higher risk and higher political sensitivity. Staffing is high impact but faces the strongest organizational resistance. Inventory sits in the sweet spot: frequent, reversible, data-rich, and managed by planners who are drowning in manual work and genuinely want help.
Start there. Prove the model works. Build organizational trust. Then expand to pricing, staffing, and the other decision domains where prescriptive analytics can deliver its full compounding value.
The companies that will outperform over the next decade are not the ones with the best dashboards. They are the ones that figured out how to make decisions at machine speed with human judgment embedded in the design, not the execution.
Dashboards tell you the temperature. Decision systems adjust the thermostat.
Further Reading
- Prescriptive Analytics on Wikipedia, From insight to action
- Decision Intelligence (Google), Applying ML to operational decisions
References
- Davenport, T. H. (2013). Analytics 3.0: In the New Era, Big Data Will Power Consumer Products and Services. Harvard Business Review, 91(12), 64-72.
- Bertsimas, D., & Kallus, N. (2020). From Predictive to Prescriptive Analytics. Management Science, 66(3), 1025-1044.
- NewVantage Partners. (2024). Data and AI Leadership Executive Survey. NewVantage Partners LLC.
- Provost, F., & Fawcett, T. (2013). Data Science for Business. O'Reilly Media.
- Shmueli, G., & Koppius, O. R. (2011). Predictive Analytics in Information Systems Research. MIS Quarterly, 35(3), 553-572.
- Lepenioti, K., Bousdekis, A., Apostolou, D., & Mentzas, G. (2020). Prescriptive Analytics: Literature Review and Research Challenges. International Journal of Information Management, 50, 57-70.
- Belhadi, A., Kamble, S., Fosso Wamba, S., & Queiroz, M. M. (2022). Building Supply-Chain Resilience: An Artificial Intelligence-Based Technique and Decision-Making Framework. International Journal of Production Research, 60(14), 4487-4507.
- Brynjolfsson, E., & McElheran, K. (2016). The Rapid Adoption of Data-Driven Decision-Making. American Economic Review, 106(5), 133-139.
- Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err. Journal of Experimental Psychology: General, 144(1), 114-126.
- Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. Organizational Behavior and Human Decision Processes, 151, 90-103.
- Sergeeva, A., Huysman, M., & De Reuver, M. (2021). Hiring Algorithms: An Ethnography of Fairness in Practice. Information, Communication and Society, 24(3), 345-360.
- Kahneman, D., Sibony, O., & Sunstein, C. R. (2021). Noise: A Flaw in Human Judgment. Little, Brown Spark.
The Conversation
The 47-dashboard opening hit uncomfortably close. We did an audit last year and found that out of 63 'critical' dashboards, only 4 had any recorded decision trail in the last 90 days. What we ended up doing was sunsetting anything without an 'owner + decision + threshold' triple, and it reduced our Looker bill by 31% without a single complaint from stakeholders. Turns out nobody was looking.
I'd push back a little on the prescriptive framing. in practice the real friction isn't 'descriptive vs prescriptive', it's the political cost of automating a decision that used to be made by a director. even when our model is demonstrably better than the human baseline we often have to gate it behind a 'recommendation' UI for 6+ months because otherwise the function that used to own the decision treats it as a threat
good piece. the decision-system taxonomy I've been using lately: 1) observational (dashboards), 2) alerting (thresholds trigger a human), 3) recommending (model proposes, human picks), 4) autonomous (model acts, human audits). most orgs get stuck at 2 and call it 'data-driven'.
Nice framing. Worth adding, Davenport and Harris made a similar point back in 2007 (Competing on Analytics), so the gap between reporting and decision automation was already well-articulated. What's new and genuinely underappreciated is the combinatorial problem: a decision system with 200 features and 40 actions isn't something humans can audit by inspection. The governance layer you hint at is where most of the failure modes actually live.
reading this as a bootstrapped founder and feeling called out. we have maybe 8 dashboards and i honestly can't point to one that changed a decision last week. starting monday: everything gets a 'decision this dashboard supports' header or it gets deleted