Forecasting has long been treated as a numbers game, where accuracy is the primary measure of success. But accurate forecasts don’t always lead to better decisions. The real gap lies in explainability—understanding not just what will happen, but how and why it will happen. Many traditional forecasting models rely on correlation, which can produce reliable predictions without revealing the underlying drivers. This creates friction for leaders who need to act on forecasts with confidence.
To move forward, organizations must shift from probabilistic models to causal systems that map how inputs like pipeline, conversion rates, and capacity interact to produce outcomes. These models don’t just predict results—they simulate them, making forecasts actionable and transparent. When leaders can see the cause-and-effect relationships behind a forecast, they can test scenarios, allocate resources more effectively, and adapt to change with greater confidence.
Ultimately, the future of forecasting isn’t about choosing between numbers and narrative—it’s about building systems where explanation is embedded in the model itself, enabling more informed, aligned, and decisive business leadership.
Here’s a question that sparked some fairly robust internal debates. Try to keep it in the back of your mind as you read this piece: “Should forecasting be more about the numbers or the narrative?”
In most revenue organizations, forecasting is treated as a numbers-based, technical exercise. Models are refined, input assumptions are debated, and accuracy is tracked with increasing precision. Each quarter becomes a test of how closely the projected number aligns with reality, and when forecasts land within an acceptable range, the system is considered to be working.
But if you’re reading this, chances are you’ve sat in a forecast review with the executive team present. Anyone who’s been in that room knows that forecast accuracy alone doesn’t resolve the tension.
What’s the source of that tension? Our theory is that a number may be directionally sound, even statistically defensible, but it may still fail to answer the most important question: how exactly will we achieve it? When leaders move from observing a forecast to acting on it, a second requirement emerges: forecasts must not only be accurate but also actionable and explainable.
The distinction is highly consequential. Forecast accuracy describes the proximity of a prediction to an outcome. Explainability based on a truly causal underlying model determines whether that prediction can guide action. And in modern revenue systems facing digital transformation, the gap between the two is often where forecasting breaks down.
The Obsession with Forecast Accuracy
It’s probably fairly uncontroversial to say that forecast accuracy has become the dominant measure of success in most RevOps environments. Accuracy is quantifiable, comparable, and directly linked to credibility. A team that consistently forecasts within a narrow margin of error is seen as disciplined and reliable.
The underlying business logic is sound:
- If forecasts are accurate, then decisions will improve,
- enabling leadership to plan with confidence and allocate resources efficiently,
- resulting in more predictable, high-quality revenue growth.
But in practice, even highly accurate forecasts can fail to drive alignment or meaningful action. That may be because teams agree on the number but remain uncertain about the path to achieving it. Discussions focus on whether the forecast is correct rather than on the factors that influence the outcome. Accuracy, in other words, can create the appearance of control without necessarily providing it.
What Forecast Accuracy Actually Tells You
Forecast accuracy answers a simple and specific question: How close was the predicted outcome to what actually occurred?
This makes it a useful retrospective measure. It allows organizations to evaluate the performance of their forecasting process over time, identify patterns of over- or underestimation, and recalibrate inputs for future periods to correct for subtle factors that throw forecasts off, such as overconfidence or intra-quarter seasonality.
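To make that retrospective measurement concrete, here’s a minimal sketch in Python. The quarterly figures are illustrative placeholders of our own, and `forecasts`/`actuals` are names chosen for the example, not a reference to any particular tool:

```python
# Minimal sketch: retrospective accuracy and bias across quarters.
# All figures are illustrative placeholders.

forecasts = [4.8, 5.1, 5.6, 6.0]   # forecast revenue per quarter ($M)
actuals   = [5.0, 5.0, 5.4, 6.3]   # actual revenue per quarter ($M)

errors = [a - f for f, a in zip(forecasts, actuals)]

# Mean absolute percentage error: how close were we, on average?
mape = sum(abs(e) / a for e, a in zip(errors, actuals)) / len(actuals)

# Mean signed error: a persistent sign suggests systematic
# over- or under-forecasting (e.g., overconfidence).
bias = sum(errors) / len(errors)

print(f"MAPE: {mape:.1%}")      # proximity of prediction to outcome
print(f"Bias: {bias:+.2f} $M")  # direction of systematic misses
```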
However, accuracy is fundamentally a measure of the “what,” not the “how.”
A forecast may be accurate because the underlying system behaved as expected. Or it may have been “accidentally” accurate because errors in one area were offset by compensating effects in another: if new business lands $200K over forecast while renewals land $200K under, the total hits the number even though both underlying assumptions were wrong. Without additional context, the number itself does not reveal which drivers contributed to the outcome. All of which reinforces the point that knowing where the number will likely land does not explain how it will get there.
The Hidden Risk of “Accurate but Unexplainable” Forecasts
An accurate forecast that contains elements that can’t be readily explained introduces a subtle risk. Think of it a little like a highly skilled, intuitive chef preparing a delicious meal with the ingredients they find in their fridge and pantry, without keeping track of exactly what they used or how much.
The result is one that everyone enjoys. But could it be repeated, with a high degree of confidence? Or is it more likely that the excellent result was some combination of skill, luck, timing, and availability? Without knowing the “inputs” (ingredients, quantities, timing, skills required), the likelihood of preparing the same “output” (the finished meal) drops substantially.
It’s the same with a forecast that hits within the range it’s meant to, without the team knowing the answer to “why” that outcome occurred. At first glance, the forecast appears successful. However, when stakeholders attempt to understand the drivers behind the forecast, gaps begin to emerge. The underlying assumptions may be unclear, the relationships between variables may be poorly defined, and the connection between inputs and outcomes may be difficult to articulate.
This lack of transparency creates friction in leadership conversations.
That’s because executives aren’t only responsible for accepting forecasts but also for making the next set of decisions based on them. Decisions about hiring, investment, and strategic direction depend on confidence in the underlying performance drivers. When those drivers aren’t clearly understood, confidence erodes, because diligent executive teams can’t be certain whether the result reflects a solid base likely to recur or a “high water mark” driven by one-off factors.
If it’s the latter, a decision to increase engineering, sales, and marketing headcount will result in cash burn exceeding revenue growth as negative operating leverage emerges (costs growing faster than revenue), potentially threatening the cash balance and the ongoing viability of the business. In other settings, leaders who aren’t confident in the inputs may hesitate to commit resources, challenge assumptions more aggressively, or delay decisions until they have more clarity.
The Real Problem: Forecasts Built on Correlation, Not Causation
There’s an uncomfortable idea we have to confront at this point: the limitation of modern forecasting may not primarily be a failure of communication but a failure of model design. The hopeful part of grappling with that is that we now likely have the tools to do something about it.
Let’s start from the ground up. Most forecasting systems in RevOps are built on correlational logic. They analyze historical data, identify patterns, and extrapolate them forward. If deals at a certain stage historically close at a given rate, the model assumes similar behavior will persist. If pipeline coverage has previously translated into a predictable range of outcomes, that relationship is extended into the future.
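As a minimal sketch of that correlational logic (the stage names, conversion rates, and pipeline figures below are all hypothetical), a typical model weights open pipeline by historical stage-conversion rates and extends the pattern forward:

```python
# Minimal sketch of a correlational forecast: weight open pipeline
# by historical stage-conversion rates and extrapolate forward.
# Stage names, rates, and amounts are hypothetical.

historical_close_rates = {      # observed over past quarters
    "discovery":   0.10,
    "proposal":    0.35,
    "negotiation": 0.70,
}

open_pipeline = {               # current open pipeline ($K)
    "discovery":   2_400,
    "proposal":    1_100,
    "negotiation":   600,
}

# Core assumption: past conversion behavior persists unchanged.
forecast = sum(
    open_pipeline[stage] * historical_close_rates[stage]
    for stage in open_pipeline
)
print(f"Forecast: ${forecast:,.0f}K")  # ~$1,045K
```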
This approach is mathematically coherent but structurally fragile, because correlation doesn’t establish causation. A model may observe that higher pipeline coverage correlates with stronger revenue outcomes, yet it can’t tell whether that pipeline is driven by improved targeting, temporary demand spikes, or unsustainable discounting.
Here’s why that seemingly minor distinction matters: only causal relationships are stable under change. When conditions shift, as they always do in fast-moving RevOps environments, correlational models begin to degrade. And without a stable “why,” explanation becomes an act of interpretation rather than a reflection of underlying system logic.
Why You Can’t Explain a Probabilistic Forecast
This leads to a more uncomfortable realization: many forecasts can’t be explained clearly because they were never designed to be explainable.
A probabilistic model produces an output based on weighted inputs and statistical inference, so when a conversion rate or other input is tweaked, the output shifts accordingly. And from an operational perspective, that creates a problem.
When leaders ask, “How do we get there?”, they’re actually asking for a causal chain to understand which actions will produce which outcomes, and how those outcomes will accumulate into the final number. We’ve been answering this question probabilistically because, to date, it’s been the best option we’ve had.
But a probabilistic model can’t provide that clarity. It can offer contributing factors and confidence intervals, but it can’t present a stable, linear explanation of cause and effect.
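A stripped-down Monte Carlo sketch illustrates the point. The distributions and parameters below are placeholders of our own, not calibrated values; what matters is that the output is a range, with no single causal path behind any given number:

```python
import random

# Minimal Monte Carlo sketch of a probabilistic forecast.
# Distribution parameters are placeholders, not calibrated values.

def simulate_quarter():
    pipeline = random.gauss(5_000, 600)     # pipeline created ($K)
    win_rate = random.betavariate(20, 60)   # ~25% mean win rate
    uplift = random.gauss(1.0, 0.08)        # pricing/deal-size noise
    return pipeline * win_rate * uplift

runs = sorted(simulate_quarter() for _ in range(10_000))
p10, p50, p90 = runs[1_000], runs[5_000], runs[9_000]

# The model can state a range with confidence...
print(f"P10/P50/P90: {p10:,.0f} / {p50:,.0f} / {p90:,.0f} ($K)")
# ...but each run mixes random draws across all inputs,
# so there is no stable causal chain to point to.
```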
So the challenge isn’t that teams struggle to communicate forecasts effectively, but that they’re attempting to explain outputs that don’t have a single, stable explanation.
From Prediction Models to Causal Systems
So that’s the problem state. What’s the solution? If the limitation lies in correlational models, the solution isn’t better storytelling. It’s a different class of model.
This is where the concept of a Revenue Simulation Twin becomes relevant. A Revenue Simulation Twin isn’t designed to predict outcomes based solely on historical patterns. It’s designed to model the causal structure of the revenue system itself. It maps how inputs like pipeline creation, conversion behavior, sales capacity, pricing dynamics, and customer retention interact in the same environment to produce financial outcomes.
In this framework, the forecast results from examining and stress-testing a defined system of relationships. And this fundamentally changes the nature of forecasting.
Instead of asking, “What is likely to happen?” (a probabilistic question), teams can ask, “What will happen if we change this variable?” (a causal question). The model becomes a tool for simulation rather than prediction, allowing leaders to test possible interventions before committing to them.
Critically, the output of such a model is explainable by design, because it rests on an explicit cause-and-effect structure. Since the relationships between variables are explicitly defined, the forecast is accompanied by a visible causal chain. Leaders can see how each component contributes to the final outcome, where sensitivities lie, and which levers have the greatest impact.
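To make “explainable by design” tangible, here’s a deliberately tiny causal sketch of our own construction. It isn’t an implementation of a Revenue Simulation Twin, just an illustration of how explicitly defined relationships let a what-if run print its causal chain alongside its result:

```python
# Toy causal sketch: every relationship is explicit, so every
# forecast carries its own causal chain. Structure and numbers
# are ours, not a reference implementation of any product.

def simulate(reps=20, pipeline_per_rep=300, win_rate=0.25,
             retention=0.90, base_renewals=2_000):
    pipeline = reps * pipeline_per_rep       # capacity -> pipeline ($K)
    new_rev  = pipeline * win_rate           # pipeline -> new revenue
    renewals = base_renewals * retention     # install base -> renewals
    total    = new_rev + renewals
    chain = (f"{reps} reps x {pipeline_per_rep} = {pipeline:,.0f}K pipeline; "
             f"x {win_rate:.0%} win rate = {new_rev:,.0f}K new; "
             f"+ {base_renewals:,.0f}K x {retention:.0%} retention "
             f"= {total:,.0f}K total")
    return total, chain

baseline, why = simulate()
print(why)

# Causal question: what happens if we add two reps?
scenario, why2 = simulate(reps=22)
print(why2)
print(f"Impact of +2 reps: {scenario - baseline:+,.0f}K")
```

Because every relationship is written down, the answer to “how do we get there?” is literally part of the output.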
In that sense, the real evolution isn’t from numbers to narratives. It’s from probabilistic outputs to causal systems where explanation is no longer layered on top of the forecast after the fact. It is embedded within the model itself.
The Shift from Prediction to Understanding
The distinction between prediction and understanding is central to this evolution.
Prediction focuses on estimating outcomes based on available data. Understanding focuses on the relationships that produce those outcomes. Prediction is valuable, but it assumes that existing patterns will continue and that the system will behave as it has in the past. Understanding, by contrast, allows organizations to adapt when conditions change.
When leaders understand how different levers impact results, they can adjust strategy with greater confidence. They can test scenarios, evaluate trade-offs, and respond to emerging signals before outcomes are affected.
This shift aligns closely with broader developments in RevOps, where the emphasis is moving from reporting and analysis toward system-level management. The end goal isn’t just to observe performance but to influence it, by building a system-based view of the inputs into the revenue model.
Building Trust Through Forecasting
Executives and boards rely on forecasts to make decisions that carry significant consequences. Long-term hiring plans, capital allocation, and strategic initiatives all depend on confidence in projected outcomes. If the forecasts underpinning those decisions aren’t built with a high degree of rigor, large reversals of previous spending decisions become more likely, with all the internal disruption and loss of operational momentum such reversals bring.
That’s why forecast accuracy contributes to this confidence, but isn’t sufficient on its own. Explainability grounded in a coherent, causal foundation strengthens trust by making the forecast transparent. It allows stakeholders to see the logic behind the number, evaluate its assumptions, and understand its vulnerabilities. This transparency reduces uncertainty and supports more decisive action.
Over time, organizations that prioritize forecasts built on causal models build credibility. Their forecasts aren’t only accurate but also understandable and actionable, which, when repeated quarter after quarter, builds both internal trust within the business and external trust with investors.
Key Takeaways: A New Standard for Revenue Leaders
The evolution from accuracy to explainability built on truly causal models represents a broader shift in how revenue leadership is defined.
We think forecasting can no longer be treated as a purely probabilistic, numbers-dominant analytical function. It must be understood as a true causal system that connects data, strategy, and execution.
This shift creates a meaningful opportunity for organizations that embrace causal systems and concepts to reduce decision-making friction, improve cross-functional coordination, and respond more effectively to changing conditions.
In doing so, they transform forecasting from a retrospective probabilistic exercise into a cause-and-effect-based, forward-looking capability that drives higher-quality decision-making and compounds its value to the business every quarter.