Forecast accuracy tells you how wrong you were. It does not tell you how ready you were for being wrong.
Something I have been thinking about lately.
We spend a lot of time in supply chain obsessing over forecast accuracy. I have done it myself. You get the number wrong by 12.43%, and Monday morning becomes an autopsy — what went wrong, why did we miss, who is accountable.
But I am starting to wonder if we are measuring the wrong thing entirely.
The planning teams I have seen handle volatility well are not the ones with the best accuracy scores. They are the ones who walked into Monday with scenarios already thought through.
If demand lands here, we do this. If it lands there, we pull that lever. Allocation, substitutes, expediting. The decision was already sketched out before the number came in.
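That playbook idea can be sketched in a few lines. This is a toy illustration, not a real planning system: the thresholds, lever names, and function signature are all assumptions made up for the example.

```python
def choose_lever(actual_demand: float, planned: float) -> str:
    """Map where demand actually landed to a pre-decided action.

    Thresholds and lever names are illustrative, not a standard.
    """
    gap = (actual_demand - planned) / planned
    if gap > 0.15:
        return "expedite"    # demand well above plan: expedite inbound supply
    if gap > 0.05:
        return "substitute"  # moderate overshoot: offer substitute SKUs
    if gap < -0.15:
        return "reallocate"  # demand well below plan: reallocate stock
    return "hold"            # within tolerance: stick to the plan
```

The point is not the code. It is that the branches were written on Friday, so Monday is execution, not debate.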
That shift — from “how accurate were we” to “how prepared were we” — is where planning is heading.
This is also why probabilistic AI is becoming more relevant than deterministic approaches right now.
Deterministic planning gives you one number and asks you to trust it. Probabilistic planning gives you a range and asks you to prepare for it. That is a fundamentally different conversation.
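To make the contrast concrete, here is a minimal sketch. Assume you have demand samples from some probabilistic model; a toy Gaussian stands in for it here, and the numbers are invented for illustration.

```python
import random
import statistics

random.seed(42)

# Stand-in for samples from a probabilistic demand model.
samples = [random.gauss(1000, 150) for _ in range(5000)]

point_forecast = statistics.mean(samples)      # deterministic: one number to trust
deciles = statistics.quantiles(samples, n=10)  # probabilistic: a range to prepare for
p10, p90 = deciles[0], deciles[-1]

print(f"plan around one number: {point_forecast:.0f}")
print(f"prepare for a range:    P10={p10:.0f} to P90={p90:.0f}")
```

The deterministic output invites the question "is this number right?" The probabilistic output invites "what do we do at the low end, and what do we do at the high end?"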
I do not think forecast accuracy disappears. But I think it stops being the headline metric.
The real question becomes: did we have a plan for when the forecast was wrong?