Comments on: IBF Year End Blog – What we Learned About Forecasting in 2010
https://demand-planning.com/2010/12/21/ibf-year-end-blog-what-we-learned-about-forecasting-in-2010/
S&OP/IBP, Demand Planning, Supply Chain Planning, Business Forecasting Blog

By: Mike Gilliland (Mon, 21 Feb 2011 15:44:58 +0000)
https://demand-planning.com/2010/12/21/ibf-year-end-blog-what-we-learned-about-forecasting-in-2010/#comment-167

Hi Davis,

Quantifying the maximum achievable forecast accuracy is a difficult problem, and I don’t think there is a perfect answer. We can concoct examples, such as forecasting heads or tails in the toss of a fair coin, in which we can determine the limit of forecast accuracy. We can do this because we understand what governs the behavior being forecast (called the data generation process, or DGP), and our accuracy is limited only by the randomness in the tossing of the fair coin. In real-life situations we usually don’t know the DGP, the DGP may change over time, and we don’t know the amount of randomness in the behavior.
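The coin-toss case can be checked with a minimal simulation (my own illustration, not from the original comment): because the DGP assigns each outcome probability 0.5, even the best possible strategy cannot sustain accuracy above about 50%.

```python
import random

random.seed(42)
tosses = [random.choice("HT") for _ in range(100_000)]

# Best possible strategy for a fair coin: always forecast one outcome
# (any fixed or clever strategy does no better in the long run, because
# the DGP gives each toss probability 0.5 independent of history).
forecast = "H"
accuracy = sum(t == forecast for t in tosses) / len(tosses)
print(f"Accuracy: {accuracy:.3f}")  # hovers near 0.500
```

Here the accuracy limit is computable only because we know the DGP exactly; with real demand data we do not, which is the whole difficulty.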

There are some good articles on the general topic of forecastability and forecast performance measurement. I discuss these periodically on my blog (http://blogs.sas.com/forecasting) and provide links to some specific articles that may be of use. There is also an article “Setting accuracy targets for short-term judgemental sales forecasting” by Bunn and Taylor in the International Journal of Forecasting 17 (2001) 159-169.

Since we really don’t know the best accuracy that is possible to achieve, I prefer to set performance targets with respect to the worst we should be able to achieve. Thus, I would set the goal “Forecast no worse than a naive model” (where a naive model is something cheap and easy to implement, such as a random walk or moving average).

Since we don’t know in advance what accuracy the naive model will achieve, we don’t set a specific numerical target. Instead, over time, we evaluate the accuracy of our forecasting process against the accuracy the naive model achieved. If our process is doing WORSE than the naive model, obviously something is going very wrong!
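The comparison described above can be sketched as follows. This is a minimal illustration, not anyone's production method: the demand history and the planner's forecasts are made-up numbers, and MAPE is just one possible error measure.

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

# Hypothetical monthly demand history for one product (illustrative only).
history = [100, 96, 108, 112, 95, 105, 120, 101, 99, 110, 104, 115]

# Naive model 1: random walk -- forecast for each period = previous actual.
naive_rw = history[:-1]
actual_rw = history[1:]

# Naive model 2: 3-period moving average of the preceding actuals.
naive_ma = [sum(history[i - 3:i]) / 3 for i in range(3, len(history))]
actual_ma = history[3:]

# Hypothetical forecasts produced by the forecasting process for the
# same periods as the random walk (months 2 through 12).
process = [102, 100, 105, 110, 100, 108, 112, 103, 101, 108, 107]

print(f"Random walk MAPE:    {mape(actual_rw, naive_rw):.1f}%")
print(f"Moving average MAPE: {mape(actual_ma, naive_ma):.1f}%")
print(f"Process MAPE:        {mape(actual_rw, process):.1f}%")

# If the process MAPE is consistently higher than the naive MAPE,
# the process is doing worse than doing (almost) nothing.
```

The point is not the specific numbers but the ongoing comparison: the naive model's realized error becomes the floor your process must beat.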

By: Davis Wu (Tue, 01 Feb 2011 22:29:39 +0000)
https://demand-planning.com/2010/12/21/ibf-year-end-blog-what-we-learned-about-forecasting-in-2010/#comment-166

Well-written blog…

I have a question on forecastability.

We often compare forecast accuracy among products, categories, business units, and sometimes even with competitors. As we all know, it is not meaningful to compare performance when the business dynamics and environment are different.

Often, in many organisations, we are given a target to bring forecast accuracy to a certain level, such as 80% this year and 85% next year… By gut feeling, we all know the target is achievable in some cases but unrealistic in others…

Is there any way to quantify the maximum achievable forecast accuracy for a given product/category/business unit…? We all know it depends on how the business is run, what processes are in place, what tools you have, how skillful the demand planners are, and how we communicate with our customers… These determine the level of accuracy we could possibly achieve.

Is it possible to quantify the achievable forecast accuracy target at all by taking into account all the underlying drivers?

Looking forward to any feedback.

Regards

By: Dan Rasoi (Thu, 06 Jan 2011 00:37:52 +0000)
https://demand-planning.com/2010/12/21/ibf-year-end-blog-what-we-learned-about-forecasting-in-2010/#comment-165

One word: Respect!
