range forecast – Demand Planning, S&OP/IBP, Supply Planning, Business Forecasting Blog (https://demand-planning.com)

Stop Saying Forecasts Are Always Wrong!
https://demand-planning.com/2020/11/16/stop-saying-forecasts-are-always-wrong/ (Mon, 16 Nov 2020)

Stop saying forecasts are always wrong! It’s a pet peeve of mine because when you say this, you’re only hurting yourself and this field. Besides, you are wrong! Forecasts can be 100% correct.

Every business decision is built on a forward-looking projection which is, in essence, a forecast. So, if you are saying that forecasts are always wrong, then you are also saying every business decision is wrong. And we know that is not the case.

Saying “forecasts are always wrong” should not be an automatic response we have or an underlying assumption about this field. Why? Because when we say this and instill this belief in others, we lower confidence in the forecasts we make, thereby reducing their effectiveness. After all, if our forecasts underpin the demand plans used by our stakeholders in S&OP, and we expect them to be used, they must be credible.

Perpetuating this myth also excuses poor behavior from both Demand Planners and our colleagues in other departments. As Demand Planners we use it to preempt criticism when our forecasts don’t hit that magic number, and our colleagues in other departments use it to justify their own negative behaviors.

This field requires some thick skin, so why make it worse by putting yourself down and telling yourself and everybody around you that your forecasts don’t add value?

Forecasts Don’t Have To Be ‘Right’ To Be Valuable

And I get it, forecasts will rarely be 100% accurate. But that entirely misses the point.

We need to get away from this idea that the forecast number has to be correct to be valuable. Forecasting is not about hitting an exact number, but about providing insights, and for that, we don’t need an exact number; we need to understand uncertainty and the probability of something happening.

Uncertainty, far from being ‘wrong’, is important because it allows people to make better decisions: if you understand the uncertainty in your forecast, you can plan for it. If you forecast a fixed number, you have no idea of the uncertainty surrounding that number, which means you cannot plan the business to manage it.

After all, uncertainty is inherent in forecast models, the forecasting process, the supply chain, consumer behavior and more. Uncertainty is inherent in business, period. And we mustn’t hide that uncertainty by saying ‘forecasts are always wrong’.

Focus On Probabilities Rather Than Single Points

And that brings us to probabilistic mindsets vs single point mindsets. Not everybody working in demand planning is comfortable with ambiguity or uncertainty – but we need to be. As forecasters we’re often predisposed to wanting fixed, absolute outcomes. But that just isn’t the case in business forecasting due to the uncertainty inherent in consumer behavior, supply constraints and a whole load of other business variables.

This is where we can learn from data scientists, because they think probabilistically. They think not in terms of a single number, but in terms of the probability of something happening. This kind of thinking considers the uncertainty and risk associated with a forecast, which is what we really need.

So if you’re saying forecasting is always wrong, you’re lost in a false right/wrong dichotomy. The reality is that business planning is not black or white, and it is impossible to say with 100% certainty that something will happen.

Range Forecasting Beats Point Forecasting

If we roll one die, every outcome is equally likely on every roll; there is the same probability of rolling a 2, 3, 4, 5 and so on. But rolling two dice is a different matter. The probability of the two dice totaling 2 is different from the probability of them totaling 6, for example.

Let’s say I forecast that we roll a 7 plus or minus 2. That gives us a range that covers 5, 6, 7, 8 or 9, and just over a 66% probability of our forecasted range occurring (24 of the 36 equally likely outcomes). If I forecast exactly 7, there’s a 16.7% probability that we’ll be correct, whereas if I forecast the range, I’m right about two-thirds of the time. This is the difference between forecasting a point and forecasting a range. If this were a business forecast, which is more useful? The range.
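For anyone who wants to verify the arithmetic, here is a minimal sketch (in Python, not from the original post) that enumerates the 36 equally likely outcomes of two dice and compares the point forecast against the range forecast:

```python
from itertools import product

# Enumerate all 36 equally likely outcomes of rolling two fair dice.
totals = [d1 + d2 for d1, d2 in product(range(1, 7), repeat=2)]

point_hits = sum(1 for t in totals if t == 7)        # exact point forecast of 7
range_hits = sum(1 for t in totals if 5 <= t <= 9)   # range forecast of 7 +/- 2

print(f"P(total = 7)       = {point_hits}/36 = {point_hits / 36:.1%}")   # 16.7%
print(f"P(5 <= total <= 9) = {range_hits}/36 = {range_hits / 36:.1%}")   # 66.7%
```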

Why? Because with a single point forecast, you are throwing away good information. With a range forecast, we have a center point plus the top of the range, the bottom of the range, and the magnitude of the variability. Those are 4 critical pieces of information that you don’t get with a single point forecast.

You don’t know where that point fits into a range of uncertainty and you don’t know the magnitude of variability, i.e. you don’t know if you are roughly right or precisely wrong. You could be very confident or not confident at all.

So let’s be less precise but more accurate. But how do you actually provide that range?

The coefficient of variation (CoV) or a Demand Variation Index (DVI) can quantify the inherent variability in a given dataset. CoV squares the deviations from the mean (it is the standard deviation divided by the mean), whereas DVI takes the absolute deviation from the mean without squaring. But both essentially do the same thing: they look at historical data and measure how much that data varies from its mean. That tells us how much variability we may see going forward, and there we have a forecast range that incorporates the uncertainty we need to be aware of to properly plan the business.
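As a rough illustration of turning historical variability into a range (a sketch with hypothetical data; the exact DVI definition varies by organization, and here it is assumed to be the mean absolute deviation divided by the mean):

```python
import statistics

history = [120, 95, 140, 110, 130, 85, 125, 105]   # hypothetical monthly demand

mean = statistics.mean(history)
cov = statistics.pstdev(history) / mean                            # CoV: squared deviations (std dev) over the mean
dvi = (sum(abs(x - mean) for x in history) / len(history)) / mean  # assumed DVI: mean absolute deviation over the mean

# One simple way to express a range forecast: the mean plus/minus the measured variability.
point_forecast = mean
low, high = point_forecast * (1 - cov), point_forecast * (1 + cov)

print(f"CoV = {cov:.1%}, DVI = {dvi:.1%}")
print(f"Range forecast: {low:.0f} to {high:.0f} around a center of {point_forecast:.0f}")
```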

There is more to it than that, but I hope this serves as a starting point to think probabilistically, get comfortable with ambiguity, and start providing valuable range forecasts that account for the inherent variability in business.

For further information, read this primer on probabilistic forecasting published in the Journal of Business Forecasting.

This article was taken from this episode of IBF On Demand, the leading podcast in the fields of demand planning, forecasting, S&OP and predictive analytics.

 

Introducing (E)Score: MAPE For Range Forecasts
https://demand-planning.com/2018/08/16/introducing-escore-simplified-mape-for-todays-business/ (Thu, 16 Aug 2018)

Eric Wilson CPF introduces a new measurement of probabilistic forecast performance that measures error, range and probability all in one formula. It goes beyond grading the forecaster, instead providing a valuable snapshot of a forecast’s accuracy, reliability and usefulness – in a way that is significantly easier to perform than existing methods.

Editor’s comment: This formula may be the first to holistically measure forecast error, range, and probability, allowing you to compare models, compare performance, comprehensively evaluate the forecaster, and identify outliers. Think MAPE, MPE and FVA, but for range forecasts and providing more insight. This easy-to-use formula could be a game changer in forecast performance metrics. Eric welcomes feedback directly or via the comments section below.

A New, Improved Approach to MAPE

This article introduces a new theoretical approach I am considering for measuring probabilistic or range forecasts. Measures of forecast performance have been developed over past decades, with TPE, the Brier Score, and others proving effective. The problem I encountered was that, whilst some evaluate accuracy and others evaluate the performance of the distribution, few look at both on the same scale in a way that is easy to use. To overcome this, I have developed a single scoring function for probabilistic and range forecasts that allows a forecaster to measure their forecasts consistently and easily, including even judgmental or empirical range forecasts.

Why We Predict Ranges Instead Of An Exact Number

Probabilistic and range forecasts are forms of predictive probability distributions of future variability and have enjoyed increased popularity in recent years. What we’re talking about is the difference between trying to precisely predict exact sales in a future period and predicting a range of sales during that period, or the expected variability. It is also the difference between being accused of always being wrong and proudly forecasting with 100% accuracy. For more insight into this, see my previous article entitled Stop Saying Forecasts Are Always Wrong.

Just because you are accurate does not mean you are precise, however, or that you couldn’t do better, and it does not mean all probabilistic forecasts are created equal. With the proliferation of these probabilistic models and range forecasts comes the need for tools to evaluate the appropriateness of models and forecasts.

Range Forecasts Give Us More Information

A point forecast carries only one piece of information, and you only need to measure that point against the actual outcome, which is generally expressed as some type of percentage error such as MAPE. The beauty of range forecasts is that you are providing more information with which to make better decisions. Instead of one piece of information, you have as many as three pieces of information, all of which are helpful. They are:

1) The upper and lower range or limit

2) The size of the total range or amount of variability

3) The probability of that outcome.

To evaluate a range forecast appropriately, you should measure all three of these components, looking at accuracy, precision and reliability.

So, when we say that we will sell between 50 and 150 units and are 90% confident, how do we know if we did well? A potential way I am proposing is to use a type of scoring rule which, for the purposes of this article, I am referring to as an (E)score. Conceptually, scoring rules can be thought of as error measures for range or probability forecasts. This score is not a traditional error measurement, but it helps measure the relative error and reliability of probabilistic forecasts that contain all three components.

The scoring rule I have developed simply takes the root mean squared error, using the upper and lower limits as the two forecast points, divides it by the actual, and adds a scoring function for probabilistic forecasts:

(E)score = RMSE / Actual + BSf

Or

(E)score = Sqrt( (|Actual - Uf| + |Actual - Lf|)^2 / 2 ) / Actual + (Outcome - P)^2

Where:

Uf=upper limit

Lf=lower limit

Outcome = 1 if the actual falls within the range, 0 if it does not

P = probability assigned to the range

Example: Let us assume we have historical information that shows the probability of actuals falling within +/- 50 units of the forecast for each period (this could also be empirical data). We create our forecast with a wide range, estimating actuals to fall somewhere between 50 and 150 units. Being a larger range, we are fairly confident and give it a 90% probability of falling within that range. In this example, actuals come in exactly in the middle of the range at 100. For this forecast, the (E)score = 0.72.

Sqrt( (|100-150| + |100-50|)^2 / 2 ) / 100 + (1 - 0.9)^2 = 0.72
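To make the worked example concrete, here is a minimal sketch in Python of the (E)score calculation (the helper function and its name are mine, not part of the article’s formal definition):

```python
from math import sqrt

def escore(lower, upper, probability, actual):
    """A sketch of the (E)score: relative uncertainty error plus a
    Brier-style reliability term. Lower is better; zero is perfect."""
    # Steps 1-2: mean squared error against both limits, square-rooted, relative to the actual.
    mse = (abs(actual - upper) + abs(actual - lower)) ** 2 / 2
    uncertainty_error = sqrt(mse) / actual
    # Step 3: binary outcome (1 if the actual landed inside the range) versus the stated probability.
    outcome = 1 if lower <= actual <= upper else 0
    reliability = (outcome - probability) ** 2
    # Step 4: combine the two components.
    return uncertainty_error + reliability

print(round(escore(50, 150, 0.90, 100), 2))   # 0.72, matching the worked example
```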

Dissecting The Components Of The (E)Score Formula

Step 1: Breaking the formula down into its components begins with determining the relative error. We measure the Mean Squared Error as the squared sum of the deviations of the actual from the upper limit and from the lower limit, divided by 2 (the total number of observations: one upper and one lower).

(|actual - upper limit| + |actual - lower limit|)^2 / 2 = MSE

(|100 - 150| + |100 - 50|)^2 / 2 = 5000

One thing that stands out here is that, no matter the actual value, the numerator is the same for any actual that falls within the range. If our range runs from a lower limit of 50 to an upper limit of 150, then whether the actual is 125 (|125 - 150| + |125 - 50| = 100) or 75 (|75 - 150| + |75 - 50| = 100), the result is the same 100. This is as it should be, since the forecast is a range consisting of all possible values within it, each with an equal total deviation from the two limits. This also rewards precision and smaller ranges; conversely, it penalizes larger ranges.
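A quick check of that property, using the same hypothetical limits:

```python
# Any actual that falls inside the 50-150 range gives the same total deviation from the limits.
for actual in (125, 75, 100):
    print(actual, abs(actual - 150) + abs(actual - 50))   # each prints 100
```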

Step 2: The next step is to take the square root of this mean squared error and divide it by the actual. The square root brings the units back to their original scale and allows you to divide by the actual. What you end up with is a comparison of the RMSE to the observed actual, which may be expressed as a percentage where the smaller the error, the closer the fit to the actual.

Sqrt(MSE) / Actual = Uncertainty Error (E)

Sqrt(5000) / 100 = 71%

Step 3: For the reliability of the range forecast, we combine this with a probabilistic scoring function. One way to do this is to compare the forecast probability with the outcome on a forecast-by-forecast basis. In other words, subtract the outcome (coded as 1 or 0) from the assigned forecast probability and square the result.

Here we are not looking at the precision of the forecast but measuring the skill of a binary probabilistic forecast: did the actual occur within the stated range or not? If the actual result falls between the upper and lower limit, the outcome is treated as true and given the value of 1. If the actual falls outside the range, it is given a 0.

For example, if our upper limit was 150, our lower limit was 50, we gave this a 90% probability of occurring within that range, and the actual was 100, then the statement is true. This gives a value of 1 minus the probability of 90%, then squared, which equals 0.01. If the actual had fallen anywhere outside the predicted range, our score would be much higher at 0.81 (a value of 0 minus the probability of 90%, then squared). Because it answers a binary probabilistic question (in or out of range), this component does not give value to the size of the range or how far you may be out of range, only a score from 0 to 1. This is why the uncertainty error, which captures the size and precision of the range, is also needed, and it is what makes this new scoring function unique.

(Outcome Result - Probability Assigned)^2 = Goodness of Fit (score)

(1 - 0.9)^2 = 0.01

It should also be noted that if you had given your forecast the same range (50 to 150) with only a 10% probability and the actual did fall outside the range (200, for example), you would get the same component score of 0.01 (a value of 0 minus the probability of 10%, then squared). Following the logic, this is understandable. What you are saying with a 10% probability is that the actual result most likely will not fall within the range; the inverse of your statement is that there is a 90% probability the actual will land anywhere outside your range, so your “most likely range” is all other numbers. So, whether the actual is 200 or 20,000, you were more correct. While this is important, once again it is only half of the equation, which is why we look not just at goodness of fit and reliability but combine them with a measure of how wide the range is and how far from the range you landed, to get a complete picture.

Step 4: The final step is simply to add these two parts together, ending up with a single score that has a lower bound of zero and an unlimited upside.

Uncertainty Error (E) + Goodness of Fit (score) = (E)score

0.71 + 0.01 = 0.72

The lower the score for a set of predictions, the better calibrated the predictions are. A completely accurate forecast would have an (E)score of zero, while one that was completely wrong would have an (E)score limited only by 1 plus the forecast error. So, if you had forecast exactly 100 units with no range up or down, given it a 100% probability of occurring, and the actual came in at 100 as in our example, you would be absolutely perfect and have an (E)score of zero. Your range is 100 to 100, making the numerator zero, and for the reliability you have an outcome value of 1 (true) minus 1 for the 100% probability, which also equals zero.

Sqrt( (|100-100| + |100-100|)^2 / 2 ) / 100 + (1 - 1)^2 = 0
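Using the hypothetical escore helper sketched earlier, both worked examples reproduce as expected:

```python
print(escore(50, 150, 0.90, 100))    # ~0.72: wide range, 90% confidence, actual lands in range
print(escore(100, 100, 1.00, 100))   # 0.0: exact point, 100% confidence, actual hit exactly
```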

Forecasts are generally surrounded by uncertainty, and being able to quantify this uncertainty is key to good decision making. For this reason, probability and range forecasts help provide a more complete picture so that better decisions can be made from predictions. With this comes the need to understand and measure those components, and metrics like the (E)score may help, not to grade the forecaster, but to communicate the accuracy, reliability and usefulness of those forecasts.

 

Stop Saying Forecasts Are Always Wrong
https://demand-planning.com/2018/02/20/forecasts-are-always-wrong/ (Tue, 20 Feb 2018)

For many of us, the words “the forecast is always wrong” have become something we instinctively say. There’s nothing wrong with acknowledging that there is variation in demand or admitting we may miss a projection. But when it becomes your automatic response to any miss and is believed to be an unavoidable part of forecasting, it is highly limiting. This seemingly harmless habit can actually lower the effectiveness of forecasts and the business’s confidence in them. What’s more, it justifies other people’s poor actions and focuses attention on the wrong things.

As Demand Planners, We Need To Give Ourselves More Credit

I cannot help but believe that when everyone constantly says that forecasts are always wrong, it needlessly creates guilt in the poor Demand Planner’s mind and undermines their self-esteem. It’s hard to feel good about yourself when you keep falling on your own sword.

Maybe we should stop saying we are sorry and stop saying forecasts are always wrong. Repeating this mantra also sends the message that you’d rather be agreeable than be honest, when in fact our job is not to provide a number but to offer solutions. We need to stop using the crutch of inevitable forecast error and start having honest conversations about what we can predict and what we can control.


It Actually Is Possible To Be 100% Accurate

Yes, it really is. But let us start with what constitutes accuracy. Accuracy is the degree of closeness of a stated quantity to that quantity’s actual (true) value. While I accept that one’s ability to create an accurate forecast is related to demand variability, an accurate forecast does not reduce demand variability. Demand variability is an expression of how much demand changes over time and, to some extent, of the predictability of the demand. Forecast accuracy is an expression of how well one can predict the actual demand, regardless of its volatility.

So, when others say “the forecast is always wrong”, what they really mean is that demand variability is perfectly normal. What we should be focusing on is that “while we can’t predict demand perfectly due to its inherent variability, we can predict demand variability” (Stefan de Kok). This is the difference between trying to precisely predict the exact point and accurately predicting a range or the expected variability.

A common example of this is trying to guess the outcome of rolling two fair dice compared to accurately predicting the range of possible outcomes. For the throw of the two dice, any exact outcome is equally probable and there is too much variability for a point prediction to be useful. But the possible totals the two dice can add up to are not equally probable, because there are more ways to make some totals than others. We can accurately predict that 16.7% of the time the two dice will add up to seven, and we can predict the range of possible outcomes as well as the probability of each outcome. While we may not know exactly what will happen, we can exactly predict the probability of it occurring. And if you predict the outcome within those probabilities, guess what? You are correct. Even though 100% precision is not an option when looking at range or probabilistic forecasts, 100% accuracy most certainly is within the realm of possibility!
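To illustrate, a small sketch (not from the article) that enumerates the full probability distribution of the two-dice total:

```python
from collections import Counter
from itertools import product

# Count how many of the 36 equally likely rolls produce each total.
distribution = Counter(d1 + d2 for d1, d2 in product(range(1, 7), repeat=2))

for total in sorted(distribution):
    print(f"P(total = {total:2d}) = {distribution[total]}/36 = {distribution[total] / 36:.1%}")
# P(total = 7) = 6/36 = 16.7%, exactly as stated above.
```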



Accurately predicting an outcome within a range of probabilities is more valuable than trying to forecast a single number.

Range Forecasts Give Us So Much More Information Than Single Point Forecasts

Besides being able to more accurately predict the probabilities of outcomes and ranges, we are also providing more relevant and useful information. Predicting the variability not only grounds our initiatives in reality but also gives us the power to make better business decisions. One way to counteract variability is to ask for range forecasts, or confidence intervals. These ranges consist of two points representing the reasonable “best case” and “worst case” scenarios. Range forecasts are more useful than point predictions.

With any single point forecast you are providing a single piece of information which you know is not 100% correct. With a range you are providing four pieces of valuable information: we not only know the point or mean, but also the top, the bottom, and the magnitude of possible variability.


It doesn’t take much to see that such a probabilistic forecast, or even just a forecast with ranges and a better prediction of uncertainty, is useful information in supply chain planning. Now we know how much variability we need to plan for and can better understand the upside or downside risk involved. In addition, accurately predicting uncertainty can add enormous value. That’s because you are focusing on improving not only the average demand prediction, but the entire range of possible demand predictions including the extreme variability that has the biggest impact on service levels.

Your KPIs For Measuring Forecast Error Are Based On A False Assumption

Part of the problem with saying we are always wrong is that we measure our performance ineffectively. This is because our definitions of forecast error are too simplistic or misrepresented. Many people look at forecast accuracy as the inverse of forecast error, and that is a major problem. Most definitions of forecast error share a fundamental flaw: they assume a perfect forecast and define all demand variability as forecast error. The measures of forecast error, whether it be MAPE, WMAPE, MAD or any similar metric, all assume that the perfect forecast can be expressed as a single number.

I mentioned above that we can provide more information in a range of forecast probabilities and subsequently be more accurate. All we need now is a way to measure this and prove it. A metric which helps us measure the accuracy and value of these types of forecasts is Total Percentile Error (TPE). Borrowing Stefan de Kok’s definition, TPE “measures the reduction in error – rather than the increase in accuracy – since there is a stronger correlation between error and business impact than between accuracy and business effect.” For more detailed information about this calculation see Foresight Magazine’s Summer 2017 issue.

Nassim Nicholas Taleb described this type of forecast accuracy measurement in his book The Black Swan. He explains the difference between measuring a stochastic forecast (using probability distributions) and more traditional approaches (using a single point forecast). He states that if you predict with 20% probability that something will happen, and across many instances it actually happens 20% of the time, then the error is 0%. Naturally, the forecast would also need to be correct at every other percentile (not just the 20th) to be 100% accurate.

Bingo! We have officially proven everyone wrong and have our 100% accuracy.
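A simple way to check this kind of calibration in practice is sketched below (hypothetical data; not Taleb’s or de Kok’s exact method): group forecasts by their stated probability and compare each group against its observed hit rate.

```python
from collections import defaultdict

# Hypothetical history of (stated probability that demand falls in the range, did it?).
forecasts = [(0.9, True), (0.9, True), (0.9, False), (0.9, True),
             (0.5, True), (0.5, False), (0.2, False), (0.2, True)]

buckets = defaultdict(list)
for stated_p, hit in forecasts:
    buckets[stated_p].append(hit)

for stated_p in sorted(buckets):
    hits = buckets[stated_p]
    observed = sum(hits) / len(hits)
    print(f"stated {stated_p:.0%} -> observed {observed:.0%} over {len(hits)} forecasts")
# A well-calibrated forecaster sees observed frequencies close to the stated probabilities.
```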


Focus On The Process

Even though we should know there is no such thing as being “wrong”, we should still look at what we are measuring and incentivize the right behavior. Mean Absolute Percentage Error (MAPE) or Mean Percentage Error (MPE) will tell us how much variability there is and its direction, but they do not tell us whether the Demand Planning process is adding value. The question shouldn’t be whether we are right or wrong, but whether the steps we are taking actually improve the results and, if so, by how much.

Forecast Value Added (FVA) analysis can be used to identify if certain process steps are improving forecast accuracy or if they are just adding to the noise. When FVA is positive, we know the step or individual is adding value by making the forecast better. When FVA is negative, the step or individual is just making the forecast worse. [Ed: for further insight into FVA, see Eric’s guide to implementing FVA analysis in your organization.]
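As a rough illustration of how FVA is commonly computed (a minimal sketch assuming MAPE as the error metric and a naïve forecast as the baseline; the data and step names are hypothetical):

```python
def mape(actuals, forecasts):
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

actuals        = [100, 120, 90, 110]    # hypothetical demand
naive          = [95, 100, 120, 90]     # naive baseline, e.g. last period's actual
stat_forecast  = [105, 115, 95, 105]    # statistical model output
final_forecast = [98, 125, 88, 112]     # after planner overrides

fva_model    = mape(actuals, naive) - mape(actuals, stat_forecast)
fva_override = mape(actuals, stat_forecast) - mape(actuals, final_forecast)

print(f"FVA of the statistical model vs naive: {fva_model:+.1%}")
print(f"FVA of the planner overrides vs model: {fva_override:+.1%}")
# Positive FVA means the step made the forecast better; negative means it added noise.
```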

The obvious advantage of focusing on these types of metrics and KPIs is that we are not casting blame but discovering areas of opportunity, as well as identifying non-value-added activities. By eliminating the non-value-adding steps or participants from the forecasting process, those resources can be redirected to more productive activities. And by eliminating the steps that are actually making the forecast worse, you can achieve better forecasts with no additional investment.

I Beg Of You, Please Change Your Vocabulary!

At the end of the day, our goal is not necessarily to be precise but to make a forecast more accurate and reliable so that it adds business value to the planning process. We need to stop saying we are sorry for what is out of our control and start controlling what we know is possible. To do this, we must not only change our vocabulary but also change the way we are doing our jobs.

Most people are fixed on traditional forecasting process and accuracy definitions. The goal is for you to start thinking in terms of the probability of future demand. From there, you need to be the champion inside your organization to help others understand the value of what forecasts provide. You need to stop using the crutch of inevitable forecast error and start honest conversations about what we can predict and what we can control.
