forecast accuracy – Demand Planning, S&OP/IBP, Supply Planning, Business Forecasting Blog (https://demand-planning.com)

Tracking Forecasting Error With An Excel Model (With Free Download)
https://demand-planning.com/2022/04/22/tracker/ | Fri, 22 Apr 2022

Peter Drucker’s famous axiom “You can’t improve what you don’t measure” is particularly relevant to business forecasting. As Demand Planners, we want to measure our forecast performance so we can iterate and improve. Here I present an Excel-based Forecast Performance Tracker (free download available below) that you can use for your own error measurement.

There are various methods and metrics to track and assess forecast performance. A few of the most widely used are MAPE, WMAPE, MAD, MSE, RMSE, Bias, Tracking Signal, and Michael Gilliland's FVA (Forecast Value Added). Demand Planning teams monitor and report forecast performance through these metrics, and when doing so it is essential to know why the error occurred so the root cause can be addressed. There will always be a certain amount of innate volatility and variability in forecasts. And since the forecast is validated by human intervention and judgment, bias is always present to some degree.

Understanding the error enables us to make decisions that will reduce it. Forecast error is problematic for organizations – not only within supply chain/operations, but at an enterprise level. Though the steps we take in response to forecast errors are reactive, they help us reduce future errors.

Forecast error, simply defined, is the difference between actual demand (sales) and forecasted demand, usually expressed relative to actuals: Forecast Error = (Forecast – Actual) / Actual. Root Cause Analysis (RCA) of forecast error can be split into 3 classifications: Over Forecasting, Under Forecasting, and Product Unavailability. The following framework (Figure 1) gives an insight into these 3 RCA classifications.

Figure 1 | Root Cause Analysis Classification Model

The RCA Classification Model above details our 3 classifications of Over Forecasting, Under Forecasting, and Product Unavailability. The framework also indicates whether bias is negative or positive and, importantly, displays a few of the potential impacts on the business. There is one more factor to be aware of that isn't included in the figure – Random Variation. In cases of Random Variation, the error generally corrects itself.
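To make the classification concrete, here is a minimal Python sketch of the same idea. The tolerance band and the use of closing inventory as a stock-out signal are my illustrative assumptions, not the workbook's exact rules:

```python
def classify_rca(forecast, actual, closing_inventory, tolerance=0.05):
    """Classify the root cause of forecast error for one product/location/month.

    Illustrative rules only: the tolerance band separating Random Variation
    from a real miss, and zero closing inventory as a stock-out signal,
    are assumptions for this sketch.
    """
    if forecast == 0 and actual == 0:
        return "No forecast, no demand"
    base = actual if actual else forecast
    error = (forecast - actual) / base            # signed relative error
    if abs(error) <= tolerance:
        return "Random Variation"                 # generally self-corrects
    if error > 0:
        # Forecast exceeded sales: was demand really lower, or were sales
        # cut short because the shelf was empty?
        return "Over Forecasting" if closing_inventory > 0 else "Product Unavailability"
    return "Under Forecasting"                    # sales exceeded the plan

print(classify_rca(forecast=120, actual=100, closing_inventory=30))  # Over Forecasting
print(classify_rca(forecast=120, actual=90, closing_inventory=0))    # Product Unavailability
print(classify_rca(forecast=80, actual=100, closing_inventory=10))   # Under Forecasting
```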

Model To Track Root Cause Analysis Of Forecast Error

Over forecasting and under forecasting are widely discussed in the demand planning literature. However, I haven't seen much discussion of product unavailability. To track all three, I have prepared an Excel-based forecasting KPI tracker (see the snapshot below).

[CLICK TO DOWNLOAD THE FORECAST TRACKER]

The most important elements are Forecast, Actual Sales, and Inventory (closing) for the given forecasting period (month, week, etc.). For simplicity, we are using 2 products (P1, P2) and 3 locations (L1, L2 and L3). The forecasting horizon is monthly, from January to April. Other details like Sales Representative, Product segment, and Categories can be added as per your business requirements. The purpose is to monitor forecasting performance by product and location on a monthly basis. 

You'll also see the different error metrics: Error, Absolute Error, MAPE/WMAPE, Bias, Over Forecasting, Under Forecasting and Product Unavailability.

Screenshot of forecast tracker


In this tracker, when you add the monthly forecast, actuals, and inventory data, the rest of the report updates accordingly. All the data analytics are managed in Excel with formulas, pivot tables, and charts. 
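If you'd like to replicate the spreadsheet logic outside Excel, a minimal pandas sketch of the same roll-up might look like this (the column names and sample data are hypothetical, not the tracker's exact layout):

```python
import pandas as pd

# Hypothetical slice of the tracker: one row per product/location/month
df = pd.DataFrame({
    "product":  ["P1", "P1", "P2", "P2"],
    "location": ["L1", "L2", "L1", "L3"],
    "month":    ["Jan", "Jan", "Jan", "Jan"],
    "forecast": [100, 80, 150, 60],
    "actual":   [90, 95, 150, 40],
})

df["error"] = df["forecast"] - df["actual"]       # signed error
df["abs_error"] = df["error"].abs()               # absolute error
df["ape"] = df["abs_error"] / df["actual"]        # per-row % error (MAPE input)

# Roll up by month, the way the pivot tables do
g = df.groupby("month")
summary = pd.DataFrame({
    "MAPE":  g["ape"].mean(),
    "WMAPE": g["abs_error"].sum() / g["actual"].sum(),
    "Bias":  g["error"].sum() / g["actual"].sum(),
})
print(summary)
```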

Forecasting Performance Dashboard 

The model contains an interactive dashboard which, at the end of the month, can be used to share forecast error in demand planning/S&OP meetings as a standard report. The dashboard presents the data via effective visualizations that depict the narrative behind key performance indicators, including key insights and recommendations, on a single screen.

The most important component of the dashboard is the key insights and recommendations. Going into any meetings where the dashboard is used, Demand Planners should have a good understanding of the major forecast errors and be ready to facilitate discussion surrounding actionable steps to remedy the causes. The aim is for senior management to make informed decisions. 

Below (Figure 2) you can see the dashboard. The key features are the MAPE monthly trend and the top locations and products with the highest MAPE for the month. For example, location L1 is experiencing error from under forecasting, and this needs to be addressed in the meeting to identify a remedy. Location L2 is also facing under forecasting; to a certain extent this is correlated with product unavailability, since sales tried to compensate for the forecast target with available and on-demand products.

Figure 2 | Snapshot of forecast tracker dashboard


Benefits Gained From Forecasting Root Cause Analysis

As Arthur C. Clarke said, "I don't pretend we have all the answers. But the questions are certainly worth thinking about." This methodology enables exactly that – allowing you to measure forecast error and discuss root causes in a simple yet effective way. With insight into root causes, you can better optimize your supply responses and shape demand accordingly. Improved forecast accuracy will naturally follow.

 Key Takeaways 

1 – Demand Planners should demonstrate strategic value by bringing key insights and recommendations to facilitate informed decision-making.

2 – The purpose of such models is not to highlight the 'WHO' (any function/role area) but to effectively address the 'WHAT' (the cause of over or under forecasting).

3 – Art is an important trait for Demand Planners. They should convey the key insights, not just the data.

4 – Demand and supply variability is high these days, so be aware that forecast error improvement has a limit, as we have no control over external factors impacting demand.

5 – Estimating all the components of error from the demand history is not possible (or even appropriate). Uncertainty is intrinsic. 

6 – Demand Planners should persistently develop data analytics skills with a clear approach to storytelling instead of only providing reports based on convoluted mathematical formulas. 

7 – Emphasis on forecast accuracy numbers alone will result in bias. Hence, the focus should be on providing key highlights to the management team. The most consumed parts of the reports are the insights and recommendations sections, which enable businesses to make better decisions.

8 – As mentioned in my previous blog, Segmentation Framework For Analyzing Causal Demand Factors, Forecast Accuracy is not the goal but a means toward the larger goals of the enterprise. 

Do you find this model useful? Is there any further enhancement that could be made? I am open to hearing from you.

Do you want to understand the logic behind this forecasting performance tracker and dashboard in Excel? Contact me for a session – I will be happy to take you through the tracker and dashboard.

Connect with Manas on LinkedIn and follow him on Medium.  


For more demand planning insight, join us at IBF’s Global S&OP & IBP Best Practices Conference in Chicago from June 15-17. You’ll learn the ingredients of effective planning, whether you’re just getting started or are finetuning an existing process. Early Bird Pricing now open – more details here.

How To Improve Forecast Accuracy During The Pandemic?
https://demand-planning.com/2021/07/01/how-to-improve-forecast-accuracy-during-the-pandemic/ | Thu, 01 Jul 2021

Q) During the current pandemic we are facing a very difficult time in preparing forecasts. Our forecast accuracy is far below what it used to be. Can you suggest any way to improve it?

A) We are certainly in a new economic phase, something we have never experienced before. In the past we had disruptions either in supply or demand – not in both, as we are experiencing now. This may be short-lived, but we must deal with it, and that means changing the way we forecast.

Firstly, keep in mind that the sharp increases or decreases in sales are not outliers but a reflection of new data patterns. When an outlier repeats itself again and again, it is no longer an outlier but part of a new pattern. This means that old data is not relevant for future forecasts.

Secondly, you need to know how the data pattern is changing. The data pattern of many products has drastically changed, and the sooner we learn about it, the better. To learn about the change in patterns and respond quickly enough, we need to work not with monthly or weekly data but with daily data. Compute the percentage change in cumulative sales from one day to the next, and then compute the average weekly change. If the weekly percentage change is rising, the trend is upward; if it is falling, it is downward. We can use this trend to make a forecast for the next period. It may not be long before the pandemic is over, and with that, the pattern will change again. The weekly percentage change in sales will quickly tell us which way the data is trending, and how strongly.
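In code, the suggested calculation might look like the following rough sketch, assuming a running cumulative total carried in from the prior period (all figures are made up):

```python
import numpy as np

# Hypothetical data: cumulative sales stood at 5,000 units entering the
# period; below are the daily unit sales for the two weeks since then
daily_sales = np.array([120, 135, 128, 150, 160, 90, 80,
                        150, 170, 165, 190, 210, 120, 110])
cumulative = 5_000 + np.cumsum(daily_sales)
prior = np.concatenate(([5_000], cumulative[:-1]))      # yesterday's cumulative

daily_pct_change = (cumulative - prior) / prior * 100   # day-over-day % change

week1 = daily_pct_change[:7].mean()   # average daily change, week 1
week2 = daily_pct_change[7:].mean()   # average daily change, week 2
print(f"avg daily change: week 1 {week1:.2f}%, week 2 {week2:.2f}%")
print("trend:", "upward" if week2 > week1 else "downward")
```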

I hope this helps.

 

Happy forecasting!

 

Dr. Chaman L. Jain,

Editor-in-Chief,

Journal of Business Forecasting

Stop Worrying About Forecast Accuracy!
https://demand-planning.com/2021/06/04/stop-worrying-about-forecast-accuracy/ | Fri, 04 Jun 2021

“Ah, my forecast accuracy was bad last month because marketing added a promotion too late…”

 “My forecast accuracy is fantastic; I achieved 70% for month 3!”

How many times have you heard one of the above from your Demand Planners? And in each situation, what is your reaction? I'm guessing in the first scenario you might be sympathetic towards the planner, and in the second you might be excited for them!

If you find yourself in the first scenario, do not beat yourself up – it's not the end of the world. It happens. And in the second scenario, if you manage 70% accuracy, that's great!

Why Do We Create Forecasts?

But have you ever wondered why we build a forecast? Is it only for the sake of having good forecast accuracy? Let’s dive into why we need forecast accuracy as one of our KPIs.

In my opinion, we need forecast accuracy as a KPI for 2 reasons:

  1. We need to measure the result of our consensus forecast vs. what happens.
  2. Once we find the measurement, we need it to be easy enough for other stakeholders to understand.

Thus, as a result, we have MAE, MAPE, or WMAPE to measure how good our consensus forecast is vs. reality. These KPIs are commonly used to communicate the result to stakeholders and inform them how accurate we are, accompanied by forecast bias to explain our tendency for either over-forecasting or under-forecasting.

And since Demand Planners build the forecast and lead the consensus meeting which results in the validated forecast, naturally forecast accuracy becomes part of their KPIs (and in many cases, also their bonus). This explains why Demand Planners are always very sensitive when it comes to the topic of forecast accuracy.

For some companies, this accuracy is also part of the business stakeholders’ KPIs. I personally agree with this because it is a good thing to make everyone aware that their decisions will have an impact on the wider business.

The Ugly Side Effects Of Forecast Accuracy

Now, let’s look at what I call “the ugly side effects” of forecast accuracy. I have seen some cases where Planners are doing too much or being too rigid for the sake of forecast accuracy, or even being judged based on this KPI alone.

1. The forecast is adjusted to deliver accuracy or “I do not trust the statistical model…”

I have seen this happen too often; the statistical model is overwritten with massive adjustments in an attempt to achieve good accuracy! If you are a planner and you are still doing this, before you continue, ask yourself (please!): "Is the adjustment meaningful? How much time will I spend doing this, and what will the percentage accuracy gain be?"

In the majority of scenarios, the adjustment is not that meaningful: you will not gain much accuracy, and you will spend a lot of time making those adjustments. Next time, before you do this, please think about these points.

And if you need to do this because you do not trust your forecasting tools, I suggest you spend time understanding how the forecast is derived rather than continuing to override it.

2. "But we can't bring the launch/promo forward, it will hurt my KPIs"

I'm sure you have seen this scenario: Demand Planners arguing with Sales or Marketing about shifting a launch or promotion because it will hurt KPIs!

Demand Planners, again I understand your reaction. But whenever this scenario plays out in your meeting, rather than argue about the KPIs (trust me, it is not an interesting thing to argue about), ask your business stakeholders the 'why' questions. Why do we need to bring the promo/launch forward? Why do we need to postpone it? By doing this, you will get their point of view and why they are proposing this course of action. Based on their answer, you can judge whether it is reasonable. What would be the risk/opportunity for us here? What are the consequences?

From my experience, when those scenarios occurred, there was usually a valid reason: a launch delayed because marketing needed to rework the media supporting it, perhaps because the current creative did not score well with test consumers, or because bringing a specific campaign forward would genuinely help.

You can accept their point of view as valid and support it on the basis that it helps the overall business, while explaining the consequences for KPIs and inventory. Alternatively, you may disagree that it will benefit the business, or conclude that it cannot be decided right away and the GM's approval is needed.

Remember, always think bigger! Think in terms of impact to the business, not from a KPI perspective only. So, be a bit braver: ask "why?" and, based on the answer, work back to "Can this be supported, and what would the consequences be?" and "What would this mean in terms of risk/opportunity?"

3. “You must be a bad planner… I can tell by looking at your forecast accuracy”

Worst one ever! Do not associate a bad result with a bad Planner. There could be a lot of factors explaining why forecast accuracy is bad, other than a 'bad Planner'. Do not be too quick to judge.

After all, how can we tell what good accuracy is? Benchmarks vs. industry trends? Internal benchmarks? For me, those benchmarks are only half the picture. I will always look at forecastability to set my own expectations of what good looks like for a particular brand. For example, 50% for some brands might be all we can get.

From my own experience, I worked with one Planner for our make-up portfolio whose forecast accuracy on average was around 55%. Is she a bad planner? Oh, my goodness no! She is one of the best Planners I have ever met!

To explain why it is very tough to achieve above 55% accuracy, we did a variability calculation (not only for her portfolio, but for our total division). From there, we were able to understand, based on variability alone, the best possible accuracy for each brand in our division. It might be 70% for some brands, while for hers, 55% was the best we could get.
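As a rough sketch of that kind of calculation, one can use the coefficient of variation of demand as a ceiling on achievable accuracy. The 1 – CV proxy below is an illustrative simplification, not necessarily the exact method we used:

```python
import statistics

def accuracy_ceiling(history):
    """Rough ceiling on forecast accuracy from demand variability alone.

    Uses 1 - coefficient of variation as a crude proxy; an illustrative
    assumption rather than a standard formula.
    """
    mean = statistics.mean(history)
    cv = statistics.stdev(history) / mean          # relative variability
    return max(0.0, 1.0 - cv)

stable_brand = [100, 105, 98, 102, 101, 99]        # low variability
makeup_brand = [60, 150, 90, 40, 170, 110]         # high variability
print(f"stable brand ceiling: {accuracy_ceiling(stable_brand):.0%}")   # ~98%
print(f"make-up brand ceiling: {accuracy_ceiling(makeup_brand):.0%}")  # ~51%
```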

Back to her story – how did she compensate for her ‘low’ forecast accuracy? She did a great job in building her safety stock parameters and monitoring stock (managing excess/obsolete inventory) and worked closely with her major accounts which resulted in an improvement of her portfolio’s service level. And oh, the best part of the story? She managed to turn Marketing’s mindset from “Why even bother, our accuracy is bad anyway” to “Let’s do our forecast meeting so we can run our brand and serve our accounts”.

For me, as her manager, that was the highest compliment ever! When our partners are motivated to come to our forecast meetings and want to have conversations with us, we are doing something right and adding value to the business!

So, What Is The Point Of Our Forecasts?

If we take a step back, what are the points of building this forecast? The answers are surprisingly simple:

1. To facilitate decision making

Really, if you think about it, what is the main objective of all those forecast meetings you are having? It is for everyone to align on the forecast based on scenarios we have all agreed upon. That is the first impact of our forecast. Then, based on that forecast, we plan our supply needs, which are translated into production planning (and raw material planning on the factory side). Eventually, there is also an impact on logistics, such as transportation planning and warehouse inbound activity.

All those decisions are taken from the forecast you build!

When I put it that way, I hope you now see that the time you spend adjusting those forecasts for an x% gain in accuracy might not be that significant. You might improve the accuracy and you might improve your supply planning, but only to a limited extent. We could spend that time doing something else, like reviewing safety stock or checking whether the monthly ordering strategy is the best one for the warehouse inbound team. For example, if we receive the same item 4 times by the layer, and 4 layers make one pallet, is it possible to switch to receiving it as one pallet?

If you address these points, you will have used your forecast in the best way imaginable; to facilitate decision making and enable efficient supply chain planning. So please remember that the forecast is built to serve this purpose.

2. To Ensure Customer Satisfaction

This is the next point – after all this planning, we want the customer to be happy, i.e. to ensure the stock is there to serve our customers' demand. This is where other KPIs such as Service Level and On Time Fill Rate are used to measure our success as Demand Planners.

In conclusion, Demand Planners: the next time, before you make an adjustment to your forecast or get into a heated argument with Sales or Marketing over a topic that could hurt your accuracy, please stop, pause for a while, and think: "How does this fit into the bigger context? Can we afford this? What are the consequences and the risk/opportunity? Is it worth doing?"

Remember, forecast accuracy is just a way to measure and communicate the decisions we made in our consensus forecast. We have a bigger purpose in forecasting: to enable decision making (including efficient supply chain planning!) and to guarantee customer satisfaction. So, think bigger! Think Supply Chain 😊!

 

Why Forecast Accuracy Is Hiding The Truth About Performance
https://demand-planning.com/2020/07/20/why-forecast-accuracy-is-hiding-the-truth-about-performance/ | Mon, 20 Jul 2020

Having worked in demand planning, S&OP and supply planning for several years, I have found that organizations often try to improve forecasts as a means to improve the overall supply chain. Indeed—looking at the pure math—improved forecast accuracy enables us to reduce end-to-end variability and optimize inventory and production efficiency. But the consistent focus on optimizing item accuracy takes our attention away from the real issues, i.e., our ability to manage variations and uncertainty to add value to our supply chain.

Regardless of our ability to improve accuracy by a certain number of points, we will still fall short of 100%. The fact is we pay far less attention to driving business value than we do to hunting for the next few points of improvement. In all the companies I have worked for, and the many companies I worked with during my years in consulting, I have yet to come across business processes that manage uncertainty in the forecast in an efficient way.

Using forecast accuracy as an input to the size of buffers needed and to assess the uncertainty in the plans is well established in many companies. However, in the operational execution, the acknowledgement of variability is absent. We still discuss the largest deviations between the forecast and actuals as opposed to deviations that are out of range. With that, our planning systems take us back to the target inventory level even though the consumption of our inventory may be perfectly in line with the expected variability, which may be different from forecasts. This kind of strategy causes a long sequence of changes all the way through the value chain, resulting in the well-documented bullwhip effect.

From a strictly forecasting perspective, measurement of forecast accuracy provides very little insight to improve it. Often, the way to improve accuracy is out of the hands of those working with the forecast. Accuracy is often affected by the way we incentivize our customers with different payment terms and shipping charges. I see three important steps that are needed to improve the way we manage uncertainty of the forecast.

Focus On Bias Rather Than On Accuracy

If we want to improve our forecasts, we should focus on forecast errors that are systemic such as forecast bias. Bias in a forecast is very harmful to the value chain. Measuring bias will help the business call out incorrect use of statistical models, where the sales history needs cleansing, and where qualitative “intelligence” that is manually added to the forecast is not intelligent enough. Bias is a main source of forecast error, which can be taken care of.

Using Segmentation For Allocating Forecasting Resources

Another way of dealing with accuracy is to assess the error against the natural variability in sales. A portfolio of items can be classified into segments based on their historical variability. Based on that, we can determine an expected level of error. The best way, therefore, is to chase errors that exceed our expected levels, and not ones that have large deviations. Say we have two items that are selling on average at a rate of 100 units per week. Item A has a historic variation of 30% and item B has a historic variation of 75%. Let’s say we sold 140 units of item A and 150 units of item B last week. Traditionally, we would investigate item B because the deviation of 50 units is larger than that of item A. Knowing that historically item A has a low variation and item B has a high variation, we should spend more time on understanding what happened to item A and investigate whether it needs re-forecasting, and not on item B which performed well within expectation.
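That range check is a one-liner in code. A minimal sketch using the two items above:

```python
def needs_review(actual, average, variation_pct):
    """Flag an item for re-forecasting only when the deviation exceeds the
    item's own historical variability band (illustrative logic)."""
    deviation_pct = abs(actual - average) / average
    return deviation_pct > variation_pct

# Both items average 100 units/week
print(needs_review(actual=140, average=100, variation_pct=0.30))  # True: item A is out of range
print(needs_review(actual=150, average=100, variation_pct=0.75))  # False: item B performed within expectation
```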

Further, we should pay more attention to high value items, and less to others. As shown in Figure 1, we can expect higher deviations in products in segments 1 and 4, and lower deviations in segments 2 and 3. By looking at the volume (or revenue) as opposed to only deviations, we can identify the most important items to concentrate on. We don’t need to spend time on items in segment 4 (“exception only”) because they are of low importance to the business and have high natural deviations. Assessing deviations out of range provides an opportunity to refine our statistical models (as in the case of segment 2), as well as suggest how we can further improve the forecast of products in segment 1 by collaborating more with our stakeholders such as sales representatives.

Don’t Change Operational Planning If Deviations Are Within The Expected Range

Most planning systems work by trying to meet the target inventory. In that case, the plan is changed every time there is a change in actual demand compared to expected demand, no matter how small it is. In the network of inventory points and production sites, these little changes add up to much bigger changes. The cost associated with these changes throughout the value chain is in most cases not measured and accounted for.

Therefore, the best strategy is to have a system that stops re-planning whenever inventory is within an acceptable range. In supply chain planning, it is often seen as an issue when actual inventory on hand is below target safety stock. Safety stock is meant to take care of uncertainty. If it does not, we are not using safety stock properly. Instead, we are biasing our planning system and unnecessarily putting pressure upstream. Every time inventory goes below a threshold, the system tries to replenish it. To avoid this, we should have a planning system that, like a forecast range, uses inventory target ranges instead of fixed inventory targets. In other words, when the projected inventory is within certain limits, no change to the plan is necessary. When it exceeds the expected limits, as shown in Figure 2, the plan may have to be changed.
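A minimal sketch of such range-based re-planning logic follows; the bounds are illustrative and would in practice be derived from the expected variability:

```python
def replenishment_order(projected_inventory, target, lower, upper):
    """Order only when projected inventory leaves the acceptable range.

    Inside the range the plan is left untouched, avoiding the small
    corrections that snowball into the bullwhip effect.
    """
    if lower <= projected_inventory <= upper:
        return 0                                      # within range: no re-planning
    return max(0, target - projected_inventory)       # out of range: replan to target

print(replenishment_order(projected_inventory=85, target=100, lower=70, upper=130))  # 0
print(replenishment_order(projected_inventory=55, target=100, lower=70, upper=130))  # 45
```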

By using these three steps we can not only manage uncertainty more efficiently but also reduce cost. Some planning approaches, such as Kanban (reorder point planning) and pull-based decoupling points, are good ways of reducing overreaction to variability in operational planning. However, we do need a way to manage planning within ranges of forecast as well as inventory. To start reaping the rewards of managing uncertainty in this way, we must change how we think about variability and stop pointlessly chasing forecast accuracy.

 

This article was originally published in the Winter 2018/2019 issue of the Journal of Business Forecasting. Subscribe to get it delivered to your door quarterly, or become a member and get subscription to the journal plus discounted events, members only tutorials, access to the entire IBF knowledge library, and more.

8 KPIs Every Demand Planner Should Know
https://demand-planning.com/2020/06/01/8-kpis-every-demand-planner-should-know/ | Mon, 01 Jun 2020

Without KPIs, it is impossible to improve forecast accuracy. Here are 8 highly effective metrics that allow you to track your forecast performance, complete with their formulas.

Forecast Accuracy

This KPI is absolutely critical because the more accurate your forecasts, the more profit the company makes and the lower your operational costs. We choose a particular forecasting method because we think it will work reasonably well and generate promising forecasts, but we must expect that there will be error in our forecasts. This error is the difference between the actual value (Dt) and the forecast value (Ft) for a given period. Accuracy is measured as:

 Forecast Accuracy: 1 – [ABS (Dt – Ft) / Dt]

Where,

Dt: The actual observation or sales for period t

Ft: The forecast for period t

Our focus with this KPI is to provide insights about forecast accuracy benchmarks for groups of SKUs rather than to identify the most appropriate forecasting methods. For example, achieving 70-80% forecast accuracy for a newly-launched and promotion-driven product would be good considering we have no sales history to work from.

SKUs with medium forecastability (volatile, seasonal, and fast-moving SKUs) are not easy to forecast owing to seasonal factors like holidays and uncontrollable factors like weather and competitors' promotions. Nevertheless, their benchmark is recommended to be no less than 90-95%.

Tracking Signals

Tracking signals (TS) quantify bias in a forecast and help demand planners understand whether the forecasting model is working well. The TS for each period is calculated as:

 TS: (Dt- Ft) / ABS (Dt – Ft)

Where,

Dt: The actual observation or sales for period t

Ft: The forecast for period t

Once it is calculated for each period, the numbers are added to give the overall TS. When a forecast, for instance, is generated over the last 24 observations, a forecast history totally void of bias will return a value of zero. The worst possible results would be +24 (consistent under-forecasting) or -24 (consistent over-forecasting). Generally speaking, a forecast history returning a value greater than +4.5 or less than -4.5 is considered out of control. Therefore, without considering the forecastability of SKUs, the benchmark for TS is between -4.5 and +4.5.

Bias

Bias, also known as Mean Forecast Error, is the tendency for forecast error to be persistent in one direction. The quickest way of improving forecast accuracy is to track bias. If the bias of the forecasting method is zero, it means that there is an absence of bias. Negative bias values reveal a tendency to over-forecast while positive values indicate a tendency to under-forecast. Over the period of 24 observations, if bias is greater than four (+4), forecast is considered to be biased towards under-forecasting. Likewise, if bias is less than minus four (- 4), it can be said that the forecast is biased towards over-forecasting. In the end, the aim of the planner is to minimize bias. The formula is as follows:

Bias:  [∑ (Dt – Ft)] / n

Where,

Dt: The actual observation or sales for period t

Ft: The forecast for period t

n: The number of forecast errors

Forecaster bias appears when forecast error is in one direction for all items, i.e. they are consistently over- or under-forecasted. It is a subjective bias due to people building unnecessary forecast safeguards, like increasing the forecast to match sales targets or division goals.

Considering the forecastability level of SKUs, the bias of low forecastability SKUs can be between (-30) and (+30). For medium forecastability SKUs, since their accuracy is expected to be between 90-95%, bias should not be less than (-10) nor greater than (+10). For high forecastability SKUs, due to their moderate contribution to the total, bias is not expected to be less than (-20) or greater than (+20). The less bias there is in a forecast, the better the forecast accuracy, which allows us to reduce inventory levels.

Mean Absolute Deviation (MAD)

MAD is a KPI that measures forecast accuracy by averaging the magnitudes of the forecast errors. It uses the absolute values of the forecast errors in order to avoid positive and negative values cancelling out when added up together. Its formula is as follows:

MAD: ∑ |Et| / n

Where,

Et: the forecast error for period t

n: The number of forecast errors

MAD does not have specific benchmark criteria to check the accuracy, but the smaller the MAD value, the higher the forecast accuracy. Comparing the MAD values of different forecasting methods reveals which method is most accurate.

Mean Square Error (MSE)

MSE evaluates forecast performance by averaging the squares of the forecast errors, which removes all negative terms before the values are added up. Squaring the errors achieves the same outcome as taking absolute values, since the square of a number is always non-negative. Its formula is as follows:

MSE: ∑(Et)² / n

Where,

Et: forecast error for period t

n: the number of forecast errors

 

Similar to MAD, MSE does not have a specific benchmark to check accuracy, but the smaller the MSE value, the better the forecast model, which means more accurate forecasts. A distinguishing feature of MSE is that squaring the errors gives more weight to large forecast errors.

Mean Absolute Percentage Error (MAPE)

MAPE is expressed as a percentage of relative error. MAPE expresses each forecast error (Et) value as a % of the corresponding actual observation (Dt). Its formula is as follows:

MAPE: (∑ |Et / Dt| / n) * 100

Where,

Dt: Actual observation or sales for period t

Et: the forecast error for period t

n: the number of forecast errors

Since the result of MAPE is expressed as a percentage, it is understood much more easily than other metrics. The advantage of MAPE is that it relates each forecast error to its actual observation. However, series with very high MAPE values may distort the average MAPE. To avoid this problem, SMAPE is offered, which is addressed below.

Symmetrical Mean Absolute Percentage Error (SMAPE)

SMAPE is an alternative to MAPE when there are zero and near-zero observations. Low volume observations mostly cause high error rates and skew the overall error rate, which can be misleading. To address this problem, SMAPE comes in handy. SMAPE has a lower bound of 0% and an upper bound of 200%. Note that it does not treat over-forecasts and under-forecasts equally. Its formula is as follows:

SMAPE: 2/n * ∑ | (Ft – Dt) / (Ft + Dt)|

Where,

Dt: Actual observation or sales for period t

Ft: the forecast for period t

n: the number of forecast errors

Similar to other models, there is no specific benchmark criteria for SMAPE. The lower the SMAPE value, the more accurate the forecast.

Weighted Mean Absolute Percentage Error (WMAPE)

WMAPE is an improved version of MAPE. Whereas MAPE weights every observation equally regardless of its importance, WMAPE weights errors by the volume or value of actual sales. When generating forecasts for high value items at the category, brand, or business level, this matters: high value items will influence the overall error, and they are highly correlated with safety stock requirements and the development of safety stock strategies. Its formula is as follows:

WMAPE: ∑(|Dt-Ft|) / ∑(Dt)

Where,

Dt: The actual observation for period t

Ft: the forecast for period t

Like other techniques, WMAPE does not have any specific benchmark. The smaller the WMAPE value, the more reliable the forecast.
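For convenience, here is a compact Python sketch that computes all eight KPIs on a single series. Note two simplifications: the accuracy line uses the aggregate 1 – WMAPE form rather than the per-period formula given earlier, and zero actuals are assumed away:

```python
def forecast_kpis(actuals, forecasts):
    """Compute the eight KPIs from this article for one series (a sketch)."""
    errors = [d - f for d, f in zip(actuals, forecasts)]       # Dt - Ft
    abs_errors = [abs(e) for e in errors]
    n = len(errors)
    return {
        "accuracy": 1 - sum(abs_errors) / sum(actuals),        # aggregate form
        "tracking_signal": sum(e / abs(e) for e in errors if e != 0),
        "bias": sum(errors) / n,
        "MAD": sum(abs_errors) / n,
        "MSE": sum(e ** 2 for e in errors) / n,
        "MAPE": sum(ae / d for ae, d in zip(abs_errors, actuals)) / n * 100,
        "SMAPE": 2 / n * sum(abs((f - d) / (f + d))
                             for d, f in zip(actuals, forecasts)) * 100,
        "WMAPE": sum(abs_errors) / sum(actuals) * 100,
    }

actuals = [100, 120, 90, 110]
forecasts = [95, 130, 85, 100]
for name, value in forecast_kpis(actuals, forecasts).items():
    print(f"{name}: {value:.2f}")
```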

 

Do Forecasts Have To Be Right To Be Valuable?
https://demand-planning.com/2019/06/12/do-forecasts-have-to-right-to-be-valuable/ | Wed, 12 Jun 2019

"I hate this part of my job – there's no way to be right." I'd had the same thought many times.

I was talking to a fellow demand planner, who was under pressure to provide more accurate forecasts. It was a no-win situation.

“So stop trying to be right”, I told her. “Present your forecast, document and clearly outline your assumptions and ask for feedback from your team. Let the group own the decision.”

“I’ve never thought of that”, she replied. “It would take the pressure off me and get the team to collaborate.”

Which is exactly what I wanted her to see.

Being Right Is A Trap

It’s a standing joke that in forecasting you are either wrong or you are lucky. Yet there is often pressure on both salespeople and demand planners to provide accurate forecasts, especially for revenue and production planning. But forecasting is an inexact process. If we were correct all the time, we would not call the work forecasting, but prediction.

In my experience a large part of being a demand planner is educating people about what is and is not possible to forecast. We also need to help set reasonable expectations around what can be accomplished by forecasting.

How Being Wrong Can Be Good

“We missed our forecast again. Why aren’t our forecasts improving?”

I hear this often. I see 2 basic causes for this.

1 – We don’t understand the data that we are using to forecast, usually because we haven’t taken the time to analyze it properly. Do we understand the seasonality and trends in the data? Have we calculated the variability? Have we identified the items that are truly unforecastable or that have such extreme seasonality that they have to be handled manually, and removed these from our forecast models?

Demand planning is mostly about data, and the more we are familiar with what our data is telling us about the business overall and the item performance in specific, the more likely we are to be able to forecast accurately.

2 – We are not close enough to our customers to understand how they plan, so that we can align our forecasting with their planning. It’s not enough to rely on the salespeople talking to their buyers; we need the demand planners talking to the replenishment, planning, logistics and customer service teams on the customers’ side as well. We need a variety of perspectives on our customers’ business and planning practices in order to be able to align our process to support them.

So how can being wrong be good? Each time we miss a forecast we need to reassess our interactions with our customers and our analysis of our data. At the same time we need to be realistic and recognize that sometimes forecasts are wrong due to unusual circumstances: hurricanes strike, rivers flood, containers fall off ships or get lost, dockworkers strike, and freight ships to the wrong locations. Some causes are truly beyond our ability to predict, and therefore to forecast.

Forecasts Don’t Have To Be Right To Be Useful

In my opinion one of the major contributions we can make to our business is to help people understand the value of forecasting. We need to continually remind people that a forecast is not going to be 100% accurate, and that it does not need to be 100% accurate to be useful. We need to help the sales teams and the leadership see that even inaccurate forecasts can be useful.

How do we do this?

Document, Document, Document

This is an area where I believe most demand planners are deficient – including myself. We can’t learn how to improve our forecasting unless we consistently analyze our failures, that is, understand what drives our misses. Unless we understand what caused us to miss a forecast, we will likely repeat the error. This analysis can be tedious and time consuming, but it is also where we can add the most value.

An example from my own current work is the forecasting for an item that is selling better than either I or my customer predicted. We are both struggling to understand how this particular item has outperformed all our forecasts for the past 3 months, and what we should forecast for the coming months. By documenting what we have learned we now know the following:

  • Last year we had significant supply issues, so our sales history – which is driving our forecasts – is severely skewed.
  • The item is being promoted this year, and this drove additional sales volume.
  • Comparable products are priced higher this year, while this item has the same retail as last year.
  • The customer is carrying higher inventory levels this year.

This information will help us understand what is happening this year, and will help us improve our forecasts for next year. It can also help us better understand the interaction of in-stock, promotional activity, competitive pricing and monthly inventory levels.

Documentation Does Not Equal Excuses

Even if we can document the causes of our forecast misses, this does not excuse us from improving our forecasts. Production efficiency, fill rates, on-time delivery and customer satisfaction all depend on reliable forecasts. The purpose of documentation is to learn how to improve our forecasting practices, not to excuse poor performance.

So What Should I Document?

Here’s what I recommend for regular reporting:

  • Weekly sales and comp sales (customer POS)
  • Weekly shipments
  • Monthly unit consumption vs. forecast
  • Daily short shipments
  • Promotions (item, quantity sold vs. non-promotional periods, time period and promotional pricing and rebates)
  • Monthly forecast misses in units

For your monthly S&OP meetings I would also add:

  • Risks – factors that can restrict sales (i.e., supply issues, raw material shortages, delays in product availability, etc.)
  • Constraints – production capacity, labor shortages, packaging issues, etc.
  • Assumptions – the overall assumptions behind your planning (is the business growing? Has the seasonality shifted? Is the customer carrying more or less inventory than in previous years?)
  • Opportunities – changes that can add incremental sales (new items, promotional opportunities, product expansions to more locations, decreased competition, rebates, etc.)

Document these in detail and review them each month so you can track their impact on item performance.

All Plans Are Wrong – Some Are Useful

The more we can shift the focus from forecasts being “right” or “correct”, and emphasize what we can learn from our incorrect forecasts, the more we can learn about how to properly forecast our business. As demand planners our task is to present the most realistic picture of future demand, together with the risks, assumptions, constraints and opportunities that are contained within our numbers. And by educating our fellow team members how to make the most of “bad” forecasts, we can add significant value to our company’s planning processes.

For world-leading insight into forecasting and planning from industry leaders, join us in Orlando for IBF’s Business Planning, Forecasting & S&OP Conference from October 20-23, 2019. Learn from directors and SVPs of planning and supply chain at the world’s biggest and most innovative companies, and hear our keynote, Garry Ridge, CEO of WD-40, share his thoughts on leadership, employee engagement and more.

 

Forecast Accuracy Benchmarking Is Dead (Long Live Forecastability)
https://demand-planning.com/2019/04/15/forecastability/ | Mon, 15 Apr 2019

How valuable is forecast accuracy benchmarking? It's always interesting to see how your competitors are faring, but does knowing other companies' forecast accuracy help improve your own, and does it help to set realistic forecast accuracy targets for your demand planners?

I think there are some lessons we can learn from Montessori here. My 10 year old daughter has attended a Montessori school for over 5 years with amazing results. As a parent, I have always loved their approach and we have seen the philosophy in action. My daughter has blossomed over the last few years.

“A student’s progress should be measured in terms of the questions they are asking, not merely by the answers that they are reciting” (Robert John Meehan).

If you are not familiar with the Montessori method, its foundation is on self-directed learning. Students are free to choose the activities they work on from a range of carefully prepared, developmentally appropriate materials. One of the things that other parents find surprising about Montessori is the fact that they do not give “tests.” The idea behind the test-less approach is not about creating a careless environment but one where each child is recognized as different, and where self-motivation and mastery at their level is the focus.

The same approach can be applied to demand planning. This approach is destroyed by using forecast accuracy benchmarks.

The (Severe) Limitations Of Forecast Accuracy Benchmarking

With this in mind, I was sitting on the couch listening to my daughter talk about the wonderful day she had and how she loves school when I opened an email. The sender was asking me for forecast accuracy benchmarks. I get this question a lot and my answer is always the same:

The best benchmark is no benchmark. Stop trying to benchmark forecast accuracy against an industrial average!

Far too often I see annual goals (and even bonuses) tied to an arbitrary number based on what someone else at another company is achieving. We treat forecast error and demand uncertainty as a monthly pass-or-fail test, measuring against what everyone else is doing.

The obvious truth is, even within the same industry, the items or item combinations are different, the time horizon you are forecasting may vary, market share can impact volume and variation, and a host of other factors like systems and operational limitations and data lead to different levels of forecast error.

Using forecast accuracy benchmarks to set your own targets is like comparing an apple to a grapefruit.

The dirty little secret is that items are different, companies are different, and demand uncertainty should be expected to be different. Using forecast accuracy benchmarks to set your own targets is like comparing an apple to a grapefruit.

Use Forecastability Instead To Set Your Accuracy Targets

Many times, the companies at the top of the benchmark list are there not because they’re the best at forecasting but because they have the easiest demand to forecast. They could be forecasting a lag zero with only 12 items. We need to look at what the individual planner is trying to forecast and the forecastability of that particular item based on its own merits. (Learn how to gauge forecastability here.)

The typical approach is to look at averages – so 30% WMAPE is good, right? If this is the attitude in your planning team, your demand planners will realize they can do a lot less work for equal results and never reach their full potential. The forecastability of your items could be above average, but this benchmarking mindset won't allow you to improve.

Using forecast accuracy benchmarks sets up demand planners for failure, or stops them from trying to improve

What if you have much more difficult items to forecast, and forecastability is lower than average? You give the demand planner unrealistic goals and set them up for failure.

This is about understanding where each product line, item, or customer is different, and having self-motivation and mastery at the center of your approach. A better way is to benchmark the underlying forecastability of the demand patterns and measure improvements against their own baselines. To do this you can focus on forecast value added (FVA%) measured against a naïve model, or a demand variation index (DVI) of the same data.

FVA Allows You To Improve Forecast Accuracy

The question shouldn't be whether you pass or fail, but whether the steps we take improve the results, and by how much. Measuring FVA begets managing forecast processes because FVA adds visibility into the inputs and provides a better understanding of the sources that contributed to the forecast, so one can manage their impact on the forecast properly.

Companies can use this analysis to help determine which forecasting models, inputs, or activities are either adding value or are actually making it worse. You can also use FVA to set targets and understand what accuracy would be if one did nothing, or what it could be with a better process.
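A minimal sketch of the FVA calculation against a naïve model (all numbers made up):

```python
def mape(actuals, forecasts):
    return sum(abs(d - f) / d for d, f in zip(actuals, forecasts)) / len(actuals)

def forecast_value_added(actuals, process_forecasts, naive_forecasts):
    """FVA: MAPE points removed by the process vs. a naive benchmark.

    Positive = the process adds value; negative = the naive model wins.
    FVA can also be measured step by step through the process.
    """
    return mape(actuals, naive_forecasts) - mape(actuals, process_forecasts)

actuals = [100, 110, 95, 120]
naive = [98, 100, 110, 95]          # e.g. last period's actual carried forward
consensus = [105, 108, 100, 115]    # output of the full demand planning process
print(f"FVA: {forecast_value_added(actuals, consensus, naive):+.1%}")  # +7.9%
```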

Most of all, FVA encourages mastery and grades you on what you can do or have done, instead of what some unknown company with unknown forecastability has done. Benchmarks done incorrectly against industry averages can only tell us what accuracy the so-called best-in-class companies achieve; they do nothing to test your individuality and what you are capable of doing.

Maria Montessori’s approach is powerful and universal. She would have us not constantly testing ourselves against an arbitrary average, but have us focus on our own individual forecasting processes, and reach mastery in the process.

 

 

 

How Can I Aggregate Demand & Maintain Visibility At Customer Level?
https://demand-planning.com/2019/03/06/forecast-accuracy-aggregation/ | Wed, 06 Mar 2019

Question

Dear Dr. Jain,

We are currently working on improving our forecast accuracy. One idea is to create an 'all other' group for the smaller customers with low volume. While it improves forecast accuracy, my concern is the visibility we lose at that lower level downstream. For instance, each customer gets its own labels. With an 'all other' group, how do we trigger procurement to buy the correct components, and how does supply know where to make the item if 20 customers are now one entity? Any ideas on how that can work? Has anyone else used this method, and how did they make it work?

Answer

It is true that forecast accuracy improves as we forecast at a higher level of aggregation, but by doing so we lose visibility at the lower level. There is a way to get around this, though. For procurement, we certainly need forecasts by customer. The best way is to break down the aggregate forecast into customers using rolling percentage shares of each customer, which can be computed from the sales data of the last 9 or 12 months. The rolling percentages tell us, on average, what percentage of sales comes from each customer. By applying these percentages to the aggregate forecast, we can get forecasts for each customer. This is not unusual – many companies do this. The only difference is that they apply it to SKUs that are difficult to forecast at that level: they prepare a category level forecast, and then use these percentages to arrive at SKU level forecasts. How far back to go in calculating the percentages depends on how quickly the percentages change.
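A minimal sketch of the rolling-share breakdown (the customer names and figures are hypothetical):

```python
# Hypothetical sales by customer over the last 12 months (units)
sales_12m = {"Customer A": 5400, "Customer B": 3000,
             "Customer C": 1200, "All other": 2400}

total = sum(sales_12m.values())
shares = {c: s / total for c, s in sales_12m.items()}     # rolling % shares

aggregate_forecast = 1150   # next month's forecast at the aggregate level
customer_forecasts = {c: round(aggregate_forecast * p) for c, p in shares.items()}
print(customer_forecasts)   # {'Customer A': 518, 'Customer B': 288, ...}
```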

I hope this helps.

Dr. Chaman Jain,

St. John’s University

 

Do you have a demand planning, forecasting or S&OP question for Dr. Jain? Then submit it here. All questions are reviewed and receive a response. 

How Much Does Forecasting Software Cost, & How Much Will It Save?
https://demand-planning.com/2018/07/12/how-much-does-forecasting-software-cost/ | Thu, 12 Jul 2018

In the last blog we began our discussion on selecting the right forecasting and demand planning software to fit your organization's needs, starting with understanding your requirements before you talk to vendors and what to include in your list of requirements.

When it comes to forecasting software, it should be noted that popular doesn’t always equate to quality, and an ERP or an advanced planning system that claims to do it all may not have the forecasting tools you need. There are plenty of specialized and lesser-known products that deliver brilliant results. But how do you narrow them down, and what does something like that cost?

Most companies making the decision to move to a new forecasting/planning system are primarily driven by one or more of the following:

  • Obvious forecast accuracy challenges
  • A highly variable process that requires dedicated technology to support
  • Detail-level forecasts required to support a more efficient manufacturing or distribution system
  • Downstream inventory problems that are clearly driven by demand variability
  • An attempt to drive more cooperation and ownership between sales and operations through a consensus-based forecast

Figure Out How Much You Can Save With Improved Forecast Accuracy

A mountain of research today shows that improving forecast accuracy delivers a high ROI. Improved forecast accuracy, when combined with software that translates the forecast into meaningful actions, will decrease inventory and operating costs, increase service and sales, improve cash flow and GMROI, and increase pre-tax profitability. [Ed: IBF has a forecast error calculator that will tell you how much your company can save for every percentage improvement in accuracy – this will help you decide how much to spend.]

From our experience, a 15% forecast accuracy improvement will deliver a 3% or higher pre-tax improvement

Forecast error, no matter how small, has a significant effect on the bottom line. From our experience, a 15% forecast accuracy improvement will deliver a 3% or higher pre-tax improvement. In a previous IBF study of 15 US companies, we found that even a one percentage point improvement in under-forecasting error at a $50 million turnover company yields a saving of as much as $1.52 million, and the same improvement in over-forecasting yields $1.28 million.

These potential savings should help you decide what is worth paying for forecasting software. That leads us to the next question in our series of blogs on the demand planning and forecasting part of your company's digital transformation:

What Is This Forecasting Software Going To Cost Me?

Cost is determined by the number of users and the size and complexity of your processes and data. You'll pay either via a subscription or an annual service contract that provides access to the software along with support. In addition to these annual or monthly fees, there is generally an upfront consulting or installation fee that should include project design, configuration, assistance with data extracts, and education. While this is sometimes a flat rate, it is usually billed per hour or per project phase. The vendor should provide the scope of the project with key deliverables before you start, along with an indication of what it will cost.

Typical software costs anywhere from $5,000 to $30,000 per user

Typical software costs (assuming the data repository has already been licensed) run anywhere from $5,000 to $30,000 per user, or very roughly $2,000-$6,000 for every $100,000 of revenue. These are ballpark numbers and vary based on packages, features, and other costs over and above basic systems.

Typical consulting service costs range from $110 to $220 an hour

Typical consulting service costs range from $110 to $220 an hour per resource depending on the collaborative process, and you'll require anywhere from 600 to 2,000 hours depending on complexity (these costs exclude travel and other expenses). This information can be difficult to get from some vendors upfront because they know costs can add up when dealing with teething problems following installation.

Never rush into a deal. If you try to do things as quickly as possible, you will likely miss the full scope of what you need and end up with a solution that fails to deliver the required functionality, putting you in a position where you have to spend more money down the line. With all of this said, cost should be weighed against what you get out of it – it is important to understand what your benefits will be before you look at what vendors are charging. The benefits derived from an automated demand forecasting solution can be realized in both soft and hard cost savings, as well as overall process improvements.

Indirect Cost Savings From Forecasting Software

Forecast process automation will reduce the time spent creating and managing the overall forecast process, but rarely results in hard labor cost savings because staff are redeployed. There should be operational efficiency gains from planning and scheduling improvements resulting from more accurate (and sometimes more detailed) forecasts, and more predictable financial planning resulting from a more accurate forecast as well as the consensus planning driven by collaborative process changes.

There should be operational efficiency gains from planning and scheduling improvements

Other soft benefits include saving time and energy by focusing resources on the right items. Do I really need to forecast hundreds of C items, or can they be grouped into more natural segments, allowing me to focus on the highest revenue/margin products and customers? This is a soft, non-quantifiable benefit. What ROI would you put on having a system that captures the planning process and business intelligence of your teams? Most companies have this spread across hundreds of spreadsheets owned by just a few users – it is impossible to dollarize improvements like this, but they are valuable.

Hard Cost Savings From Forecasting Software

The reduction in downstream finished goods inventory resulting from forecast accuracy improvements provides a one-time saving, as well as recurring savings from reduced carrying costs. In a pure make-to-stock or distribution company, the downstream inventory reduction could range from 10% to 20%, since forecasting inaccuracies typically drive around 75% of the required safety stock.
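To make the link between error and inventory concrete, here is the textbook safety stock approximation, in which stock scales linearly with the standard deviation of forecast error (the figures are illustrative):

```python
from math import sqrt
from statistics import NormalDist

def safety_stock(error_std, lead_time_periods, service_level=0.95):
    """Classic approximation: SS = z * sigma_error * sqrt(lead time).

    A textbook sketch, shown only to illustrate why cutting forecast error
    cuts safety stock roughly in proportion.
    """
    z = NormalDist().inv_cdf(service_level)        # service level factor
    return z * error_std * sqrt(lead_time_periods)

before = safety_stock(error_std=200, lead_time_periods=4)
after = safety_stock(error_std=170, lead_time_periods=4)   # 15% lower error
print(f"safety stock: {before:.0f} -> {after:.0f} units ({1 - after / before:.0%} lower)")
```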

Forecast accuracy can translate to increased revenue of 0.5% to 3% with improved inventory availability or demand shaping capabilities

Many companies are leaving money on the table through lost sales or poor service levels. Improved forecast accuracy can translate to increased revenue of 0.5% to 3% through better inventory availability or demand shaping capabilities. Total annual direct material purchases, along with logistics-related expenses arising from demand variability, can see direct improvements of 3% to 5%. Airfreight costs can also fall by as much as 20%.

It is important to understand these average savings amounts, and you should determine what savings you believe you can drive with technology. You also need to know what finance and executive leadership anticipate in terms of benefits – you need to be on the same page about expectations. Here, many software providers can shed some light on what is realistic based on past implementations (keep in mind they are trying to sell you a product). Do your own analysis and reach a consensus with the key people in your company before signing on the dotted line.

When To Expect A Return On Investment

Most technology should reasonably pay back in less than 24 months, with many implementations showing ROI in under 18 months. If you're looking at a particular solution and the numbers are not adding up, consider a less expensive solution that matches your company size, and reconsider some of your functionality requirements. Remember not to settle – keep looking for providers that will give you what you need at a price you can afford, complete with the benefits you want. Shop around – there's a lot out there.
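A back-of-the-envelope payback check is easy to sketch; every figure below is hypothetical and should be replaced with your own analysis:

```python
def payback_months(annual_savings, upfront_cost, annual_fees):
    """Months until cumulative net savings cover the upfront spend.

    Simple linear sketch; real savings usually ramp up after go-live.
    """
    monthly_net = (annual_savings - annual_fees) / 12
    return float("inf") if monthly_net <= 0 else upfront_cost / monthly_net

# Hypothetical mid-size project: $250k implementation, $60k/yr fees,
# $200k/yr in inventory and service-level savings
print(f"payback: {payback_months(200_000, 250_000, 60_000):.0f} months")  # ~21
```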

Which brings us to the next question: how do you know you are getting everything you need, and should you consider a third party to help you in this journey? Subscribe or check back soon for the next blog and I’ll answer this question.

I and other S&OP leaders will be discussing this topic and more at IBF’s Leadership Forum in Orlando on October 17, 2018.  Designed for leaders in planning, forecasting and S&OP, it’s the best of its kind, and is designed to help managers with implementation and management of people, process and technology. It’s a great event – you can see the schedule here.

 

Ask Dr. Jain: What Are The Forecast Accuracy Benchmarks In Retail?
https://demand-planning.com/2018/06/22/what-are-the-benchmarks-in-retail-forecasting-accuracy/ | Fri, 22 Jun 2018

Question:

Dear Dr. Jain,

What are the benchmarks in forecast accuracy in the retail industry, specifically for companies that use JDA Demand software?

Answer

These are benchmarks of forecast errors in the retail industry, based on the last five years of IBF surveys. The numbers represent the total industry, not just companies that use JDA. Because of the small number of observations in each survey, we had to combine them.

Level       1 Month Ahead   2 Months Ahead   1 Quarter Ahead
SKU              30%             34%              33%
Category         18%             20%              25%
Aggregate         9%              9%               8%

 

I hope this helps.

Dr. Chaman Jain,

St. John’s University

 

https://demand-planning.com/2018/06/22/what-are-the-benchmarks-in-retail-forecasting-accuracy/feed/ 2