forecast error – Demand Planning, S&OP/IBP, Supply Planning, Business Forecasting Blog
https://demand-planning.com

How I React When Forecasted Demand Deviates from Actual Demand
https://demand-planning.com/2022/06/03/how-i-react-when-forecasted-demand-deviates-from-actual-demand/
Fri, 03 Jun 2022

As I wrote in a previous article, Forecasting Materials Usage At An Electric Utilities Company, forecasting efforts at an electric utility are imperative.

Forecasting is an essential part of our demand management tactics and without forecasting diligence and a proactive approach to managing inventory, our organization is exposed to the risk of not delivering electricity to customers.

Forecasting helps us look into the future to decide what quantities of materials we need to order proactively. This exercise helps us maintain appropriate inventory levels so that our organization can fulfill demand while maintaining a lean inventory that does not commit unnecessary capital.

But what happens when our forecasting efforts are ineffective?  What happens when we under-estimate, or over-estimate what we will need in the future? It’s not hard to imagine the consequences of the forecasted demand being off from actual demand. We will risk stocking out on materials, or we will have excessive inventory sitting idly.

Recognizing Error

I respond to errors in the forecast in a couple of ways. The first step is to recognize the error: was the forecast over or under actual demand, and by how much? The magnitude of the error matters most to me. If the forecast was only slightly off, I will not pay it much attention.

We understand that forecasting will generally be different from what occurs, to some extent, so being close is a win. If the forecast is off by a large amount, then it’s time to act.

Calculating Size of the Error

What is a large amount? That will vary from forecast to forecast. For example, a widget’s forecasted consumption may have an absolute deviation (absolute error) of 1,000 units in a month. Said differently, the forecasted quantity could have been 10,000 units while actual consumption was only 9,000 units – we over-forecasted by 1,000 units. Being off by 1,000 units may sound like a lot but, if we take the absolute percentage error formula

Absolute Percentage Error = |Actual − Forecast| / Actual × 100

= |9,000 − 10,000| / 9,000 × 100 ≈ 11%

we get a mere 11 percent. With the absolute percentage error, the lower the number the better. The absolute percentage error metric provides a deeper understanding of how close the forecast was.
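In code, this calculation is a one-liner (a minimal sketch; the function name is my own):

```python
def absolute_percentage_error(actual, forecast):
    """Absolute percentage error: |actual - forecast| / actual * 100."""
    return abs(actual - forecast) / actual * 100

# The widget example: forecast 10,000 units, actual consumption 9,000 units
ape = absolute_percentage_error(actual=9_000, forecast=10_000)
print(round(ape, 1))  # → 11.1
```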

If I am using a weighted forecasting model, in which I take the average demand of the same month over different years e.g., January 2018, January 2019, January 2020, and January 2021, I can then assign a different weight to each year. I typically assign a higher weight to more recent years because the recent past will often – but not always – be a better predictor of the near term.
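A minimal sketch of this kind of weighted average forecast follows; note the demand figures and weights here are illustrative placeholders, not the article's actual table:

```python
def weighted_average_forecast(history, weights):
    """Weighted average of the same month across years.
    history: demand for e.g. Jan 2018..Jan 2021, oldest first.
    weights: one weight per year, summing to 1.0."""
    assert len(history) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(d * w for d, w in zip(history, weights))

# Hypothetical January demand for 2018..2021, with heavier weights
# on recent years (illustrative values only)
january_demand = [400, 450, 2100, 2300]
weights = [0.10, 0.15, 0.30, 0.45]
forecast = weighted_average_forecast(january_demand, weights)
```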

The below table exhibits how I typically assign weights to years in a weighted forecast.

After assigning the above weights in a forecast model for a particular widget, I found that the forecast was a fair amount off in one month.  The below graph helps to illustrate how the forecast in February 2022 was off from the actual.

The red circle around February 2022’s data highlights the vertical gap between the turquoise line (actual consumption) and the light blue dashed line (Weighted Average Forecast), illustrating how far the forecast was from actual consumption.

In February 2022, our actual consumption for this widget was 324 units, but the forecast was 2,328 units.  This was an absolute deviation (aka absolute error) of 2,004 units, and an absolute percentage error of 619 percent.  Here is where the next step of the action comes in.

After noticing this significant deviation, I considered the usage from the years 2020 and 2021.  As you can see from the graph above, the consumption reached dramatically higher points in these years than in 2018 and 2019; this is one of the instances where the recent past is not a better predictor of the near future.

It is safe to say that the months of consumption in 2020 and 2021 may result in an overstated forecast if we give more deference to these years.  To address this concern, I adjusted the weights from my previous table: I reduced the weights for 2020 and 2021 by 10 percent each and added 10 percent to each of 2018 and 2019.

The new weights are reflected in the below tables.  I made these adjustments for 2022’s Weighted Average forecast and made similar adjustments to the year 2023’s Weighted Average Forecast.

After adjusting the weights, the forecasted quantity for February 2023 was reduced from 1,638 units to 1,235 units and appears to be more in line with previous Februarys.

The adjustments I made to the weights in the model project a forecast for the rest of 2022 and for the first quarter of 2023 that I feel quite comfortable with.  February 2023’s new forecasted consumption of 1,235 is still rather high.  The average monthly consumption over the past four years is only 462 units.  Though it is unlikely that 1,235 units will need to be in our available inventory in February 2023, I would much rather have our organization be in the position where we are somewhat overstocked than substantially understocked.

The strategy of whether to carry a heavy inventory or lean inventory will vary from widget to widget. Some widgets will have far less of a risk appetite than others.  For these items, the approach of over-estimating, and the consequence of excessive inventory, is one that we want to live with as opposed to the risk of stocking out.  The aforementioned scenario shows us how connected forecast modeling is to an inventory strategy – a topic that certainly deserves more explanation in another article.

How I React When The Error is Large

These are the steps in the process I use to react to forecasts when there is a significant deviation from actual demand.  A summary of the process is as follows.

Step 1: Review the item’s forecasted demand to actual demand and identify where there are big variances. Use metrics such as the absolute error (absolute deviation) and absolute percentage error to validate that there was a deviation that deserves attention.

Step 2: Review the historical demand and then adjust the calculations in the forecast as necessary. In the case of the weighted average forecast, one can easily adjust the weights on the years, or time periods, whose historic demand may be understating or overstating likely future demand.

Another possible step may include adjusting the historic demand data sample.  For example, if the years 2020 and 2021 demonstrate a clear downward trend of decreased demand from 2018 and 2019, it may make sense to completely remove the years 2018 and 2019 from the forecast’s data sample.  The new sample would include only the past 24 months of demand instead of the past 48 months.

There is nearly an endless number of ways in which you could change the approach to forecasting when there is a significant difference in forecasted demand from actual demand.

Perhaps an even simpler approach is to change the forecasting model altogether.  Instead of applying a Weighted Average forecast, one could use a Simple Linear Regression model or an Exponential Smoothing model. Whichever forecasting methodology you use, the same basic steps of reviewing the differences between the forecast and the actual will be necessary to determine your next course of action.
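As a point of comparison, simple exponential smoothing, one of the alternative models mentioned, can be sketched in a few lines (the alpha value and demand history here are illustrative):

```python
def exponential_smoothing(history, alpha=0.3):
    """Simple exponential smoothing: each new level blends the latest
    observation with the previous level; the final level is the
    one-step-ahead forecast."""
    level = history[0]
    for demand in history[1:]:
        level = alpha * demand + (1 - alpha) * level
    return level

# Illustrative monthly demand; higher alpha weights recent demand more
forecast = exponential_smoothing([120, 100, 140, 110], alpha=0.3)
```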


IBF’s Supply Chain Planning Boot Camp returns to Las Vegas from August 10-12.  Learn the fundamentals and best practices across the supply-demand chain and how leading companies balance supply and demand, from the demand plan to the master scheduling process. More information here.

 

 

Tracking Forecasting Error With An Excel Model (With Free Download)
https://demand-planning.com/2022/04/22/tracker/
Fri, 22 Apr 2022

Peter Drucker’s famous axiom “You can’t improve what you don’t measure” is particularly relevant to business forecasting. As Demand Planners, we want to measure our forecast performance so we can iterate and improve. Here I present an Excel-based Forecast Performance Tracker (free download available below) that you can use for your own error measurement.

There are various methods and metrics to track and assess forecast performance. A few of the most widely used are MAPE, WMAPE, MAD, MSE, RMSE, bias, tracking signal, and Michael Gilliland’s FVA (Forecast Value Added). Demand Planning teams monitor and report forecast performance, and when tracking forecast error through such metrics, it is essential to know why the error occurred so the root cause can be addressed. There will always be a certain amount of innate volatility and variability in forecasts. And since the forecast is validated by human intervention and judgment, bias is always present to some degree.

Having an understanding of the error enables us to make decisions that will reduce it. Forecast error can be problematic for organizations – not only within supply chain/operations, but at an enterprise level. Though the steps taken based on the understanding of forecast errors are reactive, we can use those steps to reduce future errors. 

Forecast error, simply defined, is the difference between forecasted demand and actual demand (sales), often expressed as a percentage: Forecast Error = (Forecast − Actual) / Actual. Root Cause Analysis (RCA) can be split into 3 classifications: Over Forecasting, Under Forecasting, and Product Unavailability. The following table (Table 1) gives an insight into these 3 RCA classifications.

Figure 1 | Root Cause Analysis Classification Model

The RCA Classification model above details our 3 classifications of Over Forecasting, Under Forecasting, and Product Unavailability. The framework also gives details about negative or positive bias, and, importantly, it displays a few of the potential impacts on the business. There is one more factor we should be aware of that isn’t included in the table – Random Variation. In cases of Random Variation, the error generally corrects itself.

Model To Track Root Cause Analysis Of Forecast Error

Over forecasting and under forecasting are widely discussed in the demand planning literature. However, I haven’t seen much discussion about product unavailability. To address this, I have prepared an Excel-based forecasting KPI tracker (see the snapshot below).

[CLICK TO DOWNLOAD THE FORECAST TRACKER]

The most important elements are Forecast, Actual Sales, and Inventory (closing) for the given forecasting period (month, week, etc.). For simplicity, we are using 2 products (P1, P2) and 3 locations (L1, L2 and L3). The forecasting horizon is monthly, from January to April. Other details like Sales Representative, Product segment, and Categories can be added as per your business requirements. The purpose is to monitor forecasting performance by product and location on a monthly basis. 

You’ll also see the different error metrics: Error, Absolute Error, MAPE/WMAPE, Bias, Over Forecasting, Under Forecasting and Product unavailability. 
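As a rough sketch of how these error metrics can be computed outside Excel (the function name and the two-period example below are my own; the tracker's actual formulas may differ):

```python
def forecast_metrics(forecasts, actuals):
    """Period-level error metrics like those in the tracker:
    signed bias, MAPE, and volume-weighted MAPE, all in percent."""
    errors = [f - a for f, a in zip(forecasts, actuals)]
    abs_errors = [abs(e) for e in errors]
    bias = sum(errors) / sum(actuals) * 100
    mape = sum(ae / a for ae, a in zip(abs_errors, actuals)) / len(actuals) * 100
    wmape = sum(abs_errors) / sum(actuals) * 100  # weights errors by volume
    return {"bias": bias, "mape": mape, "wmape": wmape}

# Two illustrative product-month rows: one over-forecast, one under-forecast
m = forecast_metrics(forecasts=[100, 250], actuals=[80, 260])
```

WMAPE is often preferred over plain MAPE here because it stops low-volume items from dominating the average.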

Screenshot of forecast tracker


In this tracker, when you add the monthly forecast, actuals, and inventory data, the rest of the report updates accordingly. All the data analytics are managed in Excel with formulas, pivot tables, and charts. 

Forecasting Performance Dashboard 

The model contains an interactive dashboard which, at the end of the month, can be used to share forecast error in demand planning/S&OP meetings as a standard report. The dashboard presents the data via effective visualizations that depict the narrative behind key performance indicators, including key insights and recommendations on a single screen. 

The most important component of the dashboard is the key insights and recommendations. Going into any meetings where the dashboard is used, Demand Planners should have a good understanding of the major forecast errors and be ready to facilitate discussion surrounding actionable steps to remedy the causes. The aim is for senior management to make informed decisions. 

Below (Figure 2) you can see the dashboard. The key features are the MAPE monthly trend, and the top locations and products with the highest MAPE for the month. For example, location L1 is experiencing error from under forecasting, which needs to be addressed in the meeting to identify what can be done to remedy it. Location L2 is also facing under forecasting; to a certain extent this is correlated with product unavailability, since sales tried to compensate for the forecast target with available and on-demand products. 

Figure 2 | Snapshot of forecast tracker dashboard


Benefits Gained From Forecasting Root Cause Analysis

As Arthur C. Clarke said, “I don’t pretend we have all the answers. But the questions are certainly worth thinking about.” This methodology enables exactly that – allowing you to measure forecast error and discuss root causes in a simple yet effective way. With insight into root causes, you can better optimize your supply responses and shape demand accordingly. Improved forecast accuracy will naturally follow.

 Key Takeaways 

1 – Demand Planners should demonstrate strategic value by bringing key insights and recommendations to facilitate informed decision-making.

2 – The purpose of such models is not to highlight ‘WHO’ (any function/role areas) but to  effectively address the ‘WHAT’ (cause for over or under forecasting). 

3 – Art is an important trait for Demand Planners. They should convey the key insights, not just the data. 

4 – Demand and supply variability is high these days, so be aware that forecast error improvement has a limit; we have no control over external factors impacting demand.

5 – Estimating all the components of error from the demand history is not possible (or even appropriate). Uncertainty is intrinsic. 

6 – Demand Planners should persistently develop data analytics skills with a clear approach to storytelling instead of only providing reports based on convoluted mathematical formulas. 

7 – Overemphasis on forecast accuracy numbers will result in bias. Hence, the focus should be on providing key highlights to the management team. The most consumed part of the reports is the insights and recommendations sections, which enable businesses to make better decisions. 

8 – As mentioned in my previous blog, Segmentation Framework For Analyzing Causal Demand Factors, Forecast Accuracy is not the goal but a means toward the larger goals of the enterprise. 

Do you find this model useful? Is there any further enhancement that could be made? I am open to hearing from you. 

Do you want to understand the logic behind this forecasting performance tracker and dashboard in Excel? Contact me for a session and I will be happy to take you through them. 

Connect with Manas on LinkedIn and follow him on Medium.  


For more demand planning insight, join us at IBF’s Global S&OP & IBP Best Practices Conference in Chicago from June 15-17. You’ll learn the ingredients of effective planning, whether you’re just getting started or are finetuning an existing process. Early Bird Pricing now open – more details here.

3 Sources Of Forecast Error To Avoid
https://demand-planning.com/2022/02/28/3-sources-of-forecast-error-to-avoid/
Mon, 28 Feb 2022

Those seeking to reduce error can look in three places to find trouble: The data that go into a forecasting model, the choice of a forecasting method, and the organization of the forecasting process. Let’s look at each of these elements to understand where error can be introduced into forecasting so we can mitigate it and improve our forecast accuracy.

1. Error Caused by Data Problems

Wrong data produce wrong forecasts. I have seen an instance in which computer records of product demand were wrong by a factor of two! Those involved spotted that problem eventually, but a less obvious – though still damaging – error can easily slip through the cracks and poison the forecasting process. In fact, just organizing, acquiring, and checking data is often the largest source of delay in the implementation of forecasting software. Many data problems derive from the data having been neglected until a forecasting project made them important.

Data Anomalies

Even with perfectly curated forecasting databases, there can be wildly discrepant – though accurately recorded – data, i.e., anomalies. In a set of, say, 10,000 products, some items are likely to have endured strange things in their demand histories. Depending on when the anomalies occur and what forecasting methods are in use, anomalies can drive forecasts seriously off track if not dealt with.
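One simple screen for such anomalies, though by no means the only or the author's own method, is to flag observations that sit several standard deviations from the mean of the demand history:

```python
from statistics import mean, stdev

def flag_anomalies(history, threshold=3.0):
    """Return indices of demand observations more than `threshold`
    sample standard deviations from the mean. A crude screen, not a
    full anomaly-detection method (it assumes roughly stable demand)."""
    mu, sigma = mean(history), stdev(history)
    return [i for i, d in enumerate(history) if abs(d - mu) > threshold * sigma]

# One wild month hiding in otherwise stable demand
demand = [100, 105, 98, 102, 950, 101, 99]
print(flag_anomalies(demand, threshold=2.0))  # → [4]
```

Flagged points still need human review: an anomaly may be a data error, or a real event (a storm, a recall) that deserves its own treatment.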

2. Error Caused by the Wrong Forecasting Method

Traditional forecasting techniques are called extrapolative methods because they try to find any patterns in an item’s demand history and project (extrapolate) that same pattern into the future. The most used extrapolative methods go by the names of exponential smoothing and moving averages. There are variants of each type, intended to match the key characteristics of an item’s demand history. Is demand basically flat? Is there a trend? Is there a seasonal cycle?

However, where there is choice, there is the possibility of error. Choosing an extrapolative method that misses trend or seasonality is sure to create avoidable forecast error; so is one that wrongly assumes trend or seasonality where none exists.

“Using classical extrapolative methods on intermittent data is asking for trouble.”

Further, extrapolative methods are designed to work with data that are “regular,” which is to say non-intermittent. Intermittent data have a large percentage of zero demands, with random non-zero demands mixed in. Spare parts and big-ticket, slow-moving items are usually intermittent.

High-volume items like CPG products are usually non-intermittent. Intermittent demand data requires specialized forecasting methods, such as those based on Markov modeling and statistical bootstrapping. Using classical extrapolative methods on intermittent data is asking for trouble.

Even when the assumptions underlying a forecasting method are satisfied by an item’s demand history, the method might still be considered “wrong” if there is a better method available. In some cases, methods based on regression analysis (also called causal modeling) can outperform extrapolative methods or specialized methods for intermittent demand. This is because regression models leverage data other than an item’s demand history to forecast future demand.

“Although regression models have great potential, they also require greater skill, more data, and more work.”

Although regression models have great potential, they also require greater skill, more data, and more work. Unlike extrapolative and intermittent methods, they are not available in software as automatic procedures. The first problem is to determine what outside factors drive demand. Then one must acquire historical data on those factors to use as predictor variables in a regression equation. Then one must separately predict all those predictors. This process demands a level of statistical sophistication that is usually lacking among Demand Planners, opening up possibility for error.

Pro tip: Any proposed statistical forecasting method should be benchmarked against the simplest method of all, known as the naïve forecast. If the data are non-seasonal, then the naïve forecast boils down to “tomorrow’s demand will be the same as today’s demand.” If the data are seasonal, it might be something like “next April’s demand will be the same as this April’s demand.” If a fancy method can’t do better than the naïve method (and sometimes they can’t), then why use it?
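This benchmarking idea can be sketched as follows, using MAPE against a non-seasonal naive forecast (the data and function names are illustrative):

```python
def mape(forecasts, actuals):
    """Mean absolute percentage error, in percent."""
    return sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals) * 100

def naive_forecast(history):
    """Non-seasonal naive forecast: each period's forecast is simply
    the previous period's actual."""
    return history[:-1]  # forecasts for periods 2..n

# Illustrative demand history
actuals = [100, 110, 105, 120, 115]
naive_mape = mape(naive_forecast(actuals), actuals[1:])
# A candidate model only earns its keep if its MAPE beats naive_mape
# over the same periods.
```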

3. Error Caused by Flaws in the Forecasting Process

Forecasting always starts out as an individual sport but usually includes a team component. Each phase can go wrong. We’ve already discussed errors caused by individual forecasters, such as deciding to use the wrong model or feeding the model data of poor quality.

“Forecasting always starts out as an individual sport but usually includes a team component.”

The team component usually plays out in periodic Sales and Operations Planning (S&OP) meetings. In these gatherings, various relevant departments gather to argue out what the company’s official forecast will be. While the aim is to achieve consensus, the result may work against the goal of reducing forecast error.

Participants often come to these meetings with their own competing forecasts. The first mistake may be trying to pick just one as the “official” forecast for all. Various functions – Marketing, Sales, Production, Finance – often have different priorities and different planning tempos. For instance, Finance may need quarterly forecasts, but production might need weekly forecasts.

These differences in forecast horizon imply different levels of aggregation, which can greatly influence the choice of a forecasting method. For example, day-of-week seasonality in demand may be critical for Production but irrelevant for Finance.

Assuming there are competing forecasts at the same time scale, the second mistake may be the way these forecasts are evaluated. At this stage, relative accuracy is usually the deciding criterion. The mistake is not recognizing this as an empirical question that cannot be settled by arguments about relative expertise or sophistication.

Too often, companies do not take the time to acquire and analyze retrospective assessments of forecast accuracy. If the task is to forecast next month’s demand using a certain technical approach, how has that approach been doing? Forecasting software often includes the means to do this analysis, but it is not always exploited when available. If it is not available, it should be made so.

“S&OP meetings often fail when the participants suggest changes to statistical forecasts.”

S&OP meetings often work, or fail, when the participants suggest changes to statistical forecasts. Since statistical forecasts are inherently backward-looking, these management overrides should, in principle, reduce error by accounting for factors like future promotions or market conditions that are not encoded in an item’s demand history. The third mistake is failing to monitor and confirm their value. Many of us believe we have a “golden gut” and can adjust forecasts without risk. Not necessarily true; trust but verify.

 

 

Ask Dr. Jain: Where To Put The Forecasting Function For Lowest Forecast Error?
https://demand-planning.com/2019/09/23/where-to-put-the-forecasting-function/
Mon, 23 Sep 2019

[ Q ] Do you have any research/survey data regarding where to place the forecasting function? I have a client who is in the process of migrating to the S&OP process and wants to decide where to put it. They do not want it within Supply Chain, and are looking for a data-supported alternative. Anything you can provide would be greatly appreciated.

[ A ] Based on an IBF survey, most companies house their forecasting function within Supply Chain (49%), followed by Sales. The reason may well be that forecasts are used most by Supply Chain. Further, the data show that forecast error is lowest when the function sits within Marketing (16.30%), and highest when it sits within Supply Chain (26.32%). You can read more on the relationship between different departments and forecast error in IBF’s research report, ‘The Impact of People and Processes on Forecast Error in S&OP’.

[Ed: You can find all of IBF’s research reports here.]

 

Happy forecasting,

Dr. Chaman Jain,

St. John’s University

Ask Dr. Jain: What Are The Forecast Accuracy Benchmarks In Retail?
https://demand-planning.com/2018/06/22/what-are-the-benchmarks-in-retail-forecasting-accuracy/
Fri, 22 Jun 2018

Question:

Dear Dr. Jain,

What are the benchmarks in forecast accuracy in the retail industry, specifically for companies that use JDA Demand software?

Answer

These are benchmarks of forecast error in the retail industry, based on the last five years of IBF surveys. The numbers represent the industry as a whole, not just companies that use JDA. Because there were few observations in each survey, we had to combine the numbers across years.

Level        1 Month Ahead   2 Months Ahead   1 Quarter Ahead
SKU               30%             34%              33%
Category          18%             20%              25%
Aggregate          9%              9%               8%

 

I hope this helps.

Dr. Chaman Jain,

St. John’s University

 

Stop Saying Forecasts Are Always Wrong
https://demand-planning.com/2018/02/20/forecasts-are-always-wrong/
Tue, 20 Feb 2018

For many of us, the words “the forecast is always wrong” have become something we instinctively say. There’s nothing wrong with acknowledging there is variation in demand or admitting we may miss a projection. But when it becomes your automatic response to any miss and is believed to be an unavoidable part of forecasting, it is highly limiting. This seemingly harmless habit can actually lower the effectiveness of forecasts and the business’s confidence in them. What’s more, it justifies other people’s poor actions and focuses attention on the wrong things. 

As Demand Planners, We Need To Give Ourselves More Credit

I cannot help but believe that when everyone constantly says that forecasts are always wrong, it needlessly creates guilt in the poor Demand Planner’s mind and undermines their self-esteem. It’s hard to feel good about yourself when you keep falling on your own sword.

Maybe we should stop saying we are sorry and stop saying forecasts are always wrong. Repeating this mantra also sends the message that you’d rather be agreeable than be honest, when in fact our job is not to provide a number but to offer solutions. We need to stop using the crutch of inevitable forecast error and start having honest conversations and focus on what we can predict and what we can control.

When others say “the forecast is always wrong” what they really mean is that demand variability is perfectly normal.

It Actually Is Possible To Be 100% Accurate

Yes, it really is. But let us start with what constitutes accuracy. Accuracy is the degree of closeness of the statement of quantity to that quantity’s actual (true) value. While I accept that one’s ability to create an accurate forecast is related to demand variability, an accurate forecast does not reduce demand variability. Demand variability is an expression of how much the demand changes over time and, to some extent, the predictability of the demand.  Forecast accuracy is an expression of how well one can predict the actual demand, regardless of its volatility.

So, when others say “the forecast is always wrong”, what they really mean is that demand variability is perfectly normal. What we should be focusing on is that “while we can’t predict demand perfectly due to its inherent variability, we can predict demand variability” (Stefan de Kok). This is the difference between trying to precisely predict the exact point and accurately predicting a range or the expected variability.

A common example of this is trying to guess the outcome of rolling two fair dice compared to accurately predicting the range of possible outcomes. For a single throw of two dice, each of the 36 ordered outcomes is equally probable and there is too much variability for any point prediction to be useful. But the possible totals of the two dice are not equally probable, because there are more ways to make some numbers than others. We can accurately predict that 16.7% of the time the two dice will add up to seven, and we can predict the range of possible outcomes as well as the probability of each outcome. While we may not know exactly what will happen, we can exactly predict the probability of it occurring. And if you predict the outcome within the probabilities, guess what? You are correct. Even though 100% precision is not an option, with range or probabilistic forecasts 100% accuracy most certainly is within the realm of possibility!
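The dice arithmetic is easy to verify by enumerating all 36 outcomes:

```python
from collections import Counter
from itertools import product
from fractions import Fraction

# All 36 equally likely ordered outcomes of rolling two fair dice,
# counted by their total
totals = Counter(a + b for a, b in product(range(1, 7), repeat=2))
p_seven = Fraction(totals[7], 36)  # six ways to make seven
print(round(float(p_seven) * 100, 1))  # → 16.7
```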

Bingo! We have officially proven everyone wrong and have our 100% accuracy.


Accurately predicting an outcome within a range of probabilities is more valuable than trying to forecast a single number.

Range Forecasts Give Us So Much More Information Than Single Point Forecasts

Besides being able to more accurately predict the probabilities of outcomes and ranges, we are also providing more relevant and useful information. When you predict the variability, this not only grounds our initiatives in reality but also gives us the power to make better business decisions. One way to counteract variability is to ask for range forecasts, or confidence intervals. These ranges consist of two points, representing the reasonable “best case” and “worst case” scenarios. Range forecasts are more useful than point predictions.

With any single point forecast you are providing a single point of information which you know is not 100% correct. With a range you are providing four pieces of valuable information: we not only know the point or mean but we also know the top, the bottom, and the magnitude of possible variability.

Measuring the reduction in error rather than the increase in accuracy is more valuable to us because there is a stronger correlation between error and business impact than there is between accuracy and business effect.

It doesn’t take much to see that such a probabilistic forecast, or even just a forecast with ranges and a better prediction of uncertainty, is useful information in supply chain planning. Now we know how much variability we need to plan for and can better understand the upside or downside risk involved. In addition, accurately predicting uncertainty can add enormous value. That’s because you are focusing on improving not only the average demand prediction, but the entire range of possible demand predictions including the extreme variability that has the biggest impact on service levels.

Your KPIs For Measuring Forecast Error Are Based On A False Assumption

Part of the problem with saying we are always wrong is that we measure our performance ineffectively. This is because our definitions of forecast error are too simplistic or misrepresented. Many people look at forecast accuracy as the inverse of forecast error, and that is a major problem. Most definitions of forecast error share a fundamental flaw: they assume a perfect forecast and define all demand variability as forecast error. The measures of forecast error, whether it be MAPE, WMAPE, MAD or any similar metric, all assume that the perfect forecast can be expressed as a single number.

I mentioned above that we can provide more information in a range of forecast probabilities and subsequently be more accurate. All we need now is a way to measure this and prove it. A metric which helps us measure the accuracy and value of these types of forecasts is Total Percentile Error (TPE). Borrowing Stefan de Kok’s definition, TPE “measures the reduction in error – rather than the increase in accuracy – since there is a stronger correlation between error and business impact than between accuracy and business effect.” For more detailed information about this calculation see Foresight Magazine’s Summer 2017 issue.

Nassim Nicholas Taleb described this type of forecast accuracy measurement in his book, The Black Swan. He explains the difference between measuring a stochastic forecast (using probability distributions) and more traditional approaches (using a single point forecast). He states that if you predict with a 20% probability that something will happen, and across many instances it actually happens 20% of the time, then the error is 0%. Naturally, the forecast would also need to be correct at every other percentile (not just the 20th) to be 100% accurate.
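That idea can be sketched as a calibration check: for each predicted percentile p, the share of actuals falling at or below that percentile's predicted value should itself be close to p. The quantile values and actuals below are invented for illustration:

```python
def percentile_calibration(predicted_quantiles, actuals):
    """For each percentile p, compare the observed fraction of actuals at or
    below the predicted p-th quantile against p itself.
    Returns {p: (observed_fraction, absolute_error)}."""
    out = {}
    n = len(actuals)
    for p, quantiles in predicted_quantiles.items():
        observed = sum(1 for a, q in zip(actuals, quantiles) if a <= q) / n
        out[p] = (observed, abs(observed - p))
    return out

# Hypothetical forecast: the same 20th/80th percentile values every period
quantiles = {0.2: [90] * 10, 0.8: [110] * 10}
actuals = [85, 95, 100, 105, 88, 102, 98, 112, 101, 99]
report = percentile_calibration(quantiles, actuals)
```

Here the 20th percentile is perfectly calibrated (demand fell at or below 90 in exactly 2 of 10 periods), so its error is zero, just as Taleb describes; the 80th percentile is off by 0.1.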

Bingo! We have officially proven everyone wrong and have our 100% accuracy.

Focus On The Process

Even though we should know there is no such thing as being “wrong”, we should still look at what we are measuring and incentivize the right behavior. Mean Absolute Percentage Error (MAPE) or Mean Percentage Error (MPE) will tell us how much variability there is and its direction, but they do not tell us whether the Demand Planning process is adding value. The question shouldn’t be whether we are right or wrong, but whether the steps we are taking actually improve the results. And if so, by how much.

Forecast Value Added (FVA) analysis can be used to identify if certain process steps are improving forecast accuracy or if they are just adding to the noise. When FVA is positive, we know the step or individual is adding value by making the forecast better. When FVA is negative, the step or individual is just making the forecast worse. [Ed: for further insight into FVA, see Eric’s guide to implementing FVA analysis in your organization.]
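A minimal FVA calculation might look like the following; the step names and numbers are hypothetical, and MAPE is used as the error measure, though any consistent metric works:

```python
def forecast_value_added(actuals, naive, step_forecasts):
    """Compare each process step's MAPE against the naive forecast's MAPE.
    Positive FVA means the step reduced error; negative means it added noise.
    step_forecasts: {step_name: list of forecasts}."""
    def mape(forecasts):
        return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)
    base = mape(naive)
    return {name: base - mape(f) for name, f in step_forecasts.items()}

# Hypothetical actuals; the naive forecast carries last period's actual forward
actuals = [100, 120, 110, 130]
naive = [110, 100, 120, 110]
steps = {
    "statistical": [105, 115, 112, 125],
    "sales override": [130, 140, 100, 150],
}
fva = forecast_value_added(actuals, naive, steps)
```

In this made-up example the statistical step beats the naive forecast (positive FVA) while the sales override makes things worse (negative FVA), which is exactly the signal used to redirect effort.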

The obvious advantage to focusing on these types of metrics and KPIs is that we are not casting blame but discovering areas of opportunity, as well as identifying non-value-added activities. Resources freed by eliminating non-value-adding steps or participants from the forecasting process can be redirected to more productive work, and by eliminating the steps that are actually making the forecast worse, you can achieve better forecasts with no additional investment.

I Beg Of You, Please Change Your Vocabulary!

At the end of the day, our goal is not necessarily to be precise but to make a forecast more accurate and reliable so that it adds business value to the planning process. We need to stop saying we are sorry for what is out of our control and start controlling what we know is possible. To do this, we must not only change our vocabulary but also change the way we are doing our jobs.

Most people are fixed on traditional forecasting processes and accuracy definitions. The goal is for you to start thinking in terms of the probability of future demand. From there, you need to be the champion inside your organization who helps others understand the value that forecasts provide. You need to stop using the crutch of inevitable forecast error and start honest conversations about what we can predict and what we can control.

Product Portfolio Optimization – Journal of Business Forecasting (Special Issue)
https://demand-planning.com/2016/02/29/product-portfolio-optimization-journal-of-business-forecasting-special-issue/
Mon, 29 Feb 2016 17:09:24 +0000

Within the pages of this particularly exciting issue, you will read articles by some of the best minds in the industry discussing multiple important aspects of Product Portfolio Optimization. This is an important topic because in today’s highly competitive market, it is becoming more important than ever to look for ways to cut costs, and increase revenue and profit. Markets are now demand driven, not supply driven.

Globalization has intensified competition. Every day, thousands and thousands of new products enter the market, but their window of opportunity is very narrow because of shorter life cycles. Plus, too much uncertainty is associated with new products. Their success rate ranges from poor to dismal—25% according to one estimate. Despite that, they are vital for fueling growth. Big box retailers are putting more pressure on suppliers to provide differentiated products. Consumers want more choices and better products. All these factors contribute to the greater-than-ever number of products and product lines, making management of their demand more complex, increasing the working capital needed to maintain safety stock, raising the liability of slow-moving and obsolete inventory, and increasing the cost of production because of smaller lots and frequent changeovers. Product portfolio optimization deals with these matters.

Product portfolio optimization includes the following: one, how to rationalize products and product lines and, two, how to manage their demand most effectively. Product rationalization includes deciding which products and product lines to keep and which ones to kill, based on the company’s policy. Demand management, on the other hand, is leveraging what Larry Lapide of the University of Massachusetts, an MIT research affiliate, calls the 4Ps (Product, Promotion, Price, and Place) to maximize sales and profit. The sales of low-performing product lines may be bumped up with a price discount, promotion, line extensions, or by finding new markets.

Although the S&OP process has a component of product portfolio optimization, its team members often pay nothing more than lip service to it. Pat Bower from Combe Incorporated discusses in detail the process of product portfolio optimization in the framework of new products: how new products should be filtered from ideation to development and, after launch, how they should be leveraged. Their window of opportunity is very small; most CPG products flame out within the first year of their existence, says Pat.

Mark Covas from Coca-Cola describes in detail 10 rules for product portfolio optimization. He suggests companies should divest low-margin brands, no matter how big they are. Many companies such as ConAgra Foods, General Mills, Procter & Gamble, and Estée Lauder are doing it. This makes the allocation of marketing dollars more productive—taking funds away from low-performing brands and giving them to high-performing ones.

Charles Chase from SAS and Michael Moore from DuPont recommend the 80/20 Pareto principle to determine which products or product lines to concentrate on in portfolio optimization efforts. Greg Schlegel from SherTrack LLC goes even further and proposes that this principle should be extended even to customers. He categorizes customers into four groups: 1) Champions, 2) Demanders, 3) Acquaintances, and 4) Losers. He then describes a strategy for dealing with each one of them. Greg Gorbos from BASF points out hurdles, political and otherwise, that stand in the way of implementing the optimization policy, and how to deal with them. Clashes occur among different functions because of differences in their objectives. Sales looks to achieve revenue targets, while Marketing looks to hold market share and increase profit. Finance also looks at profit, but seeks to reduce cost and increase capital flow, while Supply Chain looks at cost savings. Communication is another issue Greg points out: the company may decide to deactivate a product, but information about it is not communicated to all the functions. Jeff Marthins from Tastykake talks, among other things, about the exit strategy, which he believes is equally important. He says that we cannot deactivate a product without knowing its inventory position, as well as its holdings of raw and packaging materials.

For survival and growth in today’s atmosphere, it is essential to streamline the product portfolio to reduce costs, and increase revenue, profit, and market share. This issue shows how.

I encourage you to email your feedback on this issue, as well as on ideas and suggested topics for future JBF special issues and articles.

Happy Forecasting!

Chaman L. Jain
Chief Editor, Journal of Business Forecasting (JBF)
Professor, St. John’s University
EMAIL:  jainc [at] stjohns.edu

DOWNLOAD a preview of this latest Journal of Business Forecasting (JBF) Issue

Click HERE to join IBF and receive a JBF Complimentary Subscription

Risk-Adjusted Supply Chains Help Companies Prepare for the Inevitable
https://demand-planning.com/2016/02/19/risk-adjusted-supply-chains-help-companies-prepare-for-the-inevitable/
Fri, 19 Feb 2016 16:25:51 +0000

Each time I get in my car and drive to work, or the grocery store, or wherever, there are a myriad of dangers I might encounter. I could get t-boned at an intersection by a distracted driver; I might blow a tire and swerve into a ditch; a piece of space debris could crash through my windshield. Some perils are, obviously, less likely than others, but the reality is, anything can happen.

While I don’t obsessively worry about every possible risk, I am aware of the possibilities and I take measures to lower both the odds and severity of a mishap. I keep my vehicle well maintained, I buckle up and I pay my auto insurance. Similarly, today’s supply chain professionals must be more conscientious and proactive in their efforts to mitigate the risk of a supply chain disruption and to minimize the impact when the inevitable does occur.

As much as we may feel at the mercy of disruptions from severe weather, natural disasters, economic instability or political and social unrest, members of today’s high tech supply chain have never been better equipped to minimize the risks and capitalize on the opportunities that may arise from a supply chain disturbance.

One of the most simple, but powerful, tools at our disposal is information. Twenty-four-hour news stations, social media, and cellular communications give us near-instant access to events occurring in the most remote reaches of the world.

More tactically, mapping the physical network of the supply base, including manufacturing facilities, warehouses and distribution hubs, is an important part of any risk management strategy. The key here is mapping the entire supply chain network, not just top-spend suppliers or first-tier contract manufacturers. Most of this information is relatively accessible through supplier audits and, with the help of Google Maps, you can create a pretty comprehensive picture of your physical supply chain.


Remember, though, supply chains are much more fluid than they have ever been. Today’s multinationals are likely to rely on three to five different contract manufacturers (CMs) and original design manufacturers (ODMs), and scores of other suppliers around the world for the tens of thousands of parts needed to build and maintain their products. With outsourced production so commonplace, production lines can be shifted between locations within a matter of weeks, so frequent monitoring and updating of supply chain shifts is critical.

IoT technology such as sensors and RFID tracking can also provide meaningful intelligence that may be used to identify and mitigate risk throughout the end-to-end supply chain process. The ability to gather and analyze these constant data inputs is a recognized challenge throughout the supply chain profession. Those who master the digital supply chain sooner will enjoy a substantial competitive advantage.

Once these various vehicles are used to create a composite picture of the risk landscape, then risk mitigation strategies take center stage. These efforts can range from traditional techniques such as the assignment of a cache of safety stock to more intricate maneuvering of storage facilities and full network design. Deployment of these mitigation strategies requires a detailed recovery and communications plan.

In my upcoming presentation at IBF’s Supply Chain Forecasting & Planning Conference at the DoubleTree Resort by Hilton in Scottsdale, AZ, February 22-23, 2016, I will delve deeper into the growing range of potential disruptors in the high tech supply chain. I will outline the core elements of a comprehensive supply chain risk management strategy, including how to define and map the physical supply chain, the landscape around supply chain risks and their impact on financial metrics, and how to proactively assess potential risk. I hope to see you there.

Forecasting & Planning Learnings from Day 2 of IBF Academy: An Attendee’s Perspective
https://demand-planning.com/2015/09/16/forecasting-planning-learnings-from-day-2-of-ibf-academy-an-attendees-perspective/
Wed, 16 Sep 2015 14:23:57 +0000

Last month, I had the opportunity to attend IBF’s Business Forecasting & Planning Academy held in Las Vegas. I recently shared some insights from the first day of the program. Day 2 was similarly eventful. Here are some highlights.

Forecast Error

The first session I attended on Tuesday was “How to Measure & Reduce Error, and the Cost of Being Wrong,” an advanced session presented by Dr. Chaman Jain from St. John’s University. Dr. Jain reviewed the basic methods and mechanics of how to compute forecast error and the pros and cons of each technique. It was interesting that IBF has found more and more companies moving from MAPE (Mean Absolute Percentage Error) to a Weighted MAPE (WMAPE) to focus their attention on the errors that have a relatively larger impact and de-emphasize those that have little to none. Standard MAPE treats all errors “equally”, while WMAPE places greater significance on errors associated with the “larger” items. The weighting mechanism can vary; typically unit sales are used, but I was intrigued by the notion of using sales revenue and profit margin as well. If a company has low-volume items that are big revenue and profit items, it would not want to miss an opportunity to focus attention on why it has significant errors on these items.
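The weighting idea Dr. Jain raised can be sketched by making the weight an explicit input: pass unit volumes to get the classic WMAPE, or revenue (or margin) to spotlight low-volume, high-value items. The two-item numbers below are invented:

```python
def weighted_mape(actuals, forecasts, weights):
    """WMAPE with an explicit weight per item. With weights equal to unit
    volumes this reduces to the classic volume-weighted WMAPE."""
    apes = (abs(a - f) / a for a, f in zip(actuals, forecasts))
    return sum(w * e for w, e in zip(weights, apes)) / sum(weights)

# Item 1: high volume, forecast spot-on. Item 2: low volume, high revenue, 50% miss.
units = [1000, 10]
forecasts = [1000, 5]
revenue = [1000, 5000]   # hypothetical dollars per item

by_units = weighted_mape(units, forecasts, units)      # classic WMAPE
by_revenue = weighted_mape(units, forecasts, revenue)  # revenue-weighted
```

Unit weighting makes the miss on the small item almost invisible (under 1%), while revenue weighting surfaces it (over 40%), which is exactly the case of the low-volume, high-profit item described above.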

Another interesting concept that Dr. Jain discussed was the use of confidence intervals around error measurements.  Many companies report their error measurement as a single number and rarely present the error measure in terms of a range of potential errors that are likely. Having a view into the potential range of errors can allow firms to exercise scenario planning to understand the impact to supply chain operations and the associated sales based upon multiple forecast errors instead of a single number.

My last takeaway is related to the question of how much history should be used to support time series analysis. Dr. Jain stated, and I believe rightly so, that it depends. Are there potential seasonality, trend, business cycles, or one-time events? How much history does one need to see these? What if the past is really no longer a good indicator of the future? What if the drivers of demand for a product have substantially shifted? One technique suggested that seems sound is to test the forecasting model’s performance using different periods of historical data. Use a portion of the history to build the model, and the remaining portion to test the accuracy of the forecast against the actuals held out of model construction. Try different lengths until you find the one with the lowest error, and allow the process to use a different history length for each time series forecast.
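That holdout technique can be sketched roughly as follows, using a simple moving-average model and made-up demand with a recent level shift, where a shorter history should win:

```python
def best_history_length(series, holdout=4, lengths=(4, 8, 12)):
    """Fit a moving-average forecast on varying amounts of history, score each
    on a held-out tail, and return the history length with the lowest MAD."""
    train, test = series[:-holdout], series[-holdout:]
    scores = {}
    for n in lengths:
        history = list(train[-n:])
        errs = []
        for actual in test:
            forecast = sum(history) / len(history)
            errs.append(abs(actual - forecast))
            history = history[1:] + [actual]  # roll the window forward
        scores[n] = sum(errs) / len(errs)
    return min(scores, key=scores.get), scores

# Demand stepped up from ~50 to ~100 eight periods ago, so older history misleads
series = [50] * 8 + [100, 100, 98, 102, 101, 99, 100, 101]
best, scores = best_history_length(series)
```

Because the demand drivers shifted partway through the series, the 4-period history scores best on the holdout, illustrating why a single fixed history length for every item is a poor default.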

Lean Forecasting & Planning

Next I attended another advanced session led by Jeff Marthins from Tasty Baking Company/Flowers Foods on “Lean Forecasting & Planning: Preparing Forecasts Faster with Less Resources”. The session focused on doing more with less, a common theme that has permeated the business world these last several years. Marthins’ session was really about how to focus on what matters in demand planning: looking at the overall process, agreeing to and sticking with the various roles and responsibilities in the process, and understanding how the resulting forecasts and plans are to be used by various consumers in the business which drives the level of detail, accuracy and frequency of updates.

To gain an understanding of the demand planning process, Marthins asked the participants to look at a picture of his refrigerator and answer “Do I have enough milk?” This relatively simple, fun question elicited numerous inquiries from the participants around consumption patterns, replenishment policies and practices, sourcing rules, supplier capacity and financial constraints that illustrated the various types and sources of information that are required to develop a solid, well-thought-out demand plan. It was a very effective approach that can be applied to any product in any company.

To illustrate the need to understand the level of accuracy required of a forecast, Marthins used the weather forecast. How accurate is the weather forecast? How often is it right? How precise does it need to be? Once we know the temperature is going to be above 90 degrees Fahrenheit, does it matter if it is 91 or 94 degrees? Is there a big difference between a 70% chance of rain and an 85% chance of rain? What will you do differently in these situations with a more precise weather forecast? Should I plan to grill tonight? Will I need to wear a sweater this evening? Can we go swimming? If the answer is nothing, then the precision does not really matter, and spending time and effort creating or searching for greater forecast accuracy is a “waste”, and wastes should be eliminated or reduced in Lean thinking. Marthins also stressed the value of designing your demand planning process with the usage of information in mind. Adopting a Forecast Value Add (FVA) mentality to assess whether each step in your forecasting and demand planning process is adding value will help to accomplish this. Start by asking whether the first step in your forecasting process results in greater accuracy than a naïve forecast, such as using the same number as the last time you forecasted, or a simple moving average. When your accuracy improves with each step in the process, is it worth the effort or time it takes? Can I be less accurate and more responsive and still not have a negative impact? If I can update my forecast every day with 90% accuracy versus once a week with 92% accuracy, or once a month with 96%, which is better? How responsive can I be to the market by making daily adjustments that are nearly as accurate as weekly ones?

In yet another session, the topic of scenario analysis was raised. The team at IBF is getting this one right, making sure it is discussed in multiple sessions. What I wonder is how many companies are adopting scenario analysis in their demand planning and S&OP processes? From my experience it is not the norm. Marthins suggested testing the impact that various forecasts, and hence forecast accuracies, would have on supply chain performance, and even using scenario analysis to understand whether a systematic bias, either high or low, might make sense. I have known companies that have employed a policy of deliberate overestimating to ensure their resulting demand plan was on the high side. Carrying more inventory, even with all the associated costs, was of greater benefit to the company than a lost sale or backorder. Bias is not a bad thing if you understand how it is used and its resulting impact, just as inventory is not an evil when used in a planned and methodical manner.

Data Cleansing

After lunch I attended my second session delivered by Mark Lawless from IBF, “Data Cleansing: How to Select, Clean, and Manage Data for Greater Forecasting Performance”. As in any analytical process, the quality of the inputs is crucial to delivering quality results. Unfortunately I had another commitment during the session and could not stay for all of it.

Lawless discussed a variety of ways to look at the data available, decide whether it should be used, update or modify it, fill in missing values, and apply various forecasting techniques. Simple tips, such as being aware of how data is provided in time periods (e.g., fiscal 4/4/5 months versus calendar months) and how it should be reported, were a good reminder to make sure the data inputs are clearly understood, as well as how the output from the forecasting process will be used.

While most of what I heard was related to the data going into the forecasting process, Lawless did spend time talking about various analytics for assessing the output of the process. You might be expecting me to talk about various error and bias metrics again, but that is not the case. Rather, the idea is to look at the error measurement over time. What is the distribution of errors? Do they have a pattern, or are they random? If there is a pattern, there is likely something “wrong” with the forecasting process. It made me think about the application of Statistical Process Control (SPC) techniques, which are most often applied to manufacturing processes but can be applied to any process. SPC control charts can be used to check for patterns such as trends, systematic sustained increases, extended periods of unexpectedly high or low errors, randomness of errors, and many more. It gets back to the notion that in order to improve the quality of the demand planning process, it must be evaluated on a regular basis, and the causes of its underperformance understood and corrected as much as possible or warranted.
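As a sketch of applying SPC thinking to forecast errors (the error series is made up, and the two rules checked here, points beyond 3 sigma and runs of seven on one side of the mean, are just illustrative choices among many control-chart rules):

```python
import statistics

def control_chart_signals(errors):
    """Flag non-random behavior in a forecast-error series, SPC-style:
    individual points beyond +/- 3 sigma, and the start index of any run of
    7 or more consecutive errors on the same side of the mean."""
    mean = statistics.mean(errors)
    sigma = statistics.stdev(errors)
    beyond_limits = [i for i, e in enumerate(errors)
                     if abs(e - mean) > 3 * sigma]
    runs = []
    run_start, run_sign = 0, None
    for i, e in enumerate(errors):
        sign = e > mean
        if sign != run_sign:
            run_start, run_sign = i, sign
        if i - run_start + 1 >= 7:
            runs.append(run_start)
    return beyond_limits, sorted(set(runs))

# Illustrative errors: random at first, then a sustained positive shift
errors = [-2, 2, -1, 1, -2, 2, 3, 4, 3, 5, 4, 3, 4]
signals = control_chart_signals(errors)
```

Here no single point breaches the 3-sigma limits, but the run test catches the sustained upward drift starting at period 6, the kind of pattern that says the forecasting process itself has gone "wrong".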

Regression Analysis/ Causal Modeling

The final advanced session of the Academy was delivered by Charles Chase from the SAS Institute on “Analytics for Predicting Sales on Promotional Activities, Events, Demand Signals, and More”. This session was about regression modeling on steroids. As someone who has used regression models throughout my career, I could easily relate to and appreciate what Chase was discussing. In two hours Chase did a great job exposing attendees to the concepts, proper use, and mechanics of multivariate regression modeling that would typically be taught as an entire course over several weeks.

While time series models are a staple used to forecast future demand, they provide little to no understanding of what can be done to influence demand to be higher or lower. They can be used to decompose demand into components such as trend, seasonality, and cycles, which are important to understand and respond to. They are focused on the “accuracy” of the predicted future. Regression models, however, describe how inputs affect output. They are an excellent tool for shaping demand. Regression models can help us understand the effect that internal factors such as price, promotional activity, and lead times, as well as external factors such as weather, currency fluctuations, and inflation rates, have on demand. The more we can create predictive models of demand based on internal factors, the more we can influence the resulting demand, as these are factors we control or influence as a firm. If external factors are included, forecasts of the future values of these inputs will be needed, and we become more reliant on the accuracy of those input forecasts to drive our modeled demand.
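A bare-bones causal model of this kind can be sketched with ordinary least squares. Everything below is illustrative: the solver is a minimal normal-equations implementation, and the price/promotion data is generated from an exactly linear relationship so the recovered coefficients are easy to check; real data would add noise.

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved with Gaussian elimination and partial pivoting.
    Each row of X starts with a 1 for the intercept."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):                      # forward elimination
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0] * k
    for i in range(k - 1, -1, -1):          # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef

# Hypothetical weekly data: demand = 500 - 30*price + 80*promo (noiseless here)
rows = [(10, 0), (12, 0), (9, 1), (11, 1), (8, 0), (10, 1)]
X = [(1, price, promo) for price, promo in rows]
y = [500 - 30 * price + 80 * promo for price, promo in rows]
coef = fit_linear(X, y)
```

The fitted coefficients say each unit of price cuts demand by about 30 units and a promotion adds about 80, which is precisely the demand-shaping insight a pure time series model cannot give you.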

In case you missed it, you can see pictures from the 2015 IBF Academy HERE.

I trust I have brought some insight into IBF’s recent Academy in Las Vegas and perhaps offered a nugget or two for you to improve your forecasting and demand planning activities. If only I had learned something to apply to forecasting success at the gaming tables :).

Are You Effectively Leveraging Point-of-Sale (POS) Data In Your Forecasting & Inventory Management?
https://demand-planning.com/2015/09/09/are-you-effectively-leveraging-point-of-sale-pos-data-in-your-forecasting-inventory-management/
Wed, 09 Sep 2015 17:39:09 +0000

Today, we have an explosion of data. It is estimated that 2.5 quintillion bytes of data are created every day, with 90% of the world’s data created in the past two years!

The key question becomes: what do we do with all this data? In the past, companies struggled to manage and analyze large sets of data and could seldom generate any insights.

However, what’s different today vis-à-vis five years ago is that we now have the ability to cleanse, transform and analyze this data to generate actionable insights. Moreover, today’s retail consumers are extremely demanding and want choices on “When”, “Where” and “How” to purchase product. Whether it is a traditional stand-alone retail store, shop-in-shop, website, or mobile app, consumers want the flexibility to research, purchase and return product across multiple channels.

Today, many retailers and wholesalers have a vast amount of POS data available. However, many of them still don’t use the data at the lowest level of detail in their demand planning cycle. The result is significant out-of-stocks and the inability of consumers to find product in stores.

For a company to be successful in today’s Omni-channel environment, three key steps are needed:

1) Use Point-of-Sale (POS) data as a key input into demand plans: POS is the data that is closest to the consumer and is the purest form of demand, so it is critical to leverage this data at the right level of detail in a product’s demand plans. Information available at the stock-keeping-unit (SKU) level should be aggregated and disaggregated to ensure that all attributes of a product are factored into the planned forecast.

2) Link Point-of-Sale (POS) data to your Allocation & Inventory Management Systems: Today’s allocation systems have the ability to read sell-through at POS and react and replenish based on what product is selling and what is not. It is critical to make sure these systems are linked together so that the process is automated and seamless. Linking these systems will allow retailers to send the right product to the right store at the right time, thereby maximizing the chances of making a sale. This will not only contribute to the top line, but will also make inventory investments more productive.

3) Collaborate with Value Chain Partners to share Point-of-Sale (POS) data: Today’s retail world is complex; many companies have multi-channel operations and work with a number of channel partners to distribute their products. In such a scenario, it is not always easy to gain access to POS data. However, it is important for companies to invest in a CPFR (Collaborative Planning, Forecasting and Replenishment) program that can give them access to downstream POS data, which can be used to build better forecasts. It is critical to emphasize a “win-win” relationship so that both companies and channel partners come along on the collaboration journey.
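The aggregation described in step 1 might be sketched as follows; the record layout of store/SKU/week/units is an assumption for illustration, since real POS feeds vary widely:

```python
from collections import defaultdict

def aggregate_pos(pos_records):
    """Roll raw store/SKU/week POS records up to SKU-week totals, one level
    at which a demand plan would typically consume them."""
    weekly = defaultdict(int)
    for store, sku, week, units in pos_records:
        weekly[(sku, week)] += units
    return dict(weekly)

# Hypothetical POS feed: (store, sku, iso_week, units_sold)
pos = [("S1", "A", 1, 5), ("S2", "A", 1, 3), ("S1", "A", 2, 4), ("S1", "B", 1, 2)]
weekly = aggregate_pos(pos)
```

The same record-level data can be re-keyed by any product attribute (size, color, channel), which is what makes aggregating and disaggregating at the right level possible.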

Along with Rene Saroukhanoff, CPF, Senior Director at Levi Strauss & Co., we’ll be talking about the above, as well as how to use size forecasting, optimized allocation, and visual analytics at IBF’s Business Planning & Forecasting: Best Practices Conference in Orlando, USA, October 18-21, 2015. I look forward to meeting you at the conference! Your comments and questions are welcomed.
