Michael Gilliland – Demand Planning, S&OP/IBP, Supply Planning, Business Forecasting Blog
https://demand-planning.com

Should the Naive Forecast be Your Default Forecast?
https://demand-planning.com/2018/08/01/should-the-naive-forecast-be-your-default-forecast/
August 1, 2018


Short answer: No.

I have long argued against arbitrary forecasting performance objectives, suggesting instead that the goal should be “Do no worse than a naive forecast.” We don’t know in advance how well a naive forecast will perform, so we can’t set a numerical performance goal in advance. But we can track performance over time and determine whether we are meeting this (woefully pathetic-sounding) objective.

Can You Actually Beat A Naive Forecast?

A recent LinkedIn discussion on this topic wove through 20 separate comments, including an exchange between Sam Smale and Richard Herrin that I’d like to address. Sam pointed out, quite correctly, that the goal of being “better than a naive model” could make life too easy. It is true that just about any decent statistical forecasting model should forecast better than a random walk (the standard naive forecasting model). However, given the biases, politics, and personal agendas that plague most organizational forecasting processes, I believe that beating a random walk is still a legitimate “first test.”

(Note: If your forecasting process is doing worse than a naive forecast, it is probably pretty bad!)

Richard then turns the conversation toward slightly sophisticated models, perhaps including trend or seasonality. Is it appropriate to consider these as “naive” models against which to do comparisons?

There is a very important role for these “slightly sophisticated” models that Richard brings up. They fall between the random walk and the more sophisticated models we typically use for business forecasting. Let’s call these “default” models.

A default model is something simple to compute that you could actually use to run your business. This is the important distinction. A default model is simple to compute (as a random walk is), but you would never use a random walk to run your business because of the instability of its future forecasts.

The Problem With Naive Forecasts

Recall that with a random walk, whatever is the most recent observation becomes your forecast for all future periods. If you sold 100 last month, your forecast for all future months is 100. If you sell 500 this month, the forecast for all future months is changed to 500. If you sell 10 next month, the forecast for all future months changes to 10. This is problematic! You would not want to whipsaw your supply chain with such radical changes to the forecast in each future period.

To conclude, even if your forecasting process does worse than a random walk (the “first test” of process performance), you would never want to start using the random walk as your forecast. A more useful “second test” would be to compare your performance against a slightly sophisticated “default” forecast (e.g. moving average, single exponential smoothing, etc.). It would still be a reasonable (if slightly more challenging) goal to beat the default forecast. And if your process were failing to do so, you could simply ignore the process and start using the default.
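
To make these two tests concrete, here is a minimal sketch in Python with made-up demand numbers. It scores a random walk (the “first test” benchmark) and two candidate default models – a 3-period moving average and single exponential smoothing – on one-step-ahead error; your own process forecast would be compared against the same numbers.

```python
# Minimal sketch: score the naive "first test" benchmark (random walk) and two
# simple "default" models on one-step-ahead absolute error. Demand is made up.

actuals = [100, 500, 10, 120, 130, 110, 125, 140]

def random_walk(history):
    return history[-1]                      # repeat the last observation

def moving_average(history, window=3):
    window = min(window, len(history))
    return sum(history[-window:]) / window

def exp_smoothing(history, alpha=0.3):
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def mae(forecast_fn):
    errors = [abs(actuals[t] - forecast_fn(actuals[:t])) for t in range(1, len(actuals))]
    return sum(errors) / len(errors)

for name, fn in [("Random walk", random_walk),
                 ("3-period moving average", moving_average),
                 ("Single exponential smoothing", exp_smoothing)]:
    print(f"{name:<30} MAE = {mae(fn):.1f}")
# A forecasting process "passes" the first test if it beats the random walk,
# and the second test if it also beats the better of the two default models.
```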

Forecast Value Added (FVA) – Interview Series 4 – Moen Incorporated
https://demand-planning.com/2015/04/29/forecast-value-added-fva-interview-series-4-moen-incorporated/
April 29, 2015


This month’s interview is with Erin Marchant, Senior Analyst in Demand Management at Moen, Incorporated.

Erin has over ten years of varied experience in Supply Chain, from strategic sourcing to material planning, production scheduling, and demand management. She is also Moen’s Demand Planning and Analytical Systems Power User. Erin is a graduate of Heidelberg University, and also earned her MBA from Ashland University and M.S. in Information Architecture and Knowledge Management from Kent State University.

Moen is in the early stages of an FVA pilot, and Erin shares her experience to date.

Mike: When did you first hear about FVA, and how did you introduce the FVA approach to your management?

Erin: Our new Director of Demand Management brought the concept of FVA to us early in his tenure. He had learned about FVA from your book and articles, and in turn asked my boss and me to try to pilot the concept with our small segment of forecast responsibility.

Since our Director is the one who introduced the idea to us, we didn’t have to get buy-in from our organization. However, there is a need to get commitment from our business unit partners. In our pilot, we led the conversation to show the business unit why we wanted to start analyzing this data and the potential benefit of measuring it for the forecast process.

One pivotal element of FVA that we have had to reiterate time and time again at all levels of the organization is that FVA is not a measurement of who has the “right” forecast. FVA is a measurement of the value added at each step of the forecast process. Restating that fact constantly helps keep the discussion from devolving into an “us versus them” proposition.

Mike: Was the data needed to do FVA analysis readily available? Or did you have to start collecting it?

Erin: Yes and no. We have always collected the data for each layer (or, in the APO-DP world, “key figure”) of forecast data in our BI systems, so it was possible to determine the value of each step of the forecast. However, until the introduction of FVA, we had never been particularly disciplined about what kind of information went into each key figure. We have now assigned a purpose to each key figure, so that when the data is transferred to our BI systems it is very easy to determine FVA for each process step.

Mike: What are the steps in your forecasting process?

Erin: Process steps are:

  • Statistical Forecasting models run
  • Analyst override at item/DC and mid-level
  • Collaboration meetings to get business unit input
  • Tie to business unit topline forecast $ by fiscal month

Mike: What forecasting performance metric are you using and at what level do you measure?

Erin: In terms of FVA, we measure T-3 (3-month) Topline (Percent) Error and WAPE. While the data is available at the Item level, only mid-level and topline FVA are published for both of these metrics. In the FVA pilot, that mid-level was defined as Product Line.
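
For readers unfamiliar with the metric, WAPE (weighted absolute percentage error) is usually computed as total absolute error divided by total actual volume, so high-volume items carry more weight. A minimal sketch with made-up numbers:

```python
# WAPE: sum of absolute errors divided by sum of actuals, across all items.
actuals   = [1000, 50, 200]   # made-up item volumes
forecasts = [ 900, 80, 210]

wape = sum(abs(a - f) for a, f in zip(actuals, forecasts)) / sum(actuals)
print(f"WAPE = {wape:.1%}")   # 11.2% in this example
```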

Mike: Are you measuring forecast bias? What are your findings?

Erin: We do not measure bias for each forecast step defined in FVA, but we do measure a 3- and 6-month rolling MPE at the item, mid-, and topline level. This is more of an internal metric that our analysts use to adjust item and mid-level forecasts. The idea of measuring bias in our forecast process is something I am now curious to analyze!
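
MPE (mean percentage error) keeps the sign of each error, so systematic over- or under-forecasting shows up as bias instead of cancelling out. A minimal sketch, with made-up numbers and one common sign convention (conventions vary):

```python
# Mean Percentage Error: signed errors, here (actual - forecast) / actual,
# so consistent over-forecasting produces a persistently negative MPE.
actuals   = [100, 120, 90, 110]
forecasts = [110, 130, 100, 118]

mpe = sum((a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)
print(f"MPE = {mpe:+.1%}")   # negative here, i.e. we over-forecast on average
```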

Mike: Are you comparing performance to a naïve model?

Erin: Yes – we use a 3-month rolling average of actual sales by customer requested ship date as our naïve forecast.

Mike: What FVA comparisons are you measuring?

Erin: We are comparing:

  • Naïve to Statistical Forecast
  • Statistical to Analytical (Analyst Override)
  • Analytical to Collaborative
  • Collaborative to Topline
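
A minimal sketch of this chain of comparisons, treating FVA as the change in WAPE from one process step to the next (all numbers invented for illustration):

```python
# FVA as successive differences in WAPE across process steps.
# Positive FVA = the step reduced error (added value); negative = it made things worse.
steps = ["Naive", "Statistical", "Analyst override", "Collaborative", "Topline tie"]
wape  = [0.42,    0.31,          0.28,               0.27,            0.33]

for prev, curr, w_prev, w_curr in zip(steps, steps[1:], wape, wape[1:]):
    print(f"{prev:>16} -> {curr:<16} FVA = {w_prev - w_curr:+.0%}")
```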

Mike: What FVA analyses / reports do you use?

Erin: Because we are so early in the process of using FVA, our reports are pretty rudimentary. We have set up some mapping in our BI tools so that the calculations are updated automatically. The output populates a very simple Excel file.

Monthly Report:

[Chart: monthly FVA report]

Month-Over-Month Tracker (WAPE Example):

[Chart: month-over-month WAPE tracker]

Mike: Can you point to any specific results / benefits / improvements from using FVA at your organization?

Erin: We are new to this process, but even now I am seeing the benefits of FVA in one very profound area: measuring our forecast process in this manner takes the emotion out of the conversation and allows Demand Management to start a dialogue with our partners about our forecast process that is compelling and data-driven.

For example, if you look at the charts above, you notice that tying to a topline forecast number significantly de-values our forecast. I think the people in Demand Management have suspected this for a long time, but have not been able to put a measurement around that assumption that would make a strong case when talking through the issue with the business unit. Now we can see how each action we complete has an impact on our forecast, and that is a powerful message. Demand Management can be a rather politicized activity at times, but FVA helps to temper that.

Mike: Any process changes as the result of FVA findings?

Erin: Not yet! The Demand Management department is building a compelling case for some process changes based on the FVA data. But we haven’t been measuring long enough to make changes as of yet.

Mike: Anything else you’d like to say about FVA? Including advice for other companies considering the application of FVA?

Erin: If your company is considering applying the FVA concept, I would stress yet again how critical it is to not make the conversation an “us versus them” proposition. FVA is NOT a tool for Demand Management to take to the business unit and say, “Look how much better we are at forecasting than you.” FVA is a measurement of process – and Demand Management should be a unified, collaborative process.

If your FVA data shows that collaborating with the business unit, for example, is a step in the process that de-values your forecast, don’t let the data become the flame to burn the business unit with. Use the data as the start of a conversation on how the groups can collaborate better or differently in order to achieve a more accurate, shared forecast.

Yes, FVA data can pinpoint steps in your process that can be eliminated, but it is not an excuse for Demand Management to create the demand forecast in a silo, either. When used properly, FVA is a useful tool that can refine and unify the organization’s forecast process.

Willing to share your experiences with FVA? Please contact the IBF at info@ibf.org to arrange an interview for the blog series.

Forecast Value Added (FVA) – Series 3 Interview
https://demand-planning.com/2015/03/02/forecast-value-added-fva-series-3-interview-2/
March 2, 2015

Interviewer: Michael Gilliland, SAS

This month’s interview is with Steve Morlidge of CatchBull.

Steve has 30 years of practical experience in designing and running performance management systems at Unilever, and is the author of Future Ready: How to Master Business Forecasting. His book is aimed at a general business audience, although he has also written extensively in journals on forecastability, FVA and other technical subjects. Steve is the creator of ForecastQT, a forecasting performance management application in the Cloud, which exploits the insights and innovations described in his articles.

I asked Steve about the application of FVA analysis with his clients.

Mike: What forecasting performance metric(s) are you using?

Steve: Our goal is to measure forecast performance in a way that provides a rigorous measure of the value that forecasting adds compared to the alternative – which is simple replenishment based on prior period actual demand. Consequently we measure error at the level, frequency and lag that reflects the replenishment process, which is usually very detailed.

Our key metric is Relative Absolute Error (RAE) based on the ratio of absolute error to the naïve error, since this tells us how much better the forecast is than simple replenishment and also allows for forecastability, in a way that conventional metrics like MAPE do not.
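
A minimal sketch of the RAE idea (not necessarily CatchBull’s exact implementation): divide the forecast’s total absolute error by the error a naive, prior-period forecast would have made over the same periods, so values below 1.0 beat simple replenishment.

```python
# Relative Absolute Error: forecast error divided by naive (last-period) error.
# RAE < 1 beats the naive benchmark; RAE > 1 destroys value. Numbers are made up.
actuals   = [120, 135, 110, 150, 140]
forecasts = [118, 130, 120, 138, 145]

forecast_err = sum(abs(a - f) for a, f in zip(actuals[1:], forecasts[1:]))
naive_err    = sum(abs(a - prev) for prev, a in zip(actuals, actuals[1:]))

print(f"RAE = {forecast_err / naive_err:.2f}")   # about 0.36 here
```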

Mike: In comparing performance to a naïve model, what are your findings?

Steve: Although there are other benefits (e.g. simplicity, no judgmental input, allowing for forecastability, etc.), we compare forecast error to the naïve forecast error because it mirrors the performance of simple replenishment, and so is the most meaningful comparison that can be made.

Typically we find that most businesses struggle to beat the performance of the naïve forecast by more than 10-15% and that 40-50% of low-level forecasts perform worse than the naive and so destroy value – a huge potential performance gain, once addressed.

Mike: Are you measuring forecast bias? What are your findings?

Steve: Our approach involves decomposing the RAE measure into its two constituent parts – bias (the result of systematic under or over-forecasting) and variation (unsystematic error – relative to the naïve forecast) as they impact the business in different ways and have different causes requiring different solutions.

Most often we find supply chain forecasts are over-forecast in aggregate. But measuring at the most granular level usually reveals significantly greater over-forecasting AND under-forecasting that is hidden by the process of aggregation – and would be missed by traditional metrics like MAPE.
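
A small illustration, with invented numbers, of how aggregation hides bias: two items with large opposite biases net out to an almost unbiased total.

```python
# Item-level bias vs. aggregate bias. Bias here is (forecast - actual) / actual,
# so positive means over-forecasting.
actual   = {"item_A": 100, "item_B": 100}
forecast = {"item_A": 150, "item_B":  55}   # +50% over and -45% under

for item in actual:
    print(f"{item}: bias = {(forecast[item] - actual[item]) / actual[item]:+.1%}")

agg = (sum(forecast.values()) - sum(actual.values())) / sum(actual.values())
print(f"aggregate: bias = {agg:+.1%}")   # only +2.5% -- the item-level problems disappear
```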

Mike: What FVA comparisons are you making?

Steve: ForecastQT does not produce forecasts; it provides the analytics layer that continuously measures their performance (however they are produced). This provides a rigorous, scientific measure of value added along with the tools to help identify the source and cause of issues.

We find that statistical forecasts rarely escape judgmental intervention and we provide users with the ability to differentiate between the contribution of judgment and statistical methods to value added.

Mike: What FVA analyses / reports do you use? Can you share any examples?

Steve: We are able to measure value added in terms of avoidable error and cost and then analyze it across any dimension – product, channel or geography – thereby focusing effort where there is the greatest scope for improvement.

[Chart: product groups ranked by Value Added Score]

The sample chart above shows 5 product groups ranked according to forecast quality – as measured by the Value Added Score (VAS). Although it does not have the best forecast (ranked third with a VAS of 16), B80085 adds the most value (21k). But it also has the most SKUs with critically high levels of bias or variation (as shown by the alarmed value of 75k), so this is where we should focus our attention. This chart also shows that no single product group beats the VAS for a simple statistical benchmark forecast (bVAS), indicating that there are problems with the forecast process as a whole in this business.

Mike: Have you developed any other new ways to conduct FVA analysis or report the results?  

Steve: A recent innovation is the use of rectangular tree maps (aka “tile charts”). This data visualization tool enables us to see the impact of avoidable forecast error at multiple levels in a hierarchy in a single glance.

[Chart: tree map of avoidable forecast error by product group and SKU]

In this example the large tiles show Product Groups and the smaller tiles are the SKUs within them. The size of each tile indicates the amount of avoidable error, and the colour indicates whether this level of error is statistically significant. The largest red tiles represent the biggest improvement opportunities.

Mike: Have you done any volatility analysis (e.g. “comet chart”), or otherwise attempted to assess “forecastability”?  

Steve: Since the naïve forecast error is a measure of the volatility of demand (and hence its forecastability), our methodology explicitly incorporates this perspective.

Mike: Has FVA been adopted as a key performance indicator at your clients?

Steve: Because it is easy to understand what it means and what needs to be done to improve it, we find that most users start their forecast improvement journey by tackling bias. Users with a more sophisticated understanding of the forecast process and senior executives who are only interested in the impact of forecast performance on the business are the biggest fans of the value added approach.

Mike: Can you point to any specific benefits from using FVA at your clients? 

Steve: Typically we uncover the potential to double the value added by forecasting. 

Mike: Are there any process changes as the result of the FVA findings? 

Steve: Where our value added methodology flags up forecasts that destroy value, we usually find these are products with relatively stable demand being forecast with overly complex methods and inappropriate judgmental overrides. Conversely exceptional performance is often associated with volatile demand and well-judged interventions.

Ironically, because metrics like MAPE do not allow for forecastability, this is often the exact opposite of the picture presented by traditional measures.

Mike: How do you advise setting forecast performance goals?

Steve: The goal should be set to improve value added continuously until it becomes uneconomic to go further. Fortunately, where value is being destroyed it is usually possible to improve performance by doing less.

Mike: Anything else you’d like to say about FVA? Including advice for other companies considering the application of FVA?

Steve: Forecasting is, arguably, one of the largest hidden sources of waste in modern businesses. REL (a Hackett Group company) estimates that over $1 trillion of unnecessary inventory is tied up in the balance sheets of the top US companies (worth 7% of GDP) and that poor forecasting is the biggest single contributor to the problem.

Unlike conventional approaches to performance measurement, adopting a value added perspective helps to expose the problem and engage management in efforts to make improvements. What we find is that there is no software silver bullet. Making headway depends on identifying and relentlessly chasing down the source of problems and adopting a measured, rational approach to process design and method selection.

Willing to share your experiences with FVA?   Please contact the IBF at info@ibf.org to arrange an interview for the blog series.

Forecast Value Added (FVA) – Series 2 Interview
https://demand-planning.com/2015/01/21/forecast-value-added-fva-series-2-interview/
January 21, 2015

Interviewer: Michael Gilliland, SAS

This month’s interview is with Shaun Snapp, founder and editor of SCM Focus, where he provides independent supply chain software analysis, education, and consulting.

Shaun’s experience spans several large consulting companies and i2 Technologies before starting SCM Focus. He has a strong interest in comparative software design, maintains several blogs, and has authored 19 books, including Supply Chain Forecasting Software and, most recently, Promotions Forecasting. He holds an MS in Business Logistics from Penn State University.

I asked Shaun about the application of FVA analysis with his clients.

Mike: What forecasting performance metric are you using (e.g., MAPE, weighted MAPE, forecast accuracy), and at what level do you measure (e.g. by Item / Distribution Center / Week with a 3-week lag)?

Shaun: I really only use MAPE or weighted MAPE. In most cases I am comparing different effects on forecast accuracy, so a relative measure is the most appropriate. As I have to export forecasts and actuals from systems to calculate global figures, weighted MAPE, while certainly the most accurate, is a bit more work to calculate – and of course there are different ways of weighting MAPE, which brings up a separate discussion.

I try to get companies to measure at the Item/DC level. I also make the point that the relevant duration to measure over is the replenishment lead time. I don’t use any lagging.

Mike: Are you measuring forecast bias?  What are your findings?

Shaun: Yes, very frequently. My finding is the same as the literature’s: sales inputs have a consistent bias – which at my clients is not addressed through anything but planner adjustment.

Mike: Are you comparing performance to a naïve model?

Shaun: No. I tend to compare the forecast of my clients against a best fit. I do have an approximation of the percentage of the database which does not need very much forecasting energy, as I know what percentage of the database has a level forecast applied — these are both highly variable items, and very stable items. 

My work pretty much stops at getting the system to generate a decent forecast. I don’t have any involvement in what the planners do after that. Most companies I work with have either walked away from the statistical forecast or use only a very small portion of the statistical forecasts that are generated. The planners are free to make any adjustment or change the model applied.

Mike: What are the steps in the forecasting processes you see (e.g., stat forecast, analyst override, consensus meeting override, executive approval)? What FVA comparisons are you measuring?

Shaun: I do all of these comparisons for clients. I am trying to understand what the FVA is at each step so poor quality inputs can be de-emphasized and quality inputs can be emphasized.

The bigger problem is impressing the importance of FVA on clients. I can’t recall finding any work of this type done at clients before I arrive. I think this is because it does take work, and demand planners are busy doing other things. Because so many manual adjustments have to be made, and because so many meetings are necessary with the groups that provide forecasting input, most demand planning departments seem overworked relative to their staffing level.

Most of the forecasting consulting that comes before me is of a system-focused nature: adding characteristics to a view, creating new data cubes, that sort of thing. There seems to be a much smaller market for forecast input testing. It is something I bring to clients, but normally not something they ask for. Many decisions are still very much made based upon opinions and “feel.” In fact, I find it very rare for the attribute/characteristic used to create a disaggregated forecast to have been proven to improve forecast accuracy before it is implemented in the system.

Mike: Anything else you’d like to say about FVA? Including advice for other companies considering the application of FVA?

Shaun: I have never seen any forecasting group that based its design upon FVA.

This is not to say that lip service may not be paid to FVA. If you bring up the topic, most people will tend to agree it makes sense. However, really using FVA means being very scientific in how one measures different forecast inputs, and while businesses use math, businesses are generally not particularly aligned with scientific approaches.

There are not enough people, either in companies or working as consultants, who understand how to perform and document comparative studies. Documentation is a very important part of the process, and again this is a serious limitation for every company I have ever come into contact with, from the biggest to the smallest – and industry affiliation does not seem to matter very much in this regard.

On a different topic, as the literature points out and as I can certainly attest, there are some groups that have a negative interest in FVA. That is, some groups want to provide input to the forecast but don’t particularly care if they are right, and don’t particularly want to be measured. Some groups just want to ensure the in-stock position of their items. These groups are very powerful and exert a great deal of pressure on the supply chain forecasting group to accept their forecasting input.

Further, this gets into the topic that there is not simply “one forecast.” There are really multiple forecasts, and while there is discussion of unifying the forecasts, this is not in reality an easy thing to do, because different groups have different financial and other incentives and see things through different lenses.

I would say that poor-quality forecasting, or forecast inputs that are entirely unregulated as a matter of policy (though regulated to a degree by individual planners), is the norm.

Willing to share your experiences with FVA?   Please contact the IBF at info@ibf.org to arrange an interview for the blog series.

Forecast Value Added (FVA) – Series 1 Interview
https://demand-planning.com/2014/12/10/forecast-value-added-fva-series-1-december-2014/
December 10, 2014

Interviewer: Michael Gilliland, SAS

This month our interview is with Jonathon Karelse, a recognized Demand Planning and S&OP thought leader, and a frequent speaker, moderator, and panelist at IBF and supply chain events. Jonathon is a graduate of the MIT Sloan School of Management’s Executive Program in Value Chain and Operations Management. He was the youngest-ever executive at Yokohama Tire, where he implemented a successful demand-driven global planning process and served as Business Unit Director for the company’s Canadian consumer products division. He has been published in various trade and academic journals, including the Journal of Business Forecasting (JBF).

Through NorthFind Partners – the company he co-founded – Jonathon developed operations strategies and enterprise demand planning for some of the world’s most successful manufacturers and distributors.

In the interview with Jonathon, we’ll be discussing the application of FVA analysis with his clients.

Mike: Jonathon, what forecasting performance metric(s) are you currently using?

Jonathon: I currently use wMAPE (by 12 trailing months of Gross Profit $), wBias (by 6 trailing months of COGS$), and for FVA we use RMSE as the basis for comparison. All our key metrics are weighted by profit because we want to remain focused on parts and actions that will impact EBIT, not just academically trying to improve accuracy everywhere. With limited time and resources, companies should focus on what moves the needle.
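
A minimal sketch of a profit-weighted MAPE (invented numbers; the weighting basis – trailing gross profit, COGS, revenue – is a business choice, and this is not necessarily NorthFind’s exact calculation):

```python
# Profit-weighted MAPE: each item's percentage error is weighted by its trailing
# gross profit, so high-impact items dominate the metric.
items = [
    # (actual units, forecast units, trailing gross profit $)
    (1000,  900, 50_000),
    ( 200,  260,  2_000),
    (  50,   20,    500),
]

weighted_err = sum(gp * abs(a - f) / a for a, f, gp in items)
wmape = weighted_err / sum(gp for _, _, gp in items)
print(f"profit-weighted MAPE = {wmape:.1%}")
```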

Mike : Glad to hear you are measuring Bias because it is often overlooked. What are your findings?

Jonathon: Most companies, as expected, tend to have a positive bias over time, which analysts find easier to systematically correct than variability in error rates. But some companies with perennially bad supply actually compound the problem with negative bias: Sales has been programmed not to expect full deliveries, and doesn’t want sales targets tied to a number they think can’t be built.

Mike : Are you comparing performance to a naïve model?

Jonathon: Yes. We often use a seasonal random walk, though increasingly Simple Exponential Smoothing or Moving Average. A random walk is almost too naïve.

Mike : What are the steps in the typical forecasting process with your clients?

Jonathon: The general process steps are Statistical Forecast, Analyst Adjustment, Customer Forecasts, Sales and Marketing Inputs, and Final Demand Planner forecast.

Mike : What FVA comparisons are you measuring?

Jonathon: We make all the pairwise comparisons between all the steps, and also to the naïve model.

Mike : Do your clients adopt FVA as a key performance indicator?

Jonathon: Absolutely. Every client we’ve been engaged with finds this KPI really resonates, and is often the key area that executive management looks at. It is also the key metric used for root cause and corrective action.

Mike : Have there been any process changes as the result of FVA findings?

Jonathon: We’ve seen many changes implemented as the result of FVA findings. A couple of examples are changes to how clients handle collaborative discussions with customers, and to the way forecasters work with sales for inputs.

For example, for a major transportation client the tribal knowledge was that customer forecasts were an indispensable element of the demand planning process. Over 180 customer forecasts dropped into MRP directly; some of them on EDI signals that turned out to be unmonitored at both ends. FVA revealed that these inputs, though intuitively beneficial, were systemically impairing our ability to forecast customer requirements. This allowed us to go back to key customers and engage in collaborative discussions focused on process and data improvement.

At another client, a major global manufacturer of electronics components, nearly 800 Sales Engineer inputs were systematically gathered every month. Who better than the Sales Engineers to understand the requirements of their customers? Well…FVA showed us that the Pareto rule was alive and well here, and only a handful of the SEs were giving us input that demonstrably improved Forecast Accuracy. By paring out hundreds of non-value added inputs, we saved hundreds of hours of time, and improved the overall Forecast Accuracy.

Mike : Have you done any volatility analysis (e.g. “comet chart”), or otherwise attempted to assess “forecastability”?

Jonathon: Yes, this is another standard for us. We often start with a histogram of a company’s Coefficient of Variation (CV) by SKU before getting into the deep dive. It gives us a sense of whether more heavy lifting up front will be required from Sales judgmental inputs (lots of high CV parts from project based business, for instance) or statistical models (most applicable to lower, more stable CV items).

We use comet charts to validate the break point on part segmentation (or forecastability) matrices, subject to required service and inventory levels. We also use comet charts for diagnostic insights into the health of the planning process by looking at parts that fall outside the control limits of expected Forecast Accuracy vs CV.

I should note this CV is calculated on the basis of deseasonalized data, since leaving it seasonal would create false positives in terms of variability; or at least unfairly suggest that fewer parts will respond well to statistical analysis. So by the time we begin looking at FVA, we have a good idea from the histogram whether initially we are going to see a good bump versus a naïve baseline. In businesses with tons of volatility, the baseline is often pretty tough to outperform.
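
A minimal sketch of the deseasonalization point, using invented monthly demand for one item: the raw coefficient of variation looks high, but dividing by simple seasonal indices removes most of the apparent volatility.

```python
import statistics

# Two years of made-up monthly demand for a strongly seasonal item.
demand = [ 80,  90, 110, 150, 200, 260, 270, 240, 170, 120,  95,  85,
           85,  95, 115, 155, 210, 255, 280, 235, 175, 125, 100,  90]

def cv(series):
    return statistics.pstdev(series) / statistics.mean(series)

# Crude multiplicative seasonal indices: same-month average across years
# divided by the overall mean.
overall = statistics.mean(demand)
indices = [(demand[m] + demand[m + 12]) / 2 / overall for m in range(12)]
deseasonalized = [d / indices[i % 12] for i, d in enumerate(demand)]

print(f"raw CV            = {cv(demand):.2f}")
print(f"deseasonalized CV = {cv(deseasonalized):.2f}")
```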

Mike : Do your clients set forecasting performance goals (e.g. wMAPE < 20%)?

Jonathon: We strongly discourage those goals. We focus on continuous error reduction, continued improvement in FVA, and on keeping Bias volatility as low as possible.

Mike : What FVA analyses / reports do you use?

Jonathon: Here is an example from a client in transportation. They utilize a modified version of the “stairstep” report, based on RMSE (rather than the more commonly used MAPE or wMAPE). They love metrics and waterfall their FVA reports by monthly lag out to Lag 9. (Creating lagged versions is probably not something most companies need to do!)

[Chart: FVA stairstep report]

Mike : Have you developed other new ways to conduct FVA related analyses, or report the results?

Jonathon: I believe we are the only group using the comet chart as a diagnostic tool in this way, and I also believe we are the only group tying the comet chart to the inventory/service optimization curve.

Mike : Can you point to any specific results / benefits / improvements from using FVA at your organization?

Jonathon: Much faster root cause and corrective action, and as a result, much faster improvements.

Mike : Anything else you’d like to say about FVA? Including advice for other companies considering the application of FVA?

Jonathon: FVA is easy! If you aren’t using it, you are missing a critical indicator of your organization’s forecasting performance.

 

Willing to share your experiences with FVA?   Please contact the IBF at info@ibf.org to arrange an interview for the blog series.

Simple Tools for Evaluating the Forecasting Process
https://demand-planning.com/2013/09/10/simple-tools-for-evaluating-the-forecasting-process/
September 10, 2013


In the movie Sling Blade, there is a great scene where they bring an apparently broken lawnmower to Karl (Billy Bob Thornton): “Karl, see if you can figure out what’s wrong with this. It won’t crank up and everything seems to be put together right.” After a brief inspection, Karl responds, “It ain’t got no gas in it.”

Sometimes it’s the simplest things that are most effective. And this certainly holds true in business forecasting.

There are several easy to understand, and easy to implement tools for evaluating the forecasting process. These tools utilize data you should already have.

One of these useful tools is the “comet chart,” which illustrates the relationship between demand volatility and forecast accuracy. As you would expect, when products have smooth and stable demand, we tend to forecast them more accurately than products with wild, erratic demand.

By creating a scatterplot of all products, showing their volatility and their achieved forecast accuracy (or error), you get a quick sense of the magnitude of your forecasting challenge. While volatility is not a perfect indicator of forecastability (there are volatile yet well-behaved patterns that can be forecast accurately), it is of practical value in assessing your organization’s performance.
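
A minimal sketch of a comet chart in Python with matplotlib; the per-item volatility and error values below are randomly generated stand-ins for what you would pull from your own demand history and error records.

```python
import random
import matplotlib.pyplot as plt

random.seed(1)

# Invented per-item data: coefficient of variation (volatility) and achieved MAPE.
cv   = [random.uniform(0.1, 2.0) for _ in range(300)]
mape = [max(0.02, min(1.5, 0.15 + 0.4 * c + random.gauss(0, 0.1))) for c in cv]

plt.scatter(cv, mape, alpha=0.4, s=12)
plt.xlabel("Demand volatility (coefficient of variation)")
plt.ylabel("Forecast error (MAPE)")
plt.title("Comet chart: error tends to grow with volatility")
plt.show()
```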

In a “forecastability matrix,” products are segmented into categories that are more (or less) forecastable, and more (or less) profitable. Since forecasting resources are always limited (no organization can afford an army of forecast analysts), forecasters can deliver more value by first focusing on those items that are more profitable and more forecastable. Those items that are difficult to forecast and have little contribution to profits have the lowest priority, and should receive little or no attention from the forecasting staff.

Forecast Value Added (FVA) analysis is another simple tool that has gained wide industry adoption. FVA looks at each step in the forecasting process, to make sure forecasting activities are “adding value” by making the forecast more accurate and less biased.

FVA identifies the waste and worst practices in a forecasting process. Many organizations have found things they were doing that just made the forecast worse! Such non- (or negative-) value adding steps can be eliminated, resulting in more effective use of company resources and potentially more accurate forecasts.

Finally, a new approach for determining the “avoidability” of forecast error seems to be showing promise. This concept was proposed by Steve Morlidge, a forecasting thought leader from the UK. The approach seeks to determine the smallest amount of forecast error you can reasonably expect. This is another easy-to-apply tool, one that can save you a lot of time by telling you when to stop trying to improve a forecast that has reached its limit of accuracy.

Michael Gilliland
Product Marketing Manager – Forecasting
SAS Institute

Hear Michael speak on simple, yet effective tools for evaluating the forecasting process at IBF’s Business Planning & Forecasting: Best Practices Conference in Orlando, Florida, November 4-6, 2013.

SKU Rationalization: Improving Forecast Accuracy and Profitability
https://demand-planning.com/2012/08/16/sku-rationalization-improving-forecast-accuracy-and-profitability/
August 16, 2012

IBF’s LinkedIn discussion group currently features a lively conversation about SKU rationalization, a favorite topic of mine. Anthony Davidson initiated the conversation by posting the question, “…what key factors should be considered in determining which SKUs should be eliminated from the mix?”

It is generally agreed that unchecked product proliferation results in very negative consequences, due largely to the cost and complexity of managing an ever-increasing product portfolio, and to the self-cannibalization of your own demand.

As a first cut, several of the respondents suggested ranking products by gross margin dollars or percent. However, I’m skeptical of financial metrics that purport to allocate costs to (and compute the marginal profitability of) individual products, so I prefer to stay away from those kinds of measures.

Instead, a good first step is to create a Pareto chart ranking all products by unit volume or revenue.

As we can see in this example, the top 1/3 of items generate about 90% of revenue, the middle third generates about 10% of revenue, and the bottom third generates less than 1%. A Pareto analysis of your own products should yield a similar result – the 80/20 rule will likely apply. Two real-life examples are:

  • A food company found that 25% of their items yielded just 0.5% of total revenue.
  • An apparel company found that half of their products generated just 1% of total revenue.

Be aware that negative volume or negative revenue can also be observed (when product returns exceed product sales).
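
A minimal sketch of the ranking step, with invented SKU revenue figures: sort by revenue and report each item’s cumulative share, which is the basis for the Pareto chart and for choosing a pruning cutoff.

```python
# Rank SKUs by revenue and compute cumulative revenue share.
revenue = {"SKU-1": 500_000, "SKU-2": 250_000, "SKU-3": 120_000,
           "SKU-4":  40_000, "SKU-5":   8_000, "SKU-6":    -500}  # returns exceeded sales

total = sum(revenue.values())
cumulative = 0
for sku, rev in sorted(revenue.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += rev
    print(f"{sku}: {rev:>9,}  cumulative share = {cumulative / total:.1%}")
```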

Based on the initial volume (or revenue) ranking, you may decide to prune everything below a certain cutoff. You may also take into consideration the dozens of other contributing factors, such as customer service levels for the product, remaining inventory, complexity of sourcing or production, or availability of substitute items. Sumit Sinha made a good point in the group about SKU rationalization being an ongoing process – at least annual for sure, and quarterly or even continuous review would be even better. Rob Miller posted a comment calling for a more sophisticated determination of whether a product serves a strategic role before pruning it.

I’m particularly fond of a comment in the group made by Louis Upton: “…everybody is in favor of rationalization until they see the products they want to kill.” Pruning items is likely to save on costs, but there is always the fear that we would also lose a small amount of revenue by giving up on extremely low volume products. However, it is not unreasonable to expect that revenue will actually increase, because now you can focus your organization’s efforts on the more important items and increase their customer service level. Filling a higher percentage of orders for the important items can quickly make up for the revenue lost from the pruned items.

For more discussion of this topic, including when not to prune an item, see “SKU Rationalization: Pruning Your Way to Better Performance” in the Fall 2011 issue of the IBF’s Journal of Business Forecasting.

 

 

The Science of Forecasting
https://demand-planning.com/2012/05/31/the-science-of-forecasting/
May 31, 2012

We’re familiar with application of the scientific method in certain industries, such as Pharmaceuticals. When a new drug is introduced, we expect that its safety and efficacy has been demonstrated through appropriately controlled experiments.

For example, to test a new cold remedy we would find 100 people with colds, randomly give half the new drug and half a placebo, and see whether there is any difference in outcomes.  If those given the new drug get more immediate relief from the discomfort and recover faster, we may be able to conclude that the drug actually works – that it is an effective treatment for colds.

In conducting such an experiment, the scientist begins with a null hypothesis:

H0: The drug has no effect.

Through the controlled experiment, we determine whether there is sufficient evidence to reject the null hypothesis and infer that the drug does have an effect (which can be either positive or negative).

The Null Hypothesis for Forecasting

In forecasting we are fond of elaborate systems and processes, with more touch points and human engagement. We tend to believe that the more sophisticated our models and the more engaging our processes, the better our forecasts will be. But do we ever pause to test this belief?

If we approach business forecasting like a scientist, we would ask whether any of our forecasting efforts are having a beneficial effect. Do our statistical models result in a better forecast? Do our analyst overrides make it better still? Are other participants (like sales, marketing, or finance) providing further improvement?

We would start with appropriate null hypotheses such as:

H0: The statistical model has no effect on forecast accuracy.

H0: The analyst override of the statistical forecast has no effect on forecast accuracy.

H0: Input from the sales force has no effect on forecast accuracy.

But “no effect” compared to what? Is there a placebo for forecasting?

Fortunately for those of us who want to let science get in the way of our forecasting, there is a placebo… something referred to as the naïve forecast. A naïve forecast is something simple to compute – essentially a free alternative to implementing a forecasting system and process. The two standard examples are:

Random Walk – using the last known value as the forecast. (If we sold 12 units in May, our forecast for June would be 12.)

Seasonal Random Walk – using the known value from a year ago as the forecast. (If we sold 15 in June of 2011, then we would forecast 15 for June of 2012.)
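
A minimal sketch of the two placebo forecasts, using the numbers from the examples above (the rest of the invented history just fills out twelve months ending in May):

```python
# The two standard naive "placebo" forecasts. History is twelve invented monthly
# values, June of last year through May of this year.
history = [15, 14, 18, 20, 17, 16, 13, 12, 14, 19, 22, 12]

random_walk_forecast = history[-1]    # last known value (May) -> forecast 12 for June
seasonal_rw_forecast = history[-12]   # same month last year (June) -> forecast 15 for June

print("Random walk forecast for June:         ", random_walk_forecast)
print("Seasonal random walk forecast for June:", seasonal_rw_forecast)
```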

If we did nothing – had no forecasting software or forecasting process – and just used the naïve forecast, we would achieve some level of accuracy, say X%. So the questions become, does our statistical forecasting software do better than that? Do overrides to the statistical forecast make it even better? Does input from the sales force provide further improvement?

Forecast Value Added (FVA) analysis is the name for this kind of approach to evaluating forecasting performance.  FVA is defined as:

The change in a forecasting performance metric (such as MAPE, or bias, or whatever metric you are using) that can be attributed to a particular step or participant in the forecasting process.

FVA is essentially an exercise in hypothesis testing – are the steps in your process having an effect (either positive or negative)? The objective of FVA analysis is to identify those steps or participants that have no effect on forecasting performance – or may even be making it worse! By eliminating the non-value adding activities (or redirecting those resources to more productive activities outside of forecasting), you reduce the resources committed to forecasting and potentially achieve better forecasts.

So should we implement a consensus process, or CPFR, or allow executive management final say over the forecast? I don’t know – they all sound like good enough ideas! But until we apply FVA analysis and put them to the test, it isn’t safe to make that assumption.

Using Shipment History: A Deadly Sin?
https://demand-planning.com/2012/04/20/using-shipment-history-a-deadly-sin/
April 20, 2012

In his article “Seven Deadly Sins of Sales Forecasting” in the March 28 edition of APICS extra, Fred Tolbert compiled a useful list of bad practices that can worsen our forecasting, inventory management, and customer service results. I particularly liked Deadly Sin #5: Senior Management Meddling, and wrote about it on The Business Forecasting Deal blog. However, I did have some issue with Deadly Sin #1, Using Shipment History, which we will discuss here.

The historical “demand” we feed into our statistical forecasting models plays a role in the appropriateness of the forecasts we generate. This history should represent what our customers wanted, and when they wanted it, so any patterns of demand behavior can be projected into the future.

We often misrepresent demand history by attributing demand to the wrong time bucket, or in the wrong quantity. Tolbert shows how easy this can be if we use shipment history to represent demand.

Suppose you receive an order for 1000 units for delivery in July, but are unable to ship until September. If we say that Demand=0 in July (because nothing was shipped) and Demand=1000 in September (when the shipment was made), this doesn’t seem right. The shipments don’t seem to represent the “true demand” of the customer.
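
A minimal sketch of the bucketing choice, using hypothetical order lines: the same history yields different “demand” series depending on whether quantities are posted to the requested month or the shipped month.

```python
from collections import defaultdict

# Hypothetical order lines: (quantity, requested month, shipped month).
orders = [
    (1000, "Jul", "Sep"),   # the example above: wanted in July, shipped in September
    ( 400, "Jul", "Jul"),
    ( 250, "Aug", "Aug"),
]

by_requested = defaultdict(int)
by_shipped   = defaultdict(int)
for qty, requested, shipped in orders:
    by_requested[requested] += qty
    by_shipped[shipped]     += qty

print("History by requested month:", dict(by_requested))
print("History by shipped month:  ", dict(by_shipped))
```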

Tolbert states, “The appropriate response is to post the 1,000 units as July history for sales forecasting purposes.” But this assumes that Order = Demand, and I’m not convinced this is correct. There are many situations where an order does not represent what the customer truly demands, for example:

  • An unfillable order may be rejected by the company or cancelled by the customer (so no “demand” appears in the history).
  • An unfilled order may be rolled ahead into future time buckets so “demand” is overstated, re-appearing in each time bucket until the order is filled or cancelled.
  • If customers anticipate a shortage, they may inflate their orders in hopes of capturing a larger share of what’s available so “demand” appears higher than it really is.
  • If customers anticipate a shortage they may withhold orders, change orders to different (substitute) products, or redirect their orders to alternative suppliers so “demand” appears less than it really is.

“True demand” is a nebulous concept that can be very difficult to capture with the data readily available to us. Unless we service our customers perfectly, in which case Orders = Shipments = Demand, then neither orders nor shipments are a perfect indicator.

Perhaps this Deadly Sin should be restated to read “Assuming you can know true demand” – because you probably can’t. However, as a practical matter for forecasting purposes, it should be good enough to feed our systems with “demand history” that is reasonably close to what true demand really is. When you consider that the typical SKU forecasting error is 30%, 40%, 50% or even more, does it really matter that your history is off by a few percentage points? Probably not.

Forecasting Confessions of IBF Conference Attendees
https://demand-planning.com/2012/03/05/forecasting-confessions-of-ibf-conference-attendees/
March 5, 2012


At last week’s IBF Supply Chain Forecasting & Planning Conference in Scottsdale, AZ, I had the somber responsibility of facilitating three round table sessions on “Worst Practices in Business Forecasting.” Thirty-eight of the biggest sinners in the forecasting/demand planning profession confessed to a variety of irresponsible and embarrassing behaviors that we can all learn from:

  • Believing Marketing / Believing Sales / Believing the Customer

Faith-based forecasting is not the way to go. Participants in the forecasting process have their own little biases and personal agendas, so when we solicit their input we must stay on guard. Many of these agendas favor an increase in inventory – which is wonderful if you aren’t responsible for overstocks or obsolescence.

  • Failing to Account for Cannibalization by New Products

New products are great … for creating a lot of forecasting and supply chain headaches. New product forecasts are very often terribly wrong, which is bad enough. But we usually fail to account for the impact of new product sales on existing products as well. Will I really continue to buy the same amount of fresh mint dental floss once I’ve switched to the new cinnamon flavor?

  • Over Touching the Forecast

If forecasts could speak Latin, they would probably scream out “noli me tangere!” (Don’t touch me!) There is plenty of evidence that we touch our forecasts too much, and with little beneficial impact. Sure, an elaborate forecasting process with lots of participants and collaborative steps sounds like a good idea, but too often these human touch points just add opportunities to contaminate what should be an objective and dispassionate process.

  • Confusing the Financial Plan with The Demand Forecast

There are lots of numbers floating around an organization. We usually start the year with an operating plan and financial forecast projecting monthly revenues and costs. But even the best laid plans will need to evolve with the realities of the marketplace. This isn’t such a bad thing when we recognize that the forecast is diverging from the original plan. Recognizing the gap allows us to address it by shaping demand to bring us back on plan, or else by changing the plan to match the new demand forecast. The only bad practice is continuing to believe (and execute) a plan that is based on wishes, not reality.

  • No Appreciation of the Range of Uncertainty

We’re used to seeing our forecasts as point estimates – a specific number of units (or dollars) for a specific product and location, in a specific time bucket. But wouldn’t it be helpful to know the range of uncertainty in that number? Knowing that the forecast is 100 +/- 10 units can lead to drastically different actions than a forecast of 100 +/- 100 units. Before making major downstream supply decisions, be sure you understand the likely range of outcomes and not just the point forecast.
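
One simple way to put a rough range around a point forecast is to use the spread of recent forecast errors; a minimal sketch with invented numbers, assuming roughly normal, unbiased errors:

```python
import statistics

# Recent one-step forecast errors (actual - forecast) for an item; invented data.
errors = [8, -12, 5, -3, 15, -7, 2, -10, 6, -4]

point_forecast = 100
sigma = statistics.stdev(errors)

# Approximate 95% range, assuming roughly normal and unbiased errors.
low, high = point_forecast - 1.96 * sigma, point_forecast + 1.96 * sigma
print(f"Forecast: {point_forecast}  (approx. 95% range: {low:.0f} to {high:.0f})")
```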

  • Failing to Address Data Issues

Analysis and modeling need data that is clean, complete, and relevant. While we may do a good job tracking the basics like orders, shipments, and sales revenue, we must take care not to ignore elements that can dramatically impact our forecast. Rigorous tracking of pricing, promotional activity, competitor activities, and other factors influencing demand allows us to incorporate those factors into our statistical forecasting models. The more work that can be done automatically by the models, the less manual work we need to do when reviewing and overriding those models.

These are just six of many sins confessed during the round tables. Have you confessed yours?
