The Magic Of FVA%

Published May 26, 2020

The latest episode of IBF’s On Demand podcast is available to watch now.

Forecast Value Add (FVA) is a wonderful way to identify which inputs and activities increase your forecast accuracy and which decrease it. It can be a real game changer for your forecast accuracy and can identify bias from other functions.

Special guest Sara Park, Vice President, Exec S&OP, Forecasting & Supply Chain Planning at Coca-Cola, reveals how you can get started with FVA quickly and easily, and she and Eric Wilson, CPF, discuss why you should move away from MPE and MAPE to FVA.


Forecast Accuracy Benchmarking Is Dead (Long Live Forecastability)

Published April 15, 2019

How valuable is forecast accuracy benchmarking? It’s always interesting to see how your competitors are faring, but does knowing other companies’ forecast accuracy help improve your own, and does it help to set realistic forecast accuracy targets for your demand planners?

I think there are some lessons we can learn from Montessori here. My 10-year-old daughter has attended a Montessori school for over five years with amazing results. As a parent, I have always loved their approach and we have seen the philosophy in action. My daughter has blossomed over the last few years.

“A student’s progress should be measured in terms of the questions they are asking, not merely by the answers that they are reciting” (Robert John Meehan).

If you are not familiar with the Montessori method, its foundation is self-directed learning. Students are free to choose the activities they work on from a range of carefully prepared, developmentally appropriate materials. One of the things that other parents find surprising about Montessori is that they do not give “tests.” The idea behind the test-less approach is not about creating a careless environment but one where each child is recognized as different, and where self-motivation and mastery at their level is the focus.

The same approach can be applied to demand planning. This approach is destroyed by using forecast accuracy benchmarks.

The (Severe) Limitations Of Forecast Accuracy Benchmarking

With this in mind, I was sitting on the couch listening to my daughter talk about the wonderful day she had and how she loves school when I opened an email. The sender was asking me for forecast accuracy benchmarks. I get this question a lot and my answer is always the same:

The best benchmark is no benchmark. Stop trying to benchmark forecast accuracy against an industry average!

Far too often I see annual goals (and even bonuses) tied to an arbitrary number of what someone else at another company is achieving. We treat forecast error and demand uncertainty as a monthly pass-or-fail test, measuring ourselves against what everyone else is doing.

The obvious truth is that, even within the same industry, the items or item combinations are different, the time horizon you are forecasting may vary, market share can impact volume and variation, and a host of other factors, like systems, operational limitations, and data, lead to different levels of forecast error.

Using forecast accuracy benchmarks to set your own targets is like comparing an apple to a grapefruit.

The dirty little secret is that items are different, companies are different, and demand uncertainty should be expected to be different. Using forecast accuracy benchmarks to set your own targets is like comparing an apple to a grapefruit.

Use Forecastability Instead To Set Your Accuracy Targets

Many times, the companies at the top of the benchmark list are there not because they’re the best at forecasting but because they have the easiest demand to forecast. They could be forecasting a lag zero with only 12 items. We need to look at what the individual planner is trying to forecast and the forecastability of that particular item based on its own merits. (Learn how to gauge forecastability here.)

The typical approach is to look at averages, so 30% WMAPE is good, right? If this is the attitude in your planning team, your demand planners will realize they can do a lot less work with equal results and never reach their full potential. The forecastability of your items could be above average, but this benchmarking mindset won’t allow you to improve.

Using forecast accuracy benchmarks sets up demand planners for failure, or stops them from trying to improve

What if you have much more difficult items to forecast, and forecastability is lower than average? You give the demand planner unrealistic goals and set them up for failure.

This is about understanding where each product line or item or customer is different and having self-motivation and mastery at the center of your approach. A better way is to benchmark the underlying forecastability of the demand patterns and measure the improvements to their own baselines. To do this you can focus on forecast value added (FVA%) measured against a naïve model, or the demand variation index (DVI) of the same data.
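As a minimal sketch of the first of those options (all numbers below are hypothetical), FVA% can be read as the error of a naïve model minus the error of the full planning process:

```python
# Minimal FVA sketch: the error of a naive benchmark minus the error of
# the actual planning process. All numbers are hypothetical.

def mape(forecasts, actuals):
    """Mean Absolute Percent Error, as a percentage."""
    return 100 * sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals)

actuals   = [100, 110, 95, 105, 120, 98]
naive     = [102, 100, 110, 95, 105, 120]   # random walk: prior period's actual
consensus = [96, 104, 99, 101, 113, 101]    # output of the full planning process

fva = mape(naive, actuals) - mape(consensus, actuals)
print(f"Naive MAPE: {mape(naive, actuals):.1f}%")
print(f"Consensus MAPE: {mape(consensus, actuals):.1f}%")
print(f"FVA: {fva:+.1f} points")   # positive means the process adds value
```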

FVA Allows You To Improve Forecast Accuracy

The question shouldn’t be if you pass or fail but if the steps we take improve the results and by how much. Measuring FVA begets managing forecast processes because FVA adds visibility into the inputs and provides a better understanding of the sources that contributed to the forecast, so one can manage their impact on the forecast properly.

Companies can use this analysis to help determine which forecasting models, inputs, or activities are either adding value or are actually making it worse. You can also use FVA to set targets and understand what accuracy would be if one did nothing, or what it could be with a better process.

Most of all, FVA encourages mastery and grades you on what you can do or have done instead of what some unknown company with unknown forecastability has done. Benchmarks done incorrectly against industry averages can only tell us what accuracy the so-called best-in-class companies achieve, and do nothing to test your individuality and what you are capable of doing.

Maria Montessori’s approach is powerful and universal. She would have us not constantly testing ourselves against an arbitrary average, but have us focus on our own individual forecasting processes, and reach mastery in the process.

Demand Planners Guide To Speaking The Language Of Sales

Published November 12, 2018

There are differences between the operational and commercial mindset. The problem many of us demand planners have is not knowing how to speak the language of salespeople, or not providing the right measurements to let them better understand how information is affecting the forecasting process. In this article I distinguish five typical characteristics of salespeople and how these all lend themselves to forecasting bias. I will show you how to ‘speak Sales’ and offer a solution, Forecast Value Add, that will help improve your forecasting process and accuracy.

Problem 1: Sales Are Overly Optimistic

Sales are very optimistic. Fun fact: there are more books on motivation than on sales techniques themselves. They seem perpetually motivated and excited about potential opportunities that they just know will happen. These are great traits and much needed in business but, in demand planning, it manifests itself as over forecasting. One cannot expect accurate forecasts using a gut feeling about the next big sale.

Sales will often consistently over forecast, or talk of the big potential fish but never catch anything

The consequence is a ‘wishcast’ instead of an unbiased forecast. Sales will often consistently over forecast, or talk of the big potential fish but never catch anything.

Prevention: never request a forecast immediately following a product convention or Zig Ziglar motivational seminar. You need to get them away from emotion and provide them a baseline forecast to ground their outlook.

[Image: Salespeople are inherently optimistic and tend to protect their interests at the expense of forecast accuracy.]

Problem 2: Salespeoples’ Goals Become The Forecast

Whether it’s from reading too much Zig Ziglar or a natural attribute, most salespeople are very goal orientated. This may be hard for salespeople to grasp (and sometimes even executives), but goals do not always equal a forecast. Many times, there is a difference between a target and an unbiased demand plan. The problem is either they don’t believe the difference, or they do so much visualizing and hoping for what they want to happen that they actually start believing it.

It’s no coincidence that sales forecasts often exactly equal sales budget, regardless of what results are coming in. Every month the forecast misses or just rolls into future months.

Prevention: explain the difference between targets and unconstrained forecasts, and provide a meaningful measurement of whether they are adding value to the process.

Problem 3: Sales Might Be Smarter Than You Are

Salespeople are intelligent and cunning – as a matter of fact, they are most likely smarter than you are, at least in some respects. Sales has the ability and foresight to create deliberate bias to pad wallets or warehouses. Whilst it takes us multiple iterations and complex algorithms, they can shoot from the hip and get darn close with the extra inventory they want to add, or sandbag just enough so as not to arouse suspicion.

You can generally see this one before and after budget: low-balling before the bonus and increasing right after. Many times this also shows up right after stock issues, when customers react and order more than they need, and sales is forecasting that plus even more to drive unneeded inventory.

Prevention: outside of just giving in to the charismatic, cunning salesperson you know you can’t beat, try connecting compensation to forecast value added and accuracy.

Problem 4: Sales Live For Today (Carpe Diem)

Carpe diem, or seize the day, seems to be sales’ motto and attitude. They know what they are selling, who they are selling to, and even the birthdates and kids’ names of their top clients. They don’t know seasonality, cycles, trends, or the probability of what is going to happen in the future. Not understanding these things, their reference is what happened yesterday or what is happening today. When sales are good, tomorrow will be great, and when they had a rough day, sales will be slow until eternity. One clear signal is that the closer to the actual month the demand occurs, the worse their forecast generally gets.

I have seen this in our case, where the lag 3 (three-month horizon) forecast was better than the forecast generated one month out. Another characteristic could be that they use naive forecasting, such as taking what they did last month or last year and just adding 8%.

Prevention: restrict what they use in forecasting to relevant information and attempt to eliminate most emotion from their forecast by showing the impact of the assumptions that go into the forecast.

Problem 5: Sales Always Be Closing And Never Forecasting

The last trait of sales is they follow the ABC’s of selling (always be closing), and unfortunately none of the rules of forecasting. But equally, I don’t want them spending hours recreating bottom-up forecasts and paperwork when they could be adding revenue through closing a sale. But we do need them to engage and at least understand how a forecast works and their role in creating it.

This one is sometimes much more difficult to detect because, where other biases are conspiracy, this is complacency. It usually manifests in not showing up for meetings at all, or coming to meetings with little or no meaningful information.

Prevention: get them involved, provide meaning in what they provide, let them have a stake in the outcome, and most of all, keep it simple.

Forecast Value Added Helps Sales Know How They Contribute To Forecast Accuracy

Forecast Value Added (FVA%) is a tool that can help with all of these and bridge the communication gap between sales and demand planning. FVA can be defined as “The change in a performance metric that can be attributed to a particular step or participant in the forecasting process.” In sales speak, it is simply a case of did they help and add value. In other words, did their activity or involvement improve the forecast or did the information or process negatively impact the forecast.

This is done by measuring the baseline, naive, or statistical forecast against actuals, then also measuring the sales inputs, changes in forecast, or consensus forecast against the actuals, and looking at the deltas. Looking at the forecast before and after any input or step adds visibility into the inputs and provides a better understanding of the sources that contributed to the forecast, so one can manage their impact on the forecast properly.
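A minimal sketch of that before-and-after measurement, with hypothetical forecasts at each step of the process:

```python
# Hypothetical sketch: score each step of the forecasting process against
# actuals, then look at the delta (FVA) each step contributes.

def mape(fc, act):
    return 100 * sum(abs(f - a) / a for f, a in zip(fc, act)) / len(act)

actuals = [100, 120, 90, 110]
steps = {
    "naive":          [105, 100, 120, 90],   # random walk
    "statistical":    [98, 112, 96, 104],    # baseline model
    "sales override": [110, 125, 105, 118],  # after sales input
    "consensus":      [102, 118, 95, 108],   # final agreed number
}

prev = None
for name, fc in steps.items():
    err = mape(fc, actuals)
    note = "" if prev is None else f"  (FVA vs prior step: {prev - err:+.1f} pts)"
    print(f"{name:>14}: MAPE {err:.1f}%{note}")
    prev = err
```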

It also serves as an effective Sales training tool. What we want to know is what we don’t know, so we can make minor inputs or overrides into the forecast, either up or down, from our baseline prediction. FVA acts as a feedback loop to those inputs, helping identify which inputs work or don’t work, and the scale of adjustments needed to create value in the forecasting process.

Translating this to sales:

Optimistic – Filters emotion and gives pause to over-optimism; they learn how to move the needle.

Goal orientated – Sets understandable, visual, and achievable forecasting targets for success

Intelligent & cunning – Rewards based on added value to the forecast process

Carpe diem – Provides a baseline and opportunity for relevant contributions and they learn what impacts the future

Follows the ABC’s of selling – Can streamline the process, helping them to get back to selling quicker

Bottom Line

One additional characteristic I failed to mention is that most salespeople are highly competitive. FVA can actually be used as a scorecard here, and as motivation for better and more timely inputs. In my company, we have posted the FVA scores publicly, driving the sales team to compete against each other and try to be the one who provides the most value. If nothing else, providing the FVA means the salesperson is most likely dying to beat the nerdy person in the corner cube who uses a mysterious black box to crank out a forecast – and they do not want to lose to that.

Big Data? Chill Out & Keep It Old School

Published March 22, 2018

Over the past few years, the Demand Planning community has become quite starry-eyed over advancements in predictive software and tools. The concepts of “Big Data” and “advanced analytics” are enough to make seasoned practitioners stand to attention – and even catch the interest of the Executive Team. But when many of us still struggle with the fundamentals, is it worth investing in new-fangled technology?

Admittedly, in a field where you know you will never be “right”, this fancy technology and these impressive phrases are quite attractive – they bring to mind a picture of a utopian state where analytical horsepower and near-infinite data points lead to a 100% accurate forecast. There may even be a unicorn there. I can’t help but be reminded of two much-loved colloquialisms that I urge Demand Planning professionals to consider as we journey into the future with new tools and ideas that may or may not usher in a new age of Demand Planning.

The only thing we know about Advanced Analytics is that you must have a clear, fully costed plan as to how it is going to provide a return.

“Is The Juice Worth The Squeeze?”

This is a phrase I say probably too frequently when considering new tools, methods, and processes to improve Demand Planning. Does the effort required to explore and/or implement the new approach measure up to its expected return? For some organizations, intensified data collection (or data purchase) and the machine capability to chug through it may be cost prohibitive.

Computing equipment and data aside, the organization may not have the human resources on hand to give these capabilities their due. Perhaps the organization already enjoys a high level of forecast accuracy. Is the expenditure worth that extra percentage point? Maybe, maybe not. The only thing we know is that you must have a clear, fully costed plan as to how this new tech is going to provide a return.

“Don’t Throw The Baby Out With The Bathwater”

Or, “don’t throw the fundamentals out when you get your shiny new tools”. Even if your organization does decide to invest in Big Data and/or Advanced Analytics, it’s important to not abandon some of the tried-and-true measures and methodologies of effective forecasting. If your organization decides not to invest in these buzzworthy tools, there is still a great amount of improvement that can be made using some tried-and-true Demand Planning basics. Additionally, these concepts can assist in answering the juice-vs-squeeze question of a potential upgrade or data investment if the organization chooses to entertain new solutions. Some of the most impactful are as follows:

Put down the Big Data Kool Aid – FVA is great low-hanging fruit to pursue prior to making a new technology investment.

Forecast Value Add Analysis (FVA)

Whether or not Advanced Analytics and insights are in your future, the impact of a simple Forecast Value Add (FVA) analysis cannot be overemphasized. FVA is a measurement of your forecasting process – from the statistical models utilized, to the overrides added by analysts and the insights from salespeople. Each step in the forecasting process is measured to determine the added value the step brings to the overall process. Advanced Analytics or sophisticated tools could of course be an added forecasting layer to be measured, but I would caution that if steps in your process are continuing to devalue the forecast, there are things to look at first. Put down the Big Data Kool Aid – FVA is great low-hanging fruit to pursue prior to making a new technology investment.

Keeping an eye on tracking signal is important no matter how sophisticated the forecasting methodology.

Tracking Signal

While somewhat reactive in nature, I love using tracking signal as an indicator to let me know if my forecast needs a second look. Tracking signal is simply a measure of consistent bias over time. In short, if actual demand has come in lower than forecasted for each of the last three months, you may want to reinvestigate your demand assumptions.

Not only is consistent under- or over- forecasting a reliable indication to an analyst that their projections may be incorrect, it is also a great signal of potential inventory shortages or surpluses. Keeping an eye on tracking signal is important no matter how sophisticated the forecasting methodology.
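A minimal sketch of one common tracking-signal formulation (cumulative error over mean absolute deviation); the data and the threshold in the comment are illustrative rules of thumb, not standards:

```python
# One common tracking-signal formulation: cumulative forecast error
# divided by mean absolute deviation. Numbers are hypothetical.

def tracking_signal(forecasts, actuals):
    errors = [a - f for f, a in zip(forecasts, actuals)]
    mad = sum(abs(e) for e in errors) / len(errors)
    return sum(errors) / mad if mad else 0.0

forecasts = [100, 105, 110, 108, 112, 115]
actuals   = [92, 97, 101, 100, 103, 104]    # demand keeps coming in low

print(f"Tracking signal: {tracking_signal(forecasts, actuals):.1f}")
# A sustained reading beyond roughly +/-4 is a common rule-of-thumb
# trigger to revisit the demand assumptions.
```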


Forecast Accuracy

I’m sure many of you are rolling your eyes at this point. Of course we measure forecast accuracy – this isn’t even worth talking about! I challenge you to revisit and audit your metric. Most organizations are familiar with the debate on precisely when forecast accuracy should be measured – is it a month before the actuals are due to come in? Three months? A week before the actuals come in? The answer is likely that a measurement at material lead time is the most appropriate. After all, this is the time in which the supply chain can, in a perfect world, respond to the demand signal appropriately and without expediting.

Recent analysis in my own organization found that the traditional “T (time) minus a generic lead time” approach was not allowing us to gather the proper insights from our forecast accuracy metrics because lead times are so wildly disparate. As a result, a change to the metric was required and more insightful conversations are now being driven during the S&OP process.
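A sketch of the idea, scoring each item at its own material lead time rather than one generic lag; the data structures and numbers are hypothetical:

```python
# Sketch: measure each item's forecast at its own material lead time
# rather than one generic lag. Structures and numbers are hypothetical.

# forecasts[item][lag] = the forecast made `lag` months before the period
forecasts = {
    "SKU-A": {1: 95, 3: 90, 6: 80},
    "SKU-B": {1: 40, 3: 55, 6: 60},
}
lead_times = {"SKU-A": 3, "SKU-B": 6}   # material lead time in months
actuals = {"SKU-A": 100, "SKU-B": 50}

for item, actual in actuals.items():
    lag = lead_times[item]
    forecast = forecasts[item][lag]
    ape = 100 * abs(forecast - actual) / actual
    print(f"{item}: error at its lead time (lag {lag}) = {ape:.1f}%")
```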

There’s No Unicorn In Your Advanced Analytics Utopia

The latest and greatest technologies offer a very tempting vision of what the future could be; after all, who doesn’t want the powerhouse predictive analytics of an Amazon or Target? However, it’s important to approach these decisions with a healthy dose of skepticism. Be mindful to evaluate the promises being made and ensure they are aligned with your needs. And, if the juice truly is worth the squeeze and you embark along the new frontier of Demand Planning, don’t forget the babies floating in that bathwater.

How To Gauge Forecastability

Published March 20, 2018

“30% forecast accuracy? Seriously? What do I pay you for?  I could flip a coin and get better results than this!” Yes, we hear this as demand planners. And yes, it hurts – deeply, personally, unjustly.

It’s one of the most frustrating and demoralizing feelings as a Demand Planner to know that you’re trying your darnedest to improve the accuracy of something which you know is largely unforecastable. You’ve maxed out modeling and model-tuning and have resorted to fishing for any judgemental recommendations you can get your hands on. The latter likely only sees accuracy struggle further, with snowballing bias and the negative FVA of compounding “expert” overrides. Having thrown everything but the kitchen sink at the problem, you shift your efforts to building up verbal defences with a Pixar-worthy storyboard of empathy-seeking data challenges and culpability-shifting anecdotes. If only there was a way to prove to management that this product is not actually forecastable…

We Need To Set Management’s Expectations About What Is Forecastable

Management may be aware of standard forecast metrics – many are introduced in MBA programs, SCOR and Operations textbooks. But typically executives are more worried about how quickly they can show improvement in these measurements than how they are calculated.

Similarly, as demand planners, we are trained and certified on the most common algorithms, performance measurement computations, and off-the-shelf forecast modeling data structure requirements. If we are to set expectations about what is actually forecastable, and what we can actually achieve as demand planners, we need to look beyond these basics. If we don’t, we will forever be taking unfair criticism for things outside of our control. What we need to do is not only present our forecast accuracy, but present it alongside forecastability. Forecastability reveals the extent to which an SKU can be forecasted, and provides the crucial context for our forecast accuracy.

[Image: Forecast accuracy depends on how forecastable the product is.]

Questions To Ask To Gauge An SKU’s Forecastability

What change in forecast accuracy is realized when the best-fit model is recalculated from different assortments of time series horizons?

Are the changes more prevalent for certain model types (hint – they should be for some, especially for more factor-inclusive model types like exponential smoothing)?

What differences in forecast accuracy are observed in monthly, bi-monthly, and quarterly period bucketing?  (Is poor forecast accuracy at the monthly level dramatically improved if consumption and forecast accuracy are looked at in quarterly buckets instead?)

Are any SKU-to-SKU, product line to product line, and product family to product family correlations observed when regression comparisons are run to look for like patterns in the demand history?  Are any of these like patterns accounted for in existing planning bills or bills of materials?

What record counts and financial weighting do the products and model types comprise when categorized into basic segmentation schemas (high-value, volatile; high-value, stable; low-value, volatile; low-value, stable)?

What are the historic forecastability ranges within each segment and per product family? (Note: Segmentation can be combined with ABC and Pareto analyses, as well as calculated for markets, customers, or for products within each market/customer.)

Within low-value, volatile records, is inherent demand variability such that the cost of error is more prohibitive than a simple order policy (ex. reorder point or make-to-order)?

By asking these questions, we gain an insight into forecastability – what can be forecasted accurately, and what cannot. We will be able to go to the S&OP executive meeting, or sit down with Sales and Marketing, Finance or senior executives, and be able to say with confidence that for a particular SKU, 30% forecast accuracy is a good thing. We can explain why an SKU cannot be accurately forecasted and then make suggestions based on that – after all, knowing that demand for a product cannot be predicted has serious implications for the business. Knowing this allows us as demand planners to mitigate risk and propose the best course of action.
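As a sketch of the segmentation question above, demand CoV and revenue can place each SKU into one of the four quadrants; the thresholds and data here are hypothetical:

```python
# Sketch of the value/volatility segmentation from the questions above.
# Thresholds and data are hypothetical.
from statistics import mean, pstdev

skus = {  # sku: (annual revenue, monthly demand history)
    "A": (900_000, [100, 105, 98, 102, 99, 101]),
    "B": (850_000, [10, 300, 5, 220, 40, 15]),
    "C": (40_000, [50, 52, 49, 51, 50, 48]),
    "D": (35_000, [2, 90, 0, 60, 5, 1]),
}

for sku, (revenue, demand) in skus.items():
    cov = pstdev(demand) / mean(demand)
    value = "high-value" if revenue >= 100_000 else "low-value"
    volatility = "volatile" if cov > 0.5 else "stable"
    print(f"SKU {sku}: {value}, {volatility} (CoV {cov:.2f})")
```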

What’s more, by taking the time to understand what one can forecast and what one cannot, and what results can be expected, one can set better expectations and understanding upfront.

How To Prove That 30% Forecast Accuracy Is A Good Thing

In the opening example, proving that 30% is actually a job well done given the forecastability is important (try re-calibrating your BI tools to show forecast accuracy in terms of variance to CoV). Go one better by showing that the cost avoidance of whatever initiative is actually outweighed by the cost of forecasting for it. If you communicate that the ‘juice-isn’t-worth-the-squeeze’ you can get work off of your plate, allowing you to focus on what matters.

Variability can be lessened by extending the time buckets planned for (daily to weekly to monthly to quarterly to semi-annually), but it is more costly. In certain markets and in certain products, this may be the only option and one that demand planners should constantly be evaluating and influencing. On the flipside is also finding ways to try to improve forecastability by forcing the square peg to better fit into that round hole.

For example, CoV movement over time can be tracked. Where it is increasing, investigations can be conducted with Commercial colleagues to identify the causes, and then the script flipped to challenge what can be done to shape demand back. Analyzing positively correlated 4Ps effects in more stable products can sometimes yield a playbook to try in your more volatile areas.

Parting Thoughts On Forecastability 

This friendly neighborhood forecaster’s closing reminder is this: you measure how something is set up to execute, and you measure to control or to improve. But if the setup is wrong for the metric, or the metric is wrong for the setup, then you’re allowing the box that you’re in to dictate your success. Break out of the box and see if you need to redesign the box or redesign the metric. In forecasting, one size does not fit all – don’t let spinning on the wheel stop you from asking the question “why?” more often. That is how we understand what is actually forecastable and what isn’t, how to get the credit we deserve, and push the discipline forward.

Stay inquisitive, my friends. That’s the mark of the best Demand Planning professional.

Artists & Scientists: Redefining Roles In Demand Planning

Published February 23, 2018

Demand Planning and Forecasting is about people, process, and technology. Unfortunately, the people element is often forgotten. As supply chains evolve to become more demand-driven, the Demand Planning role also needs to evolve – and that means that companies must focus on creating and maintaining a talent strategy. What we need to do is hire ‘artists’ and ‘scientists’. We must define their roles and responsibilities, give them the right training and opportunities, and measure their performance.

For the demand planning function to succeed, organizations must have in place:

    • Strong talent based on core competencies
    • Right career-path culture and visibility within the organization
    • Proper training and development
    • True performance-based management

Give Up Trying To Find The Demand Planning Unicorns

Demand Planning and Forecasting roles are not created equal. A good Demand Planner will have strong analytical and critical thinking skills and a broad understanding of end-to-end supply chain functions, while excelling in a core competency, or holding subject matter expertise. According to a survey conducted by Supply Chain Insights, a Demand Planner is the second hardest supply chain role to fill in an organization, followed closely by Director of Supply Chain Planning, Manager of S&OP, and Supply Planner. The reason for this is pretty straightforward; historically, organizations have looked for an individual who is analytically minded yet can still deal with ambiguity and relate to people. They seek that rare unicorn who has the seemingly contradictory ability to make sense of the data, then sit in front of Sales and Marketing and speak their language.


That is not an easy combination to find. What they end up with is the Jack of All Trades but Master of None. An organization may be better off building a team of focused individual skillsets that complement each other. Understanding the needs of the department and the skills of the people allows us to structure roles around the expertise of each individual and better serve the needs of the department.

Master of Science (Demand Analyst)

This could be a more centralized role in many companies where one would generally be responsible for generating a statistical baseline forecast. People in this role are mostly analytical, with less interaction and fewer inputs required to create a final forecast output. As such, they are capable of managing more SKUs. The Demand Analyst may be the go-to person to provide ad hoc analysis in support of other functional areas with deeper statistical analysis. Many companies that are creating Centers of Forecasting Excellence are staffing primarily with analysts as opposed to traditional planners.

Master of Art (Demand Planner)

The goal here is to refine a statistical baseline forecast. This person would be less reliant on statistical skills but has a deep understanding of interdependence. Demand Planners can work with the commercial teams, running ‘what if’ simulations to adjust the forecasts based on data, analytics, and domain knowledge. The ideal planner asks the right questions at regular meetings with Sales and Marketing and instils a sense of partnership with them. As supply chains and companies evolve, specialized roles may be introduced to the Demand Planning function. For example, companies at higher stages of maturity may create a more senior role tasked with representing Demand Planning in broader supply chain transformation initiatives. The advancement of an S&OP process may require the leadership and support of an S&OP coordinator. Product Lifecycle Management (PLM) could require either deep analytics to support other areas in the organization or a Launch Leader (Product Innovation Planner) role that is highly collaborative.

Measure Accuracy And Performance

We all know or have heard that forecasting is a brutal job. There is a perception that Demand Planning is the punching bag of the S&OP process. How often have we heard that Demand Planners are always wrong?! And in reality, the closer they are to getting it right, the more likely it is that somebody else will take credit for those results. With this in mind, Demand Planning leaders should expand the scope of criteria that planners are evaluated against. This should really go back to the core competencies of the planner and the value they add to the forecasting process.

One way many leading organizations are measuring this is with Forecast Value Add (FVA%). The FVA% concept is designed to determine which, if any, steps in the forecasting process – particularly those steps conducted by practitioners – improve forecast accuracy, and which do not. This can be used to evaluate an analyst-generated statistical baseline model against a naïve forecast or the effectiveness of a planner’s overrides to that baseline. This helps the individual practitioner to better understand their individual impact on the collaborative process and provides feedback on their contribution so they can improve. This helps the company to better identify the drivers to forecast accuracy and measure the inputs and process rather than just the variability. Performances metrics should include effective collaboration with stakeholders, oral communication and conceptual thinking. [Ed: See this article on incorporating FVA analysis in your organization]

Build Pathways For Demand Planners

Leading companies also understand the importance of people and do a really nice job in creating a career path for their employees. When a new employee joins the organization, they meet with that individual to not only identify the expectations of the role over the short term, but also discuss the individual’s long-term goals. Often these goals include the individual’s interest in moving up or across the organization. Leading managers look for opportunities to give individuals exposure to their interests through cross-functional projects or by encouraging participation in relevant meetings. That way, when the role does open up, the individual is ready. When done correctly, individuals are incentivized to deliver results in order to prove they are ready for the next step, and the company continues to groom leaders. When done properly, with talent based on core competencies, the right culture, proper training, and performance-based management, companies become their own ‘recruiting factory’. They constantly produce top talent and improve their forecasting ability and accuracy.

This article first appeared in the Journal of Business Forecasting Spring 2016 issue, written by Eric Wilson CPF and Jason Breault. To gain access to the Journal Of Business Forecasting and a host of other benefits, become an IBF member today.

Stop Saying Forecasts Are Always Wrong

Published February 20, 2018

For many of us, the words “the forecast is always wrong” have become something we instinctively say. There’s nothing wrong with acknowledging there is variation in demand or admitting we may miss a projection. But when it becomes your automatic response to any miss and is believed to be an unavoidable part of forecasting, it is highly limiting. This seemingly harmless habit can actually lower the effectiveness of forecasts and the business’s confidence in them. What’s more, it justifies other people’s poor actions and focuses attention on the wrong things.

As Demand Planners, We Need To Give Ourselves More Credit

I cannot help but believe that when everyone constantly says that forecasts are always wrong, it needlessly creates guilt in the poor Demand Planner’s mind and undermines their self-esteem. It’s hard to feel good about yourself when you keep falling on your own sword.

Maybe we should stop saying we are sorry and stop saying forecasts are always wrong. Repeating this mantra also sends the message that you’d rather be agreeable than be honest, when in fact our job is not to provide a number but to offer solutions. We need to stop using the crutch of inevitable forecast error and start having honest conversations and focus on what we can predict and what we can control.

When others say “the forecast is always wrong” what they really mean is that demand variability is perfectly normal.

It Actually Is Possible To Be 100% Accurate

Yes, it really is. But let us start with what constitutes accuracy. Accuracy is the degree of closeness of the statement of quantity to that quantity’s actual (true) value. While I accept that one’s ability to create an accurate forecast is related to demand variability, an accurate forecast does not reduce demand variability. Demand variability is an expression of how much the demand changes over time and, to some extent, the predictability of the demand.  Forecast accuracy is an expression of how well one can predict the actual demand, regardless of its volatility.

So, when others say “the forecast is always wrong”, what they really mean is that demand variability is perfectly normal. What we should be focusing on is that “while we can’t predict demand perfectly due to its inherent variability, we can predict demand variability” (Stefan de Kok). This is the difference between trying to precisely predict the exact point and accurately predicting a range or the expected variability.

A common example of this is trying to guess the outcome of rolling two fair dice compared to accurately predicting the range of possible outcomes. For the throw of the two dice, any exact outcome is equally probable and there is too much variability for any prediction to be useful. But the possible totals of the two dice are not equally probable, because there are more ways to get some numbers than others. We can accurately predict that 16.7% of the time the two dice will add up to seven, and we can predict the range of possible outcomes as well as the probability of each outcome. While we may not know exactly what will happen, we can exactly predict the probability of it occurring. And if you predict the outcome within the probabilities, guess what? You are correct. Even though 100% precision is not an option, when looking at ranges or a probabilistic forecast, 100% accuracy most certainly is within the realm of possibilities!
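The dice arithmetic can be enumerated directly:

```python
# The two-dice example, enumerated: no single roll is predictable, but
# the probability of each total is exactly knowable.
from collections import Counter
from itertools import product

totals = Counter(a + b for a, b in product(range(1, 7), repeat=2))
for total in sorted(totals):
    print(f"sum {total:>2}: {totals[total]}/36 = {totals[total] / 36:.1%}")
# sum 7 comes out at 6/36 = 16.7%, exactly as stated above
```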

Bingo! We have officially proven everyone wrong and have our 100% accuracy.

[Image: Accurately predicting an outcome within a range of probabilities is more valuable than trying to forecast a single number.]

Range Forecasts Give Us So Much More Information Than Single Point Forecasts

Besides being able to more accurately predict the probabilities of outcomes and ranges, we are also providing more relevant and useful information. When you predict the variability, this not only grounds our initiatives in reality but also gives us the power to make better business decisions. One way to counteract variability is to ask for range forecasts, or confidence intervals. These ranges consist of two points, representing the reasonable “best case” and “worst case” scenarios. Range forecasts are more useful than point predictions.

With any single point forecast you are providing a single point of information which you know is not 100% correct. With a range you are providing four pieces of valuable information: we not only know the point or mean but we also know the top, the bottom, and the magnitude of possible variability.

Measuring the reduction in error rather than the increase in accuracy is more valuable to us because there is a stronger correlation between error and business impact than there is between accuracy and business effect.

It doesn’t take much to see that such a probabilistic forecast, or even just a forecast with ranges and a better prediction of uncertainty, is useful information in supply chain planning. Now we know how much variability we need to plan for and can better understand the upside or downside risk involved. In addition, accurately predicting uncertainty can add enormous value. That’s because you are focusing on improving not only the average demand prediction, but the entire range of possible demand predictions including the extreme variability that has the biggest impact on service levels.

Your KPIs For Measuring Forecast Error Are Based On A False Assumption

Part of the problem with saying we are always wrong is that we measure our performance ineffectively. This is because our definitions of forecast error are too simplistic or misrepresented. Many people look at forecast accuracy as the inverse of forecast error, and that is a major problem. Most definitions of forecast error share a fundamental flaw: they assume a perfect forecast and define all demand variability as forecast error. The measures of forecast error, whether it be MAPE, WMAPE, MAD or any similar metric, all assume that the perfect forecast can be expressed as a single number.

I mentioned above that we can provide more information in a range of forecast probabilities and subsequently be more accurate. All we need now is a way to measure this and prove it. A metric which helps us measure the accuracy and value of these types of forecasts is Total Percentile Error (TPE). Borrowing Stefan de Kok’s definition, TPE “measures the reduction in error – rather than the increase in accuracy – since there is a stronger correlation between error and business impact than between accuracy and business effect.” For more detailed information about this calculation see Foresight Magazine’s Summer 2017 issue.

Nassim Nicholas Taleb described this type of forecast accuracy measurement in his book, The Black Swan. He explains the difference between measuring a stochastic forecast (using probability distributions) and more traditional approaches (using a single point forecast). He states that if you predict with a 20% probability that something will happen, and across many instances it actually happens 20% of the time, then the error is 0%. Naturally, it would also need to be correct for every other percentile (not just the 20th percentile) to be 100% accurate.
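A toy simulation of that calibration idea (the intuition only, not Taleb’s own formulation):

```python
# Toy illustration of the calibration idea: events forecast at 20%
# probability should occur about 20% of the time.
import random

random.seed(42)
p_forecast = 0.20
trials = 100_000
hits = sum(random.random() < p_forecast for _ in range(trials))
print(f"Forecast: {p_forecast:.0%}, observed frequency: {hits / trials:.1%}")
# A fully calibrated forecast would repeat this at every percentile.
```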

Bingo! We have officially proven everyone wrong and have our 100% accuracy.

You need to stop using the crutch of inevitable forecast error and start honest conversations about what we can predict and what we can control.

Focus On The Process

Even though we should know there is no such thing as being “wrong”, we should still look at what we are measuring and incentivize the right behavior. Mean Absolute Percentage Error (MAPE) or Mean Percentage Error (MPE) will tell us how much variability there is and the direction, but they do not tell us if the Demand Planning process is adding value. The question shouldn’t be whether we are right or wrong, but whether the steps we are taking actually improve the results. And if so, by how much.

Forecast Value Added (FVA) analysis can be used to identify if certain process steps are improving forecast accuracy or if they are just adding to the noise. When FVA is positive, we know the step or individual is adding value by making the forecast better. When FVA is negative, the step or individual is just making the forecast worse. [Ed: for further insight into FVA, see Eric’s guide to implementing FVA analysis in your organization.]

The obvious advantage to focusing on these types of metrics and KPI’s is that we are not casting blame but discovering areas of opportunities, as well as identifying non-value added activities. By eliminating the non-value adding steps or participants from the forecasting process, those resources can be redirected to more productive activities. And by eliminating those steps that are actually making the forecast worse, you can achieve better forecasts with no additional investment.

I Beg Of You, Please Change Your Vocabulary!

At the end of the day, our goal is not necessarily to be precise but to make a forecast more accurate and reliable so that it adds business value to the planning process. We need to stop saying we are sorry for what is out of our control and start controlling what we know is possible. To do this, we must not only change our vocabulary but also change the way we are doing our jobs.

Most people are fixated on traditional forecasting processes and accuracy definitions. The goal is for you to start thinking in terms of the probability of future demand. From there, you need to be the champion inside your organization to help others understand the value of what forecasts provide. You need to stop using the crutch of inevitable forecast error and start honest conversations about what we can predict and what we can control.

How To Use Forecast Value Added Analysis

Published February 12, 2018

Forecast accuracy has always been measured, but now it is becoming a key performance indicator (KPI) for many supply chains. But are we measuring the right thing? Most companies use forecasting performance metrics, such as Mean Absolute Percent Error (MAPE), to determine how good the forecasts are. The problem with metrics such as MAPE is they only communicate the magnitude of error. 

Other metrics, such as Mean Percent Error (MPE) or other tracking signals as a trend, can only communicate the direction of the error, or bias. The problem is that neither one really reveals the complete picture, nor do they answer the simple question, “is it good enough?” This is where FVA plays a critical role. To measure everything, we need to add FVA as an additional metric to help gauge the effectiveness of the process or the performance of the forecasting professional.

What Gets Measured Can Be Improved

MAPE gives some measurement of forecast error. This is not a bad thing, and for supply chains it is critical to have visibility and understand the degree of error so that the organization can properly manage it. For most companies, this is used to set inventory targets or understand the risks of their capital investments.

Unfortunately, many of these companies also set arbitrary MAPE targets for what they would like to see the forecast accuracy be in order to hit a subjective inventory target. Because the MAPE targets are arbitrary, companies don’t understand the drivers or their underlying true variability. From a process standpoint, the problem is that one of two things will occur: the company hits the accuracy targets and is satisfied, and then little or no other improvements happen; or, they never hit the targets and become frustrated, never understanding why they can’t get there. Here is another way to look at this: while forecasts and measuring accuracy help mitigate inefficiencies in the supply chain, they do little to reflect how efficiently (or indeed why) we are achieving that forecast accuracy in the first place.

Measuring Forecast Value Added

FVA begets managing forecast processes. Forecast Value Added increases visibility into the inputs, and provides a better understanding of the sources that contributed to the forecast, so one can manage their impact on the forecast properly. Companies can use this analysis to help determine which forecasting models, inputs, or activities are either adding value or are actually making it worse.

FVA also helps to set targets and understand what accuracy would be if one did nothing, or what it should or could be with a better process. Finally, its objective is efficiency: to identify and eliminate waste in non-value adding activities from the forecasting process, thereby freeing up resources to be utilized for more productive activities.

What Is Forecast Value Added?

FVA can be defined this way: “The change in a performance metric that can be attributed to a particular step or participant in the forecasting process.” Let’s say we have been selling approximately 100 units a month, and sold exactly that many last month. Through the forecasting process and added market intelligence, our consensus forecast for the next month came to 85 units. Actuals for the next month came in at 95 units. For this example, after management and marketing adjustments, the MAPE was 10%, where a naïve forecast may have achieved a MAPE of 5%. We could say in this case that the adjustments have not added value since the naïve was lower by five percentage points. (See Table 1)
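Table 1 is not reproduced here, but the arithmetic is easy to check (the 10% and 5% quoted above are rounded):

```python
# Quick check of the worked example (the 10% and 5% above are rounded).
actual, naive, consensus = 95, 100, 85

naive_mape = 100 * abs(naive - actual) / actual          # ~5.3%
consensus_mape = 100 * abs(consensus - actual) / actual  # ~10.5%
print(f"Naive MAPE: {naive_mape:.1f}%, consensus MAPE: {consensus_mape:.1f}%")
print(f"FVA: {naive_mape - consensus_mape:+.1f} points")  # negative: the adjustments hurt
```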

In conducting FVA analysis, we do not need to stop there, and we can make it as simple or complex as needed to evaluate our process. FVA can be utilized to determine the effectiveness of any touch point in the forecasting process. A company might start with a naïve forecast; however, this kind of comparison can be made for each sequential step in the forecasting process. One can compare the statistical forecast to a naïve forecast, or evaluate the value of causal inputs, sales overrides, or the consensus forecasting process.

In our analysis above, we might find, for example, that the statistical forecast is worse than the naïve forecast either driven by something in the time series data or tweaks that were made in the parameters of the models we are using. We may also see that the overall process is adding value by bringing all the inputs together to a consensus, but the sales and marketing inputs are negatively biased, which are impacting the final numbers.

One of the best ways to measure if the process is adding value is to utilize FVA and determine if the forecast proves to be better. Better than what, though? The most common and fundamental test in FVA analysis is not only comparing process steps used in forecasting, but also comparing the forecast against the naïve forecast.

[Table 1: Forecast Value Added example]

What Is The Naive Forecast?

As per the Institute of Business Forecasting (IBF) Glossary, a naïve forecast is something simple to compute, requiring a minimum amount of resources. The key is something simple, and traditional examples are random walk (no change from the prior period where the last observed value becomes the forecast for the current period), or seasonal random walk (“year over year” using the observed value from the prior year’s same period as the forecast for the current period).

Although it seems simple, determining the naive forecast is never that easy. To determine the best baseline or naive forecast to measure against, one needs to remember what the primary task is and what happens if forecasting does not achieve it.

We might like to believe that if we, as forecasters, were to suddenly disappear, all the company’s activities would come to a halt, paralyzed by not knowing how to plan for the future. The truth is, life will go on without us: items will be produced, inventory will be built, materials will be ordered, and investments will be made. That is the key and what I like to measure against.

If you did nothing, what numbers would the company use to function? They may not call it a naive forecast, but what you generally find out is, in the absence of an expert forecast signal, a company will go with what they have, what they know, and what is simplest to get. Some may use a moving average of what was sold in the past few months, or even simpler, what was sold last month (random walk). For others who know there is inherent seasonality, they may take the sales from last year and plan against that (seasonal random walk).

Still others have budgets or financials that are locked in and, without a better signal, are what the company would plan to. The goal is to find how a company and its supply chain tend to look at its business. Is it reactionary, seasonal, top down, or an entirely different approach? How does that translate into how they would plan without a forecasting professional or process in place?

One is left with the organization’s naïve forecast. This can be the traditional random walk, or a simple moving average, or a financial projection. I have even seen some companies use the statistical baseline from their forecasting system as the naïve. None of these approaches is wrong. The best answer is the baseline forecast that takes the least amount of effort at little or no cost or resources and, I would add, drives the supply chain without the influences of the forecasting process.
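The common naïve benchmarks mentioned above are trivial to compute; a sketch against a hypothetical 12-month history:

```python
# The common naive benchmarks described above, for a hypothetical
# 12-month demand history.

history = [100, 120, 90, 110, 130, 95, 105, 125, 92, 108, 128, 97]

random_walk = history[-1]            # last observed value
seasonal_rw = history[-12]           # same period, prior year
moving_avg = sum(history[-3:]) / 3   # simple 3-month moving average

print(f"Random walk: {random_walk}, seasonal random walk: {seasonal_rw}, "
      f"3-month moving average: {moving_avg:.0f}")
```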

Another very common and overlooked benchmark is the one against which most re-order points and inventory targets are set. Many companies, even with a good forecast, still exclude forecast variation from their calculations and look at the coefficient of variation (COV) to measure the variation of historic demand to set policies, in essence using a naïve forecast to set policy. While this is not used as a forecast, it is a lens you can use to compare your overall forecast performance to the Demand Variation Index (DVI). The Demand Variation Index utilizes a calculation similar to the Coefficient of Variation, measuring the ratio of absolute standard deviation, or percent of inherent variation, to the mean or average demand.

The output of DVI is normal inherent variation expressed as a percentage, which can be compared to the MAPE provided by your FVA analysis. Commonly used in forecasting to see if the forecast error, or variation from actual demand over time, is greater than normal variation, it stands to reason that a forecast that improves on the DVI is better at predicting demand.
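A sketch of that comparison, treating DVI as the standard deviation of historic demand over its mean, per the description above; the numbers are hypothetical:

```python
# Sketch of the DVI comparison described above: standard deviation of
# historic demand as a percentage of mean demand, read against the MAPE
# from your FVA analysis. Numbers are hypothetical.
from statistics import mean, pstdev

demand = [100, 130, 85, 120, 140, 90, 110, 135, 95, 115]
dvi = 100 * pstdev(demand) / mean(demand)

process_mape = 15.0   # from your FVA measurement, hypothetical
print(f"DVI: {dvi:.1f}% vs process MAPE: {process_mape:.1f}%")
# A MAPE below the DVI suggests the forecast beats inherent variation.
```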

So now that we have determined the baseline or naive forecast, a reasonable expectation is that our forecasting process (which probably requires considerable effort and significant resources) should result in better forecasts. Until one conducts FVA analysis, it may not be known. Unfortunately, we have seen time and time again that many organizations’ forecasts are worse than if they just used a naïve forecast. In the book by Michael Gilliland, Len Tashman and Udo Sglavo, Business Forecasting: Practical Problems and Solutions, the authors highlight a recent study by Steve Morlidge.

After studying over 300,000 forecasts, they found that a staggering 52% of the forecasts were worse than using a random walk. A growing amount of qualitative evidence would lead us to a similar conclusion: as the systems, inputs, and processes have become more elaborate and complex, the forecasts have not become much better. For all of the collaboration, external data, and fancy modeling, I would not be surprised if half the time we still are not bettering the naive forecast.

What one needs to do is focus on the steps and inputs, and simplify the process to what is working and use the inputs that add value. This way we could better focus our organization’s resources, money, and effort on the primary objective, which is improving forecast accuracy. If only we had quantitative evidence or a way to measure the different steps or inputs in a forecasting process and conclude we were adding value…

Putting FVA To Work

Forecasting is both a science and an art. Companies can employ standard algorithms to help generate a forecast, but it still takes a skilled practitioner to put the numbers together into a coherent form. As we have seen, measuring the effectiveness of that forecast is also a process with both science and art.

Much like the concept of FVA being a “lean” principle that helps identify what is adding value, utilizing FVA is not meant to generate unneeded excess work. Look at a simple approach to measuring and analysing your current forecast processes, and find the best ways to integrate FVA to improve the inputs and process you already have. A great place to start is by mapping each of the main sequential steps in your current forecasting process, and then tracking the results at each of those aggregate steps. A common process could include steps as shown in Figure 1.

Figure 1. Forecast Value Added diagram
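To make the stairstep idea concrete, here is a minimal sketch of tracking accuracy at each sequential step and the value each step adds over the one before it; the step names and all numbers are illustrative assumptions about a common process, not a fixed template.

def mape(actuals, forecasts):
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals) * 100

actuals = [140, 150, 145, 160]
steps = [
    ("Naive",            [135, 140, 150, 145]),
    ("Statistical",      [138, 148, 147, 155]),
    ("Planner override", [142, 149, 146, 158]),
    ("Consensus",        [150, 155, 150, 150]),
]

prev = None
for name, forecast in steps:
    m = mape(actuals, forecast)
    note = "" if prev is None else f"  FVA vs prior step {prev - m:+.1f} pts"
    print(f"{name:18s} MAPE {m:5.1f}%{note}")
    prev = m
# In this example the consensus step destroys the value the planner added.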

From here, you can incorporate FVA into your analysis much as you would any forecast metric, and it is important to maintain the same principles. First, understand that one data point does not make a trend. One period with a negative FVA doesn't mean you should fire your forecaster; anomalies occur in processes and inputs, and they happen in FVA analysis just as they do in any data analysis. Like most metrics, FVA needs to be evaluated over time. Just as we track forecast accuracy over time, FVA viewed over time can reveal positive or negative trends and bias in inputs or steps. Next, I recommend looking at the sub-processes or inputs within the steps that need the most attention. If the statistical forecast is consistently adding value and it is the overrides that are injecting variation into the process, then begin with the overrides.
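A brief sketch of evaluating FVA over time rather than reacting to a single period; the monthly figures are invented for illustration.

monthly_fva = [1.8, -0.5, 2.1, 2.4, -0.2, 1.6, 2.0, 2.3]  # percentage points

def rolling_average(series, window=3):
    # Smooth out single-period anomalies before judging the trend.
    return [sum(series[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(series))]

print([round(x, 2) for x in rolling_average(monthly_fva)])
# One negative month is likely noise; a smoothed series that stays
# negative is a genuine signal that a step is destroying value.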

For example, you may find that Sales is attempting to re-forecast the numbers every month instead of providing true inputs or overrides. Using FVA, we have already determined that the statistical baseline is effective, and now the purpose of gathering inputs should not be to validate the statistical model or calculations, but to include selective information that may be available but not reflected in historical data.

In this case, FVA can serve as an effective sales training tool. We don't want Sales to spend their resources regenerating an entire forecast in an attempt to correct it; rather, we want them to improve upon it. We already know, and can demonstrate, that we have a solid statistical baseline forecast from our system, one that most likely captures seasonality, level, trend, and data-driven events better than they can.

What we want to know is what we don't know, so that Sales can make minor inputs or overrides, up or down, to the baseline prediction. FVA then becomes the training tool: a feedback loop on those inputs that identifies which inputs work, which don't, and the scale of adjustment needed to create value in the forecasting process.
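A hedged sketch of such a feedback loop, assuming each override is logged alongside the statistical baseline and the eventual actual; all field names and numbers are hypothetical.

overrides = [
    {"item": "A", "stat": 100, "override": 130, "actual": 105},
    {"item": "B", "stat": 200, "override": 210, "actual": 215},
    {"item": "C", "stat": 150, "override": 120, "actual": 148},
]

for o in overrides:
    stat_err = abs(o["actual"] - o["stat"]) / o["actual"] * 100
    ovr_err  = abs(o["actual"] - o["override"]) / o["actual"] * 100
    verdict = "added value" if ovr_err < stat_err else "destroyed value"
    print(f"{o['item']}: stat {stat_err:.1f}% vs override {ovr_err:.1f}% -> {verdict}")

Feeding results like these back to Sales each month shows them which kinds of adjustments help, which hurt, and how large an adjustment is typically warranted.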

Finally, we need to look at the process as a whole again. To determine whether a forecasting step or input is adding value, it is not enough to look at it as an isolated item; it is best viewed as part of an intelligent combination of inputs and processes. Extending this further, different inputs (or the same ones) combined and aggregated differently can be thought of as different forecasts and, as such, provide different insight.

The final question, then, is not whether each of these inputs adds value on its own, but whether they can be combined in a meaningful way to create a better forecast, one that effectively integrates process, inputs, and analytics with the planner's expertise. At the end of the day, our goal is to make the forecast more accurate and reliable so that it adds value to the business.
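One simple way to operationalize that combination is a weighted blend of inputs, with weights tuned over time from each input's historical FVA. A minimal sketch, with purely illustrative inputs and weights:

inputs  = {"stat": 150.0, "sales": 165.0, "marketing": 158.0}
weights = {"stat": 0.6, "sales": 0.2, "marketing": 0.2}  # should sum to 1

combined = sum(inputs[k] * weights[k] for k in inputs)
print(f"Combined forecast: {combined:.1f}")  # 0.6*150 + 0.2*165 + 0.2*158 = 154.6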

The Bottom Line

Increasing forecast accuracy is not an end in itself; it matters because it improves the rest of the planning process. Reducing forecast error and variability via FVA analysis can have a big impact on service, inventory, and cost for an organization. Every 2 percent of forecast value added means something in dollars. That is why we use FVA analysis: to measure our process and show the value proposition for any process changes you are considering.

 

This article first appeared in the Journal of Business Forecasting (JBF), Spring 2016 issue. To receive the JBF and other benefits, become an IBF member today.

]]>
https://demand-planning.com/2018/02/12/what-is-forecast-value-added-analysis/feed/ 2
Special Edition: Leveraging Predictive Business Analytics – Journal of Business Forecasting Winter 2015 https://demand-planning.com/2015/02/03/special-edition-leveraging-predictive-business-analytics-journal-of-business-forecasting-winter-2015/ https://demand-planning.com/2015/02/03/special-edition-leveraging-predictive-business-analytics-journal-of-business-forecasting-winter-2015/#respond Tue, 03 Feb 2015 19:00:39 +0000 https://demand-planning.com/?p=2766

Predictive Business Analytics, the practice of extracting information from existing data to determine patterns, relationships, and future outcomes, is not new; it has been used in practice for many years. What is new, however, is the massive amount of data, so-called Big Data, now available that can better support decision making and improve planning & forecasting performance. Additionally, we now have access to technology that can store, process, and analyze large amounts of data, with analysis modules whose algorithms can handle even the most complex of problems. Above all, there is a growing awareness among businesses, data scientists, and those with data responsibilities that there is a wealth of hidden information waiting to be tapped for better decision making.

To make the most of Predictive Business Analytics, we need to prioritize which problems we are trying to solve, which data and signals to collect, where they are located, and how they can be retrieved and compiled. Once these steps have been taken, the data must be transformed and cleansed in preparation for analysis with a predictive analytics tool, whether that is MS-Excel, an open source application, or one of the many other applications in the marketplace.
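As a small illustration of that preparation step, here is a hedged sketch of cleansing a demand history of an obvious outlier before analysis; the method (a median-based modified z-score) and the cutoff are common conventions assumed here for illustration, not a prescribed approach.

from statistics import median

history = [120, 135, 128, 900, 140, 150, 145]  # the 900 looks like a data error

med = median(history)
mad = median(abs(x - med) for x in history)
# Modified z-score with the conventional 3.5 cutoff.
cleansed = [x if mad == 0 or 0.6745 * abs(x - med) / mad <= 3.5 else med
            for x in history]
print(cleansed)  # the 900 is replaced with the median, 140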

Predictive Analytics and finding solutions from Big Data have no boundaries; they are applicable to nearly every industry and discipline. Ying Liu demonstrates how this is currently being used in the retail industry, as well as in education, healthcare, banking, and entertainment. Macy's, for example, checks the competitive prices of 10,000 articles on a daily basis to develop its counter-price strategy. Other authors in this issue focus on its application in Demand Planning and Forecasting: Charles Chase illustrates how POS data can be used in sensing and shaping demand to improve sales and profits, while John Gallucci demonstrates how it can be used in managing new product launches.

Gregory L. Schlegel discusses not only how Predictive Analytics can be used to optimize the supply chain, but also how it can manage risk. He explains how credit card and insurance companies have reduced their overall risk by using predictive analytics, and provides case studies of companies in the CPG/Grocery, High-Tech, and Automobile industries that are using it to drastically improve their bottom line. He describes how Volvo Car Corporation is targeting zero accidents and zero deaths or injuries by gaining insight from data downloaded each time its automobiles come to dealers for servicing. Using this data, Volvo wants to determine whether there are defective parts that require correction, and/or issues with driver behavior, all of which need to be addressed.

Eric Wilson and Mark Demers explain that with Predictive Analytics, businesses are looking to apply a more inductive analysis approach, not in lieu of, but in addition to, deductive analysis. In inductive analysis, we let the data suggest patterns and hypotheses; in other words, we search for what the data would tell us if they could talk. This is in contrast with deductive reasoning, where we start from hypotheses and theories and look for strong evidence of their truth.

Allan Milliken outlines the processes that Demand Planners should follow to take full advantage of the power of Predictive Analytics. It can help classify products that are forecastable and those that are not, enabling planners to arrive at a consistent policy regarding which products should be built-to-order and which built-to-stock (see the sketch below). It can also flag exceptions, enabling established corrective actions to be applied before it is too late. Larry Lapide demonstrates how Demand Planners can extract demand signals from the myriad of data gathered from various sources, including the vast amount of electronic data drawn from the Internet. Mark Lawless discusses Predictive Analytics as a tool to peek into the mind of consumers and determine what types of products and services they are likely to purchase.
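A minimal sketch of that forecastable versus not-forecastable split, using each item's coefficient of variation; the 0.5 cutoff is a common rule of thumb assumed here for illustration, not a quote of Milliken's method.

from statistics import mean, pstdev

items = {
    "A": [100, 105, 98, 102, 99, 101],  # stable history
    "B": [10, 250, 0, 40, 500, 5],      # erratic history
}

for name, history in items.items():
    cov = pstdev(history) / mean(history)
    policy = "build-to-stock" if cov < 0.5 else "build-to-order"
    print(f"Item {name}: COV {cov:.2f} -> {policy}")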

It is true that Predictive Analytics is not revolutionary, but its applications can certainly be considered innovative. The significant impact it can have will continue to change the way businesses manage demand, production, procurement, logistics, and more. There is an overwhelming belief that companies that can identify the right data and turn it into actionable information are more likely to improve decisions, act in a timely manner, and hold a true competitive advantage. Advancing your organization's analytics capability can be the great difference maker between companies that are performing well and those that are not.

Your comments on this special issue on Predictive Business Analytics & Big Data are welcome. I look forward to hearing from you.

Dr. Chaman Jain, Editor

Download a preview of the latest Journal of Business Forecasting

Click here to join IBF and receive a JBF Complimentary Subscription

The Journal of Business Forecasting (JBF) has been providing jargon-free articles on how to improve demand planning, forecasting, supply chain, and S&OP, step-by-step for over 30 years. A subscription to the JBF comes with IBF membership at no additional cost.

]]>
https://demand-planning.com/2015/02/03/special-edition-leveraging-predictive-business-analytics-journal-of-business-forecasting-winter-2015/feed/ 0
Forecast Value Added (FVA) – Series 2 Interview https://demand-planning.com/2015/01/21/forecast-value-added-fva-series-2-interview/ https://demand-planning.com/2015/01/21/forecast-value-added-fva-series-2-interview/#respond Wed, 21 Jan 2015 14:10:46 +0000 https://demand-planning.com/?p=2739

Interviewer: Michael Gilliland, SAS

This month’s interview is with Shaun Snapp, founder and editor of SCM Focus, where he provides independent supply chain software analysis, education, and consulting.

Shaun’s experience spans several large consulting companies and i2 Technologies, where he worked before starting SCM Focus. He has a strong interest in comparative software design, maintains several blogs, and has authored 19 books, including Supply Chain Forecasting Software and, most recently, Promotions Forecasting. He holds an MS in Business Logistics from Penn State University.

I asked Shaun about the application of FVA analysis with his clients.

Mike: What forecasting performance metric are you using (e.g., MAPE, weighted MAPE, forecast accuracy), and at what level do you measure (e.g. by Item / Distribution Center / Week with a 3-week lag)?

Shaun: I really only use MAPE or weighted MAPE. In most cases I am comparing different effects on forecast accuracy, so a relative measure is the most appropriate. As I have to export forecasts and actuals from systems to calculate global figures, weighted MAPE, while certainly the most accurate, is a bit more work to calculate, and of course there are different ways of weighting MAPE, which brings up a separate discussion.
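For readers unfamiliar with the distinction, here is a minimal sketch of MAPE versus a volume-weighted MAPE; weighting by actual volume is only one of the several schemes Shaun alludes to, and the numbers are illustrative.

def mape(actuals, forecasts):
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals) * 100

def weighted_mape(actuals, forecasts):
    # Weight each item's error by its volume, so large movers dominate
    # the metric instead of every item counting equally.
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / sum(actuals) * 100

actuals   = [1000, 50, 20]
forecasts = [950, 80, 10]

print(f"MAPE:          {mape(actuals, forecasts):.1f}%")           # ~38.3%
print(f"Weighted MAPE: {weighted_mape(actuals, forecasts):.1f}%")  # ~8.4%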

I try to get companies to measure at the Item/DC level. I also make the case that the relevant duration to measure over is the replenishment lead time. I don’t use any lagging.

Mike: Are you measuring forecast bias? What are your findings?

Shaun: Yes, very frequently. My finding is the same as the literature’s: sales inputs have a consistent bias, which at my clients is not addressed through anything but planner adjustment.
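The consistent bias Shaun describes is typically measured with the mean percentage error (MPE), a signed error that cancels out when inputs are unbiased and accumulates when they lean one way. A brief sketch with invented numbers:

def mpe(actuals, forecasts):
    # Positive MPE here means persistent over-forecasting.
    return sum((f - a) / a for a, f in zip(actuals, forecasts)) / len(actuals) * 100

actuals         = [100, 110, 95, 120]
sales_forecasts = [115, 125, 110, 130]

print(f"Sales input bias (MPE): {mpe(actuals, sales_forecasts):+.1f}%")  # about +13%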

Mike: Are you comparing performance to a naïve model?

Shaun: No. I tend to compare my clients’ forecasts against a best-fit model. I do have an approximation of the percentage of the database that does not need much forecasting energy, as I know what percentage of the database has a level forecast applied; these are both the highly variable items and the very stable items.

My work pretty much stops at getting the system to generate a decent forecast; I don’t have any involvement in what the planners do after that. Most companies I work with have either walked away from the statistical forecast or use only a very small portion of the statistical forecasts that are generated. The planners are free to make any adjustment or to change the model applied.

Mike: What are the steps in the forecasting processes you see (e.g., stat forecast, analyst override, consensus meeting override, executive approval)? What FVA comparisons are you measuring?

Shaun: I do all of these comparisons for clients. I am trying to understand what the FVA is at each step so poor quality inputs can be de-emphasized and quality inputs can be emphasized.

The bigger problem is impressing the importance of FVA on clients. I can’t recall finding any work of this type done at clients before I arrive. I think this is because it does take work, and demand planners are busy doing other things. Because so many manual adjustments have to be made, and because so many meetings are necessary with the groups that provide forecasting input, most demand planning departments seem overworked relative to their staffing level.

Most of the forecasting consulting that comes before me is of a system-focused nature: adding characteristics to a view, creating new data cubes, that sort of thing. There seems to be a much smaller market for forecast input testing. It is something I bring to clients, but normally not something they ask for. Many decisions are still very much made based upon opinions and “feel.” In fact, I find it very rare for the attribute or characteristic used to create a disaggregated forecast to have been proven to improve forecast accuracy before it is implemented in the system.

Mike: Anything else you’d like to say about FVA? Including advice for other companies considering the application of FVA?

Shaun: I have never seen any forecasting group that based its design upon FVA.

This is not to say that lip service is not paid to FVA. If you bring up the topic, most people will tend to agree it makes sense. However, really using FVA means being very scientific about how one measures different forecast inputs, and while businesses use math, they are generally not particularly aligned with scientific approaches.

There is an insufficient number of people, whether in companies or working as consultants, who understand how to perform and document comparative studies. Documentation is a very important part of the process, and again, this is a serious limitation at every company I have ever come into contact with, from the biggest to the smallest; industry affiliation does not seem to matter very much in this regard.

On a different topic, as the literature points out and as I can certainly attest, there are some groups that have a negative interest in FVA. That is, some groups want to provide input to the forecast, don’t particularly care whether they are right, and don’t particularly want to be measured. Some groups just want to ensure the in-stock position of their items. These groups are very powerful and exert a great deal of pressure on the supply chain forecasting group to accept their forecasting input.

Further, this gets into the fact that there is not simply “one forecast.” There are really multiple forecasts, and while there is discussion of unifying them, in reality this is not an easy thing to do, because different groups have different financial and other incentives and see things through different lenses.

I would say the norm is poor quality forecasting, with inputs to the forecast that are entirely unregulated as a matter of policy (though regulated to a degree by individual planners).

Willing to share your experiences with FVA? Please contact the IBF at info@ibf.org to arrange an interview for the blog series.

]]>
https://demand-planning.com/2015/01/21/forecast-value-added-fva-series-2-interview/feed/ 0