Forecasting Fallacy Exposed: The Economics Behind Accuracy
https://demand-planning.com/2024/08/19/forecasting-fallacy-exposed-the-economics-behind-accuracy/ | Mon, 19 Aug 2024

Forecasting plays a pivotal role in most business decisions, and few would disagree that high-quality forecasting is a strong competitive advantage for businesses. Companies rightly invest significant time and money in projects aimed at improving the quality of their forecasts. The approaches vary—be it processes, tools, data, or algorithms—but the singular goal is to have the most reliable forecast possible.

The practice of forecasting is built upon a simple foundational principle: the quality of a forecast is measured by its accuracy or, inversely, by its error rate. That businesses need a reliable forecast is not up for debate; a forecast with a low error rate is undoubtedly of better quality than one with a significant error rate. But let’s explore a more subtle question: Do our businesses truly need a reliable forecast in terms of ‘accuracy’?

But is forecast accuracy as important to corporate success as many people think? It’s quite a provocative question. It challenges a universal truth, one considered absolute in our field. Contemplating this question requires taking a step back, reevaluating our preconceived notions of what really drives value, and reassessing our daily practices.

In this and a following article, we’ll share the rather surprising—and perhaps concerning—results of a supply chain study within the retail industry. This study is based on the analysis of a large dataset (more than 32,000 time series) to explore forecasting from the perspective of its added value and its economic contribution to the enterprise. It asks two key questions:

  1. Are ‘accuracy’ and ‘value-added’ as strongly correlated as we think?
  2. When should performance be deemed sufficient? When do further improvements to the forecast become irrelevant?

These two key questions are ones that no enterprise can afford to ignore. To start, let’s address the first question: Is better accuracy a guarantee of added value for the company? The answer is a simple no. On the contrary, as we will demonstrate, increased accuracy can even alter decision-making and lead to financial losses.

Setting Up the Experiment

The M5 competition, organized in 2020 by the Makridakis Open Forecasting Center (MOFC), is a global forecasting competition. It focused on forecasting demand for a subset of products and stores at Walmart, thus in a retail context. At the close of the competition, organizers made public around 130 distinct sets of forecasts: the top 50 deterministic forecasts, the top 50 probabilistic forecasts, and approximately 30 benchmark forecasts based on classical approaches.

This abundance of forecasts enables a deep analysis of the link between accuracy and added value. However, this M5 competition suffers from a significant limitation for our study. It was designed as a pure forecasting competition, completely ignoring the aspects of decision-making and impact evaluation. To conduct our study and explore business value successfully, we had to address this gap by defining our decision-making process and enriching associated data (packaging, supplier constraints, order frequency, cost structure, etc.). Our goal was to closely resemble real-world use cases in business. Thus, we relied on third-party data sources to define the most credible context possible (margin rates by product family, realistic packaging, target service rates, etc.).

Regarding the inventory policy, we established a weekly replenishment process with a 3-day lead time, following a classic “periodic review and dynamic order-up-to-levels” policy. Did it reflect any company’s exact replenishment policy? Clearly not, but that’s not a problem: the essential point is that it reflects a credible and coherent policy.

The results of our study do not claim to be universal. However, they demonstrate that there is a gap between the accuracy of a forecast and its economic value. Everyone is encouraged to replicate this analysis in their own context to see whether the results confirm or conflict with their preconceived notions of forecast accuracy.

Once the procurement process was defined, and the data enriched, we developed a simulation tool and applied it for each time series of each forecast set. We employed a 4-step process:

  1. Forecast ingestion
  2. Evaluation of ‘accuracy’ (using various metrics)
  3. Simulation of the procurement decision
  4. Evaluation of economic performance (especially in terms of gains/costs).

This simulation applied to the M5 competition dataset provided us with numerous and varied data, totalling more than 9.4 million distinct cases.
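To make the mechanics concrete, here is a minimal sketch of that 4-step loop in Python, assuming a simple periodic-review, order-up-to policy. The function name, cost parameters, and policy details are illustrative assumptions, not those used in the study.

```python
import numpy as np

def simulate_policy(forecast, actuals, lead_time=3, review_period=7,
                    unit_margin=2.0, holding_cost=0.05):
    # Step 1: forecast ingestion (arrays of daily quantities)
    forecast = np.asarray(forecast, dtype=float)
    actuals = np.asarray(actuals, dtype=float)

    # Step 2: accuracy evaluation (MAPE here; the study also used MAE, MSE, ...)
    mape = np.mean(np.abs(actuals - forecast) / np.maximum(actuals, 1)) * 100

    # Steps 3-4: simulate the replenishment decision and value the outcome
    on_hand, profit = 0.0, 0.0
    for day, demand in enumerate(actuals):
        if day % review_period == 0:
            # order up to forecast demand over the review period + lead time
            # (the receipt delay during the lead time is omitted for brevity)
            target = forecast[day:day + review_period + lead_time].sum()
            on_hand += max(target - on_hand, 0.0)
        sales = min(on_hand, demand)
        on_hand -= sales
        profit += sales * unit_margin - on_hand * holding_cost
    return mape, profit
```

Running this over every time series in every forecast set yields one (accuracy, economic performance) pair per case, which is the raw material for the comparisons below.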

From Universal Assumptions to Surprising Findings

Before detailing the main results, let’s recall the nearly universally accepted axiom: “If the ‘accuracy’ of forecast A is better than that of forecast B, then forecast A will enable better decision-making and present economic advantage.” To this, we can add a generally accepted limitation: “In some cases, a forecast, although more accurate than another, may not provide any additional added value.” We accept that the improvement might be too minor to have a real influence on replenishment decision making. For example, reducing an error from 4.4 to 4.3 units might have little impact on the replenishment of an item supplied in packages of 12 units.

There were three key findings from the comparison between forecasts, based on their accuracy (expressed here by MAPE) and their economic performance:

  1. Finding #1: In 80% of cases, improving the forecast had no impact on the decision and thus on economic performance. This proportion far exceeds what was expected, implying a negative return on investment (ROI) in 4 out of 5 cases.
  2. Finding #2: In 12.6% of cases, improving the forecast resulted in superior economic performance. This is our expected case. However, this proportion remains low, rewarding efforts to enhance the forecast in only 1 out of 8 cases.
  3. Finding #3: In 7.3% of cases, improving the forecast worsened economic performance. This outcome, initially considered impossible, occurred in 1 out of 3 cases where the forecast improvement influenced the decision.

Figure 1 | Breakdown of economic performance when forecast accuracy improves

These results, evaluated using the MAPE metric, are similar for other studied metrics (MAE, MSE, MSLE, RMSE, wMAPE). This observation is therefore not specific to the MAPE metric but rather associated with the notion of accuracy itself.

Improving the accuracy of a forecast does not guarantee better economic performance. Yet this doesn’t mean we should stop improving our forecasts. In fact, shifting the focus from the frequency of cases to the economic performance of the forecast shows that the value created by a better forecast (here $8,376) significantly surpasses the value lost (here -$3,251). The balance remains strongly positive, which is reassuring: improving the forecast does hold an economic advantage. However, this gain is significantly reduced (by ~28%: the $3,251 lost is roughly 28% of the $11,627 that would otherwise have been captured), which leaves room for further improvements in the forecasting practice.

Advocating for an Economic Approach

These conclusions do not claim to be universal. Transferring them from one context to another would be inappropriate. Indeed, a simple change in decision-making, cost structure, or constraints could produce radically different results.

However, the general conclusion is worrisome. The importance given to the accuracy of a forecast might not be as fundamental as assumed. The belief that improving the accuracy of a forecast is necessarily advantageous is a myth. In the business realm, we are therefore wrong to be so obsessed with accuracy. Perhaps we’ve become so focused on improving our forecasts that we’ve lost sight of the fact that we’re not in an accuracy competition. Our sole goal should be to generate value. And in business, value—although it can take various forms—is primarily economic.

The future of forecasting lies in better integration of decision-making and its impacts into our assessments. The challenge is commensurate with the opportunity it represents! In an upcoming article, we will explore how to improve both efficiency and performance in forecast generation, detailing an approach to target areas where effort expended has a tangible impact on economic performance while identifying those where investment would be economically irrational.

 

This article first appeared in the spring 2024 issue of the Journal of Business Forecasting. To get the Journal delivered to your door every quarter, become an IBF member. Member benefits include discounted entry to all IBF training events and conferences, access to the entire IBF knowledge library, and exclusive member workshops.

 

The Practical Guide to Your First Forecasting Model
https://demand-planning.com/2024/07/08/the-practical-guide-to-your-first-forecasting-model/ | Mon, 08 Jul 2024

Forecasting is part of everyday life. We watch the weather channel before we make weekend plans and text our family our ETA when we leave work. However, forecasting in the business context does not happen naturally. It is the responsibility of the demand planning team and as far as the rest of the company is concerned, how we arrive at these forecasts is a mystery. However, it doesn’t have to be this way.

For companies just starting out in demand forecasting, I would like to offer some very practical advice.

Forecasting anything is about determining mathematical dependencies between variables. The dependencies can be linear or nonlinear. Typically, having more data means you have a better chance at figuring out those dependencies and, as a result, developing a decent forecasting model.

This doesn’t mean you need a neural network or any other advanced algorithm. In fact, a simpler model is often better.

Simple Models Mean Buy-in From Stakeholders

First of all, if you have a choice between a linear and a nonlinear model, choose linear even if it means losing a few percentage points of accuracy. The main reason is that a linear model (e.g. a regression) is easy to translate into a formula and can be easily understood by stakeholders.

If your team can’t explain the predictions, they have no value. There will be no user buy-in, no follow-up questions, no discussions. The wise consumer of a forecast is not a trusting bystander but a participant and, above all, a critic. It is almost impossible to offer any meaningful criticism to something that is difficult to understand.

As your team develops a forecasting model their ultimate goal is to come up with something that stakeholders will digest and retain. Those should be simple things like “if X goes up 10% our sales are likely to go up 5%”.

Secondly, while having external drivers as components is a potential game-changer in the forecasting world, it is absolutely not required to get started. A solid foundational model can be achieved by creating drivers (features) rooted in the time series data itself, as in the table and sketch below. Examples of such features include the month number, quarter number, and a rolling average over a number of previous periods.

| Month | Price | Month Number | Quarter | 2-month rolling average |
|-------|-------|--------------|---------|-------------------------|
| Jan   | 100   | 1            | 1       |                         |
| Feb   | 90    | 2            | 1       |                         |
| Mar   | 150   | 3            | 1       | 95                      |
| Apr   | 120   | 4            | 2       | 120                     |
| May   | 110   | 5            | 2       | 135                     |

Source data (price by month) and the three features engineered for the simplest forecasting model
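For those working in Python, a minimal pandas sketch of engineering these same three features might look like this (the column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "month": pd.date_range("2024-01-01", periods=5, freq="MS"),
    "price": [100, 90, 150, 120, 110],
})
df["month_number"] = df["month"].dt.month
df["quarter"] = df["month"].dt.quarter
# Rolling average of the two *previous* months, matching the table above
df["rolling_avg_2m"] = df["price"].rolling(2).mean().shift(1)
print(df)
```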

 

Having 100 external drivers (oil prices, labor market statistics, search word frequency, etc.) might look good on paper and result in a higher accuracy, but the business stakeholders will likely be bewildered and close the deck, never to open it again. The optimal number of causal relationships is between three and five; this way stakeholders can actually remember what they are.

Aim to Be Directionally Correct, Not Perfectly Accurate

The third and final point is that being directionally correct is the most important forecast characteristic. It is a very intuitive one, but it is often overlooked in the data science world. To illustrate this, let’s evaluate three different forecasts, and we’ll use Root Mean Square Error (RMSE) – one of the most common forecast accuracy metrics – as a way to compare them.

 

| Month | Actual Sales | Forecast 1 | Forecast 2 | Forecast 3 |
|-------|--------------|------------|------------|------------|
| Jan   | 100          | 80         | 80         | 80         |
| Feb   | 90           | 70         | 110        | 100        |
| RMSE  |              | 20         | 20         | 15.8       |

Comparing three forecasts

 

In the world of data science, the lowest RMSE wins. So would we pick Forecast 3 as the best one in this case? Not so fast. Of the three models here, only one correctly indicates a downward trend for the month of February. There are lots of use cases where being directionally correct is far more valuable than landing on the value closest to the actual one. For example, in a Demand Review, knowledge of a downturn in the market is a powerful weapon to wield. With that being said, in this case we would choose Forecast 1 as the best one.
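A small sketch of this comparison, reproducing the table above, shows how easily RMSE and directional correctness can disagree (the forecast values are those from the table):

```python
import numpy as np

actual = np.array([100, 90])
forecasts = {"Forecast 1": [80, 70], "Forecast 2": [80, 110], "Forecast 3": [80, 100]}

for name, f in forecasts.items():
    f = np.array(f)
    rmse = np.sqrt(np.mean((actual - f) ** 2))
    # Does the forecast's month-over-month change have the same sign as the actuals'?
    directional = np.sign(np.diff(f)) == np.sign(np.diff(actual))
    print(name, "RMSE:", round(rmse, 1), "| directionally correct:", bool(directional.all()))
```

Forecast 3 wins on RMSE, yet only Forecast 1 gets the direction right.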

An actionable forecasting model has stakeholder buy-in, is explainable and directionally correct.

To sum up: an actionable, practical forecasting model is not the one that uses the highest number of external drivers, has the most advanced mathematical algorithm, or even the highest accuracy metric. It is the one that has stakeholder buy-in, is explainable, and is directionally correct. This way your team can be sure that it will drive meaningful discussions and result in actions that bring value to the business.

Managing Optimism Bias In Demand Forecasting
https://demand-planning.com/2022/12/07/managing-optimism-bias-in-demand-forecasting/ | Wed, 07 Dec 2022

I recently watched a Ted Talk by Professor Tali Sharot, a specialist researcher in Experimental Psychology at University College London. She described how optimism bias is rooted in our brains to the point where it’s an evolutionary trait.

Human brains integrate positive evidence more efficiently and faithfully than negative evidence. Our tendency is to overestimate the likelihood of experiencing good events in our lives. In short, we are more optimistic than realistic, and we are oblivious to the fact. Around 80% of the population display an optimism bias to some degree, so it comes as no surprise that this translates into the business environment.

This got me thinking about optimism bias in the world of product forecasting – how we adapt our behaviour, especially regarding new or recently launched products where there is a significant judgemental contribution. New product launches typically show a forecast accuracy of 40%-55% (after the first year of launch) and forecasts are heavily reliant on the judgement of stakeholders.

The optimism bias challenge is so prevalent in the real world that the UK Government’s Treasury guidance now includes a comprehensive section on correcting for it. A real-life example is the cost of hosting the Olympic Games which, since 1976, has exceeded its forecast by an average of 200%. Future bidders wanting to safeguard against this bias should bear this in mind.

Tackling optimism bias is therefore crucial. Research by Professor Sharot shows that being aware of the bias does not shatter the illusion. This is key to our understanding: having a bias KPI in forecasting is not enough for the bias to go away, though it is certainly a step in the right direction. It shows us the magnitude of our bias, but measuring alone is not enough to eliminate the human and organisational behaviours that drive it.
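As a reference point, a bias KPI is simple to compute. Here is a minimal sketch using mean percentage error, where persistently positive values signal optimism; the numbers are purely illustrative:

```python
import numpy as np

def bias_pct(forecast, actual):
    """Signed forecast bias as a % of total actual demand (positive = over-forecast)."""
    forecast, actual = np.asarray(forecast), np.asarray(actual)
    return (forecast - actual).sum() / actual.sum() * 100

# e.g. a new product forecast vs. first-year actuals, by month
print(round(bias_pct([120, 130, 125], [100, 95, 110]), 1))  # ~23% over-forecast
```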

To add to the above, let’s also consider that there are two types of optimism. One is closely related to belief in our success and essentially backing ourselves. Studies consistently show that this type of optimism is valuable as it leads to success in academia, sports, and politics. For a new product forecast, we are backing ourselves and our team’s ability to outpace the competition and all our assumptions being right as we are personally invested in its success. This type of optimism is in fact necessary for stakeholder buy-in and team motivation, even though the reality is that not all our efforts will bear 100% commensurate results; some might deliver higher results while others will go the opposite way.

The second and more alarming type is blind unrealistic optimism, i.e. overconfidence in our assumptions and our ability to deliver. Overconfidence is “the most significant of the cognitive biases” for new launches according to research by the Tuck School of Business at Dartmouth. It leads to the setting of unachievable goals which initially might look tempting but, once the hoped-for results do not transpire, can lead to a demotivated workforce underperforming. This is often manifested in overlooking basic facts and fundamentals and not planning for “what if” alternative scenarios. This is where mature forecasting and S&OP processes can play a vital role, as they define risks and opportunities, drive scenario planning, and get stakeholders from Commercial, Marketing, Finance, and Supply Chain on the same page.

From my experience, here are some of the steps that I have seen work best when it comes to ensuring that a forecast reflects all the opportunities without being unrealistically optimistic.

1. Promote Diversity of Thought

The question here is, who has input into your forecasts? We should be wary if only one team is influencing the forecast as this could lead to homogenous personality types dominating the inputs who will inevitably have blind spots. It is common to see teams who are so invested in their new product that they discount some basic facts around its launch assumptions. The more diverse the teams who have a say in the process, the more likely we will have a robust set of assumptions which have been pressure tested. We must ask the question: Does our culture actively encourage individuals to speak up and have a say, especially if they have a different opinion?

2. Document the Assumptions That Underpin the Forecast

On occasion you may hear a colleague who has been intimately involved with a new product say, “I feel this will perform better than the competition or better than we previously thought”. But as demand and S&OP managers, we need to understand the assumptions behind the “feel” factor and what is driving it. Feelings don’t have numbers; however, the assumptions driving price, competitive intelligence, and market share can be analysed, documented, and shared with stakeholders. This is where we need to keep investing in soft skill training for demand and S&OP managers.

3. Do Scenario Planning

We live in a world where our decisions are influenced by our environment and work culture. There might be pressure to hit growth P&L targets, internal politics, or an individual is overriding everyone else. In new products, it is important to acknowledge we will always have the “known knowns vs known unknowns” in our assumptions. Setting up a Base vs Ambitious case scenario opens this debate at the S&OP table in a constructive way and forces us to break down what we know and what we are less confident about. We can prepare and execute both plans and, importantly, know what indicators will show which trajectory we are on. Base vs Ambitious do not have to conflict with each other. Having this discussion early sets us up for decisions on safety stock, late-stage customisation, inventory order points, and risk of write off, among other parameters.

4. Peer Review your New Product Forecast

One of the most effective S&OP practices I saw practiced in my FMCG career was cross-checking of assumptions by an internal, non-aligned stakeholder. For example, in Pharma this could be the commercial lead of Antibiotics stress testing a Respiratory channel forecast. In retail, this could be somebody from Health and Beauty looking at Clothing channel forecast assumptions. If this is done in an open environment and constructively, it produces a robust discussion that ultimately ensures the strongest forecast possible. This works best where the leadership culture of the company is open to taking feedback from the broader organisation and doesn’t work in silos.

5. Learn From Historical Launches

What caused you to be off your forecast last time you launched? Which assumptions turned out to be incorrect? This might sound like a fundamental building block of the process, but it’s important to review your recently launched products on a 3 and 6 month rolling basis (shorter in FMCG) and have a learning feedback loop. Documenting this early on in a central repository is helpful. Often after-action reviews are in presentations that get lost over time as people move to new roles, thereby losing the institutional knowledge gained along with it.

Summary

To have success in our launches, we must believe we are equipped to execute every opportunity without becoming overconfident. A new product launch that has its forecast backed up by a robustly evaluated plan that considers all eventualities will have a significantly better chance of being a resounding win.

Demand Sensing & Shaping With Starbucks
https://demand-planning.com/2022/09/19/demand-sensing-demand-shaping-with-starbucks/ | Mon, 19 Sep 2022

Are you a Starbucks customer? If so, the coffee chain is analyzing your purchases to create a personalized experience for you – and to get you to spend more money. To do this, they have created what they call the digital flywheel program which analyses 900 million weekly transactions, taking into account customer purchases, store locations, meteorological data, inventory data, and more. The coffee giant is leveraging this approach to predict and drive sales.


I don’t like the term AI, but they are using next level stuff here to micro target you with personalized offers based on your preferences and to get you to engage more closely with the brand. Starbucks has cracked the code here when it comes to integrating data analytics and planning.

They are successful not only because of the data and technology they have; they’re successful because of their people and their processes. I talked to Brian Nagy, Senior Demand Planning Manager at Starbucks, who is driving next level planning at the coffee chain and is at the forefront of their analytics and planning efforts. We talked about AI, demand sensing, demand shaping, and how they overcome the same planning challenges we all face. Here are the highlights of that conversation.

How COVID has caused fundamental shifts in consumer behavior

“One of the things we are looking at is the impact of COVID in terms of demographic change. We’ve seen massive population shifts in the last three years – there’s been a mass movement of people leaving places like New York and moving to Florida, and people from California moving to Texas. We’re trying to get ahead of that and make sure that our footprint’s there, getting ahead of our competitors. Having that information about demographic shifts is hugely powerful.”

 

On scenario planning for strategic planning/budgeting

We prioritize end-to-end capabilities and being able to assess all those outcomes, what we would call either strategic planning or budgeting. Companies tend to do this manually once a year, looking holistically at their business figuring out what their strategic direction looks like. We’re doing it with a lot of directional input and projecting trends forward. Those trends may or may not make sense but we need to know what is actually driving those plans. So being able to integrate some of this information like pricing scenarios and internal and external data to look at risks and opportunities is important.

“If you have a revenue or margin target you can input that and theoretically find the different paths of getting there within your mix, pricing, and customer base, and with regional approaches and different promotional strategies. It’s that capability to really dial in on how to optimize the business and get everyone understanding the risks and opportunities. That’s what S&OP and IBP is all about so it’s really just about getting a tool to get us there.

On the planning tools of the future

“What comes to my mind is an 80s stereo with a thousand different knobs. We want most of this to be AI and machine learning facilitated and with the capability to play with those dials and levers, whether it’s demographic data, different pricing alternatives, external data. We want to look at different things that can impact your business whether that be marketing strategies, promotional strategies, what you bring to the table from an innovation standpoint and being able to run those scenarios seamlessly and quickly.

“Obviously, this all starts with the demand plan but then it needs to go the whole way through the stream of the supply chain so we can think about things like warehousing strategies, ocean freight availability and costs etc. to the point where you have really robust contingencies in place.

“We’ve seen ocean freights just go through the roof with COVID – in a case like that what kind of scenarios have you thought about as a business? We need these things in place to steer a different direction if required. Planning tools should help facilitate those types of things just by asking “What if we go this route? What if we produce domestically versus importing?” That’s really the value of integrating a true end-to-end type capability; being able to assess the entire way through and make decisions as a group.”

On Starbucks overcoming the continuous challenges of planning

“I have been a manager for eight years now and the same principles always seem to work as far as having really robust exception management, having the right tools to understand the business, and getting the right data. 

“Sometimes you gotta be scrappy and pull it together but being clever and thinking of different ways to problem solve is important. Getting away from shipment data is something that’s been necessary because shipment history for most businesses over the last three years has been relatively worthless as an input. So just trying to come up with creative ways, looking more at POS data so we can get real-time signals of where customers are going where they’re headed.

“But largely it’s the same principles of forecasting that have always worked: the right exceptions, having the right training, and getting the team up to speed and just having Demand Planners know what to do and when they need to do it. Then the rest kind of takes care of itself.

 “The only other thing I’d say is being more vocal in the business and calling these trends out as we’re seeing them. I’ve seen demand planning emerge in a lot of different business settings since COVID. It’s been an unfortunate event for the world obviously but I think we’ve seen a lot of benefit on the demand planning side as more business leaders have recognized the value of it and are more willing to listen to what we have to say.”


Post-COVID, brand loyalty has dissolved, meaning that companies must work harder to retain customers, and that requires personalized experiences. If you’re not talking directly to consumers the way they want to be interacted with and on the right platforms, you may lose them. Underlying this is predictive analytics for demand sensing and demand shaping, which on the one hand helps us understand behaviours and on the other allows us to micro-target customers in ways that get them in store and maximize spend.

It’s not just about coffee or retail either; demand sensing and shaping applies to all industries. For further information, there is a chapter about Starbucks’ demand sensing and shaping approach in my book Predictive Analytics For Business Forecasting & Planning.

 

The Simple Power of Aggregate Forecasting
https://demand-planning.com/2022/08/19/the-simple-power-of-aggregate-forecasting/ | Fri, 19 Aug 2022

In the 1990s, when I was with Baxter Healthcare, we implemented a statistical forecasting solution for our European affiliates. In going through the user training I was intrigued by the functionality around aggregate-level forecasting and the improved accuracy achieved.

The example they used was a company manufacturing bicycles. Rather than forecast the demand for different colors of a particular model the company would aggregate historical demand at the model level and then run their statistical modelling.

At this aggregate level the forecast accuracy was much better and downstream painting could be driven by Make to Order with much shorter lead times, a reorder point (ROP), or through a disaggregation technique for the higher level forecast.

At the time I was managing the European distribution of sterile surgical gloves and was excited about trying this approach. We had two SKUs for each of the eight sizes of the five different types of surgical gloves, each with ten languages. We sourced these 80 SKUs from our Malaysian manufacturing site into our European distribution center in Belgium and then shipped weekly to our twenty affiliates based on their actual inventories, forecast, and safety stock target.

I started by building a pyramid structure in our forecasting tool that allowed me to aggregate historical demand for these 80 SKU’s. I then began forecasting at this level each month and compared to the sum of the affiliate forecasts.

The results were astounding and I was able to demonstrate a greater than 20 point improvement in forecast accuracy using this method. I easily convinced my boss that we should use these forecasts for our manufacturing site in Malaysia.
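For readers who want to test this on their own data, here is a minimal sketch of the aggregate-versus-item-level comparison in Python. The file name, model choice, and holdout scheme are assumptions; any statistical model can be slotted into `fit_forecast`:

```python
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# monthly demand: one column per SKU, a date index (hypothetical file)
demand = pd.read_csv("glove_demand.csv", index_col=0, parse_dates=True)
train, test = demand.iloc[:-12], demand.iloc[-12:]

def fit_forecast(series, horizon=12):
    model = ExponentialSmoothing(series, trend="add", seasonal="add",
                                 seasonal_periods=12).fit()
    return model.forecast(horizon)

# bottom-up: forecast every SKU, then sum the forecasts
bottom_up = sum(fit_forecast(train[c]) for c in train.columns)
# top-down: aggregate the history first, then forecast once
aggregate = fit_forecast(train.sum(axis=1))

actual_total = test.sum(axis=1)
for name, fc in [("sum of SKU forecasts", bottom_up), ("aggregate forecast", aggregate)]:
    wape = abs(actual_total.values - fc.values).sum() / actual_total.sum() * 100
    print(name, "WAPE:", round(wape, 1), "%")
```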

I then calculated a ROP for each affiliate for each SKU based on historical demand variability and lead times and developed a dBase application to calculate weekly replenishment quantities based on actual inventory and the ROP. Getting commercial buy-in for this approach took more time but we did get agreement.
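For context, the standard ROP calculation from demand variability and lead time looks like the sketch below; the service level and demand figures are illustrative, not the ones we used:

```python
from statistics import NormalDist

def reorder_point(mean_weekly_demand, std_weekly_demand,
                  lead_time_weeks, service_level=0.95):
    # safety stock = z * sigma * sqrt(lead time), normal-approximation formula
    z = NormalDist().inv_cdf(service_level)
    safety_stock = z * std_weekly_demand * lead_time_weeks ** 0.5
    return mean_weekly_demand * lead_time_weeks + safety_stock

print(round(reorder_point(200, 60, 2)))  # ~540 units for this example
```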

I also met monthly with our European product manager to ensure that any market intelligence was captured on top of the statistical model. This process worked so well that we were able to tell our affiliates that they no longer needed to spend time forecasting these products. We also well over achieved on our inventory targets.

A few years later I moved to our biotech division. I remember when my boss needed to provide a projection of QIV European sales for a blockbuster hemophilia product and asked me how much I thought we would sell.

I aggregated historical demand at the three dosage form levels: 250 AU (activity unit), 500 AU, and 1000 AU lyophilized product in vials. I ran the statistical models and told him 90 million AUs. Actual QIV sales came in close to 100 million AUs, and my forecast was much better than what he had received from Finance in the affiliates.

Since those days I have been with three different biopharmaceutical companies and have built a large network across the industry. It amazes me that not once have I seen this technique applied to improve forecast accuracy.

For many products the bulk unpackaged tablet, capsule, vial, syringe is the same across many markets and even globally. By aggregating demand at this level and then generating a forecast biopharmaceutical companies would be running their most constrained and expensive manufacturing operations with a much more accurate demand signal.

It goes without saying that this approach would have a profound impact on inventory levels. I am not suggesting that it be applied carte blanche but it should be strongly considered for any product from 3 – 5 years after launch through to late stage lifecycle.

With this approach one could use one of the strategies I mentioned above for downstream packaging and distribution. Make to Order would not work in this industry but reorder point is an option or using a technique to disaggregate the tablet/capsule/vial/syringe forecast down to the country level.

Transitioning From Time Series To Predictive Analytics
https://demand-planning.com/2022/06/30/transitioning-from-times-series-to-predictive-analytics-with-dr-barry-keating/ | Thu, 30 Jun 2022

I recently had a fascinating and enlightening conversation with one of the leading figures in predictive analytics and business forecasting, Dr. Barry Keating, Professor of Business Economics & Predictive Analytics at the University of Notre Dame.

He is really driving the field forward with his research into advanced analytics and applying that cutting-edge insight to solve real-world forecasting challenges for companies. So I took the opportunity to get his thoughts on how predictive analytics differs from what we’ve been doing so far with time series modeling, and what advanced analytics means for our field. Here are his responses.

What’s the Difference Between Time Series Analysis & Predictive Analytics?

In time series forecasting, the forecaster aims to find patterns like trend, seasonality, and cyclicality, and makes a decision to use an algorithm to look for these specific patterns. If the patterns are in the data, the model will find them and project them into the future.

But we as a discipline realized at some point that there were a lot of things outside our own 4 walls that affected what we are forecasting, and we asked ourselves: what if we could in some way include these factors in our models? Now we can go beyond time series analysis by using predictive analytics models like simple regression and multiple regression, using a lot more data.

The difference here compared to time series is that time series looks only for specific patterns whereas predictive analytics lets the data figure out what the patterns are. The result is much improved forecast accuracy.

 

Does Time Series Forecasting Have a Place in the Age of Advanced Analytics?

Time series algorithms will always be useful because they’re easy to do and quick. Time series is not going away – people will still be using Holt-Winters, Box-Jenkins, time series decomposition etc. long into the future.

What’s the Role of Data in all This?

The problem now isn’t using the models but collecting the data that lies outside our organization. Today’s data comes in very different volumes. We used to think that 200 or 300 observations in a regression was a lot of data – now we might use 2 or 3 million observations.

“We used to think 200 observations was a lot of data – now we might use 2 or 3 million”

Today’s data is different not only in its size but also in its variety. We don’t just have numbers in a spreadsheet – it may be streaming data; it may not be numbers at all but text, audio, or video. Velocity is also different: in predictive analytics we don’t want to wait for monthly or weekly information, we want information from the last day or hour.

The data is different in terms of value. Data is much more valuable today than it was in the past. I always tell my students to not throw data away. What you think isn’t valuable, probably is valuable.

Given we are Drowning in Data, how do we Identify What Data is Useful?

When the pandemic started, digital purchases had been increasing at 1% a year and constituted 18% of all purchases. Then, in the first 6 weeks of the pandemic, they increased 10 percentage points. That’s 10 years’ worth of online purchase growth happening in just weeks. That shift meant we now need more data and we need it much more quickly.

“You don’t need to figure out which data is important; you let the algorithm figure it out”

You don’t need to figure out which data is important; you put it all in and let the algorithm figure it out. As mentioned, if you’re doing time series analysis, you’re telling the algorithm to look for trend, cyclicality and seasonality. With predictive analytics it looks for any and all patterns.

Predictive analytics assumes that you have a lot of data – and I mean a lot

It’s very difficult for us as humans to take a dataset, identify patterns, and project them forward, but that’s exactly what predictive analytics does. This assumes that you have a lot of data – and I mean a lot – and data different from what we were using in the past.

Do you Need Coding Skills to do This?

Come to an IBF conference or training boot camp and you will learn how to do Holt-Winters, for example. Do we teach people how to do that in R, Python, or Spark? No. You see a lot of advertising for coding for analytics. Do you need to do that to be a forecaster or data scientist? Absolutely not.

There are commercial analytics packages where somebody who is better at coding than you could ever hope to be has already done it for you. I’m talking about IBM SPSS Modeler, SAS Enterprise Miner, or Frontline Systems XLMiner. All of these packages do 99% of what you want to do in analytics.

Now, you have to learn how to use the package and you have to learn enough about the algorithms so you don’t get in trouble, but you don’t have to do coding.

“Do you need to be a coder? Absolutely not”

What about the remaining 1%? That’s where coding comes in handy. It’s great to know coding. If I write a little algorithm in Python to pre-process my data, I can hook it up to any of those packages. And those packages I mentioned can be customized; you can pop in a little bit of Python code. But do you need to be a coder? Again, absolutely not.

Is Knowing Python a Waste of Time Then?

Coding and analytics are two different skills. It’s true that most analytics algorithms are coded in R, Python, and Spark, but these languages are used for a range of different things [i.e., they are not explicitly designed for data science or forecasting] and knowing those languages allows you to do those things. Being a data scientist means knowing how to use the algorithms for a specific purpose. In our case as Demand Planners, it’s about using K-Nearest Neighbor, vector models, Neural Networks, and the like.

All this looks ‘golly gee whiz’ to a brand-new forecaster who may assume that coding ability is required, but these skills can actually be taught in the 6-hour workshops that we teach at the IBF.

What’s the Best way to get Started in Predictive Analytics?

The best way to start is with time series, then when you’re comfortable add some more data, then try predictive analytics with some simple algorithms, then get more complicated. Then, when you’re comfortable with all that, go to ensemble models where, instead of using one algorithm, you use 2, 3, or 5. The last research project I did at Notre Dame used 13 models at the same time. We took an ‘average’ of the results and the results were incredible.
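The mechanics of a simple averaging ensemble are tiny. A minimal sketch, assuming you have already generated each member forecast (the model names and numbers below are placeholders):

```python
import numpy as np

# forecasts from several already-fitted models, one list per model
member_forecasts = {
    "holt_winters": [105, 110, 118],
    "regression":   [101, 108, 120],
    "knn":          [ 99, 112, 115],
}
# element-wise average across the member models, one value per period
ensemble = np.mean(list(member_forecasts.values()), axis=0)
print(ensemble)
```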

The IBF workshops allow you to start out small with a couple of simple algorithms that can be shown visually – we always start with K-Nearest Neighbor, and for a very good reason. I can draw a picture of it and show you how it works without putting any numbers on the screen. There aren’t even any words on the screen. Then you realize “Oh, that’s how this works.”

“Your challenge is to pick the right algorithm and understand if it’s done a good job”

It doesn’t matter how it’s coded because you know how it works and you see the power – and downsides – of it. You’re off to the races; you’ve got your first algorithm under your belt, you know the diagnostic statistics you need to look at, and you let the algorithm do the calculation for you. Your challenge is to pick the right algorithm and understand whether it’s done a good job.


To add advanced analytics models to your bag of tricks, get your hands on Eric Wilson’s new book Predictive Analytics For Business Forecasting. It is a must-have for the demand planner, forecaster or data scientist looking to employ advanced analytics for improved forecast accuracy and business insight. Get your copy.

My Favorite Forecasting Model
https://demand-planning.com/2022/06/24/my-favorite-forecasting-model/ | Fri, 24 Jun 2022

One of the questions I get asked most frequently is “What is your favorite forecasting model?” My answer is “it depends” because not all problems need a hammer. Sometimes you need a wrench or a screwdriver which is why I advocate having a forecasting toolbox that we can draw on to tackle whatever forecasting project arises.

When it comes to forecasting methods, we have everything from pure qualitative methods to pure quantitative methods, and everything in between. On the far left of the image below you’ll see judgmental, opinion-based methods with knowledge as the inputs. On the far right, we have unsupervised machine learning – AI, Artificial Neural Networks etc. where the machine decides on the groupings and optimizes the parameters as they learn the test data. In between these two extremes we have naïve models, causal/relationship models, and time series models.

All of the models should be in our toolbox as forecasters.

But with dozens of methods available to you, how do you decide which ones to use? There are cases when sophisticated machine learning will help and cases when pure judgement will help, but somewhere between the extremes is where you’ll find the models you’ll need on a day-to-day basis.

Picking The Right Model For A Particular Forecast

The main thing is to have a toolbox full of different methods that you can draw on depending on the data available and the resources you have. We must balance 3 key elements when choosing a model:

Time available: How much time do you have to generate a forecast? Some models take longer than others.

Interpretability of outputs: Do you need to explain how the model works to stakeholders? Outputs of some models are difficult to explain to non-forecasters.

Data: Some models require more data than others and we don’t always have sufficient data.

For example, putting together a sophisticated machine learning model and training it could take months, plus extra time for it to provide a usable output. When a forecast is needed now, this kind of model won’t help. Similarly, if you have little or no data, as with new products, you may have to use judgmental methods.

Balancing interpretability and accuracy is also key. There are models whose accuracy can be finetuned to a great degree but as we become more accurate, interpretability (explaining the rationale behind the number) often becomes more difficult. Artificial Neural Networks, for example, can be very accurate, but if you need to explain to partners in the S&OP process or to company execs how the model works and why they should trust it, well you might have some difficulty.

Time series models like regression or exponential smoothing are much easier for stakeholders to understand. So what kind of accuracy do you need? Do you need 99% accuracy for a particular forecast, or is some margin of error acceptable? Remember that there are diminishing returns to finetuning a model for accuracy – more effort doesn’t necessarily provide more business value.

This is why the best model depends on the context you’re working in.

 

Judgmental Methods

These are not sophisticated but they have their place. When I have no historic data, i.e. for a new product or customer, I have nothing to forecast with. Remember, human judgement based on qualitative factors is a forecast and it’s better than no forecast at all.

Judgments are also important in overriding statistical forecasts when an external variable emerges that isn’t accounted for. A model doesn’t know if you’ve just opened a new store or if supply constraints have unexpectedly emerged. Of course, human judgement has bias – be sure to identify it if you’re using judgmental models. In the Judgmental category we have:

Salesforce Method: This involves asking what salespeople think about future demand based on their knowledge of the market and customers.

Jury Method: This simply involves surveying stakeholder’s opinions and letting the consensus decide what future demand is likely to be.

Delphi Method: A more systematic version of the Jury Method where stakeholders blindly submit their estimates/forecasts. You then take a mean of the responses, which is a more robust/accurate method than you might think.

Time Series Models

58% of planning organizations use time series methods. It’s popular because we all have the data we need for this method – we can use sales data or shipment data. Also our colleagues in Finance, Inventory Management and Production can all use these forecasts. Here we identify patterns (whether level, trend, seasonality) and extrapolate going forward.

The key assumption here is that what happened in the past is likely to continue into the future. This means this method works best in stable environments with prolonged demand trends. It doesn’t perform so well with volatile products/customers, new products and doesn’t explain noise.

Averaging Models

Instead of using one single data point like a naïve forecast, here we use multiple data points and smooth them, the theory being that this provides a more accurate value. In this category we have simple moving averages (SMA) and exponential moving averages (EMA). The difference between the two is that an SMA weights every point in the window equally, while an EMA applies more weight to more recent data points.
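In pandas, the difference is a one-liner each; a small sketch with illustrative data:

```python
import pandas as pd

demand = pd.Series([100, 90, 150, 120, 110])
sma = demand.rolling(window=3).mean()          # equal weights over the window
ema = demand.ewm(span=3, adjust=False).mean()  # heavier weight on recent points
print(pd.DataFrame({"demand": demand, "SMA(3)": sma, "EMA(3)": ema}))
```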

Decomposition Models

These models take out the elements of level, trend, seasonality, and noise components, and add them back in for a forward-looking projection. It’s a good statistical method to understand seasonality and trend of a product.

Exponential Smoothing  

These are the most used methods and include single and double exponential smoothing, with the Holt model and Winters model being widely used. There is also Holt-Winters, a combination of the two: a level, trend, and seasonal model that captures all three attributes of the time series, weighting past observations exponentially so that recent history counts most.

Where a naïve model uses a single data point and an averaging model weights multiple points equally, exponential smoothing takes multiple points and weights them differently, considering level, trend, and seasonality. I find this to be a very versatile model that is appropriate for a lot of data sets. It’s easy to put together, can be used with relatively little data, and is easy to interpret and explain.
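A minimal Holt-Winters sketch using statsmodels, assuming monthly data with yearly seasonality; the series values are illustrative:

```python
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# three years of monthly demand (illustrative repeating seasonal pattern)
y = pd.Series([112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118] * 3,
              index=pd.date_range("2020-01-01", periods=36, freq="MS"))
model = ExponentialSmoothing(y, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
print(model.forecast(6))  # level + trend + seasonal projection, 6 months ahead
```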

Going Beyond Time Series Models 

All data is not time related or sequential. And all information is not necessarily contained within a dataset. Causal or relationship methods assume that there is an external variable (a causal factor) that explains demand in a dataset. Examples of causal factors include economic data like housing starts, GDP, and weather. Relationship models include penetration and velocity models where you add variables to a model.

These carry on nicely from exponential smoothing models that identify level, trend, seasonality, and noise. The noise can be explained with causal models, which can identify whether there is an external variable (or several) at work. This is useful when there is a lot of noise in your data. Generally speaking, these models are useful alongside time series models to explain the consumer behavior changes that are causing the changing demand patterns.

Machine Learning Models

Machine learning models take information from a previous iteration or training data set and use them to build a forecast. They can handle multiple types of data which makes them very useful. There are interpretability issues with these models, however, and there is a learning curve when it comes to using them. But it’s not too difficult to get started with the basics – Naïve Bayes is a good place to start.

Clustering Models

Clustering, a form of segmentation, allows us to put data into smaller, more manageable sub-groups of like data. These subgroups can then be modeled more accurately. At a simple level, classification can be the Pareto rule, or it can be more complex, like hierarchical clustering visualized with a dendrogram (a tree diagram showing how points nest into groups) and K-means, where we group data based on distance from a central point. They’re all ways of breaking up large data sets into more manageable groups.
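As one illustration, here is a K-means sketch that segments SKUs by volume and variability so each cluster can be modeled separately; the data is randomly generated purely for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = np.column_stack([
    rng.gamma(2, 500, 200),      # average monthly volume per SKU
    rng.uniform(0.1, 1.5, 200),  # demand variability (coefficient of variation)
])
X = StandardScaler().fit_transform(features)  # scale so both features count equally
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))  # SKU count per cluster
```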

Picking the Best Model

Understand why you’re forecasting. Think about how much time you have, the data you have, your error tolerance, and the need for interpretability, then balance these elements. Start simple (naïve might get you there) and work from there. You might need a hammer, screwdriver, or wrench – be open to using all the tools in your toolbox.


To add the above-mentioned models to your bag of tricks, get your hands on Eric Wilson’s new book Predictive Analytics For Business Forecasting. It is a must-have for the demand planner, forecaster or data scientist looking to employ advanced analytics for improved forecast accuracy and business insight. Get your copy.

 

How I React When Forecasted Demand Deviates from Actual Demand
https://demand-planning.com/2022/06/03/how-i-react-when-forecasted-demand-deviates-from-actual-demand/ | Fri, 03 Jun 2022

As I have written about before in a previous article, Forecasting Materials Usage At An Electric Utilities Company, forecasting efforts for an electric utility company are imperative. 

Forecasting is an essential part of our demand management tactics and without forecasting diligence and a proactive approach to managing inventory, our organization is exposed to the risk of not delivering electricity to customers.

Forecasting helps us investigate the future to make decisions on what quantities of materials we need to proactively order. This exercise helps us maintain appropriate inventory levels so that our organization can fulfill demand while at the same time maintain a lean inventory that does not commit unnecessary capital.

But what happens when our forecasting efforts are ineffective?  What happens when we under-estimate, or over-estimate what we will need in the future? It’s not hard to imagine the consequences of the forecasted demand being off from actual demand. We will risk stocking out on materials, or we will have excessive inventory sitting idly.

Recognizing Error

I respond to errors in the forecast in a couple of ways. The first step is to recognize the error. Was the forecast over or under actual demand, and by how much?  The magnitude of the error is of more importance to me. If the forecast was only slightly off, I will not pay it much attention.

We understand that forecasting will generally be different from what occurs, to some extent, so being close is a win. If the forecast is off by a large amount, then it’s time to act.

Calculating Size of the Error

What is a large amount? That will vary from forecast to forecast. For example, a widget’s forecasted consumption may have an absolute deviation (absolute error) of 1,000 units in a month. Said differently, the forecasted quantity could have been 10,000 units while actual consumption was only 9,000 units – we over-forecasted by 1,000 units. Being off by 1,000 units may sound like a lot but, if we take the absolute percentage error formula

Absolute Percentage Error = |Actual − Forecast| / Actual × 100

|9,000 − 10,000| / 9,000 × 100 ≈ 11%

we get a mere 11 percent. With the absolute percentage error, the lower the number the better. The absolute percentage error metric provides a deeper understanding of how close the forecast was.
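The same calculation as a small helper function (assuming non-zero actual demand):

```python
def absolute_percentage_error(actual, forecast):
    # |actual - forecast| / actual, expressed as a percentage
    return abs(actual - forecast) / actual * 100

print(round(absolute_percentage_error(9000, 10000)))  # 11
```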

If I am using a weighted forecasting model, in which I take the average demand of the same month over different years e.g., January 2018, January 2019, January 2020, and January 2021, I can then assign a different weight to each year. I typically assign a higher weight to more recent years because the recent past will often – but not always – be a better predictor of the near term.

The below table exhibits how I typically assign weights to years in a weighted forecast.

After assigning the above weights in a forecast model for a particular widget, I found that the forecast was a fair amount off in one month.  The below graph helps to illustrate how the forecast in February 2022 was off from the actual.

The red circle around February 2022’s data highlights the vertical gap between the turquoise line (actual consumption) and the light blue dashed line (weighted average forecast) to illustrate how far the forecast was from actual consumption.

In February 2022, our actual consumption for this widget was 324 units, but the forecast was 2,328 units. This was an absolute deviation (aka absolute error) of 2,004 units, and an absolute percentage error of 619 percent. Here is where the next step of action comes in.

After noticing this significant deviation, I considered the usage from the years 2020 and 2021.  As you can see from the graph above, the consumption reached dramatically higher points in these years than in 2018 and 2019; this is one of the instances where the recent past is not a better predictor of the near future.

It is safe to say that the months of consumption in 2020 and 2021 may result in an overstated forecast if we give more deference to these years. To address this concern, I adjusted the weights from my previous table: I reduced the weights for the years 2021 and 2020 by 10 percent each and added 10 percent to each of 2019 and 2018.

The new weights are reflected in the below tables.  I made these adjustments for 2022’s Weighted Average forecast and made similar adjustments to the year 2023’s Weighted Average Forecast.

After adjusting the weights, the forecasted quantity for February 2023 was reduced from 1,638 units to 1,235 units and appears to be more in line with previous Februarys.
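Here is a minimal sketch of the weighted-average forecast and that re-weighting step; note that both the February demand history and the two weight sets below are hypothetical stand-ins for illustration:

```python
feb_demand = {2018: 400, 2019: 450, 2020: 2600, 2021: 2800}  # illustrative units

original_weights = {2018: 0.10, 2019: 0.20, 2020: 0.30, 2021: 0.40}
# shift 10 points each from 2021/2020 to 2019/2018, as described above
adjusted_weights = {2018: 0.20, 2019: 0.30, 2020: 0.20, 2021: 0.30}

def weighted_forecast(demand, weights):
    return sum(demand[year] * weights[year] for year in demand)

print(weighted_forecast(feb_demand, original_weights))  # recent years dominate
print(weighted_forecast(feb_demand, adjusted_weights))  # dampens the 2020-21 spike
```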

The adjustments I made to the weights project a forecast for the rest of 2022 and the first quarter of 2023 that I feel quite comfortable with. February 2023’s new forecasted consumption of 1,235 units is still rather high; the average monthly consumption over the past four years is only 462 units. Though it is unlikely that 1,235 units will need to be in our available inventory in February 2023, I would much rather have our organization be somewhat overstocked than substantially understocked.

The strategy of whether to carry a heavy inventory or lean inventory will vary from widget to widget. Some widgets will have far less of a risk appetite than others.  For these items, the approach of over-estimating, and the consequence of excessive inventory, is one that we want to live with as opposed to the risk of stocking out.  The aforementioned scenario shows us how connected forecast modeling is to an inventory strategy – a topic that certainly deserves more explanation in another article.

How I React When The Error is Large

These are the steps in the process I use to react to forecasts when there is a significant deviation from actual demand.  A summary of the process is as follows.

Step 1: Compare the item's forecasted demand to actual demand and identify where there are big variances. Use metrics such as absolute error (absolute deviation) and absolute percentage error to confirm that the deviation deserves attention.

Step 2: Review the historical demand and adjust the forecast calculations as necessary. In the case of the weighted average forecast, one can easily adjust the weights of the years, or time periods, whose historic demand may be understating or overstating the forecast.

Another possible step is adjusting the historic demand data sample. For example, if 2020 and 2021 show a clear downward trend in demand relative to 2018 and 2019, it may make sense to remove 2018 and 2019 from the forecast's data sample entirely. The new sample would include only the past 24 months of demand instead of the past 48.
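In code, trimming the sample can be as simple as a slice; the series below is a placeholder for real demand history.

    # 48 months of demand history, oldest first (placeholder values)
    monthly_demand = list(range(100, 148))
    recent_sample = monthly_demand[-24:]  # keep only the most recent 24 months
    print(len(recent_sample))             # 24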

There are nearly endless ways to change your forecasting approach when forecasted demand differs significantly from actual demand.

Perhaps an even simpler approach is to change the forecasting model altogether. Instead of a Weighted Average forecast, one could use a Simple Linear Regression or Exponential Smoothing model. Whichever methodology you use, the same basic steps of reviewing the forecast against the actual will be necessary to determine your next course of action.
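As a sketch of one such alternative, here is simple exponential smoothing written out by hand; the smoothing constant alpha and the demand series are illustrative assumptions you would tune and replace with real data.

    def exponential_smoothing(demand, alpha=0.3):
        # Each step blends the latest actual with the previous forecast:
        # forecast = alpha * actual + (1 - alpha) * previous forecast
        forecast = demand[0]  # seed with the first observation
        for actual in demand[1:]:
            forecast = alpha * actual + (1 - alpha) * forecast
        return forecast       # one-step-ahead forecast for the next period

    monthly_demand = [410, 380, 455, 390, 2100, 1950, 520, 480]  # hypothetical units
    print(round(exponential_smoothing(monthly_demand, alpha=0.3)))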


IBF’s Supply Chain Planning Boot Camp returns to Las Vegas from August 10-12.  Learn the fundamentals and best practices across the supply-demand chain and how leading companies balance supply and demand, from the demand plan to the master scheduling process. More information here.

]]>
https://demand-planning.com/2022/06/03/how-i-react-when-forecasted-demand-deviates-from-actual-demand/feed/ 0
How To Improve Forecast Accuracy During The Pandemic? https://demand-planning.com/2021/07/01/how-to-improve-forecast-accuracy-during-the-pandemic/ https://demand-planning.com/2021/07/01/how-to-improve-forecast-accuracy-during-the-pandemic/#respond Thu, 01 Jul 2021 12:19:32 +0000 https://demand-planning.com/?p=9186

Q) During the current pandemic we are facing a very difficult time in preparing forecasts. Our forecast accuracy is far below what it used to be. Can you suggest any way to improve it?

A) We are certainly in a new economic phase, something we have never experienced before. In the past we had disruptions either in supply or in demand – not in both at once, as we are experiencing now. This may be short-lived, but we must deal with it while it lasts, and that means changing the way we forecast.

Firstly, keep in mind that the sharp increases or decreases in sales are not outliers but a reflection of new data patterns. When an outlier repeats itself again and again, it is no longer an outlier but part of a new pattern. This means that old data is not relevant for future forecasts.

Secondly, you need to know how the data pattern is changing. The pattern of many products has changed drastically, and the sooner we learn about it, the better. To detect the change and respond quickly enough, we need to work not with monthly or weekly data but with daily data. Compute the percentage change in cumulative sales from one day to the next, and then compute the average weekly change. If the weekly percentage change is rising, the trend is upward; if it is falling, the trend is downward. We can use this trend to make a forecast for the next period.

It may not be long before the pandemic is over, and with that the pattern will change again. The weekly percentage change in sales will quickly tell us which way the data is trending, and how strongly.
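A minimal Python sketch of this calculation, with two weeks of invented daily sales figures:

    daily_sales = [120, 135, 110, 150, 160, 90, 80,   # week 1 (hypothetical)
                   140, 155, 130, 170, 185, 100, 95]  # week 2 (hypothetical)

    # Running cumulative sales, then the day-over-day percentage change in it
    cumulative = []
    total = 0
    for sale in daily_sales:
        total += sale
        cumulative.append(total)

    daily_pct_change = [
        (cumulative[i] - cumulative[i - 1]) / cumulative[i - 1] * 100
        for i in range(1, len(cumulative))
    ]

    # Average the daily changes within each week to get the weekly figure
    week1_avg = sum(daily_pct_change[:6]) / 6   # changes within week 1
    week2_avg = sum(daily_pct_change[6:]) / 7   # changes into and within week 2
    print(round(week1_avg, 1), round(week2_avg, 1))
    # A rising weekly average signals an upward trend; a falling one, downward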

I hope this helps.

Happy forecasting!

Dr. Chaman L. Jain,

Editor-in-Chief,

Journal of Business Forecasting

]]>
https://demand-planning.com/2021/07/01/how-to-improve-forecast-accuracy-during-the-pandemic/feed/ 0
5 lessons I Had To Learn As A Demand Planner https://demand-planning.com/2021/05/14/5-lessons-i-had-to-learn-as-a-demand-planner/ https://demand-planning.com/2021/05/14/5-lessons-i-had-to-learn-as-a-demand-planner/#respond Fri, 14 May 2021 14:04:05 +0000 https://demand-planning.com/?p=9112

Things are not always what they seem or how you imagined them to be. When I made my start in demand planning a few years back, there were some important lessons I had to learn. The following are five lessons, or 'revelations', that came to me on the job and made me a better Demand Planner.

1. Not Everyone Thinks Like Me

There is a tendency to believe that everyone is just like you and thinks like you do. I admit I used to fall into that mental trap myself. I am a very analytical, logical person and, when I started in forecasting, I thought everything would be about presenting numbers. Numbers represent facts, and that's pretty cut and dried, right? It turns out not everyone understands what I do, or even cares about it. They are not impressed by the models I use, the correlations I've found, or that my mean absolute percentage error is better than average.

To do my job better I needed to start thinking like a salesperson, supply planner and marketing professional.

But they are impressed by insights into what may occur, how it impacts them, and why things are occurring now. To do my job better I needed to start thinking like a salesperson, supply planner, marketing professional, and even executive management. I had to learn that not only do people not think like me, they're also not interested in the technical aspects of my job – they just need the information that is relevant to them and helps them do their jobs better.

2. Numbers Are Not As Important As Results

I am not just referring to metrics and measuring accuracy. While those are important and we should measure the results of our forecasts, I have learned it is even more important that your forecast has a purpose. I remember being proud of a monthly forecast I was creating with pretty good one-month-out accuracy, only to find that manufacturing wasn't even using my numbers.

They needed weekly forecasts and an outlook for what was going to happen sixty days out. We can create the best, almost perfect forecast, but unless we are delivering what the company needs, when they need it, and in the right format, the results are meaningless. I needed to go beyond the numbers and adapt to whoever is using my forecast so that it actually added value to the business.

3. It Is Not What You Know But Who You Know

Coming into my first forecasting role, I started learning statistical models, analytics, and even some machine learning. I had a lofty goal of creating the ultimate forecasting model that would be nearly perfect.

What I learned the hard way was that my model never predicted the new marketing campaign we were about to launch, the customer who closed up shop simply because he decided to retire, or the product Sales incentivized with a contest last month. It turned out that simpler models with more collaborative inputs beat my complex ones.

4. It Is Not Always Our Fault

What I have discovered through developing and presenting forecasts is not that forecasts are always wrong, but rather that users don't always understand what is being presented. To use a weather forecasting analogy: if I simply forecast that it will be between 60 and 80 degrees Fahrenheit, it is not hard to be accurate. Forecasting that it will be exactly 77 degrees is a lot trickier.

Accept uncertainty as a fact of life, and then work to manage that uncertainty.

People generally look for a number instead of a range, and no matter how often I gave them a range, they still only wanted a number – so that's what I had to provide. The lesson I learned here as a Demand Planner was that my forecasts were rarely going to be exactly right and, instead of perceiving that as a failing, to accept uncertainty as a fact of life and then work to manage that uncertainty.

5. Demand Planning Is Art & Science

Imagine this: I'm putting the final touches on this month's forecast and adding in a promotion we are going to run. Marketing thinks it may add 10% while Sales thinks only 6%. After detailed analysis behind the scenes and a little bit of voodoo, I add an 8% lift – not because I am lazy or simply split the difference between Sales and Marketing, but because this kind of judgment goes into every month's forecast.

As a Demand Planner, I am managing assumptions more than I am managing a black box.

I think the biggest lesson I have learned as a Demand Planner is that I am managing assumptions more than I am managing a black box or waving a magic wand. Foundationally, what I do is science-based, but I have found there is just as much art to it. It is understanding how to communicate and ensuring my forecast is being used. It is working with others, planning for what the number is, and also for what to do when the number is not exactly that. It is building a statistical baseline and sometimes applying judgement to develop the best demand plan possible.


]]>
https://demand-planning.com/2021/05/14/5-lessons-i-had-to-learn-as-a-demand-planner/feed/ 0