Advanced Analytics – Demand Planning, S&OP/ IBP, Supply Planning, Business Forecasting Blog
https://demand-planning.com

Why Aren’t Demand Planners Adopting Machine Learning?
https://demand-planning.com/2023/08/28/why-arent-demand-planners-adopting-machine-learning/ (Mon, 28 Aug 2023)

We all know that machine learning (ML) and AI get the analytics and data science community excited. Every self-respecting forecasting department is developing ML algorithms to predict who will click, buy, lie, or die (to borrow the title of Eric Siegel’s seminal work on the subject). Analytics conferences and publications are filled with AI buzzwords.

But when it comes to real-life implementation, the majority of demand forecasters remain cautious about machine learning. Why is that? Isn’t machine learning all about predicting, which is literally a forecaster’s job? Let’s explore the opportunities and pitfalls of applying machine learning in forecasting.

Demand Forecasters & Data Scientists Define ‘Prediction’ Differently

There is a subtle difference in the way forecasting and ML define ‘prediction’. When forecasters say ‘prediction’, we mean a prediction about the future. Traditional forecasting methods include time series modelling, algebraic equations, and qualitative judgement calls. As a result, traditional forecasting is somewhat manual and time-consuming, and may be swayed by human judgement. However, the outputs are easily interpreted and the process is agile; the forecaster knows where the numbers are coming from and can easily make corrections as needed. Further, traditional forecasting can be done with limited data.

In machine learning and statistical modelling, ‘prediction’ refers to predicting the past. This sounds counterintuitive, but the idea is to compare the model’s ‘predictions’ with reality and measure the difference, or error. These errors are used to fine-tune the model before it predicts the future. Consequently, model predictions are heavily driven by past performance and are almost impossible to fine-tune by hand. The interpretability of models is also very limited. Another factor to consider is that, by design, ML requires a lot of data. On the upside, machine learning is quick, automated, and objective, being free from human judgement.
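The ‘predicting the past’ loop described above can be sketched in a few lines of Python; the sales history and the stand-in model (a simple training-set mean) are made up for illustration:

```python
# A minimal sketch of how an ML-style model "predicts the past":
# hold out recent history, compare predictions with actuals, and
# use the resulting error to judge (and tune) the model.
sales = [100, 120, 90, 110, 105, 125, 95, 115, 108, 128, 98, 118]

train, test = sales[:-4], sales[-4:]          # hold out the last 4 periods

# Stand-in "model": predict each held-out period with the training mean.
prediction = [sum(train) / len(train)] * len(test)

# Measure the error on the held-out history (mean absolute error here).
mae = sum(abs(p - a) for p, a in zip(prediction, test)) / len(test)
print(mae)
```

A real ML pipeline would repeat this compare-and-adjust cycle automatically across many series and candidate models.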

Machine Learning was Built for the Digital World; Forecasters Work in the Real World

Machine learning and AI algorithms were created for a digital world with almost unlimited data on customer clicks, purchases, and browsing behavior. As we know, these algorithms do an excellent job of luring us to make repeat purchases, buy complementary items, and sign up for loyalty programs. The cost of a prediction error (lost sales) is relatively low. In addition, every error is an opportunity for the machine learning algorithm to improve itself.

The real-world marketplace is quite different from the digital marketplace, however. The data here might be limited to cash register sales, loyalty program data, or shipment data. The cost of a prediction error can be quite high, as restaurants and retailers procure in bulk. Also, predictions cannot improve themselves because there is no automatic feedback loop. For these reasons, many brick-and-mortar retailers and their suppliers still rely on traditional forecasting methods. This does not mean that machine learning cannot improve forecasting, but there are a few considerations to address before venturing into it.

Machine Learning Requires Much More Data Than Time Series 

Any machine learning algorithm requires a lot of data. By a lot of data, I do not mean dates or variables. Machine learning models run on defined observation levels, such as customers or stores. You need at least a thousand of those (if not many thousands) for machine learning to work. If the sample is limited to only 10 stores, it is probably better to refrain from machine learning and use time series techniques instead. Another factor to consider is the cost of maintaining the data. Is it readily available, or does it need to be entered manually? Does the data need to be engineered? Would that be a one-time effort or an ongoing process requiring human and computing resources? What would be the cost of storing the data over the years?

Machine Learning is far Less Interpretable than Time Series 

By design, machine learning is a black box. For example, predictions may be generated by a vote of thousands of decision trees. You can use colorful histograms to depict the weight of each factor in the model. These charts look very smart on presentation slides but are far from intuitive. If the cost of a wrong prediction is millions of dollars, companies may be more comfortable with time series and arithmetic they can understand than with a slick black-box algorithm. This especially applies to new products with no sales data or limited test data.

There are a few workarounds for understanding machine learning. Playing with parameters can be a good indicator of the robustness of results. If one slight change to the model’s inputs or specification results in significant changes to predictions, that is a red flag.
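As a rough illustration of that robustness check, the sketch below perturbs the inputs of a hypothetical fitted model by about 1% and measures how far the prediction moves; the weights and features are invented for the example:

```python
# Hedged sketch: probe a model's robustness by nudging its inputs
# slightly and checking how much the prediction moves. The "model"
# here is a stand-in linear scorer, not a real ML pipeline.
import random

def model(features):
    weights = [0.5, 1.2, -0.3]    # hypothetical fitted weights
    return sum(w * f for w, f in zip(weights, features))

random.seed(42)
base_features = [10.0, 3.0, 7.0]
base_pred = model(base_features)

# Re-score under small (about +/-1%) input perturbations.
shifts = []
for _ in range(100):
    noisy = [f * (1 + random.uniform(-0.01, 0.01)) for f in base_features]
    shifts.append(abs(model(noisy) - base_pred) / abs(base_pred))

max_shift = max(shifts)
# A small max_shift suggests stable predictions; a large one is a red flag.
print(max_shift)
```

The same idea extends to perturbing hyperparameters or dropping a few training observations and re-fitting.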

At the end of the day, a model’s trustworthiness can be proved by testing it on new data. We don’t necessarily need to understand the ins and outs of an algorithm if we are confident in the end result. The strength of this argument may depend on your audience. Typically, analytics professionals are comfortable using machine learning predictions as long as they are tested. Supply chain leaders may be more cautious about making business decisions based on a black box. A good sanity check is to run traditional forecasting methods in parallel with machine learning. If there is a sizable difference between the results, there is either an issue with the model or an important consideration was left out of the traditional forecast.

The Cost Benefit of Machine Learning is not Always Clear

It goes without saying that when machine learning is set up right, it is wonderfully efficient. All one needs to do is provide inputs and press the button. The ‘setting up right’ piece can be relatively straightforward or extremely difficult depending on the prediction goal and the data available. Repeat products with abundant history may be easily predicted using out-of-the-box ML packages such as SAS or Azure, as long as the data is readily available. New product predictions may require intricate proxy algorithms to compensate for limited data, which may mean developing ML algorithms from scratch. In addition, data from different sources may need to be engineered to feed the algorithm. This can require significant investment to hire contractors, expand the analytics team, or stretch existing resources. Before ramping up a data science crew, companies would be well advised to consider how often the algorithm will be used, the efficiency gains, and the computing resources required for the project.

Impacts on Overall Business Planning 

Forecasting is the cornerstone of business planning. Any change to the forecasting process may affect other areas of the business such as finance and supply chain. Typically, traditional forecasting methods rely on a top-down approach: a forecast is created in aggregate and then broken down by store, time period, etc. These breakdowns may later be used for financial targets or store-level demand planning. By design, an ML forecast uses a bottom-up approach: a prediction is created at the store/time-period level and later aggregated. When switching from traditional forecasting to ML, companies must ensure a smooth transition at all stages of business planning. If not done right, the transition may result in discrepancies between the ML prediction and the financial targets and supply plans.
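A toy example of that mismatch: the aggregate, shares, and store-level predictions below are invented, but they show how a top-down split and a bottom-up ML aggregate can disagree:

```python
# Sketch of the top-down vs bottom-up reconciliation gap.
stores = ["A", "B", "C"]

# Top-down: one aggregate forecast split by historical store shares.
aggregate_forecast = 1000
shares = {"A": 0.5, "B": 0.3, "C": 0.2}
top_down = {s: aggregate_forecast * shares[s] for s in stores}

# Bottom-up (ML-style): independent store-level predictions, then summed.
bottom_up = {"A": 520, "B": 310, "C": 150}
bottom_up_total = sum(bottom_up.values())

# The reconciliation gap that financial targets and supply plans must absorb.
gap = bottom_up_total - aggregate_forecast
print(gap)
```

In practice, companies close this gap with a reconciliation step so that finance, supply, and the ML forecast all plan off the same numbers.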

To summarize, ML is a great instrument for streamlining forecasting. As with any tool, it has its applications, benefits, costs, and risks. When using ML for forecasting, companies should consider their data, business needs, decision-making culture, and planning workflow. A great place to start is trying out ML on your own data using off-the-shelf solutions such as Azure and SAS. Most of these solutions have step-by-step training videos that will help you fit an ML algorithm to your data. Experimenting with them can help you decide whether ML is a good tool for your company’s forecasting, and whether an off-the-shelf solution is sufficient or in-house development is needed. Even if it turns out that ML is not a good fit for your company, no investment is lost and some analytical knowledge will be gained.

This article first appeared in the summer 2023 issue of the Journal of Business Forecasting. To access the Journal, become an IBF member and get it delivered to your door every quarter, along with a host of membership benefits including discounted conferences and training, exclusive workshops, and access to the entire IBF knowledge library.

The Advancements In Disease Forecasting That Can Revolutionize Business Forecasting
https://demand-planning.com/2022/11/28/the-advancements-in-disease-forecasting-that-can-revolutionize-business-forecasting/ (Mon, 28 Nov 2022)

 


On the IBF On Demand podcast, I recently spoke to a real innovator doing great things in the forecasting space. He comes from a disease forecasting background and many of his predictive analytics techniques are directly applicable to business forecasting. We’re not just talking about what will happen but why, and identifying changes in consumer behavior before they happen.

His name is John Cordier and he runs a consulting firm called Epistemix. Backed by the National Institutes of Health and the Bill & Melinda Gates Foundation, Epistemix models the spread of infectious diseases, and now applies that expertise to business.

There is much we can learn from him when it comes to maximizing the value of our own data. The following are some highlights of that conversation.

Can you give a breakdown of how you approach epidemiological forecasting? Then we’ll dive into how those approaches can be applied to demand forecasting.

We approach forecasting from the bottom up. Those in our space would recognize agent-based modeling as our underlying technique, but all you need to know about how we forecast is that we represent every single person in the entire country and forecast based on their behaviors.

In the infectious disease space, our technique has found that within the United States there are 9 “ontological units of epidemicity”, meaning there are 9 regions in the US with distinct seasonal behaviors. The data tells us why in late spring COVID-19 cases go up in the South but not in the Northeast, why the pattern inverts in fall, and so on.

We’re able to generate these seasonal patterns from human behaviors. There are so many behavioral variables you can build into a model. What we end up doing is calibrating to the one or two most important pieces of data that we prepare the model against. With COVID we disregarded cases because case counts were too noisy for us to make any accurate projections, so we used hospitalizations and deaths, which are more stable, yet not perfect.

 

So if I’m understanding this correctly, you’re looking at the seasonalities but you’re mainly looking at the human behavior driving some of those spreads and then instead of just modeling the data, you’re monitoring how the drivers are changing and impacting each other?

Yes. We aim to capture the non-linear relationships. We’re getting these non-linear connections between different actions so we can say which behaviors are driving an epidemic – or indeed the adoption of a product or service – forward. If it’s a disease we can test what interventions will drive those numbers down, or in the case of a consumer product or service, test strategies we think might drive sales up.

This is fascinating because I can see so many use cases for it. I’ve been preaching for years that we need to look externally and develop that ‘outside-in’ type of thinking in predictive analytics for business forecasting. Are we limiting ourselves by only looking at sales data?

If you’re only using the data that exists internally and you’re making your decisions on those assumptions, you’re really saying that the future is going to look like the past. If you’re not looking at how behavior changes or how the environment is changing and how that impacts the drivers of adoption or purchases, you’re going to miss the tipping point of adoption or when you hit market saturation.

One example of this is a social media app that came out in 2020 called Clubhouse and everyone was saying it’s going to totally take over Facebook and Instagram. Well, if you looked at the behaviors of adoption you would have seen that the tipping point is going to come pretty early. We forecast that tipping point in early 2021 right after the Consumer Electronics Show. You could have known then that in fact this wasn’t actually going to be the next big thing.

In business predictive analytics we start with data and the different types of standardized models that can be applied to understand the data, but you start by representing the reasons why something’s happening. Why do you start with the ‘why’?

Our goal is not just to predict demand, but to give decision makers the ability to understand how to shape demand given the resources they have available to them. Whether forecasting a health outcome, product sales, distribution of an idea, or the number of votes a given candidate is likely going to win, our synthetic population is the starting point to generate a forecast from the bottom up. 

Our interactive synthetic population includes a representation of every person, household, school, workplace, hospital etc in the country – consider this the ‘clay’ or ‘substrate’ that you’re starting with. Then we enable businesses or subject matter experts to test their assumptions about how their population is changing. These set the baseline assumptions.

You can then generate models that recreate the past data and explore “what-if” questions about how things might look different given changes in behavior or changes in the environment. Once you have a working model you’ve really created a simulation of the most meaningful drivers of behavior and outcomes that you can test demand shaping strategies against.

What is synthetic data?

Synthetic data is a broad description of techniques used to take observed data and create data that, downstream, increases the confidence of a decision somebody has to make. In our world of synthetic data, you can take a real data set and describe how any behaviors in the population might change. Using that as a baseline forecast, you can test assumptions to better understand the outcome you’ll achieve through different decisions you can make.

Because our synthetic population has both time and geography included, you can generate synthetic data sets complete with geospatial and time series data.

So the output is more of a probabilistic type of forecast?

It’s a stochastic type of forecast, so you’ll get something different every time. Everything is probabilistic. Our users might run a thousand simulations and get a different result every single time. But once you have that data, you have more or less ‘beat down’ the noise to get narrower confidence intervals around your uncertainty bounds of the most likely outcome.

It’s a relationship between people, things, and the environment and how all three are going to change in the future, right?

Yes. We’re able to understand the interactions between people in households, at work, in the community, and how people react to things they come in contact with online – all in combination to see the emergent behavior of entire populations. Malcolm Gladwell and others have written about what makes social epidemics take off – it comes down to those three things: what population, what thing, and the environment they exist in.

Whether it’s a product or a disease, the terms adoption and spread are similar. Imagine a scenario where you and your partner are at the dinner table and you ask “have you heard about the thing in the news today?” Maybe they didn’t, but you’re now the third person that has brought it up.  After hearing it from a few people, they look it up and tell another 4 people. Next thing you know, it’s made national news for the next cycle and millions of people are exposed to the information.

Information, behavior, and diseases – when using a bottom up approach – are emergent phenomena. At Epistemix, we help companies project emergent phenomena into the future.

Do you have any other business related examples?

We have a couple of customers that are launching consumer apps. What they’re trying to understand is the demand for their app, the network effect they’re able to create, and what actions they can take to most influence adoption.

They use the synthetic population to test marketing strategies designed to influence demand. One of the companies we’re working with, Earbuds – a new music sharing app – is trying to understand the most sustainable path to 150,000 monthly active users by June of 2023 by testing influencer driven campaigns, targeting specific online communities, and other marketing campaigns.

I see parallels with retailers modeling population shifts, for example people moving from New York to Texas and Florida.

As it happens we did a project last year with the Remaking Cities Institute out of Carnegie Mellon University and the question they wanted to ask was how remote work acceleration is impacting where consumer retail is going to be. 

We did a study to see where the workforce of Chicago is going to be in 2024. Because we had to root this in an actual problem that businesses wanted to solve, we framed the question as “where do I get my coffee in 2024?” The whole idea was to base coffee shop site selection off of consumer preferences of where they’re working, the projected density of the population, and existing locations. 

We identified locations within Chicago where supply will be too high and other locations where there are going to be too few coffee shops. Given this information, generated from the synthetic population as a starting point, coffee shops and developers can understand future population density and drop three or four coffee shops into an area to capture new demand.

The cool thing about working with the synthetic population is that you can add information as you learn more about the problem you’re trying to solve. For example,  take the question of “what will happen to coastal cities in the US over the next 50 years given the increase in extreme weather events?” The data doesn’t yet exist, but you need to forecast what might happen if… Using the Epistemix platform you can test how the different “what ifs…” influence people’s behavior and test what decisions you can make to shape or be positioned to take advantage of the changes to come.

My Thoughts On These Cutting Edge Approaches

This was a great, high-level conversation. You can watch the full conversation here. You may not be ready for all of it but we can start leveraging some of the key elements. The key is to start thinking ahead of where the consumers may be in the future and how they’ll be behaving. Are you ready to not only look at a forecast but to create an idea of what your market will look like in the future? If we can do that, sales and marketing can act on changing consumer behavior before it happens.

This is a mindset change for a lot of us. This is Predictive Analytics, using different types of modelling – getting into probabilistic and forward-looking projections, and trying to understand the ‘why’.

This level of analytics is very much still emerging. Understanding and adopting these techniques requires continual learning, changing your thinking, and learning new skills and information and bringing them into your forecasting process. If you’re prepared to challenge yourself, you may find that you advance both yourself and your company.



 

To add advanced analytics models to your bag of tricks, get your hands on Eric Wilson’s new book Predictive Analytics For Business Forecasting. It is a must-have for the demand planner, forecaster or data scientist looking to employ advanced analytics for improved forecast accuracy and business insight. Get your copy.

 

Transitioning From Time Series To Predictive Analytics
https://demand-planning.com/2022/06/30/transitioning-from-times-series-to-predictive-analytics-with-dr-barry-keating/ (Thu, 30 Jun 2022)

I recently had a fascinating and enlightening conversation with one of the leading figures in predictive analytics and business forecasting, Dr. Barry Keating, Professor of Business Economics & Predictive Analytics at the University of Notre Dame.

He is really driving the field forward with his research into advanced analytics and applying that cutting-edge insight to solve real-world forecasting challenges for companies. So I took the opportunity to get his thoughts on how predictive analytics differs from what we’ve been doing so far with time series modeling, and what advanced analytics means for our field. Here are his responses.

What’s the Difference Between Time Series Analysis & Predictive Analytics?

In time series forecasting, the forecaster aims to find patterns like trend, seasonality, and cyclicality, and makes a decision to use an algorithm to look for these specific patterns. If the patterns are in the data, the model will find them and project them into the future.

But we as a discipline realized at some point that there were a lot of things outside our own 4 walls that affected what we were forecasting, and we asked ourselves: what if we could include these factors in our models? Now we can go beyond time series analysis by using predictive analytics models like simple regression and multiple regression, using a lot more data.

The difference here compared to time series is that time series looks only for specific patterns whereas predictive analytics lets the data figure out what the patterns are. The result is much improved forecast accuracy.
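A minimal sketch of that contrast, with made-up numbers: a naive time-series-style extrapolation ignores an external driver, while even the simplest regression can exploit it (here, a hypothetical promotion flag):

```python
# Contrast sketch: extrapolating the series itself vs. folding in an
# outside driver. All data is invented for illustration.
sales = [100, 104, 98, 130, 102, 106, 99, 134]
promo = [0,   0,   0,  1,   0,   0,   0,  1]   # external driver

# "Time series" stand-in: a naive mean extrapolation ignores the driver.
ts_forecast = sum(sales) / len(sales)

# Simple regression stand-in: sales = base + lift * promo (closed-form fit).
n_promo = sum(promo)
base = sum(s for s, p in zip(sales, promo) if p == 0) / (len(sales) - n_promo)
lift = sum(s for s, p in zip(sales, promo) if p == 1) / n_promo - base
reg_forecast_promo_week = base + lift

print(ts_forecast, reg_forecast_promo_week)
```

The regression forecast for a promotion week lands near the promoted sales level, while the naive extrapolation smears the promotion lift across every period.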

 

Does Time Series Forecasting Have a Place in the Age of Advanced Analytics?

Time series algorithms will always be useful because they’re easy to do and quick. Time series is not going away – people will still be using Holt-Winters, Box-Jenkins, time series decomposition etc. long into the future.

What’s the Role of Data in all This?

The problem now isn’t using the models but collecting the data that lies outside our organization. Today’s data is different. We used to think that when we had 200 or 300 observations in a regression we had a lot of data; now we might use 2 or 3 million observations.

“We used to think 200 observations was a lot of data – now we might use 2 or 3 million”

Today’s data is different not only because of its size, but also in its variety. We don’t just have numbers in a spreadsheet; it may be streaming data, and it may not be numbers but text, audio, or video. Velocity is also different: in predictive analytics we don’t want to wait for monthly or weekly information, we want information from the last day or hour.

The data is also different in terms of value. Data is much more valuable today than it was in the past. I always tell my students not to throw data away. What you think isn’t valuable probably is valuable.

Given That We Are Drowning in Data, How Do We Identify What Data Is Useful?

When the pandemic started, digital purchases had been increasing at 1% a year and constituted 18% of all purchases. Then, in the first 6 weeks of the pandemic, they increased 10%. That’s 10 years’ worth of online purchase growth happening in just weeks. That shift meant we now need more data and we need it much more quickly.

“You don’t need to figure out which data is important; you let the algorithm figure it out”

You don’t need to figure out which data is important; you put it all in and let the algorithm figure it out. As mentioned, if you’re doing time series analysis, you’re telling the algorithm to look for trend, cyclicality and seasonality. With predictive analytics it looks for any and all patterns.

Predictive analytics assumes that you have a lot of data – and I mean a lot

It’s very difficult for us as humans to take a dataset, identify patterns and project them forward, but that’s exactly what predictive analytics does. This assumes that you have a lot of data, and I mean a lot, and data that is different from what we were using in the past.

Do you Need Coding Skills to do This?

Come to an IBF conference or training boot camp and you will learn how to do Holt-Winters, for example. Do we teach people how to do that in R, Python, or Spark? No. You see a lot of advertising for coding for analytics. Do you need to do that to be a forecaster or data scientist? Absolutely not.

There are commercial analytics packages where somebody who is better at coding than you could ever hope to be has already done it for you. I’m talking about IBM SPSS Modeler, SAS Enterprise Miner, or Frontline Systems XLMiner. All of these packages do 99% of what you want to do in analytics.

Now, you have to learn how to use the package and you have to learn enough about the algorithms so you don’t get in trouble, but you don’t have to do coding.

“Do you need to be a coder? Absolutely not”

What about the remaining 1%? That’s where coding comes in handy. It’s great to know coding. If I write a little algorithm in Python to pre-process my data, I can hook it up to any of those packages. And those packages I mentioned can be customized; you can pop in a little bit of Python code. But do you need to be a coder? Again, absolutely not.

Is Knowing Python a Waste of Time Then?

Coding and analytics are two different skills. It’s true that most analytics algorithms are coded in R, Python and Spark, but these languages are used for a range of different things [i.e., they are not explicitly designed for data science or forecasting], and knowing those languages allows you to do those things. Being a data scientist, however, means knowing how to use the algorithms for a specific purpose. In our case as demand planners, it’s about using K-Nearest Neighbor, vector models, neural networks and the like.

All this looks ‘golly gee whiz’ to a brand-new forecaster, who may assume that coding ability is required, but these techniques can actually be taught in the 6-hour workshops that we teach at the IBF.

What’s the Best way to get Started in Predictive Analytics?

The best way to start is with time series; then, when you’re comfortable, add some more data, then try predictive analytics with some simple algorithms, then get more complicated. When you’re comfortable with all that, go to ensemble models where, instead of using 1 algorithm, you use 2, 3, or 5. The last research project I did at Notre Dame used 13 models at the same time. We took an ‘average’ of the results, and they were incredible.
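A toy sketch of the ensemble idea, with invented forecasts standing in for the individually fitted models:

```python
# Ensemble sketch: run several models, then average their forecasts.
# The model names and numbers below are made up for illustration.
model_forecasts = {
    "holt_winters": 118.0,
    "regression":   124.0,
    "knn":          121.0,
    "neural_net":   127.0,
    "naive":        115.0,
}

# A simple (unweighted) average; real ensembles may weight models
# by their historical accuracy.
ensemble = sum(model_forecasts.values()) / len(model_forecasts)
print(ensemble)
```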

The IBF workshops allow you to start out small with a couple of simple algorithms that can be shown visually – we always start with K-Nearest Neighbor, and for a very good reason: I can draw a picture of it and show you how it works without putting any numbers on the screen. There aren’t even any words on the screen. Then you realize, “Oh, that’s how this works.”

“Your challenge is to pick the right algorithm and understand if it’s done a good job”

It doesn’t matter how it’s coded because you know how it works and you see the power – and downsides – of it. You’re off to the races: you’ve got your first algorithm under your belt, you know the diagnostic statistics you need to look at, and you let the algorithm do the calculation for you. Your challenge is to pick the right algorithm and understand whether it’s done a good job.
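For the curious, the K-Nearest Neighbor idea described above fits in a few lines; the history and the choice of k are made up for illustration:

```python
# Minimal K-Nearest-Neighbor sketch: predict a value for a new point
# from the average of its k closest neighbors in past observations.
# (feature, observed demand) pairs - invented data
history = [(1.0, 10), (2.0, 12), (3.0, 15), (8.0, 40), (9.0, 42)]

def knn_predict(x, k=3):
    # sort history by distance to x, then average the k nearest demands
    nearest = sorted(history, key=lambda pt: abs(pt[0] - x))[:k]
    return sum(demand for _, demand in nearest) / k

print(knn_predict(2.5))
```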


Simplifying Python For Business Forecasting With Mariya Sha
https://demand-planning.com/2022/03/17/simplifying-python-for-business-forecasting-with-mariya-sha/ (Thu, 17 Mar 2022)

I recently spoke to Mariya Sha, Python guru and star of the Python Simplified YouTube channel. I asked her how forecasters and Demand Planners can get started with the Python programming language and leverage it for Machine Learning. I gained some fantastic insights that should inspire all forecasters to take the leap. The following are her responses.

What’s The Best Way To Learn Python?

“The best way to begin is to take a short, introductory course on Udacity, Udemy or Coursera to learn the basic commands. As long as you have a basic understanding of functions, for loops, control flow operations, etc. you have the foundations required to use the language for your specific needs, whether it’s math, ML or whatever you need it to do.”

Does Experience In Excel Help With Learning Python?

“Skills in Excel like VBA (Visual Basic for Applications) translate directly into Pandas, which is a data science library that is widely used in Python. I believe a lot of things you encounter in other languages can be applied in Python. The difference is that Python is very high level; you don’t need to think about the small details, all the data types. Python takes care of that, unlike other languages like C++, which require programming every little detail. By comparison, Python is very simple.

Python cuts to the chase – it allows you to get it done, and get it done fast. If people try to build a simple application in Python, they will see the difference. Just try it!”
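As a hedged illustration of the Excel-to-Pandas parallel (assuming the pandas library is installed; the column names are invented), a SUMIF-style calculation becomes a one-line groupby:

```python
# Excel-to-Pandas sketch: a SUMIF-style aggregation done with a
# DataFrame groupby. The data and column names are made up.
import pandas as pd

df = pd.DataFrame({
    "region": ["East", "West", "East", "West"],
    "sales":  [100, 80, 120, 90],
})

# Excel: =SUMIF(A:A, "East", B:B)  ->  Pandas:
by_region = df.groupby("region")["sales"].sum()
print(by_region["East"])
```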

 

Tips When Starting With Python

“Begin with data types. These are the building blocks of your application. You always need to know which data types to use because they have different methods. Strings have different methods from integers and floating-point numbers, for example. Every data type allows you to do different operations, and you need to know which operations you can do with each one.

When you’re comfortable with that you can move onto control flow operations, like conditional statements, functions, and once you’re comfortable with that you can move to classes and object-oriented programming. Actually, everything is an object in Python – that’s part of why this language is so brilliant.

When you’re comfortable with object-oriented programming, then you can spread your wings and get into what you’re interested in. If you’re interested in Machine Learning, you’d then start looking for Machine Learning frameworks and libraries, if you’re into data science you’ll dive into Pandas.”
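A tiny sketch of that learning path, with made-up values: data types have their own methods, and control flow ties them together:

```python
# Data types and control flow in a few lines; values are invented.
price_text = "19.99"          # a string
price = float(price_text)     # converted to a floating-point number

# string methods work on str, not on float
digits_only = price_text.replace(".", "")
assert digits_only.isdigit()

total = 0.0
for qty in [1, 2, 3]:         # a for loop over a list
    total += price * qty      # numeric operations on the float
print(total)
```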

What are Objects In Python?

“If you’re creating a windmill, for instance, it’ll have a height, width, speed, color, etc. This is the data about the windmill. It’ll also have functions like ‘spin’ and ‘stop’. The data and functions combine into an object, which in this case is a windmill.”
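The windmill analogy translates almost directly into Python (a sketch based on the description above; the attribute and method names are illustrative):

```python
class Windmill:
    """Data (height, width, color, speed) and functions (spin, stop)
    combined into a single object."""
    def __init__(self, height, width, color):
        self.height = height
        self.width = width
        self.color = color
        self.speed = 0

    def spin(self, speed):
        self.speed = speed

    def stop(self):
        self.speed = 0

mill = Windmill(height=20, width=5, color="white")
mill.spin(12)
print(mill.speed)   # 12
mill.stop()
print(mill.speed)   # 0
```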

 

What Is A Python Library?

“A library is ready-made blocks of code that somebody else wrote that you can use for yourself. Take, for example, a SoftMax function, which is an algorithm we use in ML. Instead of writing the entire formula, you write SoftMax and it’s done for you. It’s basically using simplified parameters instead of writing code. Every library has incredible documentation, with a lot of support and forums where you can ask questions. If you have an issue, somebody will help you out.

The most important libraries are NumPy (for mathematics), Pandas (for data science), and Matplotlib (for plotting graphs and charts). These are the 3 main libraries that we all use. If you’re into ML/AI you’d probably go for PyTorch and TensorFlow.”
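To make the SoftMax example concrete: libraries such as SciPy ship a ready-made version (`scipy.special.softmax`), and writing one by hand shows exactly what that one-liner hides (an editorial sketch):

```python
import math

def softmax(scores):
    """Exponentiate each score, then normalise so the results sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)        # highest score gets the highest probability
print(sum(probs))   # sums to 1 (up to floating point)
```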

Using Python For Machine Learning

“Python is a very minimalistic language. You only specify the most basic things. With AI, it’s almost the opposite, where everything consists of long formulas – not complicated math, but there’s a lot of it and it’s sequential. Python gives you the easiest syntax for ML. Processes like gradient descent can be summarized in a single command.

Inference, for example, sounds very complicated – loading a pre-trained neural network and exposing it to an image it’s never seen before. It all seems daunting, but within 30 minutes you can do all of this.

If somebody explains it to you in simple language, I think everybody can understand it. It’s not as intimidating when you understand how simple it is. I think people are afraid of ML because they read these academic style articles and assume they’re not smart enough to do it. At the end of the day, it’s very simple math. Unlike other languages like C++, Python takes care of the details, and allows you to do ML in plain English.”
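As an illustration of how little code the underlying math needs, here is the gradient descent mentioned above written out by hand (an editorial sketch, not from the interview):

```python
# Minimise f(w) = (w - 3)^2 by gradient descent; the minimum is at w = 3.
w = 0.0
learning_rate = 0.1
for _ in range(100):
    gradient = 2 * (w - 3)       # derivative of (w - 3)^2
    w -= learning_rate * gradient
print(round(w, 4))               # converges to 3.0
```

In a library like PyTorch, this whole update loop collapses into a single `optimizer.step()` call.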

Parting Thoughts

“There are no rules when it comes to using Python. Don’t listen to what people say is the right way to doing things – it just limits your imagination. The best way to go is trial and error. Try to be creative – it’s the best way to learn.”

You can find Mariya Sha and more data science and computing insights at her YouTube channel, Python Simplified.

 

Will AI Revolutionize Demand Planning? Maybe… https://demand-planning.com/2022/03/07/will-ai-revolutionize-demand-planning-maybe/ https://demand-planning.com/2022/03/07/will-ai-revolutionize-demand-planning-maybe/#respond Mon, 07 Mar 2022 11:57:31 +0000 https://demand-planning.com/?p=9511

Have you ever heard the Chinese proverb about the farmer? I originally heard it on a podcast featuring Dan Bilzerian.

The story goes:

A farmer and his son had a horse who helped the family earn a living. One day the horse ran away and the farmer’s neighbors said, “Your horse has run away, what terrible luck!” “Maybe,” replied the farmer.

Sometime later, the horse returned with a group of wild mares. The neighbors then said, “Your horse has returned with several other horses, what great luck!” “Maybe,” replied the farmer.

Later that week, the farmer’s son was trying to break one of the mares and she threw him to the ground, breaking his leg. The neighbors said, “Your son has broken his leg, what terrible luck!” “Maybe,” replied the farmer.

A couple of weeks later, the army marched through town recruiting young men. They did not take his son as he was recovering from his injury. The neighbors said, “Your boy was spared, what great luck!”

“Maybe,” the farmer replied.

The point of the story is that you never really know if something is bad or good because you don’t know how it’s going to affect the next step in your life.

I think about this when it comes to the future of demand planning as it relates to Artificial Intelligence (AI) and Machine Learning. How is it going to affect the next step in our career or even our supply chain?

Will AI change demand planning for the better? “Maybe”

AI-Generated Insight Isn’t Enough

About a year ago Daniel Fitzpatrick wrote an article titled Beware the Pitfalls of AI in Demand Planning, published on this website.

He mentions a few concerns but the one that caught my eye was “Forecasts as Proxies for Success”. In the article he says “Forecast accuracy is only a proxy for improved business performance. Without an effective supply chain to support more accurate forecasting, much of the value that an advanced algorithm might add may be lost. An excessive focus on improving forecast accuracy may draw attention and resources away from other constraints that are actually causing larger problems.”

“An accurate forecast does not reduce demand volatility.”

Let’s think about this for a minute. So even with advanced algorithms and heavy investment in AI, an accurate forecast does not reduce demand volatility. To tackle that, you are going to need to understand the root cause, some of which can be prevented and some are completely out of our control. Let’s separate the two.

On one hand, you have the preventable. Communication is at the top of the list. It’s not going to help when your sales team keeps market or customer knowledge to themselves. How about promotions? If they aren’t planned out correctly, they can drive bad consumer behavior – a self-inflicted wound causing huge swings in demand. A third is inventory levels. Whether it’s through safety stock or customer inventory-level agreements, the right strategy for both can help lead to smoother, more forecastable demand.

“Machine Learning Algorithms can only take you so far.”

On the other hand, you have the uncontrollable. There’s weather, the labor market and wages, raw materials costs and availability, and industrial production, specifically as it relates to commodities. The list goes on – a ripple effect that affects all of our supply chains. The majority of my career has been in the CPG space and, as much as I would like to say I have seen it all, I am sure you could come up with a story that tops mine. The point is, AI and Machine Learning algorithms can only take you so far.

“AI improves forecasts based on real-time, internal and external data”

Demand Planning isn’t simple, and we can all envision placing ourselves somewhere on the imaginary evolutionary line from the basic to the advanced. But unless you are already at the top of the food chain, AI and machine learning can likely help. Machine learning can take you to the next level; it enables improved forecasts based on real-time internal and external data sources and can turn the uncontrollable into the measurable.

External Data Is Our Friend

External data is our friend and modern machine learning algorithms combined with our supply chain networks can likely outperform processes that are managed solely by Demand Planners. Think about new products. What if AI helped users identify products with similar characteristics ultimately leading to better predictions? Demand Planners can be turned into Super Demand Planners.

Recently I have been getting the impression that there is going to be a shift in our world of Demand Planning. Maybe it’s about finding efficiencies in processes, maintaining lean personnel, or maybe it’s about preserving inventory levels and increasing cash flow. Either way, change is coming and likely in the name of AI.

So, is Artificial Intelligence (AI) to the Rescue for Demand Planning? Maybe.

Maybe we can expand our set of tools and work smarter, not harder. Maybe there is always going to be the uncontrollable and too much data isn’t always a good thing.

 

Using Social Media Data To Improve Forecasts https://demand-planning.com/2021/09/08/using-social-media-data-to-improve-forecasts/ https://demand-planning.com/2021/09/08/using-social-media-data-to-improve-forecasts/#comments Wed, 08 Sep 2021 12:56:47 +0000 https://demand-planning.com/?p=9259

Web 2.0 is a term that’s been used often since the early 2000s. While the term itself is ambiguous, it has in many ways come to mean the socialization of the web. Compared to the static data put up by programmers back in the day, Web 2.0 is the connection of people – a new world of content being consumed and messages being communicated.


Everyone has seen the statistics – over 90% of today’s data was created in the past two years. Think about how many website clicks, interactions, and consumer transactions are being created every day. Consider the 300,000 social media status updates, the 140,000 photos uploaded, and the 500,000 comments made every minute. Add to this the Internet of Things with its constant real-time transmissions, and you’ll have a good appreciation of the speed at which data are being created.

Harvesting this information can be a gold mine for organizations and a game changer for business forecasting and demand planning.

Predictive analytics is helping us unlock the keys to social data and Web 2.0. Traditionally, demand planning has focused on sales forecasts generated using only internal order history and data. Much of this history is over 3 years old. This means that many companies are missing out on over 90% of the data available and the insights it contains.

More and more demand planning teams over the past few years have migrated to predictive analytics and are deriving faster and better results. They are moving to the next stage in demand planning and meeting the current revolution in social media and data. The results are measurable improvements in forecast accuracy and insight into consumer behavior.

Improving Forecasts With Social Media Data

In one study, Antonio Moreno, Associate Professor of Business Administration at Harvard Business School, looked at data from Facebook to improve forecast accuracy. Antonio and his team worked with an online clothing brand and compared forecasts that incorporated external social data to those without.

Over a seven-month period they gathered data from over 171,000 Facebook users and produced two sets of sales forecasting models: the baseline forecast, which included only internal company information, and a second forecast that combined internal and social media data.

Using only standard time series modeling like exponential smoothing, averaging, and other methods, the company’s existing sales forecasts for lag 1 had a MAPE of 12%. This was the researchers’ best-performing baseline model. It took into account seasonality as well as internal causal data on the company’s sales and advertising campaigns.

Adding in information from social media brought the error down to 7–9%. The new models used social media comments and natural language processing to categorize each comment as positive, negative, or neutral. The researchers then combined this information with the internal forecast using a neural network, greatly improving forecast accuracy.

It Can Be Less Complicated Than You Think

Even without creating complex coding for natural language processing, companies are seeing benefits using more data and existing capabilities. A CPG company that also does direct-to-consumer sales found a 15%–25% improvement in their item forecasts by looking at the number of new comments and where they were being made. For them, it wasn’t about whether a comment was good or bad, but rather that comments were being made at all, and the volume of comments for a given item.

Major Improvements In Forecast Accuracy

Using this easy-to-obtain external data, they developed a simple regression model to incorporate into the forecasting process. With close to 28,000 SKUs, they saw forecast improvement in over 80% of their SKUs. The overall WMAPE went from 42% to under 35%. Using location tags to mark where the comments came from, they were able to improve location forecasts by 40%, thereby greatly reducing inventory mix between locations.
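A model of that kind can be sketched in a few lines. The numbers below are invented for illustration – the idea is simply a least-squares fit of unit sales against comment volume:

```python
# Hypothetical weekly observations: new-comment volume vs units sold.
comments = [12, 30, 45, 60, 80, 95]
sales = [110, 150, 190, 220, 270, 300]

n = len(comments)
mean_x = sum(comments) / n
mean_y = sum(sales) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(comments, sales)) / \
        sum((x - mean_x) ** 2 for x in comments)
intercept = mean_y - slope * mean_x

def forecast(comment_volume):
    """Predict unit sales from the volume of new comments."""
    return intercept + slope * comment_volume

print(round(forecast(70), 1))   # forecast at 70 comments per week
```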

It’s not just about comments either. Social media users publicly share metadata such as the user’s location, language spoken, biographical data, and/or shared links. One retail company used this gold mine of information to create theoretical consumer profiles and then forecast based on those new clustered profiles. This allowed them to better determine trends within categories and fine-tune overall item forecasts. With over 100,000 SKUs, they realized a 7% improvement in WMAPE.

 

 

For more information on making the most of the data available to your organization, get a copy of Eric’s book, Predictive Analytics For Business Forecasting & Planning. Giving you the tools to adapt to the data age, this is your guidebook to demand planning 2.0. Get your copy.

Breaking The Magician’s Code: Revealing The A.I Algorithms Behind Predictive Analytics https://demand-planning.com/2021/04/07/breaking-the-magicians-code-revealing-the-a-i-algorithms-behind-business-forecasting/ https://demand-planning.com/2021/04/07/breaking-the-magicians-code-revealing-the-a-i-algorithms-behind-business-forecasting/#comments Wed, 07 Apr 2021 15:46:25 +0000 https://demand-planning.com/?p=9063

I am going to attempt to pull back the curtain and unveil the magic behind the most common algorithms used in predictive analytics for business forecasting, and demonstrate what exactly goes on behind the scenes in the world of AI.

Here I focus on the top methods and algorithms that enable the execution of applications for demand planning and business forecasting. The following are the preferred Machine Learning and Predictive Analytics models of Demand Planners and Data Scientists (in reverse order):

7) Artificial Neural Networks

6) Decision Trees

5) Logistic Regression

4) Naïve Bayes

3) Linear Regression

2) Smoothing and Averaging Time Series Models

1) Simple Ratio Models

7) Artificial Neural Networks: Artificial Neural Networks (ANNs) are a class of pattern matching techniques inspired by the structure of biological neural networks. ANNs are less complicated than they may first appear – they are essentially a collection of logistic regressions combined into a network, so if you understand logistic regression, you can easily understand ANNs. The final prediction is generally developed by training the model and calibrating the “weights” for each neuron, repeating two key steps: forward propagation and back propagation.
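A forward-propagation pass through such a network fits in a dozen lines – each neuron below really is just a logistic regression (the weights and inputs are arbitrary illustrative values, not a trained model):

```python
import math

def neuron(inputs, weights, bias):
    """One neuron = one logistic regression: weighted sum, then sigmoid."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))

# Forward propagation: 2 inputs -> 2 hidden neurons -> 1 output neuron.
inputs = [0.5, 0.8]
hidden = [neuron(inputs, [0.4, -0.2], 0.1),
          neuron(inputs, [0.9, 0.3], -0.5)]
output = neuron(hidden, [1.2, -0.7], 0.0)
print(round(output, 3))   # a value between 0 and 1
```

Training (back propagation) then adjusts those weights to reduce prediction error.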

6) Decision Trees: The concept and algorithms underpinning decision trees are relatively simple compared to other models. The general purpose of decision trees is to create a training model that can be used to predict the class or value of target variables. Decision trees build classification or regression models in the form of an upside-down tree structure.

5) Logistic Regression: Logistic regressions help demystify neural networks and answer probability-type questions. While it is listed as a type of regression model, it is less a single calculation and more an iterative process. It can be used for anything from predicting failures to identifying whether the object in a picture is a cat or not, for example.

4) Naïve Bayes: Naïve Bayes models are probabilistic, which means that they calculate the probability of each class for a given set of features, and then output the class with the highest observed probability in the data set. In simple terms, based on a bunch of x’s, we’re looking at the odds of Y being y. It can be used for everything from natural language processing to classification to simple prediction.
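In code, the “bunch of x’s” calculation is little more than counting. A toy sketch with invented demand data (not a production classifier, and without the smoothing a real one would need):

```python
# Toy training data: (weather, promotion_running, high_demand).
observations = [
    ("sunny", True, True), ("sunny", True, True), ("sunny", False, False),
    ("rainy", True, False), ("rainy", False, False), ("sunny", True, True),
]

def naive_bayes(weather, promo):
    """Pick the class with the highest prior * likelihood product."""
    scores = {}
    for label in (True, False):
        rows = [o for o in observations if o[2] == label]
        prior = len(rows) / len(observations)
        p_weather = sum(o[0] == weather for o in rows) / len(rows)
        p_promo = sum(o[1] == promo for o in rows) / len(rows)
        scores[label] = prior * p_weather * p_promo  # "naive" independence
    return max(scores, key=scores.get)

print(naive_bayes("sunny", True))    # True: high demand expected
print(naive_bayes("rainy", False))   # False
```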

3) Linear Regression: Regression is a simple cause-and-effect modeling technique that investigates the relationship between dependent (target) and independent (predictor) variables. Regression models come in all types and applications. One of the most common is a simple linear regression. The first step in finding a linear regression equation is to determine if there is a relationship between the two variables. This is often still a judgment call for many Demand Planners.

2) Smoothing & Averaging Time Series Models: With these models, one needs only the data of a series to be forecasted. They are simple and widely used by companies that own historical data. Here we assume that the training data set of sales history contains a trend and seasonality, and we extrapolate these patterns forward.
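Simple exponential smoothing, the workhorse of this family, fits in a few lines (a minimal sketch using invented sales figures, without the trend and seasonality terms mentioned above):

```python
def exponential_smoothing(history, alpha=0.3):
    """Each new forecast blends the latest actual with the previous
    forecast; alpha controls how quickly old history is forgotten."""
    forecast = history[0]
    for actual in history[1:]:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

sales_history = [100, 104, 99, 110, 108, 115]
print(round(exponential_smoothing(sales_history), 1))  # next-period forecast
```

Methods such as Holt-Winters extend this same recursion with explicit trend and seasonal components.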

1) Simple Ratio Models: These express the relationship between two or more quantities. Ratio models are utilized for a range of day-to-day purposes: understanding a seasonality index, calculating the velocity of sales and market penetration, or disaggregating a family-level forecast, just to scratch the surface. This easy-to-calculate statistic is used in various ways to guide decision making and drive forecasts.
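Two of the uses mentioned – a seasonality index and disaggregating a family-level forecast – are one-line ratios (the quarterly numbers are invented for illustration):

```python
quarterly_sales = {"Q1": 80, "Q2": 120, "Q3": 140, "Q4": 60}
overall_avg = sum(quarterly_sales.values()) / len(quarterly_sales)  # 100.0

# Seasonality index: period average relative to the overall average.
indices = {q: s / overall_avg for q, s in quarterly_sales.items()}
print(indices["Q3"])   # 1.4 -> Q3 runs 40% above an average quarter

# Disaggregating a family-level forecast of 440 units by the indices.
family_forecast = 440
per_quarter = {q: family_forecast / 4 * i for q, i in indices.items()}
print(per_quarter)
```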

What we find is that sometimes the simplest methods provide us with a good forecast and are the best use of our time. Fancy techniques are great, but our overriding goal is to select the model that fits our business purposes and the resources available to us. We need to evaluate the model properly to ensure that it can do what we need it to. The most sophisticated techniques and most advanced technologies accomplish little if nobody understands the results. To complete the process we must step back, sometimes simplify, and communicate our analysis effectively. Or just tell them it was AI and leave them in awe of your magical skills…

 

Don’t forget to join myself and a host of predictive analytics, demand planning, and forecasting leaders at IBF’s Virtual Business Predictive Analytics, Forecasting & Planning Conference from April 20-22, 2021. At just $499 for this insight-packed 2-day event, it’s an extremely cost-effective way to evolve your skills for the future of demand planning and forecasting.


 

To add the above-mentioned models to your bag of tricks, get your hands on Eric Wilson’s new book, Predictive Analytics For Business Forecasting. It is a must-have for the demand planner, forecaster or data scientist looking to employ advanced analytics for improved forecast accuracy and business insight. Get your copy.
Beware The Pitfalls Of AI In Demand Planning https://demand-planning.com/2021/02/16/beware-the-pitfalls-of-ai-in-demand-planning/ https://demand-planning.com/2021/02/16/beware-the-pitfalls-of-ai-in-demand-planning/#comments Tue, 16 Feb 2021 14:33:01 +0000 https://demand-planning.com/?p=8942

As we integrate artificial intelligence (AI) and machine learning (ML) into our demand planning processes, I am sure that we will see improvement in our ability to anticipate demand rather than react to it. However, accurate forecasting is only one element in an effective supply chain. So, while I am very much in favor of making good use of these new tools, I have some reservations about their impact on improved demand planning.

My concerns fall into five categories:

  • The nature of algorithms
  • Algorithms don’t execute
  • Gaming and overrides trump algorithms
  • Forecasts as proxies for success
  • Implementing advanced algorithms may initially make things worse by revealing significant supply chain inefficiencies

1. The Nature of Algorithms

Algorithms are models of reality, and all models fall short of representing reality with perfect accuracy. Understanding a model’s limitations is key to its proper use. Certainly, AI and ML algorithms will allow us to better model potential demand, but we will also need to be aware of their limitations.

“all models fall short of representing reality with perfect accuracy”

Bad data, incorrect or biased interpretations of the data, and ignoring data that does not agree with corporate direction remain significant risks in any planning process. Adding reliable processes to validate both the data itself as well as any interpretations of the data will improve the effectiveness of these advanced algorithms.

2. Algorithms Don’t Execute

Poor supply chain execution will undermine any algorithm, no matter how accurate it is. Relying on improved algorithms alone will not improve a supply chain that is riddled with ineffective practices and siloed teams. In fact, more accurate modeling may reveal just how detrimental these poor practices are. An accurate forecast that correctly anticipates future demand will be worthless if the product can’t be produced and shipped in time to meet the demand.

And a more detailed view of future customer behavior will be worthless if the company cannot focus the necessary resources on planning the development, production and shipment of products that satisfy the customers’ expectations. Adding performance metrics to key supply chain processes will allow for discovery of potential constraints. And setting up processes to address these constraints as they appear will allow for continuous improvement throughout the supply chain.

3. Gaming and Overrides Trump Algorithms

A reliable algorithm that no one trusts will be of little value to a company. When individuals or teams believe that their view of the future is more accurate than a system’s predictions, and they are allowed to game or override the algorithm, most if not all of the value of the algorithm is lost. In my experience, most companies have a significant number of S&OP team members who distrust the systems they use to plan.

“A reliable algorithm that no one trusts will be of little value to a company”

Unless this is addressed, this underlying lack of confidence in any system will severely limit any algorithm’s impact on improving forecast performance. Overrides should be documented and validated by product performance and gaming should be clearly discouraged and called out when it does occur.

4. Forecasts as Proxies for Success

In applying AI and ML algorithms to our business, we need to ask what our true goal is. Is it really a more accurate forecast? Or is it a more robust and agile process for responding to customer demand? It is possible to improve forecast accuracy without also improving service levels and on-time delivery.

Forecast accuracy is only a proxy for improved business performance. Without an effective supply chain to support more accurate forecasting, much of the value that an advanced algorithm might add may be lost. An excessive focus on improving forecast accuracy may draw attention and resources away from other constraints that are actually causing larger problems.

5. Implementing Advanced Algorithms May Reveal Significant Supply Chain Inefficiencies

There is no guarantee that implementing advanced forecasting algorithms will improve business performance. A more accurate forecast may reveal that the company can’t actually respond quickly enough as market and customer preferences change.

It may also show that more resources will be needed to support the execution of a more accurate forecasting process. In the long run these are useful lessons that the company can use to address constraints to improve the entire supply chain to take advantage of more accurate forecasting. So the expectation needs to be that these advanced models will be part of a continuous improvement process that will require all the links of the supply chain to become more effective by addressing constraints as they are discovered.

Stepping Back to Step Forward

The success of any supply chain is dependent, in large part, on the people in it understanding their roles and executing effectively. As AI and ML models are more integrated into the demand planning process, the key practices of good communication, ongoing training, executive support and continuous improvement will also need to be maintained. Without these basic practices, the value of improved forecast accuracy may be limited.

By themselves these new algorithms will only show us what is possible. It will be up to each member of the S&OP teams to make sure that the possible consistently becomes reality through consistent execution guided by reliable performance metrics.


To find out more about practical applications of machine learning models, pick up a copy of Eric Wilson’s new book, Predictive Analytics For Business Forecasting & Planning. Written in easy-to-understand language, it breaks down how machine learning and predictive analytics can be applied in your organization to improve forecast accuracy and gain unprecedented insight. Get your copy

 

 

Updating Machine Learning Models To Adapt To Demand Shifts https://demand-planning.com/2021/01/12/updating-machine-learning-models-to-adapt-to-demand-shifts/ https://demand-planning.com/2021/01/12/updating-machine-learning-models-to-adapt-to-demand-shifts/#comments Tue, 12 Jan 2021 14:12:03 +0000 https://demand-planning.com/?p=8872

Even before the pandemic, forward-thinking retailers leveraged AI and machine learning (ML) technology for demand forecasting. In the post-Covid world, however, those ML models have failed to provide accurate predictions because they don’t know that the data they use is now obsolete due to changing demand patterns. How can we upgrade them to the new reality?

There are six possible ways to get a more accurate forecast:

1 – Gather data on new market behavior: As dynamics within the new market stabilize, use that data set to create a new model for forecasting demand.

2 – Use a feature engineering approach: Track external data sources like price indices, market states, latest news developments, exchange rates, and related financial/economic factors. Using these, models can generate more accurate predictive outputs.

3 – Factor in up-to-date POS data: Analyzing recent POS data allows us to observe and react to real-time shifts in demand patterns, improving forecast reliability. Depending on a given product’s classification, the appropriate range for a POS data set might be between one and two months.

4 – Use the transfer learning approach: If we possess any data sets relating to historical pandemics or behavior based on similar principles, we’re able to use that data within the context of this present-day pandemic.

5 – Utilize a model for information cascades: Merge the cascade modeling with current POS data sets to create a demand forecasting model that is able to recognize aggregated consumer behavior patterns and predict herd patterns for future sales.

6 – Leverage NLP (natural language processing) technology: NLP analyzes actual consumer comments and posts from an array of sources, from media platforms to popular social media sites. NLP can use sentiment analysis algorithms to collect and analyze conversations and discussions from real customers. This gives an unfiltered look at consumers’ behavioral patterns, preferences and attitudes.

If you are looking for a way to improve your current ML models, or are thinking of building a demand forecasting feature from scratch, the following will help you choose the best approach for your business type.

A data scientist generally works with historical data, and it’s impossible to predict such drastic changes as a worldwide pandemic. But as a general rule, you should prioritize flexibility in retraining your models, add more external factors as predictors, and account for a short-term perspective as long-term models become less relevant.
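One simple way to encode that short-term priority – not the authors’ method, just an illustrative sketch – is to weight recent observations more heavily than old ones:

```python
def recency_weighted_forecast(history, decay=0.8):
    """Average the history with geometrically decaying weights so the
    newest observations dominate when demand patterns shift."""
    weights = [decay ** age for age in range(len(history) - 1, -1, -1)]
    return sum(w * x for w, x in zip(weights, history)) / sum(weights)

# Demand drops sharply at the end of the series (e.g. post-lockdown).
history = [100, 102, 98, 101, 60, 55, 58]
print(round(recency_weighted_forecast(history), 1))  # pulled toward the new level
print(round(sum(history) / len(history), 1))         # plain mean: 82.0
```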

We tackled this problem with pre-Covid models for a restaurant business. Here is an example from our dashboard:

[Figure: Forecasting dashboard showing normal revenue before the pandemic.]

[Figure: Forecasting dashboard revealing revenue post-lockdown.]

 

In this case, we rebuilt our models from scratch, losing a degree of accuracy. At this point the historical data were no longer relevant, and we had to wait for new statistics and patterns to emerge inside the data.

Not every demand forecasting article on the internet fits the particular needs of your business and industry. Effective approaches vary dramatically depending on business type – here are some of the distinctions and the varying ways to address demand forecasting:

Small vs Large Businesses

Small and large businesses should approach forecasting in completely different manners. First of all, the acceptance criteria for large businesses are significantly more forgiving than for small businesses: high sales volumes allow for greater tolerance of error in quantity predictions. When it comes to historical data, large businesses have a higher volume of collected data, making it easier to identify patterns in customer behavior. With small businesses, it’s often necessary to test your hypotheses to prove the correlations between sales volume and predictors.

[Figure: Demand for a product at a large business, showing clear demand patterns.]

[Figure: Demand for a small business, showing erratic sales patterns.]

Online vs Offline

Online sales allow for a greater range of predictors and external factors in your modeling. It’s not necessary to have a POS (point of sale) system, as you’re able to collect all relevant data from the website. You know more about your customers and can collect their purchase histories. With these richer predictors, you can apply a wider range of machine learning models: Gradient Boosting, Random Forest, SVR, Multiple Regression, KNN, etc. With offline sales, you are often limited to historical sales only. The best approach there is to use Time Series Analysis.

The USA vs Europe

Regional differences play a huge role in predictive modeling, as varying locations exhibit different behaviors, both in specific sales patterns and in general cultural factors. It’s worth noting that consumers in certain regions are influenced to different degrees by marketing campaigns. Additionally, holidays vary by region, and you’ll need to decide whether to add this feature to the model or not. You’ll also need to take into account different legal constraints, such as limits on product quantities.

Perishable vs Non-Perishable Products

Finally, it’s crucial to factor product type into your demand modeling. With perishable products, you need to set up the right metrics to penalize the model when the prediction is much higher than the real value, as the consequences of excess inventory are significant. You need to be careful with data preparation and work with outlier detection, because this prediction should be extremely sensitive to any changes.
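Such a penalty can be expressed as an asymmetric loss function (a sketch; the 3x over-forecast factor is an arbitrary illustration, not a recommended value):

```python
def perishable_loss(actual, predicted, over_penalty=3.0):
    """Penalise over-forecasting more heavily than under-forecasting:
    surplus perishable stock is written off, while a shortfall is
    'only' a lost sale."""
    error = predicted - actual
    if error > 0:                  # over-forecast -> waste
        return over_penalty * error
    return -error                  # under-forecast -> lost sales

print(perishable_loss(actual=100, predicted=120))  # 60.0
print(perishable_loss(actual=100, predicted=80))   # 20
```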

Conclusion

The pandemic has made us all hyper-aware of the limitations and constraints of forecasting models, but the best practices in forecasting are the same whether we’re in a pandemic or not. You should always be striving to fit your forecasting tools to your particular business model and business needs, to attract new customers, increase your revenue, and expand your market share.

 

To find out more about practical applications of machine learning models, pick up a copy of Eric Wilson’s new book, Predictive Analytics For Business Forecasting & Planning. Written in easy-to-understand language, it breaks down how machine learning and predictive analytics can be applied in your organization to improve forecast accuracy and gain unprecedented insight. Get your copy

The Intersection Of Forecasting, Machine Learning & Business Intelligence https://demand-planning.com/2021/01/05/the-intersection-of-forecasting-machine-learning-business-intelligence/ https://demand-planning.com/2021/01/05/the-intersection-of-forecasting-machine-learning-business-intelligence/#comments Tue, 05 Jan 2021 14:53:31 +0000 https://demand-planning.com/?p=8858

The following is an extract from Predictive Analytics For Business Forecasting & Planning, the new book by Eric Wilson, CPF. It is your guidebook to the predictive analytics revolution. Get your copy


Most of what is discussed around predictive analytics frames it as a new way to forecast: instead of looking only at your internal sales history, you bring in more data, different drivers, and other external variables to improve your forecast.

We have focused on using predictive analytics to help in discovering “Why do things happen?” and the benefits of translating this into “What could happen if…?” With advances in technology, we now have advanced models and methods that can better enable predictive analytics.

The Demand Planner or predictive analytics professional blends forecasting and business intelligence. They merge techniques and methods including machine learning to support the business’s needs. A common misconception is that machine learning, business forecasting, advanced business intelligence, and all things predictive analytics are synonymous.

The diversity of opinion reflects the fluidity of the field's defining language. Business forecasting is the process of extracting information and providing insights. Machine learning is a subset or application of AI, and is more of an approach than a process. Business intelligence covers the different types of analytics and outputs.

Where they overlap is the intersection of process, approach, and insights of predictive analytics.

Business Intelligence

Business Intelligence, or BI, focuses on infrastructure and output. The use of the term business intelligence can be traced to the mid- to late-1860s, but it would be a century later before consultant Howard Dresner was credited with coining the term. His definition was: "Business Intelligence: An umbrella term that covers architectures, databases, analytical tools, applications, and methodologies used for applying data analysis techniques to support business decision-making." Despite this definition, historically, when people thought of business intelligence, most considered it a fancy way of talking about data reporting. It has always been much bigger than just a dashboard, and through the years people have come to better understand the breadth and uses of BI to inform data-driven business decisions.

Fig b | Infographic depicting intersection of AI, BI, and business forecasting

 

Predictive analytics in BI has become a natural and needed progression of decision-making capabilities and insights. Where most of BI has focused on data visualization and descriptive analytics, predictive analytics asks what could happen, or even what we can make happen as an organization. It helps present actionable information so that executives, managers, and other corporate end-users can make informed business decisions. Overall, predictive analytics can help discover why things happen and use this knowledge to reveal what could happen in the future.

Business Forecasting

Business Forecasting: The process of using analytics, data, insights, and experience to make predictions and answer questions for various business needs. It is a process of breaking something down into its constituent elements to understand the whole and make predictions. Where BI is about the tools and representation, business forecasting is about the analysis and procedures.

Predictive analytics in business forecasting has become a more advanced process that encompasses more and different types of data, more forward-looking causal type models, and more advanced algorithms and technology. It uses several tools, data mining methodologies, forecasting methods, analytical models (including machine learning approaches), and descriptive and predictive variables to analyze historical and current data, assess risk and opportunities, and make predictions.

Instead of relying on historical sales alone, we are trying to better understand the driving factors and the likely purchase behavior of the buyer. Predictive analytics is a new way to forecast: you are not just looking at your internal sales history, but also bringing in more data, different drivers, and other external variables to improve your forecast.
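As a toy illustration of bringing an external driver into a forecast, the sketch below fits ordinary least squares to invented monthly demand and promotional-spend figures and compares it with a history-only baseline. All numbers, including the planned promo spend, are assumptions for illustration only.

```python
import numpy as np

# Hypothetical monthly demand, driven partly by promotional spend
# (all figures invented for illustration).
promo_spend = np.array([0.0, 1.0, 0.0, 2.0, 1.0, 0.0])
demand      = np.array([100., 112., 101., 124., 111., 99.])

# History-only baseline: predict the historical mean.
baseline = demand.mean()

# Driver-based model: ordinary least squares on the external variable.
X = np.column_stack([np.ones_like(promo_spend), promo_spend])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)
intercept, promo_effect = coef

# Forecast next month, assuming a planned promo spend of 1.5 (assumption).
forecast = intercept + promo_effect * 1.5

print(f"promo effect per unit spend: {promo_effect:.1f}")
print(f"baseline={baseline:.1f}, driver-based forecast={forecast:.2f}")
```

Even this tiny example shows the shift in question being asked: the baseline only says what demand has averaged, while the driver-based model says what demand could be given a planned level of promotional activity.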

Machine Learning

Machine Learning involves different approaches and methodologies. It is a subset of AI and is a collection of different techniques, methods, modeling, and programming that allow systems to learn automatically. Machine Learning: An algorithm or technique that enables systems to be "trained" and to learn patterns from inputs and subsequently recalibrate from experience without being explicitly programmed. Unlike other approaches, these techniques and algorithms strive to learn as they are presented with new data and can forecast and mine data independently.
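The "recalibrate from experience" idea can be sketched in a few lines: a toy online learner that adjusts its parameters with each new observation rather than being explicitly re-programmed. The data-generating pattern (y = 2x + 1) and all settings are invented for illustration.

```python
import random

# Minimal sketch of a system that "learns" from data: a one-feature
# linear model updated by stochastic gradient descent, recalibrating
# itself on every new observation.
class OnlineForecaster:
    def __init__(self, lr=0.1):
        self.w = 0.0   # slope
        self.b = 0.0   # intercept
        self.lr = lr   # learning rate

    def predict(self, x):
        return self.w * x + self.b

    def update(self, x, y):
        # One gradient step on squared error for a single (x, y) pair.
        err = self.predict(x) - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err

model = OnlineForecaster()
random.seed(1)
for _ in range(5000):
    x = random.random()            # stream of new inputs
    model.update(x, 2 * x + 1)     # hidden true pattern: y = 2x + 1

print(round(model.w, 2), round(model.b, 2))
```

Nobody told the model that the slope is 2 and the intercept is 1; it recovered the pattern purely from the stream of examples, which is the essence of the definition above.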

For predictive analytics, machine learning has opened new opportunities and provides more advanced methods for it to use. Predictive analytics in machine learning is a category of approaches used to achieve better forecasts, improved intelligence, automation of processes, and a path to AI. Some new advanced models and methods can be incorporated to further enable predictive analytics.

Predictive Analytics

At the intersection of advanced business forecasting, mature business intelligence, and some machine learning techniques, is predictive analytics. Predictive Analytics: A process and strategy that uses a variety of advanced statistical algorithms to detect patterns and conditions that may occur in the future for insights into what will happen.

Predictive analytics used to be out of reach for most organizations. However, recent advances in professional skills, increased data, and new technologies, including machine learning and AI techniques, have made it much more accessible. Predictive analytics utilizes many advanced business and planning processes to provide more information with less latency and improved efficiency. This is not just about advanced analytics outputs and business intelligence; it also offers more mature organizations a view of what occurs and why. Finally, while predictive analytics may use some machine learning techniques, machine learning is only a portion of the planner's toolbox, along with other statistical and data mining techniques.

 

This article is an extract from the book Predictive Analytics For Business Forecasting & Planning, written by Eric Wilson CPF. It is your guidebook to the predictive analytics revolution. Get your copy.
