From Excel To Power BI – My Demand Planning Journey

When it comes to the term "S&OP", there is some uncertainty around how it all started. Some say it started with Oliver Wight in the early 1980s; others say it was Richard Ling and Walter Goddard in their book Orchestrating Success: Improve Control of the Business with Sales & Operations Planning. I'm not here to debate that, because for me it all started in 2007.

I remember the first S&OP presentation I saw. It was as basic as you can imagine: Excel graphs copied and pasted into PowerPoint presentations. I am sure some of you are thinking to yourselves, "we still do that." You're not alone. S&OP is a journey, not a destination, and every journey takes time.

Over the next few years, improvements were slow but steady. We stayed with Excel and PowerPoint for quite some time. Different metrics came and went and, unfortunately, so did some Demand Planners. The most important thing stayed constant, though: support from leadership. When you have buy-in from leadership and they truly understand the value behind S&OP, resources such as personnel and systems start falling into place.

Remember, change is good. Change means stopping what isn't working, continuing what is working, and always looking for improvements.

With the support of leadership, we continued to upgrade our forecasting tools and integrated our systems with both our promotion planning tool and a short-term demand sensing tool. Things were starting to click, but we still needed improvement on the presentation side.

In 2019, Microsoft invited a few members from my company, mostly IT people, to their corporate offices in New York City. The point of the meeting was to introduce some of their tools that we weren’t taking advantage of: Yammer, Teams, and Power BI. Fortunately for me, someone couldn’t attend, so the invitation was passed along. So on a cold rainy day in the Fall, I took a short train ride with a coworker, spent the day in the city, and was immediately enamored with Power BI.

If you're not familiar with Power BI, some say it's closer to Excel than to Microsoft Access, but I like to think it's the best of both worlds. Not only can it handle large amounts of data, like Access, using its query editor, but it also offers interactive data visualization options that can help tell your S&OP story.

Power BI was introduced into our S&OP process shortly after the trip.

When this happened, it brought energy back into the process. You could tell there was more excitement than there was in recent years. We were still showing similar data, but we were showing it using our shiny new toy.

We weren’t done yet.

We needed to keep up the momentum so it was time to partner with our friends from IT. We wanted to make the shift from static data to interactive data. We wanted our S&OP meetings to be able to answer questions on the fly. “What was case fill last year?”, “What caused that drop in forecast accuracy last month?”, “How does our inventory this quarter compare to inventory last quarter?”. The objective was to be able to answer these questions at any time.

Luckily, we had the right support. So, by partnering with IT, we were able to directly connect Power BI with data coming from our transactional system and just like that, magic. We had an interactive S&OP presentation. Don’t get me wrong; this took a lot of time and energy.

We now had one Power BI document that could be filtered on a specific Planner’s brands. Not only that, but we could filter our visualizations to show different time periods and we could begin to answer the questions that we previously had to follow up on.

One of the biggest improvements was time. There was no longer a need to make a dozen different PowerPoint presentations, with Planners doing the same repetitive work. Instead, they could use their time more wisely: investigating forecast accuracy misses, explaining gaps to other forecasts, and laying out assumptions. We were working smarter, not harder.

And that is currently where we stand.

Over the last 15 years, I like to think we've come a long way. It wouldn't have been possible without the support and dedication of those involved.

The journey isn’t over. We are further along today than we were yesterday so let’s start thinking about tomorrow.



Spreadsheets Are Obsolete In The Age of Big Data—What Is Replacing Them?

Excel spreadsheets are an essential part of any workforce regardless of the type of business, so much so that Excel has become synonymous with spreadsheets. Most laptop computers and smartphones come with a spreadsheet application pre-loaded, making them among the most widely accessible applications worldwide. Spreadsheets are used for data entry, basic calculations, data sorting and analysis, creating presentations, graphs and charts, budgeting and accounting, demand planning and scheduling, and much more.

Many people feel spreadsheets have an excellent, user-friendly interface with basic functions that are easy to learn. There is plenty of online and offline self-help information available for learning how to use them. Turning data tables into graphs and charts is quick and easy, and more experienced users can exploit advanced functionality like rule-based formulas, conditional formatting, pivot tables, and macros.

1. Why Are Excel Spreadsheets Becoming Obsolete?

Given all those benefits, why are they becoming obsolete? Excel spreadsheets were a great productivity application with excellent standalone business analytics and basic reporting. However, spreadsheets have several limitations that hamper their effective deployment in the demand planning process. Chief among them is the fact that spreadsheets were not designed for online collaborative work. A single spreadsheet file can be edited by only one user at a time, which becomes time-consuming when it requires input from multiple users.

2. Spreadsheets Are Not Scalable

Spreadsheets are not scalable in today's digital economy, where data streams into repositories by the second from mobile devices, websites, and devices embedded everywhere in the IoT. The higher the volume of data, the slower the spreadsheet processes it, and the greater the chance of the data becoming corrupted.

3. Spreadsheets Are Not Easily Shared

They also create islands of information that cannot be easily accessed or shared across the organization. Merging multiple spreadsheet files into one consolidated file is a difficult and time-consuming task. The more spreadsheets we have, and the bigger they are, the more prone they are to human error, resulting in misreporting.

4. Spreadsheets Don’t Support Real Time Decision Making

Spreadsheets are incapable of supporting real-time decision making because the data in them is often outdated or inaccurate. Gathering the most up-to-date information from multiple users and summarizing it is time-consuming. In many cases, spreadsheets are also vulnerable to deliberate manipulation due to an inherent inability to provide controls and quality governance.

Knowing When It’s Time To Move on From Excel

According to several research studies over the past 20 years, well over 70% of demand planners name Excel spreadsheets as their tool of choice for demand planning. Given the technology advancements of the past 5-10 years, this seems illogical. As a demand planner, you need to ask yourself the following:

  • Is it getting harder and harder to find empty Excel spreadsheet cells as you run out of columns and rows?
  • Do your spreadsheet cell labels have more letters than the license plate on your car?
  • Do you find yourself waking up in the middle of the night in cold sweats because you can’t scale to live streaming data and digital information using spreadsheets?
  • Are you feeling confused during business meetings because your spreadsheet results are not the same as others, even though you used the same data sources?
  • Are you feeling dizzy from trying to figure out why your spreadsheet keeps crashing, and needing a blueprint to find the bad cell calculations?

If you experience any of the symptoms described above, you may be suffering from demand planner spreadsheet overload syndrome.

A healthy well-rounded diet of data mining, event stream processing, predictive and prescriptive analytics, visual analytics with interactive reports and graphics, all aided by artificial intelligence and machine learning, is the cure to your Excel dependency.

Why Change Now?

Digitization is making the supply chain faster, more intelligent, more connected and autonomous. Forward-thinking C-level decision makers are presenting opportunities for companies to embrace the digital ecosystem, becoming a strategic partner with their customers and consumers. This is aided by the convergence of three factors:

  1. Powerful, more affordable computing power.
  2. Abundant data.
  3. The availability of analytics and algorithms—especially with cloud-based open source analytics.

All of this is giving rise to an awareness and willingness to apply analytics to everything. Not just to strategic initiatives, but to day-to-day tasks. There are more and more business analysts embracing and using advanced analytics and machine learning, interpreting and applying the results, effectively becoming citizen data scientists.

Companies are now looking to invest in new analytics-driven forecasting and planning technology, supported by artificial intelligence and machine learning, that allows them to measure sales promotions and marketing events, mathematically calculate promotion lifts, and determine whether they generate revenue and profit, all without complicated spreadsheets. Scenarios can be run in real time and the impact automatically reconciled up and down the business hierarchy using a web interface instead of spreadsheets.
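To make the promotion-lift arithmetic concrete, here is a minimal Python sketch. All of the figures (baseline and promoted-week volumes, prices, costs) are hypothetical and purely for illustration.

```python
# Hypothetical figures for one promoted week on one SKU (illustrative only)
baseline_units = 10_000   # units we would have expected without the promotion
promo_units = 14_500      # units actually sold during the promotion
list_price = 4.00         # regular selling price per unit
promo_price = 3.40        # discounted price during the promotion
unit_cost = 2.10          # cost of goods per unit
trade_spend = 5_000       # fixed cost of the promotion (displays, features, etc.)

lift_units = promo_units - baseline_units
lift_pct = lift_units / baseline_units

baseline_profit = baseline_units * (list_price - unit_cost)
promo_profit = promo_units * (promo_price - unit_cost) - trade_spend
incremental_profit = promo_profit - baseline_profit

print(f"Lift: {lift_units:,} units ({lift_pct:.0%})")
print(f"Incremental profit vs. no promotion: {incremental_profit:,.2f}")
```

In this made-up example the promotion lifts volume by 45% but destroys profit, which is exactly the kind of conclusion the lift calculation is meant to surface.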

As data collection and analytics tools, applications and solutions have become more affordable and powerful, they’ve become easier for companies to justify. For many companies, data management technology has advanced so quickly that the challenge now is not about getting the budget, but how to make practical use of all the data collected from IoT devices. Spreadsheets are just not scalable enough to handle live streaming data.

Furthermore, data storage costs have declined significantly over the past decade making it more affordable to store the transactional data collected at increasingly granular levels across markets, channels, brands, products and key account configurations. Easy access web-based applications can be used to access the data, without requiring downloads or pivot tables to support spreadsheets. Faster in-memory cloud processing is making it possible to run “what if” simulations in seconds that previously had to run overnight.

Over the last decade, a wide variety of other, non-spreadsheet-based forecasting and planning tools, applications, and enterprise solutions have become available that can handle big data gathered from ever-increasing data sources.

New Demand Planning Technology Is Now Available

New disruptive solutions powered by advanced analytics simplify data management, streamline common planning processes, and support sophisticated workflow creation and deployment.

These new, intelligent planning solutions allow the user to:

  • Improve sell-through rates by accessing and modeling downstream data (POS/syndicated scanner data) to better anticipate and predict consumer demand. Consumption-based forecasting shapes shipment (sell-in) demand based on point-of-sale (sell-out) demand, resulting in more accurate shipment plans.
  • Improve planning process efficiency using artificial intelligence and machine learning capabilities to provide demand planners with a digital assistant to improve their forecast value add (FVA). The digital assistant guides demand planners up/down the business hierarchy to manage overrides by exception, thereby reducing errors and planning efforts.
  • Scalable planning capabilities provide an environment for data scientists and business users to collaborate more effectively through the integration of web-based dashboards, reports and planning workbooks in an integrated navigation environment. All with the look and feel of spreadsheets, with unlimited rows and columns, supported by robust data integration capabilities combined with advanced analytics and machine learning.
  • Ease of implementation, expansion, and orchestration of analytic workflows. Cloud-ready, out-of-the-box modeling strategies with predefined models solve complex demand forecasting problems faster. Open APIs extend the modeling capabilities with open-source tools like Python and R, allowing companies to put Python and R into production at scale.

The goal is to provide an environment that encourages adoption and the ability to introduce an automated low touch demand planning process that helps manage demand planning by exceptions. Artificial intelligence and machine learning are key to this kind of low touch forecasting. No pivot tables or spreadsheet cell calculations required!

 

This article originally appeared in the Spring 2020 issue of the Journal of Business Forecasting. Click here to become an IBF member and get the journal delivered to your door quarterly, as well as discounted access to IBF training events and conferences, members-only workshops and tutorials, access to the entire IBF knowledge library, and more.

 

Using Excel To Present & Update Forecasts At The Demand Review

When it comes to visualizing data in the S&OP process, most stakeholders like to see forecasting outputs in the form that they are most familiar with – Excel. This is certainly true in my experience working in global FMCG companies.

Below I present an Excel tool which I have used many times as a Demand Planner. It has been employed to great effect at Demand Review meetings in large corporations with very mature demand planning and S&OP processes. It is an ideal solution both for smaller companies just starting their S&OP process that do not want to make a big investment in technology, and for bigger companies where stakeholders want an easy-to-understand, recognizable format of data presentation.

Click Here To Download The Forecasting Demand Review Tool

It opens in the web version of Excel. I recommend downloading it to your computer and opening it in the Excel application. You can input your own data into this tool from an advanced system (e.g. a statistical forecast from APO).

Purpose Of The Tool

The purpose of the tool is to support discussion at the Demand Review meeting with data in a format familiar to all stakeholders, and with analytical functionality that helps the meeting finish with the best possible consensus forecast, including incorporating information from Sales.

As the meeting involves many departments, including Demand Planning, Sales, Marketing and Finance, the goal is not to review forecasts SKU by SKU, but at an aggregated level. The level of aggregation depends on the product complexity, i.e. the number of categories/brands/SKUs. Discussion should cover groups of products with similar sales characteristics, e.g. groups of products that are sold in similar percentage splits. The further volume split per SKU can then be determined by more or less advanced statistical methods, as in the sketch below.
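As a simple illustration of that SKU-level split, the Python sketch below allocates an agreed group-level figure to SKUs in proportion to their recent sales history. The SKU names and volumes are made up.

```python
import pandas as pd

# Hypothetical last-13-week sales history for the SKUs in Product Group A
history = pd.Series({"SKU-001": 5200, "SKU-002": 2600, "SKU-003": 1300})

group_forecast = 10_000  # consensus volume agreed for Product Group A at the Demand Review

# Each SKU gets its share of recent history applied to the agreed group total
share = history / history.sum()
sku_forecast = (share * group_forecast).round()

print(sku_forecast)  # SKU-001: 5714, SKU-002: 2857, SKU-003: 1429
```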

View & Functionalities

The tool operates on historical sales data and forecast figures. Data are presented at different levels of aggregation. For the product dimension, the tool presents historical sales data and forecasts at three aggregation levels: product group/segment/category. For the customer dimension, there are also three aggregation levels at which the user can review historical sales: customer/channel/market.

Selection and historical data:

The above screenshot shows customer and product data. The user can select historical sales data and the forecast for Product Group A. It is possible to drill down to sales of Product Group A for a particular customer or sales channel. The tool shows the current and previous two years of sales. The reason I chose two previous years of sales is that, when looking at the chart, it is quite easy to understand the seasonality of the product.

Forecast data: In the above screenshot you can see forecast data from the last cycle, labelled "Frcst 2019"; below it are the new figures updated in the meeting. The user simply types new figures in this line at the lowest level of the product dimension (Product Group) and saves them. The next line, "Change 2019", shows the differences between the updated and last-cycle figures. The line "Stat forecast 19" is a seasonal + linear regression model built in Excel that calculates a statistical forecast based on the historical sales in the current selection.
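For readers curious what sits behind the "Stat forecast 19" line, here is a rough Python equivalent of the seasonal + linear regression idea; the monthly history is invented and the Excel implementation will differ in detail.

```python
import numpy as np

# 24 invented months of history for the current selection (two full years)
sales = np.array([ 90,  95, 110, 120, 130, 150, 160, 155, 130, 115, 100,  95,
                   96, 102, 118, 128, 139, 160, 171, 165, 139, 123, 107, 101], dtype=float)

# Linear trend fitted on the time index
t = np.arange(len(sales))
slope, intercept = np.polyfit(t, sales, 1)
trend = intercept + slope * t

# Average multiplicative seasonal index per calendar month
# (assumes the history starts in January and covers whole years)
seasonal = np.array([(sales[m::12] / trend[m::12]).mean() for m in range(12)])

# Statistical forecast for the next 12 months: extend the trend, re-apply seasonality
t_future = np.arange(len(sales), len(sales) + 12)
stat_forecast = (intercept + slope * t_future) * seasonal
print(np.round(stat_forecast, 1))
```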

The user can select which data lines to show on the chart using check boxes.

Exclude customers functionality:

The functionality in the red box in the above screenshot allows users to exclude particular customers simply by pasting their names into the pink area. It is useful when we want to look at overall market sales without customers that cause outlying sales; these volatile customers can skew our view of the whole market.

How To Use The Demand Review Tool

Usually, a more detailed customer forecast revision is done for the period that is going to be frozen in the current S&OP/IBP cycle. The demand review and analysis of the forecast for this particular month can be structured in the following way:

Forecast analysis on product group level for frozen month:

 

  • The starting point for the analysis can be last year's figures (a naïve forecast)
  • Creation of building blocks: separation of the 2 or 3 top customers with the biggest events (in the analogous period last year, or planned for the future) from the rest of the market
  • Revision of and agreement on the forecast for each building block
  • Aggregation of the figures

Such an approach differs from the standard approach, where segmentation is done along the product dimension. This higher-level approach can help Demand Planners speak the same language as the Sales team, which is oriented towards activities per customer. We can imagine a team of Key Account Managers, each of whom comes to the meeting sharing details about their customer/channel. The above tool can be very useful in consolidating their input and challenging it with data (historical sales or the statistical forecast).

Another benefit of the tool is easier estimation of the promo uplift for particular customers (e.g. by finding periods with similar promotions in the data). It is impossible to judge the promo impact for a particular customer by looking at data for the whole market. Of course, the alternative is to deep dive into sales reports per customer.

High-Level Demand Review Process Steps

  • Sales prepares information on upcoming events by customer that will impact sales, and on marketing campaigns/events
  • Demand Planning analyzes historical data, especially for the focus month and recent trends
  • Review of the most important/volatile SKUs
  • Revision of the forecast by product group (see chart above) and agreement on final figures

The role of Demand Planning is to work out the optimum SKU mix. Commercial teams should not be involved in revising the forecast SKU by SKU. However, if they are aware of events concerning single SKUs (e.g. a recommendation at the cashier or a new listing), the Demand Review meeting is the right place to deliver such information.

Summary

People and the underlying process are the foundation of effective S&OP, especially for companies that are just starting to implement it. This Excel tool can be a great support to start your S&OP journey. After building process maturity, investment in advanced systems can provide further results.

How To Use Microsoft Azure

If, like me, you work in a small to medium sized enterprise where forecasting is still done with pen and paper, you’d be forgiven for thinking that Machine Learning is the exclusive preserve of big budget corporations. If you thought that, then get ready for a surprise. Not only are advanced data science tools largely accessible to the average user, you can also access them without paying a bean.

If this sounds too good to be true, let me prove it to you with a quick tutorial that will show you just how easy it is to make and deploy a predictive webservice using Microsoft’s Azure Machine Learning (ML) Studio, using real-world (anonymised) data.

What is Azure ML?

To most people, the words 'Microsoft Azure' conjure up vague ideas of cloud computing and TV adverts with bearded hipsters working in designer industrial lofts. And yet, in my opinion, the Azure Machine Learning Studio is one of the more powerful predictive modelling tools available on the market. And again, it's free. What's more, because it has a graphical user interface, you don't need any advanced coding or mathematical skills to use it. It's all click and drag. In fact, it is entirely possible to build a machine learning model from beginning to end without typing a single line of code. How's that for a piece of gold?

You can make a free account or sign in as a guest here: https://studio.azureml.net. The free account or guest sign-in to the Microsoft Azure Machine Learning Studio gives you complete access to the easy-to-use drag-and-drop graphical user interface that allows you to build, test, and deploy predictive analytics solutions. You don't need much more.

Microsoft Azure Tutorial Time!

I promised you a quick tutorial on how to make a forecast that drives purchasing and other planning decisions in Azure ML, and a quick tutorial you shall have.

If you’re still with me, here are a couple of resources to help you get rolling:

A great hands on lab: https://github.com/Azure-Readiness/hol-azure-machine-learning

Edx courses you can access for free: https://www.edx.org/course/principles-machine-learning-microsoft-dat203-2x-6

https://www.edx.org/course/data-science-essentials-microsoft-dat203-1x-6

Having pointed you in the direction of more expansive and detailed resources, it’s time to get into this quick demo. Here are the basic steps we’ll go through:

  • Uploading datasets
  • Exploring and visualising data
  • Pre-processing and transforming
  • Predictive modelling
  • Publishing a model and using it in Excel

Uploading Datasets To Microsoft Azure

So, you've signed up. Once you're in, you're going to want to upload some data. I'm loading up the weekly sales data of a crystal glass product for the years 2016 and 2017, which I'm going to try to forecast. You can read in a flat file in .csv format by clicking on the 'Datasets' icon and clicking the big '+ New':

Then you're going to want to load up your data from the file location and give it a name you can find easily later. Clicking on the 'flask' icon and hitting the same '+ New' button will open a new experiment. You can drag your uploaded dataset from the 'my datasets' list on to the blank workflow:

Exploring and Visualizing

Right-clicking on the workflow module number (1) gives you access to exploratory data analysis tools, either through 'Visualise' or by opening a Jupyter notebook (Jupyter is an open-source web application) in which to explore the data in either Python or R code. If you want to learn how to use and apply Python to your forecasting, practical insights will also be revealed at IBF's upcoming New Orleans conference on Predictive Business Analytics & Forecasting.

Clicking on the ‘Visualise’ option calls up a view of the data, summary statistics and graphs. A quick look at the histogram of sales quantity shows that the data has some very large outliers. I’ll have to do something about those during the transformation step. You also get some handy summary statistics for each feature. Let’s have a look at the sales quantity column.

I’m guessing that zero will be Christmas week, when the office is closed. The max is likely to be a promotional offer. I can also see that the standard deviation is nearly 12,000 pieces, which is high compared to the mean. You can also compare columns/features to each other to see if there is any correlation:

Looking at a scatter plot of sales quantity against the consumer confidence index value, that feature really doesn't seem to be adding anything to the data, so I'll want to get rid of it. I've also included a quick Python line plot of sales over the two-year period.

As you can see, there is a lot of variability in the data and perhaps a slight downward trend. Without some powerful explanatory variables, this is going to be a challenge to accurately forecast. A lot of tutorials use rich datasets which the Machine Learning systems can predict well to give you a glossy version. I wanted to keep this real. I work in an SME and getting even basic sales data is an epic battle involving about fifty lines of code.
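If you take the Jupyter route, the same exploration is only a few lines of pandas and matplotlib. The file name and column names ('week', 'sales_qty', 'cci') are assumptions for illustration.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed layout: one row per week with the sales quantity and a consumer confidence column
df = pd.read_csv("crystal_weekly_sales.csv", parse_dates=["week"])

print(df["sales_qty"].describe())         # mean, std, min/max: spot the zero week and the promo spike
df["sales_qty"].hist(bins=30)             # histogram shows the large outliers
df.plot.scatter(x="cci", y="sales_qty")   # does consumer confidence add anything?
df.set_index("week")["sales_qty"].plot()  # line plot of sales over the two-year period
plt.show()
```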

Pre-processing and Transforming

Now it’s time to transform the data. For simplicity, I’ve loaded a dataset with no missing or invalid entries by cleaning up and resampling sales by week with Python, but you can use the ‘scrub missing values’ module or execute a Python/R script in the Azure ML workspace to take care of this kind of problem.
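The up-front clean-up and weekly resampling mentioned above could look roughly like this in pandas; the transactional file and its column names are assumptions.

```python
import pandas as pd

# Hypothetical transactional extract: one row per order line
orders = pd.read_csv("sales_transactions.csv", parse_dates=["order_date"])

weekly = (orders
          .set_index("order_date")["quantity"]
          .resample("W")    # bucket the order lines into calendar weeks
          .sum())           # total units per week; empty weeks sum to zero

weekly.to_csv("crystal_weekly_sales.csv")
```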

In this case, all I need to do is change the ‘week’ column into a datetime feature (it loaded as a string object) and drop that OECD consumer confidence index feature as it wasn’t helping. I could equally have excluded the column without code using the select columns module:
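Inside an 'Execute Python Script' module, Azure ML Studio (classic) calls an azureml_main function that receives and returns pandas DataFrames. A sketch of those two changes is below; the column names are assumptions.

```python
import pandas as pd

def azureml_main(dataframe1=None, dataframe2=None):
    # 'week' was loaded as a string object; make it a proper datetime feature
    dataframe1["week"] = pd.to_datetime(dataframe1["week"])

    # Drop the OECD consumer confidence index: the scatter plot showed no relationship
    dataframe1 = dataframe1.drop(columns=["cci"])

    return dataframe1,
```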

One of the other things I’m going to do is to trim outliers from the dataset using another ‘Execute Python Script’ module to identify and remove outliers from the sales quantity column so the results are not skewed by rare sales events.
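And a sketch of the outlier-trimming script, using a simple percentile rule; the 1st/99th percentile cut-offs are my assumption, not the author's exact logic.

```python
def azureml_main(dataframe1=None, dataframe2=None):
    # Clip rare extreme weeks so one-off events don't dominate model training
    low, high = dataframe1["sales_qty"].quantile([0.01, 0.99])
    dataframe1["sales_qty"] = dataframe1["sales_qty"].clip(lower=low, upper=high)
    return dataframe1,
```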

Again, I could have accomplished a similar effect by using Azure’s inbuilt ‘Clip Values’ module. You genuinely do not have to be able to write code to use Azure (but it helps.)

There are too many possible options within the transformation step to cover in a single article, but I will mention one more important step: you should normalise the data to stop differences in scale between features leading to certain features dominating over others. 90% of the work in forecasting is getting and cleaning the data so that it is usable for analysis (Adobe, take note: PDFs are evil and everyone who works with data hates them). Luckily, you can do all your wrangling inside the machine learning model, so that when you use the service, it will do all the wrangling automatically based on your modules and code.

The 'Normalize Data' module allows you to select columns and choose a normalisation method, including Z-score and min-max.
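For reference, here are the same two normalisations written out in pandas, using the df and assumed column name from the earlier exploration sketch:

```python
# df as loaded in the earlier exploration sketch

# Z-score: centre on the mean, scale by the standard deviation
df["sales_qty_z"] = (df["sales_qty"] - df["sales_qty"].mean()) / df["sales_qty"].std()

# Min-max: rescale into the 0-1 range
col_range = df["sales_qty"].max() - df["sales_qty"].min()
df["sales_qty_minmax"] = (df["sales_qty"] - df["sales_qty"].min()) / col_range
```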

Predictive Modelling In Microsoft Azure

Having completed the data transformation stage, you're now ready to move on to the fun part: making a machine learning model. The first step is to split the data into a training set and a testing set. This should be a familiar practice for anyone working in forecasting: before you let your forecast out into the wild, you want to test how well it performs against the sales history. It's that or face a screaming sales manager wanting to know where his stock is, and I like my life as stress-free as possible. As with nearly everything in Azure ML, data splitting can be achieved by selecting a module. Just click on the search pane and type in what you want to do. I'm going to split my data 70-30.
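In scikit-learn terms, the same 70-30 split looks like this, continuing with the assumed column names from the earlier sketches:

```python
from sklearn.model_selection import train_test_split

# Features: promo flags, holiday days, seasonal pattern, etc.; target: weekly sales quantity
X = df.drop(columns=["week", "sales_qty"])
y = df["sales_qty"]

# 70% training / 30% testing, mirroring the Split Data module's random split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
```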

The next step is to connect the left output of the ‘Split Data’ module to the right input of a ‘Train Model’ module, the right output of the ‘Split Data’ to a ‘Score Model’ module, and a learning model to the right input of the ‘Train model’.

At first this might seem a little complicated, but as you can see, the left output of the ‘Split Data’ is the training dataset which goes through the training model and then outputs the resulting learned technique to the ‘Score Model’ where this learned function is tested against the testing dataset which comes in through the right data input node. In the ‘Train Model’ module you must select a single column of interest. In this case it is the quantity of product sold that I want to know. 

Microsoft offer a couple of guides to help you choose the right machine learning algorithm. Here's a broad discussion, and if you're short on time, check this lightning-quick guidance. In the above I've opted for a simple Linear Regression module and, for comparison purposes, I've included a Decision Forest Regression by adding connectors to the same 'Split Data' module. One of the great things about Azure ML is that you can very quickly add and compare lots of models during your building and testing phase, and then clear them down before launching your web service.

Azure ML offers a wide array of machine learning algorithms from linear and polynomial regression to powerful adaptive boosted ensemble methods and neural networks. I think the best way to get to know these is to build your own models and try them out. As I have two competing models at work, I’ve added in an ‘Evaluate Model’ module and linked in the two ‘Score Model’ modules so that I can compare the results. I’ve also put in a quick Python script to graph the residuals and plot the forecasts against the results.
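Outside the designer, the same comparison takes only a few lines of scikit-learn. Treat this as a sketch: RandomForestRegressor is an analogue of, not identical to, Azure's Decision Forest Regression module, and it continues from the split above.

```python
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, r2_score

models = {
    "Linear Regression": LinearRegression(),
    "Decision Forest (analogue)": RandomForestRegressor(n_estimators=100, random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name}: R2 = {r2_score(y_test, pred):.2f}, "
          f"MAE = {mean_absolute_error(y_test, pred):,.0f}")

    # Residuals: any systematic pattern means the model is missing an explanatory feature
    plt.scatter(y_test, y_test - pred, alpha=0.6, label=name)

plt.axhline(0, color="black", linewidth=1)
plt.xlabel("Actual sales quantity")
plt.ylabel("Residual (actual - predicted)")
plt.legend()
plt.show()
```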

Here’s the Decision Forest algorithm predictions against the actual sales quantity:

Clearly something happened around May 2016 that the Decision Forest model is unable to explain, but it seems to do quite well in finding the peaks over the rest of the period 2017. Looking at the Linear Regression model, one can see that it does a better job of finding the peak around May 2016 but is consistently overestimating in the latter half of 2017.

Clicking on the ‘Evaluate Model’ module enables a more detailed statistical view of the comparative accuracy of the two models. The linear regression model is the top row and the decision forest model is the bottom row.

Coefficients of determination of 0.60 and 0.72: the models are explaining between roughly half and three-quarters of the variance in sales, with the Decision Forest scoring significantly better overall. As results go, neither brilliant nor terrible. A perfect coefficient of determination of 1 would suggest the model was overfitted and therefore unlikely to perform well on new data. The range of sales was from 0 to nearly 80,000, so I'll take a mean absolute error of 4,421 pieces without complaint.

It would really be ideal if we had a little more information at the feature engineering stage. The ending inventory in-stock value for each week, or customer forecasts from the S&OP process, would help accuracy as features.

One of the benefits of forecasting in this way is you can incorporate features without having to worry about how accurate they are as the model will figure that out for you. I’d recommend having as many as possible and then pruning. I think the next step for this model would be to try incorporating inventory and S&OP pipeline customer forecasts as a feature. Building a model is an iterative process and one can and should keep improving it over time.

Publishing A Model And Consuming It In Excel

Azure ML makes setting up a model as a webservice and using it in Excel very easy. To deploy the model, simply click on the ‘Setup Web Service’ icon at the bottom of the screen.

Once you’ve deployed the webservice, you’ll get an API (Application Programming Interface) key and a Request Response URL link. You’ll need these to access your app in Excel and start predicting beyond your training and testing set. Finally, you’re ready to open good old Excel. Go to the ‘Insert tab’ and select the ‘Store’ icon to download the free Azure add-in for Excel.
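If you would rather call the service from code than from Excel, the Request-Response endpoint accepts a JSON payload. The sketch below follows the classic Azure ML Studio request format, but the URL, API key, and column names are placeholders; check the sample code on your own service's API help page.

```python
import requests

URL = "https://<region>.services.azureml.net/workspaces/<id>/services/<id>/execute?api-version=2.0"  # placeholder
API_KEY = "<your-api-key>"  # placeholder from the web service dashboard

payload = {
    "Inputs": {
        "input1": {
            # Column names must match the training dataset; these are illustrative
            "ColumnNames": ["week", "promotion", "holiday_days", "seasonal_index"],
            "Values": [["2018-02-05", "1", "0", "1.12"]],
        }
    },
    "GlobalParameters": {},
}

response = requests.post(URL, json=payload,
                         headers={"Authorization": f"Bearer {API_KEY}"})
print(response.json())  # the predicted sales quantity comes back in the 'Results' section
```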

Then all you need to do is click the '+ Add web service' button and paste in your Request-Response URL and your secure API key, so that only your team can access the service.

After that, it's a simple process to input the new sales weeks to be predicted for the item along with the known data for the other variables (in this case promotions, holiday days in the week, the historic average annual/seasonal sales pattern for the category, etc.). You can make this easy by clicking on 'Use sample data' to populate the column headers so you don't have to remember the order of the columns used in the training set.

Congratulations! You now have a basic predictive webservice built for producing forecasts. By adding in additional features to your dataset and retraining and improving the model, you can rapidly build up a business specific forecasting function using Machine Learning that is secure, shareable and scalable.

Good luck!

If you’re keen to leverage Python and R in your forecasting, we also recommend attending IBF’s upcoming Predictive Analytics, Forecasting & Planning conference in New Orleans where attendees will receive hands-on Python training. For practical and step-by-step insight into applying Machine Learning with R for forecasting in your organization, check out IBF’s Demand Planning & Forecasting Bootcamp w/ Hands-On Data Science & Predictive Business Analytics Workshop in Chicago.

 

 

 

 
