Basics of Data Management for Demand Forecasting

The importance of demand forecasting is clear. Robust forecasting improves critical KPIs like customer service levels, inventory turns, and cash flow.

However, demand planning is only as effective as the data informing it. If the data fed into a forecasting system contains errors or duplicate records, demand forecasters will find the results less trustworthy and reliable. Creating and adhering to a thorough data collection and processing strategy prevents such outcomes.

Decide Which Data to Use

The first step is determining which data the company will use for its demand planning. The information collected in a point-of-sale system could be valuable for highlighting sales patterns, such as which times of the year specific products are most popular and what other things people typically buy at the same time. A practical approach for companies with numerous retail outlets is to gather data showing which stores have the most robust or slowest sales.

“Inventory tools give a broader picture by showing how stock levels change over time”

Alternatively, inventory tools give a broader picture by showing how stock levels change over time. Seeing that historical context can help decision-makers determine how long upswings and downturns might last, whether these events previously occurred and what caused them.

A related question is whether those working on data quality within the organization know the location of the information identified as worth using. Many companies still maintain rigid silos that create challenges for collecting and using information across departments or teams.

Establish a Data Quality Baseline

What’s the current state of the company’s data for demand planning? A data quality baseline answers that question. Start by identifying the critical data elements (CDEs), which collectively represent the information that will shape leaders’ future decisions.

Examples of CDEs in demand planning include:

  • Supplier and customer names
  • Order quantities and dates
  • Restock frequency
  • Merchandise prices
  • Average order fulfillment timelines
  • Dates associated with short-term promotions
  • Most and least-popular product names and descriptions
  • Inventory management system reports
  • Distributor names and locations

The next step is to establish data quality indicators with input from those who understand and value the importance of demand forecasting in modern businesses. The dimensions typically measured include the following (a brief measurement sketch follows the list):

  • Timeliness
  • Uniqueness
  • Accuracy
  • Consistency
  • Completeness
  • Validity
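As referenced above, here is a minimal sketch of how a few of these dimensions might be scored for a demand file, using Python with pandas. The DataFrame, column names, and validity rule are illustrative assumptions, not a prescribed layout.

```python
# Minimal sketch: scoring a few data quality dimensions for a demand file.
# Column names and the "positive quantity" validity rule are illustrative.
import pandas as pd

orders = pd.DataFrame({
    "order_id":   [1001, 1002, 1002, 1004],
    "customer":   ["Acme", "Acme", None, "Zenith"],
    "order_date": ["2024-01-03", "2024-01-05", "2024-01-05", "not a date"],
    "quantity":   [100, 250, 250, -5],
})

completeness = orders.notna().mean().mean()                       # share of non-missing cells
uniqueness   = 1 - orders.duplicated(subset=["order_id"]).mean()  # share of non-duplicate order IDs
valid_dates  = pd.to_datetime(orders["order_date"], errors="coerce").notna().mean()
valid_qty    = (orders["quantity"] > 0).mean()                    # validity rule: quantities must be positive

print(f"Completeness {completeness:.0%}, uniqueness {uniqueness:.0%}")
print(f"Valid dates {valid_dates:.0%}, valid quantities {valid_qty:.0%}")
```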

Teams often rely on specialized tools to reveal data quality gaps and begin developing improvement plans. However, it’s also important to discuss the challenges experienced by the people who collect and use the data daily in their roles. They’ll likely have valuable input on changes that might otherwise be overlooked.

Understand Data Governance Needs

Data governance encompasses keeping information usable, secure and available while retaining its high quality. Maintaining it is a team effort of ongoing collaboration to create and uphold standards. People on the data governance team will also help establish organizational norms by training employees to handle data correctly, reducing the chance of errors.

Data governance policies will differ within an organization depending on the type of information used for demand planning. Anything containing payment details or personal information must be treated with more care.

“Many companies use third-party service providers to meet their data-handling needs”

Many companies use third-party service providers to meet some of their data-handling needs. In such cases, data governance plans must include steps to take so those outside businesses don’t compromise quality.

Documentation is also a major part of data governance. Keeping an ongoing record of the data source, location and associated security protections helps organizations use the information and address oversights.

It’s becoming more common for companies to collect data with Internet of Things (IoT) sensors, which give a more detailed, real-time view of operations. Although confirming data sources can initially be time-intensive, the increased analysis opportunities are worthwhile. Estimates indicate the IoT sensor market will experience 24.9% growth in 2027, suggesting decision-makers are interested in using them.

Create and Maintain Data Preparation and Use Processes

Those overseeing data quality and usage within the organization must develop a preparation process everyone can use before feeding the information into platforms for further analysis.

For example, people must check the data for anything that could skew the results. Under- or overestimating demand adds to the organization’s costs, and data mistakes often cause these outcomes. Thorough preparation means looking for duplicate records, misspelled product or customer names, and any information in the wrong format; all of these can result in miscalculations or data being excluded from an analysis.
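As a rough illustration of that preparation pass, the sketch below drops exact duplicates, normalizes product and customer names, and flags values in the wrong format. The columns and data are invented for the example and are assumptions rather than a standard layout.

```python
# Minimal sketch of a pre-load cleaning pass; column names and data are illustrative.
import pandas as pd

raw = pd.DataFrame({
    "product":  ["Sticky Notes", "sticky notes ", "Sticky Notes", "Widget"],
    "customer": ["ACME Ltd", "Acme Ltd", "ACME Ltd", "Zenith"],
    "quantity": ["100", "250", "100", "ten"],   # a wrong-format value slips in as text
})

clean = raw.copy()
clean["product"]  = clean["product"].str.strip().str.title()          # normalize spelling and case
clean["customer"] = clean["customer"].str.strip().str.title()
clean["quantity"] = pd.to_numeric(clean["quantity"], errors="coerce")  # bad formats become NaN

clean = clean.drop_duplicates()           # remove exact duplicate records after normalization
issues = clean[clean["quantity"].isna()]  # rows to send back for correction before loading
print(issues)
```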

The resultant process must be well-documented and easy for others to follow. Those qualities will be instrumental in getting usable, consistent results within the organization.

Next comes a framework for how people within the organization can and should use the data for demand planning. Which tools will they use? Must leaders invest in automated solutions or other products to support the process? Which employees will be directly involved in collecting or using the information? Getting feedback from those parties before and after creating the data usage framework should optimize outcomes.

Teach the Importance of Demand Forecasting to Employees

Once the responsible parties design the processes for preparing and using data, they must communicate and teach those processes to everyone else handling the information. When all relevant employees understand the importance of demand forecasting, they’ll play important roles in upholding the requirements.

Allow plenty of time for people to get used to new tools or processes. Encourage them to give feedback about everything new and provide insights about further improvements.

Some organizations still use spreadsheets to track activities across the global supply chain. Lasting change will take longer to enact in such companies, and people may feel overwhelmed initially. However, most can adapt to new processes if their managers are patient.

“Employees who understand how to maintain high data quality will feel empowered”

Discuss how seriously the organization takes the importance of demand forecasting and explain why. Employees who understand how to maintain high data quality will feel more motivated and empowered.

Leaders should also be open to hearing about any problems, concerns or challenges that arise as employees work to keep data quality high within the organization. People are more likely to be honest about the highs and lows of this transition if they know managers will hear and respect them.

Treat Data Quality Standards as Works in Progress

High data quality allows leaders to make effective and confident demand planning decisions, no matter what a company sells or how many customers it serves. However, even those who will never act on what the information says are instrumental in gathering and preparing it.

Although these steps will assist company representatives in creating data quality processes, people must periodically revisit the current procedures and assess whether they’re still working as intended. It’s not a sign of total failure if they aren’t. However, it’s a strong indicator it’s time to get to the bottom of what’s going wrong and work to improve the shortcomings.

Data quality standards may also change as a company grows, begins offering new products or must follow updated regulatory requirements. People who understand this and know data quality is never a static measure will collectively help their organizations reach new demand planning goals.

To read more of Emily’s work across business, science and technology, head over to her online magazine, Revolutionized.

 

To get up to speed with the fundamentals of S&OP and IBP, join IBF for our 2- or 3-day Boot Camp in Miami, from Feb 6-8. You’ll receive training in best practices from leading experts, designed to make these processes a reality in your organization. Super Early Bird Pricing is open now. Details and registration.

Tips For Cleaning Your Dirty Data

 We all work with data but we’re not all data people. We must recognize that everybody who interacts with data plays an important part in cleaning and maintaining it so that it is reliable and can be fully exploited by all stakeholders in a business. Of course, we’re not all professionals in that area and it can be quite intimidating to some people.

The Very Real Problems Caused By Data Errors

One common problem is documents not matching up because the dates are in different formats (UK vs USA, for example), causing someone to spend ages manually reconciling them.
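One way to avoid that manual reconciliation is to parse each column with its known format before matching. This is a minimal sketch assuming pandas is available and the date columns arrive as plain text; the sample values are invented.

```python
# Minimal sketch: reconciling UK (day-first) and US (month-first) date columns
# before matching two documents; the values are illustrative.
import pandas as pd

uk_orders = pd.Series(["05/01/2024", "12/03/2024"])   # 5 Jan, 12 Mar (day first)
us_orders = pd.Series(["01/05/2024", "03/12/2024"])   # 5 Jan, 12 Mar (month first)

uk_parsed = pd.to_datetime(uk_orders, format="%d/%m/%Y")
us_parsed = pd.to_datetime(us_orders, format="%m/%d/%Y")

print((uk_parsed == us_parsed).all())   # True once both are in one canonical format
```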

I see people waste hours or even days on something basic that could easily have been avoided. When dealing with clients, I might see specifications labeled with incorrect measurements like centimeters instead of meters, and a whole host of other errors that are easy to make but can cause the wrong things to be ordered, or the right things in the wrong quantity or size, causing huge disruption to the business.

Rogue zeroes are a classic case, “I only meant to order ten, not a thousand!”

Data errors can mean excess stock in your warehouse or fines from your customer for not delivering on time. All kinds of problems can result from one small data error. Rogue zeroes are a classic case: “I only meant to order ten, not a thousand!”

Let’s say we sell sticky notes online and in stores and we have 3 suppliers. Supplier A calls them ‘Post-It notes’, Supplier B calls them ‘Sticky Notes’ and Supplier C calls them ‘notepads’. They’re all the same thing, so we need to categorize them as the same thing if we want an accurate picture of how much we’re selling, buying, or forecasting.

When you have bad data or missing data, your forecasts will be compromised.

Now we see this problem especially with forecasting. When you have bad data or missing data, your forecasts will be compromised. I’ve seen seven ways to format ‘United States’. If you’re forecasting products sold within the US, you need all the data points labeled in the same way, which means getting everybody to agree on a set standard for how to input data.
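One simple way to enforce such a standard is a mapping table that folds every known variant onto the agreed label. The sketch below is illustrative only; the variants and canonical names are examples, not a recommended taxonomy.

```python
# Minimal sketch: mapping free-text variants onto one agreed standard; the lists are illustrative.
import pandas as pd

country_map = {
    "usa": "United States", "u.s.a.": "United States", "us": "United States",
    "united states": "United States", "united states of america": "United States",
}
item_map = {"post-it notes": "Sticky Notes", "sticky notes": "Sticky Notes",
            "notepads": "Sticky Notes"}

sales = pd.DataFrame({
    "country": ["USA", "U.S.A.", "United States of America"],
    "item":    ["Post-It Notes", "sticky notes", "Notepads"],
})

# map lower-cased values to the standard; keep the original where no mapping exists yet
sales["country"] = sales["country"].str.lower().map(country_map).fillna(sales["country"])
sales["item"]    = sales["item"].str.lower().map(item_map).fillna(sales["item"])
print(sales)
```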

Maintaining Data Discipline

You can facilitate this by controlling what people can put into certain columns; some might be mandatory, or sometimes you can use drop-down lists (although those have problems because people are lazy and will naturally just pick the first thing). Expectation setting and getting people to understand the importance of data classification is key: letting your colleagues know that putting a little bit more information in a spreadsheet means somebody else doesn’t have to waste two hours doing unnecessary work down the line.

Check Your Data Regularly

I’m a real strong believer in data maintenance because you cannot keep your data clean if you don’t maintain it. You have to check it regularly and make sure that it’s still the way it’s supposed to be because people can delete things and people can cut and paste over things.

The most important thing is to look at your data on a daily or weekly basis because you’ll know if it doesn’t look right. For example, if somebody accidentally inputs a thousand units instead of a hundred, and you know that every week you’re ordering 100 units, you’re going to spot that difference and be able to fix it.
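Part of that routine review can be automated with a lightweight check like the one below. The three-times-median threshold, column names, and data are illustrative assumptions; tune the rule to your own order patterns.

```python
# Minimal sketch: flag order lines that deviate sharply from an item's usual quantity.
import pandas as pd

orders = pd.DataFrame({
    "item":     ["A", "A", "A", "A"],
    "week":     [1, 2, 3, 4],
    "quantity": [100, 95, 1000, 105],   # the 1000 is a likely typo for 100
})

typical = orders.groupby("item")["quantity"].transform("median")
orders["suspect"] = (orders["quantity"] / typical) > 3   # flag anything over 3x the item's median
print(orders[orders["suspect"]])
```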

Driving Consistency In Data Management

I’ve categorized retail data, I’ve categorized food data, procurement data – I’ve categorized everything. The one thing that is true across all datasets is the importance of maintaining standards and consistency. I came up with something to help clients remember that: making sure your data has its “C.O.A.T” on. Your data should always be:

Consistent: Your data should always be consistent so everybody’s using the same terminology, the same units of measure, the same formats, and the same processes for data input.

Organized: Categorize data in such a way that if you need it you can pull out that information really quickly. If you need to look at it by country or division or region or by buyer or by department, categorize it like that and then you can pull off a report quickly. How many people within companies are trying to cut and paste different spreadsheets together to get what they want, when all they need is a quick VLOOKUP? (A lookup sketch follows this list.)

Accurate: Make sure it’s as accurate as possible. I would never claim that you could get your data 100% accurate; if you do, it’s not going to stay like that for very long because too many people are involved. But striving for 100% accuracy means it’ll be accurate enough to have utility in the business.

Trustworthy: This is where the magic happens. Trustworthy data means you know you can go to your senior decision makers and say these are the right numbers – this is exactly what we’re doing, this is what we’re buying, this is what we’re selling, this is what we’re forecasting etc. Data facilitates decisions, after all, and we need to have faith in the numbers.
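As mentioned under “Organized”, the sketch below shows the pandas equivalent of that quick VLOOKUP: a left join against a small lookup table instead of cutting and pasting spreadsheets. The file and column names are illustrative.

```python
# Minimal sketch: join a sales report to a category lookup and summarize by category and buyer.
import pandas as pd

sales  = pd.DataFrame({"item": ["A100", "B200", "C300"], "units": [120, 80, 45]})
lookup = pd.DataFrame({"item": ["A100", "B200", "C300"],
                       "category": ["Stationery", "Stationery", "Packaging"],
                       "buyer": ["J. Smith", "J. Smith", "R. Patel"]})

report = sales.merge(lookup, on="item", how="left")   # left join keeps every sales row
print(report.groupby(["category", "buyer"])["units"].sum())
```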

Data Classification Is Key To Business Efficiency

All too often I see businesses taking weeks to prepare month-end reports, and in every case it’s easy to streamline that process and get reporting done in a few days rather than weeks. Rather than spending hours resolving queries, we can create lookups and formulas in Excel and get the data we need so much quicker.

It’s only when you fix these data issues that you realize how long these things were taking. So often people just sit in a time vacuum while working through these things, and nobody raises an eyebrow because, in the absence of a better system, it just has to get done. I’ve seen that many, many times.

Cleaning your dirty data means greater productivity, whatever your role or functional area

And that’s the value driver of proper data management – speeding up processes to make your business more efficient. It means your people can work on more value-added activities instead of manual, repetitive tasks. Cleaning your dirty data means greater productivity, whatever your role or functional area. – Send comments to the Editor at andrews@ibf.org

 

 

 

3 Sources Of Forecast Error To Avoid

Those seeking to reduce error can look in three places to find trouble: The data that go into a forecasting model, the choice of a forecasting method, and the organization of the forecasting process. Let’s look at each of these elements to understand where error can be introduced into forecasting so we can mitigate it and improve our forecast accuracy.

1. Error Caused by Data Problems

Wrong data produce wrong forecasts. I have seen an instance in which computer records of product demand were wrong by a factor of two! Those involved spotted that problem eventually, but a less obvious yet still damaging error can easily slip through the cracks and poison the forecasting process. In fact, just organizing, acquiring, and checking data is often the largest source of delay in the implementation of forecasting software. Many data problems derive from the data having been neglected until a forecasting project made them important.

Data Anomalies

Even with perfectly curated forecasting databases, there can be wildly discrepant, though accurately recorded, data points, i.e., anomalies. In a set of, say, 10,000 products, some items are likely to have endured strange things in their demand histories. Depending on when the anomalies occur and what forecasting methods are in use, anomalies can drive forecasts seriously off track if not dealt with.

2. Error Caused by the Wrong Forecasting Method

Traditional forecasting techniques are called extrapolative methods because they try to find any patterns in an item’s demand history and project (extrapolate) that same pattern into the future. The most used extrapolative methods go by the names of exponential smoothing and moving averages. There are variants of each type, intended to match the key characteristics of an item’s demand history. Is demand basically flat? Is there a trend? Is there a seasonal cycle?
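For readers who want the mechanics, here is a minimal sketch of both workhorse methods on an invented demand history: a three-period moving average and simple exponential smoothing with a single level term (no trend or seasonal component). The series and the smoothing constant are illustrative.

```python
# Minimal sketch of two extrapolative methods: moving average and simple exponential smoothing.
import pandas as pd

demand = pd.Series([102, 98, 110, 95, 105, 99, 108])   # illustrative demand history

moving_avg_forecast = demand.rolling(window=3).mean().iloc[-1]   # average of the last 3 periods

alpha, level = 0.3, demand.iloc[0]
for y in demand.iloc[1:]:
    level = alpha * y + (1 - alpha) * level   # S_t = alpha * y_t + (1 - alpha) * S_(t-1)
ses_forecast = level                          # next-period forecast is the latest smoothed level

print(round(moving_avg_forecast, 1), round(ses_forecast, 1))
```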

However, where there is choice, there is the possibility of error. Choosing an extrapolative method that misses trend or seasonality is sure to create avoidable forecast error, as is one that wrongly assumes trend or seasonality.

“Using classical extrapolative methods on intermittent data is asking for trouble.”

Further, extrapolative methods are designed to work with data that are “regular,” which is to say non-intermittent. Intermittent data have a large percentage of zero demands, with random non-zero demands mixed in. Spare parts and big-ticket, slow-moving items are usually intermittent.

High-volume items like CPG products are usually non-intermittent. Intermittent demand data requires specialized forecasting methods, such as those based on Markov modeling and statistical bootstrapping. Using classical extrapolative methods on intermittent data is asking for trouble.

Even when the assumptions underlying a forecasting method are satisfied by an item’s demand history, the method might still be considered “wrong” if there is a better method available. In some cases, methods based on regression analysis (also called causal modeling) can outperform extrapolative methods or specialized methods for intermittent demand. This is because regression models leverage data other than an item’s demand history to forecast future demand.

“Although regression models have great potential, they also require greater skill, more data, and more work.”

Although regression models have great potential, they also require greater skill, more data, and more work. Unlike extrapolative and intermittent-demand methods, they are not available in software as automatic procedures. The first problem is to determine what outside factors drive demand. Then one must acquire historical data on those factors to use as predictor variables in a regression equation. Then one must separately predict all those predictors. This process demands a level of statistical sophistication that is usually lacking among Demand Planners, opening up the possibility of error.

Pro tip: Any proposed statistical forecasting method should be benchmarked against the simplest method of all, known as the naïve forecast. If the data are non-seasonal, then the naïve forecast boils down to “tomorrow’s demand will be the same as today’s demand.” If the data are seasonal, it might be something like “next April’s demand will be the same as this April’s demand.” If a fancy method can’t do better than the naïve method (and sometimes they can’t), then why use it?
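A sketch of that benchmarking step is below, with invented actuals and candidate forecasts and MAPE as the accuracy measure. The numbers are made up purely to show the comparison.

```python
# Minimal sketch: benchmark a candidate model against the naive forecast using MAPE.
import numpy as np

actual    = np.array([100.0, 104, 98, 107, 101, 110])
candidate = np.array([101.0, 103, 100, 104, 103, 106])   # output of some fancier model
naive     = np.roll(actual, 1)                            # "tomorrow's demand equals today's"

def mape(y, f):
    return np.mean(np.abs(y - f) / y) * 100

# skip the first period, where the naive method has no prior actual to use
print(f"Candidate MAPE: {mape(actual[1:], candidate[1:]):.1f}%")
print(f"Naive MAPE:     {mape(actual[1:], naive[1:]):.1f}%")
```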

3. Error Caused by Flaws in the Forecasting Process

Forecasting always starts out as an individual sport but usually includes a team component. Each phase can go wrong. We’ve already discussed errors caused by individual forecasters, such as deciding to use the wrong model or feeding the model data of poor quality.

Forecasting always starts out as an individual sport but usually includes a team component.

The team component usually plays out in periodic Sales and Operations Planning (S&OP) meetings. In these gatherings, various relevant departments gather to argue out what the company’s official forecast will be. While the aim is to achieve consensus, the result may work against the goal of reducing forecast error.

Participants often come to these meetings with their own competing forecasts. The first mistake may be trying to pick just one as the “official” forecast for all. Various functions – Marketing, Sales, Production, Finance – often have different priorities and different planning tempos. For instance, Finance may need quarterly forecasts, but production might need weekly forecasts.

These differences in forecast horizon imply different levels of aggregation, which can greatly influence the choice of a forecasting method. For example, day-of-week seasonality in demand may be critical for Production but irrelevant for Finance.

Assuming there are competing forecasts at the same time scale, the second mistake may be the way these forecasts are evaluated. At this stage, relative accuracy is usually the deciding criterion. The mistake is not recognizing this as an empirical question that cannot be settled by arguments about relative expertise or sophistication.

Too often, companies do not take the time to acquire and analyze retrospective assessments of forecast accuracy. If the task is to forecast next month’s demand using a certain technical approach, how has that approach been doing? Forecasting software often includes the means to do this analysis, but it is not always exploited when available. If it is not available, it should be made so.

“S&OP meetings often fail when the participants suggest changes to statistical forecasts.”

S&OP meetings often work, or fail, when the participants suggest changes to statistical forecasts. Since statistical forecasts are inherently backward-looking, these management overrides should, in principle, reduce error by accounting for factors like future promotions or market conditions that are not encoded in an item’s demand history. The third mistake is failing to monitor and confirm their value. Many of us believe we have a “golden gut” and can adjust forecasts without risk. Not necessarily true; trust but verify.

 

 

Using Social Media Data To Improve Forecasts

Web 2.0 is a term that’s been used often since the early 2000s. While the term itself is ambiguous, it has in many ways come to mean the socialization of the web. Compared to the static data put up by programmers back in the day, Web 2.0 is the connection of people – a new world of content being consumed and messages being communicated.


Everyone has seen the statistics – over 90% of today’s data was created in the past two years. Think about how many website clicks, interactions, and consumer transactions are being created every day. Consider the 300,000 social media status updates, the 140,000 photos uploaded, and the 500,000 comments made every minute. Add to this the Internet of Things with its constant real-time transmissions, and you’ll have a good appreciation of the speed at which data are being created.

Harvesting this information can be a gold mine for organizations and a game changer for business forecasting and demand planning.

Predictive analytics is helping us unlock the value of social data and Web 2.0. Traditionally, demand planning has focused on sales forecasts generated using only internal order history and data, much of which is over 3 years old. This means that many companies are missing out on over 90% of the available data and the insights included in it.

Over the past few years, more and more demand planning teams have migrated to predictive analytics and are deriving faster and better results. They are moving to the next stage in demand planning and meeting the current revolution in social media and data. And the results are measurable improvements in forecast accuracy and insight into consumer behavior.

Improving Forecasts With Social Media Data

In one study, Antonio Moreno, Associate Professor of Business Administration at Harvard Business School, looked at data from Facebook to improve forecast accuracy. Antonio and his team worked with an online clothing brand and compared forecasts that incorporated external social data to those without.

Over a seven-month period they gathered data from over 171,000 Facebook users and produced two sets of sales forecasting models: the baseline forecast, which included only internal company information, and a second forecast that combined internal and social media data.

Using only standard time series modeling like exponential smoothing, averaging, and other methods, the company’s existing sales forecasts for lag 1 had a MAPE of 12%. This was the researchers’ best-performing baseline model. It took into account seasonality as well as internal causal data on the company’s sales and advertising campaigns.

Adding in information from social media brought the error down to 7–9%. The new models used social media comments and natural language processing to categorize each comment as positive, negative, or neutral. Then it combined this information with the internal forecast using a neural network to greatly improve forecast accuracy.

It Can Be Less Complicated Than You Think

Even without creating complex coding for natural language processing, companies are seeing benefits from using more data and existing capabilities. A CPG company that also does direct-to-consumer sales found a 15%-25% improvement in its item forecasts by looking at the number of new comments and where they were being made. For them, it wasn’t about whether a comment was good or bad, but rather that any comment was made and the volume of comments for a given item.

Major Improvements In Forecast Accuracy

Using this easy-to-obtain external data, they developed a simple regression model to incorporate into the forecasting process. With close to 28,000 SKUs, they saw forecast improvement in over 80% of their SKUs. The overall WMAPE went from 42% to under 35%. Using location tags to mark where the comments came from, they were able to improve location forecasts by 40%, thereby greatly reducing inventory mix issues between locations.
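The article does not disclose the company’s actual model, but the general idea, an ordinary least squares regression on prior sales and comment volume, scored with WMAPE (sum of absolute errors divided by sum of actuals), can be sketched as follows. All numbers and features are invented for illustration.

```python
# Illustrative sketch only: regress item demand on last period's sales and comment volume,
# then score the fitted values with WMAPE. Data are made up.
import numpy as np

# feature columns: last period's sales, comment count
X = np.array([[100, 5], [120, 9], [90, 2], [130, 14], [110, 6]], dtype=float)
y = np.array([110, 135, 92, 150, 118], dtype=float)          # actual demand

X1 = np.column_stack([np.ones(len(X)), X])                    # add an intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)                 # ordinary least squares fit
forecast = X1 @ coef

wmape = np.sum(np.abs(y - forecast)) / np.sum(y) * 100        # weighted MAPE in percent
print(f"WMAPE: {wmape:.1f}%")
```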

It’s not just about comments either. Social media users publicly share metadata such as the user’s location, language spoken, biographical data, and/or shared links. One retail company used this gold mine to create theoretical consumer profiles and then forecast based on those new clustered profiles. This allowed them to better determine trends within categories and fine-tune overall item forecasts. With over 100,000 SKUs they realized a 7% improvement in WMAPE.

 

 

For more information on making the most of the data available to your organization, get a copy of Eric’s book, Predictive Analytics For Business Forecasting & Planning. Giving you the tools to adapt to the data age, this is your guidebook to demand planning 2.0. Get your copy.

What Happened To The Data Gathering Step In S&OP?

The year is 1991. Nirvana just released Nevermind, Baywatch is the hottest show on TV, and it’s your job to gather data to begin the next month’s S&OP cycle. As part of the S&OP process, you need to find the data you need, sort it out, and start the very manual process of figuring out what it all means.


Jump ahead 30 years and data is now fed in almost real time; we collect it in data repositories and lakes and store it in the cloud. We glean insight into future demand from this data with minimal effort, and it feeds every function at the press of a button.

Traditionally, data gathering was recognized as the first step in the monthly S&OP cycle. I still see many textbooks and consultant diagrams express this as a formalized, first process step. Today, in the real world, however, data gathering is no longer a formalized process step in S&OP but rather a part of every function and process within the organization.

In recent IBF surveys, less than 3% of respondents reported using a formalized data gathering step. A larger percentage did report components such as statistical forecasting, baseline plans, month-end close, report building, or other data-related efforts. As far as a physical data gathering step is concerned, it is a relic – a throwback to outdated technology and processes.

This is not to diminish the importance of timely, accurate, and meaningful data in an S&OP process. Data is still the oil of good S&OP.  The only difference now is that you don’t need to mine it yourself. We can and should rely on technology, data warehouses, business intelligence and advanced planning systems, and begin to centralize data collection and governance that enable every business process and function.

Even though we may no longer have a formal data gathering step, data is still the starting point for our S&OP cycle each and every month.

S&OP Still Runs On Snapshots Of Data

Even though data today is dynamic and fluid and being updated in almost real time, for planning and S&OP we need static snapshots for comparisons and continuity. In the accounting world, a monthly closing process in your regular accounting procedures ensures that your numbers are reliable, stable, and accurate. We have a similar responsibility in monthly S&OP planning.

It would be next to impossible to track performance if numbers bounce around when new information is reported, or someone incorporates past information that wasn’t recorded in a timely fashion. Closing the books each month for S&OP and the data used for comparison should be done consistently and with discipline.

Whether it is a financial close or capturing inventory dollars or orders, there needs to be a beginning and end of every period we operate in.

We need to have a consensus-derived number that drives the process. While the forecast may be updated weekly or even daily, our plans need to be a snapshot of one of those forecasts in a monthly bucket. We are not blind to the uncertainties surrounding that static number, but plan for them at a strategic and tactical level.

How Should We Approach Data In 2021?

With data becoming part of every aspect of our S&OP process, it is easy to become complacent about it.  Even though we may not call data a formalized step, it still needs to be part of the S&OP culture that supports data-driven decision making. We need to use data to help drive evidence-based decisions instead of those built on gut feel.

Data-driven S&OP processes recognize that the insights and recommendations that the data holds can help them make stronger, more informed decisions that move the company in the right direction. As they get results, they can look back and see which strategies led to the best outcomes to improve decision making going forward.

To Make Decisions In Future, You Need To Collect Data Now

Becoming data-driven involves more than technology and using the data you have. A solid S&OP process should continue to look at other data sources and insights that may provide additional information. Even if you may not be using the data today, consider what is available and what you can collect and store for later use.

As you continue your S&OP journey, you may find new uses for the data you currently hold or that new technology can extract value from previously untapped data streams. If you want to use data to drive decision making in future, you need to be collecting it now.

There is no question we are dealing with more data in a greater variety of formats, and it is coming at us faster than ever. With the onslaught of big data and the need to convert this data into insights, we require processes and techniques to manage not only the data we have now, but the data that is yet to come.

Inside the S&OP process we use this data to help build each step of the process and glean insights to make better decisions along the way. While we are no longer calling for a formal data gathering step, for a successful S&OP we are calling for a company to have in place a centralized data governance, collection and analysis process to help advance to the next level of S&OP.

For more insight into forecasting and planning best practices, join me at IBF’s Business Forecasting, Planning & S&OP Conference in Orlando, held from October 19-22 at the Wyndham Orlando Resort. The biggest and best event of its kind, it’s your opportunity to learn best practices in S&OP, demand planning and forecasting, and network and socialize in a fantastic setting. See here for details.

How Business Forecasting & Predictive Analytics Are Merging

Business forecasting and predictive analytics are merging to leverage Big Data as a growth driver.

Predictive analytics does not have to be complicated and Demand Planners can learn these models and methods to drive business insight.

Organizational processes to support the application of predictive analytics insights are arguably a bigger challenge than the models. 


IBF spoke to Eric Siegel, author of Predictive Analytics: The Power To Predict Who Will Click, Buy, Lie, Or Die and former Columbia Professor, who revealed just what predictive analytics is and how it crosses over into business forecasting.

“Predictive analytics is basically applications of machine learning for business problems”, says Siegel. “Machine learning learns from data to render a prediction about each individual [thing being examined].” That individual thing can be a customer, product, machine, or any number of things.

When asked why predictive analytics is the latest evolution in information technology, Siegel responded, “Because predicting by individual case is the most actionable form of analytics, as it directly informs decisions for marketing, fraud detection, credit risk management etcetera”.

But How Does Predictive Analytics Actually Work?

“Data encodes the collective experience of an organization so predictive analytics is the process of learning from that experience. You know how many items you sold, which of your customers cancelled, or which transaction turned out to be fraudulent.”

Siegel continued, “You know all this – that’s the experience, and you learn from that experience and the number crunching methods derive patterns. And those patterns are pretty intuitive and understandable. They could be business rules. For example, if a customer lives in a rural area, has these demographic characteristics and has exhibited these behaviors, then they might have a 4 times more likely chance of buying your product than the average.”

“That may be a relatively small chance, but when improving something like mass marketing, finding a segment that is 4 times more likely to buy than the average has a dramatic impact on business performance.”

It is clear then, that by identifying patterns in data, predictive analytics can reduce risk and identify valuable commercial opportunities.

Predictive Analytics Meets Business Forecasting

“There is a continuum between forecasting and predictive analytics”, Siegel notes. But he does highlight key differences in their current applications:

• Forecasting is about a singular prediction, i.e., about sales in the next quarter or who will win a political election.
• Predictive analytics renders a predictive score for each individual whether it is a consumer, client or product, and as such provides insight into how to improve operations relating to marketing, fraud detection, credit risk management etc. more effectively.

Siegel laments the current disconnect between the two fields: “There should be a lot more interaction between what are two very siloed industries but have a lot of the same concepts, a lot of the same core analytical methods, and a lot of the same thinking. Both belong under the umbrella known as ‘data science’”.

Ultimately, both forecasting and predictive analytics serve to gain business insight but approach it from different starting points. Every business decision starts with a lag between what you know now and what occurs. Whether you’re forecasting sales or the likelihood someone will buy something in response to a marketing initiative, you’re generating a prediction.

Siegel said of the similarities between forecasting and machine learning, “the methods on the business application side include decision trees, logistic regression, neural networks and ensemble models while forecasting uses time series modeling, but there are ways these two classes really do interact and really build on one another”.

Predictive Analytics Isn’t Scary

When challenged that complex predictive analytics methods can scare people off, Siegel insists that “they’re totally intuitive” and that machine learning and predictive analytics can be “accessible, understandable, relevant, interesting, and even entertaining”. That should reassure Demand planners looking to adopt predictive analytics methods and models.

Talking of the apparent complexity of machine learning models, Siegel commented that even neural networks, which represent the more advanced modeling on the predictive analytics spectrum, are modular, and each of their components is in fact very simple.

Even if the model as a whole is difficult to fully understand (even for the people who invented them) you can test them and see how well they work, meaning that regardless of how complicated the models are to understand, their actual application is relatively straightforward.

Whether it’s through his Dr. Data YouTube channel (complete with rap videos), his book, or his Coursera program, Siegel is on a mission to make predictive analytics accessible. When it comes to the data that predictive analytics uses, he again highlights the simplicity: “It can be as simple as a two-dimensional table on an Excel spreadsheet where each row is an example and each column is an independent demographic or behavioral variable”.

How Can Demand Planners Start Using Predictive Analytics?

It goes without saying that training in data science and predictive analytics is necessary when it comes to demand planners applying these techniques. Most of the training available on predictive analytics is technical, however, and that’s just part of the equation, warns Siegel: “There’s another side to machine learning if you’re going to make business value out of it which is the organizational process – the way you’re positioning the technology so it’s not just a cool, elegant model but is actually actionable and will actually be deployed.”

That’s a theme that Demand Planners will recognize all too well and it’ll come as no surprise that supporting process and culture are vital to leveraging predictive analytics insight in an organization, “Organizational requirements like planning, greenlighting, staffing, and data preparation are foundational requirements.”


One of the key themes raised by Eric Siegel is that forecasting and predictive analytics are merging to meet the business needs of today. To find out more about the future of these fields and how they impact demand planners and forecasters, check out Eric Wilson’s upcoming book, Predictive Analytics For Business Forecasting, published by the Institute of Business Forecasting, which is available to preorder now.

To get up to speed with the core concepts underlying predictive analytics, head over to Eric Siegel’s Machine Learning Course on Coursera.

How Much Data Is Enough In Predictive Analytics?

If we can gain insights from just a small amount of internal structured data, then how much more could we glean from Big Data? I’m talking that external mass of structured and unstructured data that is just waiting to be collected and analyzed.

But there’s a balance between not enough data and too much. What’s the right amount of data to work with as a demand planner or data scientist?

There is a debate about how much data is enough and how much data is too much. According to some, the rule of thumb is to think smaller and focus on quality over quantity. On the other hand, Viktor Mayer-Schönberger and Kenneth Cukier explained in their book Big Data: A Revolution That Will Transform How We Live, Work, and Think, that “When data was sparse, every data point was critical, and thus great care was taken to avoid letting any point bias the analysis. However, in many new situations that are cropping up today, allowing for imprecision—for messiness—may be a positive feature, not a shortcoming.”

The obsession with exactness is an artifact of the information-deprived analog era.

Of course, larger datasets are more likely to have errors, and analysts don’t always have time to carefully clean each and every data point. Mayer-Schönberger and Cukier have an intriguing response to this problem, saying that “moving into a world of big data will require us to change our thinking about the merits of exactitude. The obsession with exactness is an artifact of the information-deprived analog era.”

Supporting this idea, some studies in data science have found that even massive, error-prone datasets can be more reliable than simple and smaller samples. The question is, therefore, are we willing to sacrifice some accuracy in return for learning more?

Like so many things in demand planning and predictive analytics, one size does not always fit all. You need to understand your business problem, understand your resources, and understand the trade-offs. There is no rule about how much data you need for your predictive modeling problem.

The amount of data you need ultimately depends on a variety of factors:

The Complexity Of The Business Problem You’re Solving

Not necessarily the computational complexity (although this is an important consideration). How important is precision versus information? You should define the business problem and then select the closest possible data to achieve that goal. For example, if you want to forecast the future sales of a particular item, the historical sales of that item may be the closest to that goal. From there, other drivers that may contribute to future sales or explain past sales should come next. Attributes that have no correlation to the problem are not needed.

The Complexity Of The Algorithm

How many samples are needed to demonstrate performance or to train the model? For some linear algorithms, you may find you can achieve good performance with a hundred or even a few dozen examples per class. For some machine learning algorithms, you may need hundreds or even thousands of examples per class. This is true of nonlinear algorithms like random forests or artificial neural networks. In fact, some algorithms, like deep learning methods, can continue to improve in skill as you give them more data.
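One practical way to answer the “how many samples” question is to fit the same algorithm on growing slices of the training data and watch the held-out error level off. The sketch below does this with scikit-learn on synthetic data, purely as an illustration; the model, sizes, and data are assumptions, not a recommendation.

```python
# Minimal sketch: a crude learning-curve check of how error changes with training size.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                                  # synthetic predictor variables
y = X[:, 0] * 3 + X[:, 1] ** 2 + rng.normal(scale=0.5, size=2000)

X_train, X_test = X[:1500], X[1500:]
y_train, y_test = y[:1500], y[1500:]

for n in (50, 200, 800, 1500):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train[:n], y_train[:n])
    err = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{n:5d} training rows -> test MAE {err:.2f}")
```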

How Much Data Is Available

Are the data’s volume, velocity, or variety beyond your company’s ability to store, process, or use it? A great starting point is working with what is available and manageable. What kind of data do you already have? In Business-to-Business, most companies are in possession of customer records or sales transactions. These datasets usually come from CRM and ERP systems. A lot of companies are already collecting, or beginning to collect, third-party data in the form of POS data. From here, consider other sources, both internal and external, that can add value or insights.

Summary

This does not settle the debate, and the right amount of data is still unknowable. Your goal should be to continue to think big and work with what you have, gathering the data you need for the problem and the algorithm at hand.

When it comes to gathering data, it is like planting a tree: the best time to start was ten years ago, and the second-best time is now. Focus on the data available and the insights you have today while building the roadmap and capabilities you want to achieve in the future. Even though you may not use it now, don’t wait until tomorrow to start collecting what you may need for tomorrow.

 

 

Overcoming The Challenges of Big Data

We can no longer ignore data. Now that we have begun to define it and find new ways of collecting it, we see it everywhere and in everything humans do. Our current output of data is roughly 2.5 quintillion bytes a day and as the world becomes ever more connected with an ever-increasing number of electronic devices, it will grow to numbers we haven’t even conceived of yet. 

We refer to this gigantic mass of data as Big Data. First identified by Doug Laney, then an analyst at Meta Group Inc., in a report published in 2001, Big data has commonly been defined as “information that is high-volume, high-velocity, and/or high-variety beyond normal processing and storage that enables enhanced insights, decision making, and automation”. 

The problem is that “high volume” and “normal” are relative to your company size and capabilities. For this reason, I prefer to look at Big Data as a continual growth in data Volume, Velocity, and Variety beyond your company’s ability to store or process or use it.

The Problem With Big Data

The challenge with the sheer amount of data available is assessing it for relevance. The faster the data is generated, the faster you need to collect and process it. Not only that, data can be structured in many different ways and comes from a wide variety of sources that need to be tied together and sorted out. And finally, when we talk about Big Data, we tend to think of it as raw information and not about the strategies to deal with it or the tools to manage it.

[Credit: Doug Laney]

Volume, Velocity and Variety – dubbed the three Vs – are used to describe Big Data according to three vectors and are key to understanding how we can measure big data compared to traditional datasets or collection methods.

Volume

Big data is about volume and refers to the large amount of data involved. The size of available data is growing at an increasing rate. If there was ever “small-data”,  it was generated internally from enterprise transactional systems and stored on local servers. Today, businesses are constantly collecting data from many different outlets like social media, website lead captures, emails, eCommerce and more. This has begun to outgrow an organization’s capabilities to manage these larger volumes of data – a major issue for those looking to put that new data to use instead of letting it go. If this sounds familiar, you are dealing with Big Data, and it’s probably a big headache.

More data sources creating more data combine to increase the volume that has to be analyzed. The world holds an enormous amount of data, possibly an incomprehensible amount. With over 90% of today’s data being generated in the past 2 years, that comes to about 2.5 quintillion bytes daily. Perhaps 10 or 15 years ago, terabytes qualified as high-volume data, but these days you’re not really in the Big Data world unless you’re dealing with petabytes (1,000 TB) or exabytes (1 million TB).

To deal with these larger volumes of data, companies are moving from disparate data sources to data lakes, warehouses, and data management systems. Storage is shifting from local servers to the cloud and external partners like Amazon and others. For processing, we are considering tools from the Apache ecosystem such as Hadoop. Business intelligence software for data cleansing and data visualization is becoming more prevalent. And in predictive analytics, we are considering new methods and approaches to analyze larger sets of data and capture greater insights.

Velocity

Velocity measures how fast the data is coming in. Big Data isn’t just big; it’s growing fast. It’s also coming in at lightning speed and needs to be processed just as quickly. In the olden days (3 to 5 years ago), companies would usually analyze data using a batch process. That approach works when the incoming data rate is slower than the batch processing rate and when the result is useful (considering there’s a delay). But with the new sources of data and the need to be more agile in decision making, the batch process breaks down. The data is now streaming into the server in real time, in a continuous fashion and the result is only useful if the delay is very short.

Think about how many website clicks, consumer transactions and interactions, or credit card swipes are being completed every minute of every day. Consider the sheer number of SMS messages, the 300,000 social media status updates, the 140,000 photos uploaded, and the 500,000 comments made every minute. Add to this the Internet of Things and the constant real time transmissions and you’ll have a good appreciation of the speed at which data is being created.

We need real time tools (or close to real time) to collect, analyze, and manage all this data, then act on it. Demand sensing is the key to this. Demand sensing is sensing demand signals, then predicting demand, and producing an actionable response with little to no latency.

According to the Summer 2012 issue of The Journal of Business Forecasting, demand sensing sorts out the flood of data in a structured way to recognize complex patterns and to separate actionable demand signals from a sea of “noise.”

Besides this, velocity also calls for building Big Data solutions that incorporate data caching, periodic extractions and better data orchestration, and deploying the right architecture and infrastructure.

Variety

Data is big, data is fast, but data also can be extremely diverse. Data variety refers to all the different types of data available. Data was once collected from one place (more than likely internal) and delivered in one format. It would typically be in the form of database files such as Excel, CSV and Access. Now there is an explosion of external data in multiple forms and unstructured data that doesn’t fit neatly on a spreadsheet. This, more than any of the other vectors, can quickly outpace an organization’s ability to manage and process their data.

Variety is one of the most interesting developments in data as more and more information is digitized. A few decades ago, data would’ve been in a structured database or a simple text file. Nowadays we no longer have control over the input data format. Consider the customer comments, SMS messages, or anything on social media that helps us to better understand consumer sentiment. How do we bring together all the transactional data, POS data from trading partners, and sensor data we collect in real time? Where do we put it?

Although this data is extremely useful to us, it can create more work and requires more analytics to decipher it so it can provide insights. To help manage the variety of data there are also a variety of techniques for resolving problems. We no longer just extract and load, we are now importing data into universally accepted and usable formats such as Extensible Markup Language (XML). To sort through the volume and variety of data we are using data profiling techniques to find interrelationships and abnormalities between data sources and data sets.

The Bottom Line

Big data is much more than just a buzzword or simply lots of data. It is a way to describe new types of data and new potential for greater insights. The three V’s do well to describe the data, but we still need to remember that even Big Data is made up of the same small building blocks. For Big Data to be valuable, we need more data coming in faster from multiple sources – and we need the systems, analytics, techniques, and people to manage that process and derive value from it.

[Editor’s note: The 3 Vs in Big Data concept is taken from “3D Data Management: Controlling Data Volume, Velocity, and Variety”, Gartner, file No.949. 6]

 

 

 

Forecaster’s & Planner’s Guide To Data

In supply chain and operations, raw materials are substances that are used in the  manufacturing of goods. They are the commodities to be transformed into another state that will either be used or sold. For algorithms or predictive models, data is the raw material that every insight begins with.

A piece of data, or a collection of it, can help drive a predictive analytics process and uncover insights. Data are the building blocks and inputs, and without data it is nearly impossible to find answers and make decisions. That said, data is not the destination. Data is not a decision. And, while data may take on many forms and be used for many things, data by itself is not insight.

Data Is Information In Its Raw Form

Information is a collection of data points that we can use to understand something about the thing being measured.

Insight is gained by analyzing data and information to understand what is going on with a particular thing or situation. The insight can then be used to make better business decisions.

Data on its own is meaningless. It is just a raw material that needs to be transformed, analyzed, and turned into understanding.

Data on its own is meaningless. It is just a raw material that needs to be transformed, analyzed, turned into understanding and shared by people with the skills, training and commitment to do so. At the same time, predictive modeling or any business insight without data is equally as meaningless. No matter how skilled you are, or how good your model is, it is like trying to produce a finished product without the proper parts.

There’s no arguing the power of data in today’s business landscape. Businesses are analyzing a seemingly endless array of data sources in order to glean insights into just about every activity – both inside their businesses and out. Right now, it seems that enterprises cannot get their hands on enough data for analysis purposes. They are looking at multiple sources and forms of data to collect and use to learn more about customers and markets, and predict how they will behave.

What Are The Different Types Of Data?

We can think about data in terms of how it is organized, as well as the source. Data may be either structured or unstructured and the source can be either internal or external.

Forecasting data types

Knowing what types of data you have, and where they come from, is crucial in the age of Big Data and analytics.

Internal Sources: Internal sources of data are those which are procured and consolidated from different branches within your organization. Examples include: purchase orders, internal transactions, marketing information, loyalty card information, information collected by websites or transactional systems owned by the company, and any other internal source that collects information about your customers.

Before you begin to look for external sources, it’s critical to ensure that all of a business’s internal data sources are mined, analyzed and leveraged for the good of the company. While external data can offer a range of benefits (that we’ll get into later), internal data sources are typically easier and quicker to collect, and can be more relevant for the company’s own purposes and insights.

External Sources: External sources of data are those which are procured, collected, or originate outside of the organization. Examples include external POS or inventory data from a retail partner, paid third-party information, demographic and government data, data from other external sites, web crawlers, macroeconomic data, and any other external source that collects information about your customers. Collecting external data can be difficult because the data has much greater variety and the sources are far more numerous.
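As a hedged sketch of combining internal and external sources, the snippet below joins an internal monthly demand history to one external macroeconomic series. The file names, column names and the choice of a consumer-confidence index are assumptions made for the example only.

```python
# A hedged sketch of joining an internal demand history with one external
# macroeconomic series. File names, column names and the choice of a
# consumer-confidence index are assumptions made for the example.
import pandas as pd

demand = pd.read_csv("monthly_demand.csv", parse_dates=["month"])       # internal
macro = pd.read_csv("consumer_confidence.csv", parse_dates=["month"])   # external

combined = demand.merge(macro, on="month", how="left")

# A first, rough look at whether the external signal moves with demand.
print(combined[["units_sold", "confidence_index"]].corr())
```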

Structured Data: Structured data is highly organized and easy to digest, and generally refers to data with a defined length and format. It is sometimes thought of as more traditional data and may include names, numbers, and information that is easily formatted into columns or rows. Because it is already organized, structured data is largely managed with legacy analytics solutions, and it may be collected, processed, manipulated and analyzed using traditional relational databases. Before the era of big data and new, emerging data sources, structured data was what organizations used to make business decisions.

Unstructured Data: Unstructured data does not have an easily definable structure; it is unorganized and raw, and typically isn’t a good fit for a mainstream relational database. It is essentially the opposite of structured data and includes all other data generated through a variety of human activities. Common examples are comments on web pages, word processing documents, videos, photos, audio files, presentations, and many other kinds of files that do not fit into the columns and rows of an Excel spreadsheet.

These new data sources are made up largely of streaming data coming from social media platforms, mobile applications, location services, and Internet of Things technologies. Because unstructured data sources are so diverse, businesses have much more trouble managing them than traditional structured data. As a result, companies are being challenged in ways they weren’t before and are having to get creative in order to pull relevant data for analytics.
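A small, illustrative contrast may help here: structured order data drops straight into rows and columns, while an unstructured customer comment has to be reduced to features before it can be analysed. The sample records and the crude keyword rule below are assumptions for the sketch.

```python
# Structured order data fits rows and columns directly; an unstructured
# customer comment must be reduced to features first. Sample values and the
# crude keyword rule are illustrative assumptions.
import pandas as pd

# Structured: already fits rows and columns.
orders = pd.DataFrame({
    "order_id": [1001, 1002],
    "sku": ["A-100", "B-200"],
    "quantity": [24, 6],
})

# Unstructured: free text from a web page, survey or email.
comment = "Love this product, but the last delivery arrived two weeks late."

# One (very naive) way to turn the text into something structured.
signal = {
    "mentions_delay": int("late" in comment.lower()),
    "positive_tone": int("love" in comment.lower()),
}

print(orders)
print(signal)
```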

Don’t Get Left Behind When It Comes To Data

You may believe that only very large companies with massive funding or technology are implementing data analytics and pushing the limits of the types of data they collect. While 90% or more of the data companies hold today is internal, structured data, it is important to understand that 90% plus of the data ‘out there’ (external data) is unstructured.

“It is important to understand that 90% plus of external data is unstructured”

With this increase in data and the need to be competitive, along with the expansion of data storage capabilities and data analytics tools, the playing field has leveled. While data is not insight, new forms and types of data have created demand for new insights, and this focus on data has embedded itself in the culture of more and more businesses.


Eric will reveal how to update your S&OP process to incorporate predictive analytics to adapt to the changing retail landscape at IBF’s Business Planning, Forecasting & S&OP Conferences in Orlando (Oct 20-23) and Amsterdam (Nov 20-22). Join Eric and a host of forecasting, planning and analytics leaders for unparalleled learning and networking.

First Day Of S&OP Implementation? Calm Down & Start With Data https://demand-planning.com/2018/05/14/first-day-of-sop-implementation-calm-down-start-with-data/ https://demand-planning.com/2018/05/14/first-day-of-sop-implementation-calm-down-start-with-data/#respond Mon, 14 May 2018 14:57:02 +0000 https://demand-planning.com/?p=6872

It’s your first day at a new company and you’re tasked with implementing S&OP. What’s the first thing you do? The best starting point is to figure out where to get the data for your forecast, and how you’re going to prepare it for input.

When starting an S&OP implementation, it is essential to have the support of senior management and key people in each function to put all the pieces together. We’re talking Sales, Operations, Finance, Logistics and Purchasing. This collaboration is crucial for many reasons, but primarily because this is how we get data, and without data you don’t have an S&OP process.

The S&OP process must be aligned with Finance, making sure data inputs for both forecasts come from the same source. But before you start thinking of preparing your forecast, it is necessary to understand the following:

1. Know The Financial Performance Of The Company

The income statement of the company provides insight into what is really going on in the company. I consider it highly advisable to spend some time studying these statements and talking with the Finance people to identify the burning issues of the moment that are driving the decision-making process. Never lose sight of the fact that the S&OP process is a decision-making tool that directly impacts the income statement. Work to build the trust of senior management to reinforce this idea.

2. Know How Finance Uses Financial Statements

If Finance is using this information to plan demand, the company will almost inevitably have planning problems. Using only this information is limiting because it covers just sales, dispatches, and credit and debit notes applied to each client’s account. Dispatches and sales are not enough to plan effectively. The difference between billing and dispatches may be minimal, but either way we need to know what that difference is in order to align the objectives of the business with those of Supply Chain and Operations.

3. Use Finance’s Data For Your Forecast Input

Use the same dispatch/sales information used by the Finance team as the input for your forecast. Why? Because we must use the same data if our forecasts (and subsequent plans) are to align. In every S&OP process, shipments to customers valued in USD is the first information we get from Finance. Use this data as the main input for your sales forecasts. If we skip this step and use Sales’ own data for our capacity planning, we can end up with Production not having the required resources, because Finance has developed the budget using completely different assumptions.
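As a rough sketch of what this looks like in practice, the snippet below shapes a shipment extract (the same series Finance reports) into a monthly, per-customer time series that a forecasting model can consume. The file name and column names are illustrative assumptions.

```python
# A sketch of shaping the shipment data Finance reports into a forecast input.
# "shipments.csv" and its columns (ship_date, customer, usd_value) are
# illustrative assumptions, not a prescribed file layout.
import pandas as pd

shipments = pd.read_csv("shipments.csv", parse_dates=["ship_date"])

# Monthly shipment value per customer: the same series Finance uses,
# now shaped as a time series the forecasting model can consume.
monthly = (
    shipments
    .assign(month=shipments["ship_date"].dt.to_period("M"))
    .groupby(["customer", "month"])["usd_value"]
    .sum()
    .reset_index()
)

print(monthly.head())
```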

Once we have covered these 3 points, we can move on to data management.

Data Management

1. Look At The Data In Different Ways

Break down the data by client, by product, by production plant and so on. This allows us to identify customers and their buying patterns. We can manually check for quirks that, if not picked up on, can create errors in the forecast. One such quirk is a customer who used to buy products from one of your production plants but, for whatever reason, now buys from another. It can look like two different customers in two different locations, but in reality it is the same customer. Another example is a customer changing its business name: it looks like two different customers but is again the same customer. Statistical forecasting without this manual override will not identify these quirks and will result in forecast error that is easily avoidable.

This part of the process is repetitive and, frankly, dull, but it is important at the beginning because we need to cleanse the data that goes into the forecast. The old adage of garbage in, garbage out applies.
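For illustration, here is a minimal sketch of one such manual-override check: spotting a customer whose history is split across two slightly different names or two plants. The sample names and the crude normalisation rule are assumptions, not a recommended standard.

```python
# A sketch of one manual-override check: spotting a customer whose history is
# split across slightly different names or across plants. The sample names and
# the crude normalisation rule are illustrative assumptions.
import pandas as pd

orders = pd.DataFrame({
    "customer": ["Acme Foods S.A.", "ACME FOODS SA", "Birch Retail"],
    "plant": ["Plant 1", "Plant 2", "Plant 1"],
    "units": [120, 80, 45],
})

def normalise(name: str) -> str:
    """Crude normalisation: upper-case, drop punctuation, strip a legal suffix."""
    cleaned = "".join(ch for ch in name.upper() if ch.isalnum() or ch == " ")
    return cleaned.replace(" SA", "").strip()

orders["customer_key"] = orders["customer"].map(normalise)

# Customers whose history is split across more than one raw name or plant.
split = orders.groupby("customer_key").agg(
    raw_names=("customer", "nunique"),
    plants=("plant", "nunique"),
    total_units=("units", "sum"),
)
print(split[split["raw_names"] > 1])
```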

2. Look At Product Mix

Looking at product mix is valuable because it allows us to identify changes in consumer behaviour, drops in demand for a particular product, changes in a specific customer’s behaviour and so on. We can gain insight into what customers are doing and why, relating their behaviour to specific demand-influencing factors.
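A short sketch of one product-mix view follows: each product's share of total units by quarter, which makes shifts in behaviour easier to spot. The file and column names are illustrative assumptions.

```python
# A sketch of a product-mix view: each product's share of total units by
# quarter. "sales_history.csv" and its columns are illustrative assumptions.
import pandas as pd

sales = pd.read_csv("sales_history.csv", parse_dates=["date"])

mix = (
    sales
    .assign(quarter=sales["date"].dt.to_period("Q"))
    .groupby(["quarter", "product"])["units"]
    .sum()
    .unstack(fill_value=0)
)

# Share of the quarterly total, then the quarter-on-quarter change in share.
mix_share = mix.div(mix.sum(axis=1), axis=0)
print(mix_share.round(3))
print(mix_share.diff().round(3))
```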

3. See If New Products Will Be Released

The S&OP process pays special attention to new products. Why? Because there is no historic demand to help us understand how many we’ll sell. We have very little idea how a new product will perform until it is released into the market, by which time the initial planning phase is already completed. This means we must leverage the S&OP process, with its benefit of cross-functional collaboration, to gather as much qualitative insight as we can. We may not have hard data, but this knowledge can help us predict how the product will perform.
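One common quantitative starting point, offered here only as a complement to that qualitative input, is a like-item profile: scale a comparable product's launch curve by the consensus estimate agreed in the S&OP meeting. The analog series and the 70% scaling factor below are purely illustrative assumptions.

```python
# A purely illustrative like-item sketch: scale a comparable product's launch
# curve by the consensus estimate agreed in the S&OP meeting. The analog
# series and the 70% factor are assumptions, not recommendations.
analog_first_year = [120, 180, 220, 200, 190, 185, 180, 175, 170, 165, 160, 155]

# Cross-functional consensus: the new item should reach roughly 70% of the
# analog's volume (an assumption to be revisited as real sales arrive).
relative_size = 0.70

new_product_plan = [round(units * relative_size) for units in analog_first_year]
print(new_product_plan)
```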

With these steps completed, we can start our forecasting process and arrive at numbers we can take into the pre-meeting. In the pre-meeting you’ll sit down with representatives from the Sales team and gain their input on short-term demand and any other demand-influencing factors that salespeople have unique insight into. This qualitative knowledge will help us refine our statistical forecasts. We’ll then be ready to develop a one-number forecast that all functions will work from. And when we have done that, we will have achieved the core component of S&OP: an integrated approach to understanding and fulfilling demand.
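As a final, hedged sketch, the snippet below shows one simple way the pre-meeting input from Sales can be folded into the statistical baseline to produce a single consensus number. The series, the month-3 override and the "override wins" rule are illustrative assumptions rather than a prescribed method.

```python
# One simple, illustrative way to fold the pre-meeting input from Sales into
# the statistical baseline. The series, the month-3 override and the
# "override wins" rule are assumptions, not a prescribed method.
statistical = {"M1": 1000, "M2": 1050, "M3": 1100}

# Sales expects a promotion to lift month 3 and gives an explicit figure.
sales_overrides = {"M3": 1400}

consensus = {month: sales_overrides.get(month, base)
             for month, base in statistical.items()}
print(consensus)   # {'M1': 1000, 'M2': 1050, 'M3': 1400}
```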

