Balancing Supply & Demand: The 5 Core Steps

Alignment of demand and supply has been the subject of extensive research but is still a pain point for many organizations, causing either lost sales on the one hand or excess inventory on the other. Unfortunately, under- or overstocking is often viewed as a binary choice that has to be made, but there is another solution – balancing supply and demand.

Let’s take a look at what under and overstocking means for different functions in the business.

From the point of view of Sales, understocking means:

  • Missing sales targets
  • Not being able to earn bonuses
  • Empty shelves at retail stores, meaning lost sales
  • Risk of paying penalties to contractual retailers
  • Poor customer service which can cause customers to go elsewhere

From the point of view of Supply Chain, overstocking means:

  • Limited warehouse space, causing higher inventory holding costs
  • Increased cost of rent if space is not enough to hold stock
  • Increased risk of product obsolescence if shelf-life is limited
  • Having to offer discounts to clear excess stock, impacting profitability
  • Higher labor costs to manage the stock on a regular basis.

Thus, none of the managers on the supply side want to be overstocked, but salespeople do so they can take advantage of every opportunity in the market. It doesn't have to be either/or, however. Instead, we can find a balance that lets us sell as much as possible without incurring the costs of holding excess stock.

How To Find The Balance Between Over & Understocking

1 – Understand Consumer Demand

The first thing is to understand demand, i.e., what consumers want and where. To do so, companies need to learn what shoppers can afford, what products they prefer and why, and environmental and cultural factors that have an impact on consumer behavior.

For instance, if consumers demand high-end premium products in your store or region, there is no reason to overstock brands that are cheaply made or packaged in boxes with unreadable labels. This calls for historical data so that sales trends, seasonality, and validity in the market can be reviewed periodically. With statistical modeling we can take this sales data and extrapolate future demand. Depending on the industry and product type, companies need to review historical sales and update forecasts daily, weekly, monthly or quarterly.
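As a minimal illustration of that extrapolation step, the sketch below fits a Holt-Winters (triple exponential smoothing) model to monthly sales history and projects the next six months. The file name and column names are hypothetical, and statsmodels is just one of several libraries that could be used:

# A minimal sketch of extrapolating future demand from sales history.
# Assumes a CSV of monthly sales with hypothetical columns 'month' and 'units_sold'.
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

sales = pd.read_csv("sku_monthly_sales.csv", parse_dates=["month"], index_col="month")

# Holt-Winters captures both trend and yearly seasonality in monthly data
model = ExponentialSmoothing(
    sales["units_sold"],
    trend="add",
    seasonal="add",
    seasonal_periods=12,
).fit()

# Extrapolate the next six months of demand
print(model.forecast(6).round())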

This approach is the same for suppliers, who need to analyze the demand coming from each retailer and store, along with consumer characteristics. Overall, these actions enable a much clearer picture of what consumers want and what they don't. When we know this, we have a foundation to start meeting this demand with the necessary supply.

2 – Invest In Your Demand/Supply Planners

A good understanding of demand cannot be achieved with historical data alone. You need good demand and/or supply planners who are knowledgeable about product groups and categories, aware of external factors that may affect consumption, and equipped with knowledge of demand management and forecasting methodologies. Not everything can be found in the data. The impact of a competitor's in-store promotion, cultural factors shaping shopping habits, the background of expatriates in the city/region, etc. are only some of the factors that planners should consider when generating forecasts, planning supply, and setting stock levels.

Demand management is a specific area that includes many techniques, methodologies and nuances unique to the role, thus making education and training crucial. Planners should know which forecasting techniques to use for which data set, how to aggregate forecasts with factors affecting demand, and when to adjust forecasts with qualitative judgment. Being knowledgeable about a particular product category is a competitive advantage for planners because it allows us to understand the likely impact of promotions and competitor activities. It also enables better communication with the sales team, who we rely on for input into the forecasts and customer information.

3 – Forecasts Feed The Supply Plan

Let me ask you the following question: Does your company look to just hit monthly sales targets or to enhance profitability in the long run?

If the goal is just to close the month with sales targets achieved, let the sales team create and approve the forecast. This is how it works at most suppliers and distributors. Don't get me wrong, the contribution of the sales team to any business is incredibly important, but they do not have the expertise to create forecasts that represent true demand. Salespeople have an unconscious habit of being optimistic about sales targets, which must be tempered by data-focused demand planners.

When generating forecasts we need input from our colleagues in Sales and Marketing, Finance, Supply Chain, and perhaps Customer Service, and the optimal way to collaborate is through a Sales and Operations Planning process. This is the forum that allows us to align on aggregated forecast numbers. Supply Chain plays a key role here. Let's take the example of Tesco, the biggest retail chain in the UK: when Tesco handed the responsibility for order replenishment to their Supply Chain directors, it dramatically increased product availability and reduced inventory. This approach enabled both the Supply Chain and category management teams to manage shelf space, promotions and new launch items more efficiently.

4 – Integrate Pareto Analysis Into Your Target Stock Level

Pareto analysis, also known as the 80/20 rule, is a statistical method used for decision making which identifies the 20% of inputs that lead to 80% of the desired output. In demand planning, we're looking to identify the 20% of products that contribute 80% of profitability.

Pareto analysis should be the best friend of planners when it comes to managing inventory. To illustrate, when I was working for Transmed Overseas, a full-service distributor in the Middle East and Africa, we had a single days of stock (DOS) target per brand. This was causing massive fluctuations in stock levels and was triggering not only out-of-stocks (OOS) but also excess inventory and obsolete stock. With Pareto analysis, we first ranked the SKUs of each brand from best to worst performing. Then we categorized the SKUs that generated 75% of sales as class A and the SKUs that generated the next 15% of sales as class B. The final group, C, was the SKUs that generated only 5% of sales. This approach identified our most important products, and the safety stock level for each product was determined according to its category.
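A minimal sketch of that categorization step is below, assuming a pandas DataFrame with hypothetical columns 'sku' and 'annual_sales'; the cut-offs mirror the 75/15/5 split described above and the days-of-stock targets are purely illustrative:

# Pareto (ABC) classification of SKUs by cumulative sales contribution
import pandas as pd

df = pd.DataFrame({
    "sku": ["S1", "S2", "S3", "S4", "S5", "S6"],
    "annual_sales": [500_000, 250_000, 120_000, 60_000, 40_000, 30_000],
})

# Rank SKUs from best to worst performing and compute their cumulative share of sales
df = df.sort_values("annual_sales", ascending=False)
df["cum_share"] = df["annual_sales"].cumsum() / df["annual_sales"].sum()

def classify(cum_share):
    # Class A = SKUs making up the first 75% of sales, B = the next 15%, C = the rest
    if cum_share <= 0.75:
        return "A"
    if cum_share <= 0.90:
        return "B"
    return "C"

df["abc_class"] = df["cum_share"].apply(classify)

# Illustrative days-of-stock targets per class, instead of one DOS number per brand
df["dos_target"] = df["abc_class"].map({"A": 21, "B": 14, "C": 7})
print(df)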

Using Pareto analysis, we not only reduced excess stock by 15 to 20%, but also ensured availability of class A SKUs, which improved customer service by 3%. As valuable as Pareto analysis is, you must also consider lead time, contractual agreements, forecast accuracy, and other factors.

5 – Optimize Order & Replenishment Frequency

If we get our inventory replenishment frequency right, we reap the rewards of lower inventory. It is easy to write but difficult to apply! Of course, there are many factors affecting the right order frequency, such as long lead times, seasonality, forecast accuracy, containerization, promotions, and PIPO (phase in, phase out) practices, but it is doable.

Your starting point is to check whether your lead times are accurate or not. Without a high level of lead time accuracy, any attempt to increase order frequency is a shot in the dark that risks failure and cost. Thus, the supply chain team should work meticulously to track the OTIF (on time, in full) performance of every purchase from each supplier. Once there is a reliable history of lead time accuracy, the team should check forecast accuracy at the SKU level and forecast misses, which are among the invisible inventory costs incurred by companies. Improvements in lead time and forecast accuracy will increase confidence in replenishing products on time and in full at DCs and stores. Following this, containerization should be analyzed, as it has a direct impact on logistics and transportation costs. This responsibility lies with the Supply Chain team, who should compare the cost of inventory holding, forecast misses, and obsolescence versus the savings from logistics and transportation. This cost/savings ratio should inform your ordering frequency.
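As an illustration of that tracking step, the sketch below derives lead time and OTIF measures from a purchase order history. The file, column names, and the OTIF definition used here are assumptions for the example, not a standard your supplier agreements may specify:

# Sketch of supplier lead time and OTIF tracking from purchase order history
import pandas as pd

pos = pd.read_csv(
    "po_history.csv",
    parse_dates=["order_date", "promised_date", "receipt_date"],
)

pos["quoted_lead_time"] = (pos["promised_date"] - pos["order_date"]).dt.days
pos["actual_lead_time"] = (pos["receipt_date"] - pos["order_date"]).dt.days

# On time: received on or before the promised date; in full: received qty >= ordered qty
pos["on_time"] = pos["receipt_date"] <= pos["promised_date"]
pos["in_full"] = pos["received_qty"] >= pos["ordered_qty"]
pos["otif"] = pos["on_time"] & pos["in_full"]

summary = pos.groupby("supplier").agg(
    otif_pct=("otif", "mean"),
    avg_quoted_lt=("quoted_lead_time", "mean"),
    avg_actual_lt=("actual_lead_time", "mean"),
)
print(summary.round(2))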

Amending your order frequency should also involve the marketing or category management teams, because they run promotions that can impact the amount of inventory you need. If promotions are not factored into lead times and not communicated to the Supply Chain and Procurement teams, plans will not include the promotional volume. This not only gives rise to missed sales/understocking, but also to poor customer service, and even penalties at the downstream level depending on your agreements with retailers.

 

 

Business Efficiency Planning For Effective S&OP

Business Efficiency Planning (BEP) works to align all functional areas around common business goals and enable and coordinate decision making to achieve the most efficient plan. To get there we need tools to help understand what decisions need to be made and when, and who needs to make them. Given the complexity of business, this often feels like finding a needle in a haystack.

Process mapping has been used by companies to help to identify bottlenecks by:

  • providing visibility into how processes are carried out;
  • identifying where the processes are executed;
  • identifying who is doing what within the process;
  • revealing how processes affect other processes;
  • and determining why a process is being executed.

In much the same manner, Business Decision Mapping essentially makes the decision-making process visible, illustrating how and when decisions occur so they can be viewed, analyzed, and improved. With such an understanding, stakeholders can align their activities to achieve better execution, communication, and enhanced relationships—all of which help to make decisions more aligned and better than they were before.

Business Decision Mapping Identifies Activities Within Each Process

First developed in Copenhagen with the help of Thomas Holm, a senior partner at Implement Consulting Group, Business Decision Mapping was designed to take separate and sometimes competing processes (such as FP&A, S&OP, PLM and ERM) and streamline them into an efficient business planning process. Using Thomas's lean thinking and the goal of coordinated decision processes, the mapping process makes it easier to identify activities within each process that are not adding value and target them for elimination and/or assimilation into other processes.

Variations have been adapted, and this mapping exercise has been used by executives for individual processes or as a launching step to a comprehensive Business Efficiency Planning (BEP) process. In conducting a mapping exercise, we recommend these seven steps to make it easier. As experience tells us, each is crucial to achieving success and to better understanding your organization's decision-making process.

Step 1 (1 week ahead): Prepare Upfront

Start before the meeting and plan the ‘who, what and where’. This includes devoting some time to deciding who should participate in the session. A simple rule to apply here would be to choose the people who represent different functions involved in the decision-making process. Before the meeting, let people know how to prepare and begin thinking about all the strategic decisions they make weekly, monthly or annually, and even write some of them down. Decide on a suitable place. The mapping exercise should be conducted in a comfortable environment so that you can get the best output from the participants. Similar to a process mapping exercise, you will need a large whiteboard or empty wall you can add paper to, along with post-it notes and markers.

Step 2 (15 – 30 min): Explain The Process

The meeting facilitator should clearly outline the process and guidelines. Start the session by setting goals, defining what kinds of decisions you should focus on, what is meant by the time horizon and frequency of a decision, and the decision matrix. Also, set a time frame – don't let the meeting last forever. A good session should take approximately 3 hours and typically no more than 4.5 hours.

Either before the meeting or during this step, a decision matrix framework is created on a whiteboard or with paper on a wall. It usually has horizontal swim lanes corresponding to different functions or functional processes – most commonly Finance and the FP&A process, Commercial and the PLM process, Supply Chain/Operations and the S&OP process, and the executive team and the ERM or business review process. Vertically, the matrix is divided by the frequency of the decision, usually weekly, monthly, quarterly or annually. It is important that we are not looking at the time horizon of the decision (for example, a strategic decision that impacts 4 months out) but only the frequency with which that decision is made. Time horizon is important for scope, and it is recommended to only look at decisions with time horizons of 3 months or longer; all other operational decisions should be just that – operational and part of day-to-day business.

 

                            | Weekly | Monthly | Annually
Finance (FP&A)              |        |         |
Marketing / Product (PLM)   |        |         |
Supply Chain (S&OP)         |        |         |
Executives                  |        |         |

Step 3 (20 – 30 min): Jump In

Before heading into groups, organizations should insist that each person first tries to come up with their own thoughts and ideas for decision points. Start the process and have participants individually generate decisions that are made, write them down on post-it notes and post them in the appropriate swim lanes, or post them on a flipchart first and then transfer them to the swim lanes of the matrix. These can be any strategic decision over a 3-month horizon that impacts the business and planning process. It may be easier to think of it as any recurring answer you have to give, and to word it as a question or decision point: What will the forecast be? With the right audience, it may be suitable not to limit participants to their own sandboxes and to allow them to brainstorm and add any decision they know is critical to whatever swim lane. When you have finished this step, you should have multiple decisions dotting your matrix, with some duplicates and some you never considered.

[Figure: Business Decision Mapping]

Step 4 (30 – 60 min): Refine The List

In small groups, participants with functional expertise begin to clarify or justify the decisions within their swim lane, removing duplicates or non-critical ones and moving any that happen at a different frequency. For whatever remains, add detail beside or on the note: the key input needed, who is making the decision, who may need the decision to be made, and the meeting it is made in or the mechanism used to make it. Add as much detail as you need or have in the time allotted. All of this helps in future steps but is not absolutely critical in this one and may be added later as well. Finally, you will notice obvious dependencies and linkages where you can begin to literally draw lines between notes or points and highlight similar decisions and inputs used in multiple places.

Step 5 (30 – 60 min): Prioritize

Now, arrange ideas and notes into logical groups, such as similar timings, inputs or people. It is OK to decide together to move some of the notes around to help visualize the groupings – the sticky notes are helpful here because you can easily rearrange them. Next, take all the decision points and work on culling them to come up with the most critical ones to focus on. As a group, decide on the priorities for the top decisions or groupings and their order of importance. This does not mean you need to disregard the ones at the bottom of the list. This becomes a great opportunity to identify overlap, gaps in timing, interdependencies and so on.

[Figure: Business Decision Mapping 2]

 

Step 6 (60 – 90 min) Strategize

Continue as a group to further cluster the strategic decisions around cost, cash, service and process focus. For this, you are looking at which grouping the decision impacts the most. For example, inventory safety stock levels are primarily a cash decision but have some crossover to service and cost. Some discretion on where they fall, and some straddling, is OK, but make an effort to reach a consensus on specific categories. It is here that you also add any missing details for the top decisions: the inputs to the decision, the who and where, and why it is being made. The final objective of this meeting is to decide what will happen next with the top decision points. Define the next steps for combining or improving the decision process – this may include steps such as presenting the ideas to senior management, gathering feasibility data, etc.

Step 7 (post meeting): Create Your Roadmap

Outside of the meeting, create a list of opportunities for improvement around key decision points. Possibly outline a roadmap for combining strategic decisions into Business Efficiency Planning (BEP) core meetings and identify opportunities to eliminate waste and latency in the decision-making process. Assess the impact of the proposed changes on the organization's meeting and decision flow design. Share your findings with the group or a wider audience to ensure consensus. This is also the place to communicate the “low hanging fruit” where natural synergies can occur and to make immediate changes. For the rest, and with the key decision points, develop a business case for implementation and improvements.

Whilst most people are familiar with the concepts of detailed and high-level process or value stream maps, many need clarification on Business Decision Maps. Business Decision Mapping helps companies see how, when, and where key decisions are being made, improve the inputs, and align them better to the organization's strategy. Although its typical purpose is to streamline decisions and eliminate waste, Business Decision Mapping can also be seen from the perspective of adding value. With inputs from the right people and in the right forum, it can add insights that lead to more informed and efficient business decisions, driving value and success in your company.

 

S&OP Kick Off Guide: Part II

In the previous article we talked about the essential requirements for people and processes to ensure successful S&OP. In this second part, we will talk about the initial business decisions that must take place before the implementation process begins.

Understanding The Business

Rather than simply implementing a methodology that involves a cycle of meetings with key people, S&OP leaders and their teams need to understand the business they are in, and understand their products, competitors and trends. Benchmarking data is highly valuable when it comes to staying current with how your products and processes should be performing. [Ed: IBF members get access to world-leading S&OP, forecasting and planning benchmark data.]


Strategic Planning

A mature S&OP process supports the delivery of the strategic plan. With this in mind, it is necessary that S&OP’s objectives are 100% aligned with the business strategy. I have seen poorly-aligned S&OP processes that have not only failed to deliver wide strategic goals, but cost the company a lot of money in the process. I was recently in a situation where a new S&OP team at a midsize company was focused on reducing inventory without analyzing the risk of stockouts on the business. This was a major problem because the company’s strategy was to grow exponentially in almost all product categories. All teams were working towards this goal (including Marketing who were pushing the products aggressively) apart from S&OP. With the strategic misalignment of the S&OP team, many products were unavailable, causing a non-recoverable loss to the business that year.

SWOT Analysis

The S&OP team needs to be clear about the SWOT analysis of the business that will be covered in the S&OP process, primarily to tackle the opportunities and improve on the weaknesses. One recommendation is to have a discussion with key managers to raise these already-known opportunities and pain points and prioritize those to be helped by the S&OP process. With this clarity and prioritization, results from the S&OP will come faster.

Planning Horizon

The planning horizon depends on the type of business, but as we are talking about an immature S&OP process, the shorter the planning horizon the better – when we start out, we want the process to be manageable, and that means not looking too far ahead. Some businesses, however, do not allow for a short planning horizon – you'll need to decide on what is most appropriate at the outset.


Key Performance Indicators

Another point that often ends up being an afterthought are key performance indicators. It is important that these are defined and set up so you can monitor how the implementation process is evolving. This way we can see if we’re on track and make any necessary adjustments. 

You should measure the performance of the following:

Strategies: Service Level (OTIF), Working Capital, Revenue Growth, Margin, P&L. (Measure by region, sales channel, product categories, etc.)

Sales: MAPE (Mean absolute percentage error) or Sales Forecast Accuracy, Bias. (Measure by Region, sales channel, product categories, etc.)

Operations: Accuracy of plans: production, materials, transfer of products to warehouses, revenues.

Inventories: Inventory days, inventory turn, inventory health.

Projections: Storage capacity, % utilization of headcount, % utilization of manufacturing capacity.

S&OP Maturity: If the company already has an S&OP initiative in place, the recommendation is to measure the current maturity of the S&OP process using a maturity model. 

Everything you are planning needs to be measured so that adjustments are made exactly where you need them. If the company does not have a KPI process in place, the recommendation is to carry out performance measurements afterwards. This way you can identify what level of maturity the team has managed to achieve, and put together a plan for further improvement.
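To make the forecasting KPIs above concrete, here is a minimal sketch of MAPE and bias calculated from actuals and forecasts; the numbers are illustrative only:

# MAPE and bias on a small set of monthly actuals vs. forecasts (illustrative numbers)
import pandas as pd

data = pd.DataFrame({
    "actual":   [120, 95, 140, 110],
    "forecast": [100, 105, 150, 100],
})

# MAPE: average magnitude of error, regardless of direction
mape = ((data["forecast"] - data["actual"]).abs() / data["actual"]).mean() * 100

# Bias: a persistent tendency to over-forecast (positive) or under-forecast (negative)
bias = (data["forecast"] - data["actual"]).sum() / data["actual"].sum() * 100

print(f"MAPE: {mape:.1f}%   Bias: {bias:+.1f}%")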

 

S&OP Kick Off Guide

The purpose of this article is to make clear the essential requirements for sustainable S&OP process implementation. S&OP implementation has a high failure rate, with most initiatives not actually delivering any value and doing nothing to improve the balance between demand and supply. Using this S&OP kick-off guide, the chances of failure are much reduced, increasing your chances of leveraging S&OP as a growth driver that provides a real competitive edge.

Often the S&OP process implementation initiative is carried out in a poorly structured way, especially when the implementation decision is made from the bottom up without executive sponsorship. The managers involved in this type of implementation end up not following the methodology completely, leaving out the essential pillars that are required to sustain the process. This means creating more work further down the line, day-to-day difficulties in the S&OP process, and, sometimes, damaging the credibility of the process, which can result in its death. Haste and a lack of capacity building are usually the key factors in the demise of S&OP. S&OP is like building a house – it needs a strong foundation, and if the proper work is not carried out at the beginning, problems down the line can cause irreparable damage. Here are some helpful pointers that will guide a successful S&OP, regardless of whether it is a top-down or bottom-up initiative.

Sponsor: This is the person who will provide the necessary support before, during and after the implementation of the S&OP process. This person must have great influence within the company, have knowledge of the process, and be able to carry out all the necessary alignments and approvals with the main managers. This person needs the necessary position and communication skills to open doors that may have been locked for years.  

S&OP in the hierarchy: Normally the S&OP area is created within the Supply Chain, however, when the process leaves stage 1 maturity it is important that the S&OP area reports to a neutral entity (free of department specific interests) in the company, for example: Finance or a senior executive. S&OP and its management need to maintain the collective interest with focus on the best result for the overall business.  

S&OP Leader: This person needs to have solid experience and knowledge in Supply Chain, as they will lead a wide range of different activities. In addition to the experience and technical expertise in the field, the S&OP leader will need to have great energy and discipline to meet the schedule, and a willingness to move people around and change processes to ensure integration. The ability to communicate at different levels is necessary. Involvement of the human resources area is critical at this stage to find the right candidate. When choosing the right person for S&OP Leader, a junior profile will not cut it.

S&OP Team: As we are dealing with a total integration process, we absolutely must involve all areas, even if we have to demand it. An S&OP Committee is recommended for the initial phase, with all parties committing to collaboration and support of the implementation project. Choosing the right people and their backups is key to starting the process as well as maintaining it later on.

Roles and Responsibilities: Each S&OP member must have a clear role and defined responsibilities within the S&OP process. They need to be trained to contribute properly in the process – both inside and outside meetings – as facilitators and process owners. I recommend you use a matrix of responsibilities, train those involved and record all activities.

S&OP Meetings: All meetings must have: an objective, a duration, participants, inputs, discussions, outputs, attendance list, and a list of required actions. Participants at each meeting will need to be trained to ensure an effective meeting.

Meeting Schedule: All S&OP monthly and weekly cycle meetings need to be set in advance. In order for people to attend the meetings, invitations need to be sent as soon as possible. One recommendation is to keep the invitations sent for the next 3 months of meetings. It is necessary to pay attention to special dates like holidays and events that could affect attendance.

Documentation: The S&OP process must be formalized through documents like process flowcharts, procedures and operational instructions to ensure the decentralization of information and the survival of the process when a team member leaves. This is important in assisting the standardization and proper management of S&OP documents. One of the most important documents is the S&OP policy, which details S&OP's involvement in the business. In addition to containing all the requirements to establish and maintain the process, this document codifies the agreements and hierarchy of decision-making. It is a living document that must be kept up-to-date by the managers of the S&OP process.

Process Auditing: After a few monthly S&OP cycles, an S&OP audit process has to be defined to ensure that everything that was planned has been implemented in practice. This must be carried out by an independent team, who will have to be trained in S&OP to audit the process. In some companies this department already exists.

[Ed: These are the essential criteria for successful S&OP, that will both facilitate its implementation and sustainability. Only 1/3 of S&OP initiatives end up actually adding any value –  make sure you lay the appropriate foundations to ensure yours is a real growth driver.]

How To Build A Strategic Capability in S&OP

There’s a huge amount of information on the deployment of S&OP – at the last count there were around 40 million hits on a combined search of S&OP and IBP. Despite all this information and the many years of cross-industry experience of applying S&OP since it was developed in the mid-1980s, the success rate of deployments is still disappointingly low at just 25-30%.

Whilst some of the failures of S&OP deployment become apparent in the first months of execution, it is often the case that engagement and support for the process diminishes over time. Its value is gradually eroded to the point where it is no longer a key process in the organization.

There are two common drivers for this:

  • S&OP is fundamentally a cross-functional process, and most organizations are not set up to enable and sustain ongoing cross-functional collaboration
  • Important elements such as process design, systems and data provision are frequently not underpinned by critical softer enablers (such as reward and recognition, empowerment or other cultural factors)

Throughout my career as a commercial leader, IBP leader and General Manager, I have often experienced the challenges in building and sustaining cross-functional capability. As a result, I developed a generic framework to set out the key enablers for this capability which are often missing from deployment programmes. This is summarised in Figure 1 below.

[Figure 1: S&OP strategic capability framework]

When applied specifically to the challenge of S&OP deployment, this framework highlights three enablers that typically receive inadequate attention:

  • Development
  • Empowerment
  • Continuous Improvement

1. Development

S&OP deployments often focus on the specific technical or functional capabilities to execute the process (and normally with an emphasis on Supply Chain roles such as Demand or Supply Planners).  However, in order to provide strategic enterprise support for S&OP, several other aspects of Development are essential:

  • Cross-functional leadership – Building and leading cross-functional teams requires a set of specific knowledge, skills and behaviours that are often neglected in traditionally organized businesses. This wide range of capabilities is built upon a fundamental understanding of the culture, perspective and goals of the various functions involved. However, it extends beyond this to include building high-performing enterprise teams and the leadership capabilities to help functional specialists achieve company-wide goals.
  • Tailored support across functions – Equipping every participant in the S&OP process to fulfil their role is of critical importance. In particular, the experience of participants in the first few cycles of a new process fundamentally affects the probability of the process being sustainable.  Early problems rapidly lead to disengagement and create a poor reputation for the new process from which it is extremely difficult to recover.  Just as significantly, creating the infrastructure to induct and develop new entrants to the process over time is central to ongoing sustainability.  It is therefore crucial to invest in engaging approaches to capability building across the various functions involved. Supply Chain roles tend to be relatively well-supported but the roles of, for example, product managers, sales teams and finance analysts must also be considered.
  • Development Paths – Most organizations tend to develop leaders along functional lines until late in their careers. This creates a serious gap in the leadership capability for sustainable S&OP, where enterprise leadership is critical at every step in the process cycle. In order to address this, a development and succession planning approach is needed which recognises this. This approach should develop cross-functional leadership skills at various levels in the organisation structure and across the Supply Chain, Commercial and Finance functions.

 

2. Empowerment

The most effective and efficient S&OP processes are those which are executed with a clear central principle of empowerment. This drives decision-making to the lowest level in the process and escalates issues for executive level decision-making only by exception. This maximises the pace of the process and drives essential team working behaviours at all levels in order to make enterprise-optimised decisions. It also ensures that where key scenarios or decisions need to be discussed by the senior team, they are able to devote quality time to these rather than being overrun by minutiae which could effectively be managed at lower levels.

This requires two fundamental enablers:

  • Information sharing – In order to be fully empowered, an S&OP team needs to share information in a consistent way across the team, with standard definitions, metrics and analyses. This ensures that a scenario is consistently viewed by all team members. Individual teams or functions tend to develop their own approaches to reporting but agreeing a single common approach is the first step to making aligned enterprise decisions. It is also important that the team is very clear on the goals and performance measures for the business so that they can be confident that their proposals and decision-making align with these business goals.
  • Clear boundaries – It is also critical that S&OP or IBP participants are clear on their specific role, and that of their colleagues, in the process. In its most fundamental form, this includes a transparent description of where each decision should be taken and, where necessary, any thresholds or conditions that apply.  For example, a process might define that a local manufacturing site can make decisions on local inventory within certain limits but beyond those limits an approval may be required at the next S&OP or IBP step. Emphasizing the importance of boundaries in the context of empowerment may seem contradictory but research into high-performing empowered organisations reflects the criticality of this approach. It is of course equally important that these boundaries are respected. This can be a challenge for senior leaders who now are required to participate in a monthly cross-functional process to drive decision making when they may have had the freedom to make wholly independent judgements in the previous environment based on their functional seniority.

3. Continuous Improvement

The real value achieved by S&OP is delivered though the ongoing optimization and alignment of enterprise decisions over time. When set up with the relevant support infrastructure, S&OP also becomes more efficient over time. However, many deployment programmes understandably focus on getting a new process up and running and pay relatively little attention to the critical foundation to sustain and improve the process, thus undermining its ability to deliver value year after year.

A number of specific tactics are useful to secure continuous improvement in S&OP:

  • Network of champions – Whilst building a network of change agents or champions is sometimes used as an initial change management approach, it is also a very effective means to maintain energy and focus on a newly-deployed process. Selecting this group of champions based on their ability to lead, communicate and influence, not functional skills alone, is critical. Investing to maintain this network and develop its members provides the means to keep S&OP on the agenda across geographies and functions.  This network can be used to identify improvement opportunities and work alongside the relevant expert groups (e.g. IT for systems issues) to continuously build capability and performance in the process.
  • Clear ownership of S&OP process standards – In an organisation of any size, it does not take long before a region, manufacturing site or commercial team decides to tweak the standard process. In this context, it is critical that there is both defined ownership of process standards and a transparent change control process. This ensures that proposals for improvement are not just ignored, but are systematically reviewed and, where suitable, incorporated into a standard corporate process.
  • Audits/Healthchecks – Audits or 'healthchecks' are also useful tools to combine the upsides of process innovation and learning (and at the same time mitigate the potential negative effects of a lack of process adherence). However, it is essential to set up the audit objectives carefully and transparently (and with full senior leader support) to ensure a focus on continuous improvement and avoid an excessive bias towards simply checking and monitoring adherence.

 

 

How To Balance Demand & Supply In Omnichannel

Ecommerce is changing how we see Demand Planning and ushering in new rules and best practices. Traditional businesses that have moved into eCommerce must deal with the friction between managing supply for online customers and Brick and Mortar (B&M) customers, and be able to manage the same inventory for two different sets of customers with vastly different demand factors. At Newell Brands, we know the game has changed and we need new tactics if we want to compete.

Here are a few recommendations that will give you an insight into eCommerce demand and supply. Starting with demand plan tactics, and despite the misconception that "the forecast is always wrong anyway", I strongly recommend harnessing the information offered by your online customers, who are highly data-driven. Sell-in, sell-through and inventory profile information are highly valuable elements and are easy to obtain for each SKU you are managing. Let's take a closer look.

Demand Plan Tactics For Ecommerce

It is important to measure the accuracy of your current Ecommerce forecast against the sell-in actual data to adapt your demand plan if necessary. This forces a more granular demand review rather than an aggregate level analysis.

Our next step is the incorporation of sell-through performance plus the inventory profiles to select the best statistical demand model and send the correct signals to supply. Analysis of the complete SKU portfolio can be a massive task, but doing a total ABC/XYZ pre-study and focusing on your A/X items should address at least 60% of the issues.

The combination of these three elements will increase demand plan quality by quickly identifying erroneous demand signals that can cause excess inventory issues, as well as predicting out-of-stocks.

In a perfect world, sell-through data is expected to have a one-to-one ratio with your sell-in demand plan, but don't forget to look at the inventory consumption in between.
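One simple way to picture that relationship: over any period, the retailer's sell-in equals their sell-through plus the change in their inventory. A sketch with hypothetical figures:

# Reconciling sell-in, sell-through and channel inventory for one SKU (hypothetical figures)
sell_through = 800        # units the retailer sold to consumers this period
opening_inventory = 350   # units the retailer held at the start of the period
closing_inventory = 300   # units the retailer held at the end of the period

# Sell-in implied by sell-through and the inventory movement in between
implied_sell_in = sell_through + (closing_inventory - opening_inventory)
print(f"Implied sell-in: {implied_sell_in} units")

# Here only 750 units were bought in against 800 sold through: the 50-unit
# inventory draw-down explains the gap from a one-to-one ratio.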


Supply Chain Inventory Allocation

When it comes to benchmarking best practices, some organizations ringfence stock to serve key account customers until the order drops in. This brings the risk of severe excess issues if that key account order never comes through, plus the possibility of missed orders from non-key account customers. It might also require micromanagement from customer service, supply chain and logistics teams to control the stock.

Benefits Of Centralized Supply

A possible solution is to consolidate the online business inventory and B&M inventory into one source of supply. There is a higher probability that you will cover the other B/C and Y/Z order segments from a centralized source of supply rather than fighting the demand complexity across your other levels of supply downstream. Depending on the demand levels of online customers, further benefits of maximizing availability of stock through centralized procurement are:

  • pallet loads configuration opportunities
  • promotions for direct loads/containers even from original vendors
  • hybrid solutions for peak seasons to directly fulfil end-user orders, etc.

There are lead time and transportation costs to consider and evaluate while going through the process. There are inevitable trade-offs, so make sure you assess what you are gaining and risking by centralizing supply.

Bottom Line

In conclusion, the variables impacting demand differ from business to business – there is no one-size-fits-all solution. It's important to understand how demand for each SKU is spread across the two channels, understand which regions the demand is coming from for each channel, and have a handle on how the distribution network is set up for both. There are major challenges in reconciling the two different sources of demand – the key is to know the individual demand factors and the core supply chain for each, treating them as distinct operations with their own demand signals and supply chain requirements.

How To Deploy Global S&OP

I recently led the successful deployment of global S&OP in a top 10 pharmaceutical company. It covered all business units and regions in this $40 billion turnover organisation and lasted 3 years. During that time we learned a lot – and boy, there are some things I wish I’d known at the start.

1. Most Expert Advisers Cannot Engage Commercial Leaders in S&OP

I took on the leadership of the project with a background of 20 years’ experience as a senior commercial leader, not an S&OP expert. I was therefore relying on the subject matter experts to help me energize and engage my commercial colleagues (and others outside the supply chain) in S&OP.

However, when asking the simple question 'why should a General Manager support and lead S&OP?' I found that most experts would either:

(a)   list all the supply chain parameters that would be improved (forecast accuracy, plant utilization or OEE, adherence to plan etc) – most of which the GM had never heard of, much less cared about day-to-day, or;

(b)   list a series of very high level statements on improving revenue, margin and growth without explaining how S&OP could shift company performance on such major fundamentals.

This immediately creates a credibility gap and the critical engagement of commercial stakeholders can be lost at the first interaction. The learning for me was clear – S&OP process is an enterprise-wide process and the deployment team must have the genuine capability to connect and engage cross-functionally with credibility.

2. It Doesn’t Need to be Perfect

There is no shortage of advice on process design for S&OP. The basic concepts were established over 30 years ago and a Google search on ‘sales and operations planning’ yields 25 million results. Creating and documenting the S&OP process is relatively straightforward but process descriptions tend to be very detailed, and I found that there was often a desire to deploy the detailed process across all business units and regions as rapidly as possible.

This led to excessive expectations and at the same time overstretched both the deployment and business-as-usual teams engaged in the process. This, in turn, meant that initial expectations were at risk of not being fulfilled, at a time when engagement and sustainability were still fragile.

The learning here is that the process does not need to be perfect and all-encompassing from day 1. The chances of success can be greatly improved by:

  1. Identifying key business areas in which to start deployment (and these could be specific brands, a business unit or a geography) and focusing attention and quick wins only on these
  2. Focusing on specific aspects of the overall S&OP process to address first (e.g. starting with the demand forecasting and review process)

3. It Takes Longer To Embed & Stabilize S&OP Than You Expect

The initial plan was to train and coach the key stakeholders (e.g. General Managers, Demand Planners, Supply Planners, and Financial Controllers etc.) in the new process over a period of 3 monthly cycles. Following ‘classroom’ training on the process, hands-on coaching and practical support was tailored to the role and level of each key participant. The degree of coaching was gradually reduced over time so that ownership, backed by the key capabilities to deliver it, was built up in each individual.

In some cases, this worked perfectly, in other cases further cycles of support were necessary (for example 5-6 months, rather than the planned 3 months). However, the quality and stability of each individual S&OP activity (e.g. a Demand Review Meeting, or DRM) was greatly enhanced by this extended support. This had a disproportionately important impact as the shortfalls of a sub-optimal DRM, for example, had a knock-on effect through the rest of the monthly S&OP cycle and could quickly affect confidence and commitment to change.

4. Eliminating Duplicate Processes Is As Important as Deploying The New S&OP Process

The natural focus on most S&OP deployment initiatives is to create and execute the new S&OP process. However, the impact of allowing duplicative processes to continue operating in parallel to S&OP is not always recognized (e.g. a business unit performance or financial review or elements of the corporate financial planning and forecasting processes).  In my recent deployment, there were many pre-existing local or function-specific processes to fill the gaps that existed before S&OP was adopted. These were often locally developed, and were changed whenever a new leader took over a business area or team as their own personal preferences were adopted in their teams.

In the early stages of deploying the new S&OP process it was critical to deliver incremental value, but also to secure and sustain the engagement to maintain the new standard process. Where the goals and outputs of the new S&OP process were duplicated in other processes, then leaders started to make choices on which process they would favor and support. The lack of commitment to the corporate standard and losing management focus became clearer as the deployment progressed.

I learnt it is critical to establish the corporate governance of the new process in the initial stages of deployment and to ensure that there is sufficient senior management commitment to eliminate parallel processes.

5. Senior Leaders Need Help, but May Not Ask For It

Most S&OP deployment programmes readily recognise the importance of senior leader engagement.  However, engaging and working with senior leaders is often not well targeted to achieve an impactful and sustained contribution from them. Furthermore, these individuals may not recognize the support they need to lead and sponsor the process effectively. My experience was that, as a programme team, we initially invested time in 1:1 meetings with senior stakeholders to explain the overall flow of the S&OP process, its benefits and the key inputs required from them in the monthly cycle (e.g. sign off of a demand forecast).

These leaders frequently did not have the personal experience to support the process in the same way they would in their own functional area. This was especially true for commercial leaders who had not been involved in S&OP before. Supporting these people by providing tips and tools on how to execute the leadership role was very well-received – even if not explicitly requested in the first place.

My specific learning on this topic is that there are several areas of support that senior leaders found useful:

  • A cheat sheet of the process metrics and behaviors he/she should ask about when visiting a business unit
  • Contracting with the leader to observe and provide feedback on the execution of their S&OP meetings
  • Having access to peer support networks, for example by connecting a GM to a peer in another region where the team have made positive progress and the leader has established good practice in their process leadership

Conclusion

The successful deployment and sustainability of S&OP is undoubtedly a tough cross-functional challenge. I believe that combining the learnings above with the widely available information on S&OP process design will give you a head start for a successful deployment – good luck!

 

How To Use Facebook's Prophet

Working in an SME with limited resources, the kind of sophisticated forecasting tools used by the major multinationals can seem far out of reach. For people in smaller companies like mine, the abundance of free-to-use, open-source, state-of-the-art software like Facebook Prophet offers access to game-changing functionality.

The last couple of years have seen several major internet names open sourcing powerful predictive analytics APIs, making them free to use for developers and professionals. Google's TensorFlow deep learning library is probably the most widely used and influential, but there are many more libraries that provide valuable functionality for demand planners and purchasers and that don't require the kind of GPU computing power that deep learning demands.

What Is Facebook Prophet?

Facebook describe the software as “a procedure for forecasting time series data. It is based on an additive model where non-linear trends are fit with yearly and weekly seasonality, plus holidays. It works best with daily periodicity data with at least one year of historical data. Prophet is robust to missing data, shifts in the trend, and large outliers.”

Prophet was designed to tackle the problem that quality forecasts are required faster than analysts can produce them, while automated forecasting techniques are too inflexible to incorporate useful assumptions or rules gleaned from experience. In a business context we've all seen automatically generated forecasts that don't factor in change points in demand, such as a market breakout for a booming trend or the slump of a major customer moving to a competitor. At the same time, with thousands or tens of thousands of SKUs to monitor, we know that finding highly skilled analysts to complete the workload consistently and rapidly can be a major challenge.

Why Use Facebook Prophet For Forecasting?

Firstly, Prophet is stupidly easy to use and generates reasonable results without having to worry about choosing between models and tuning hyperparameters.

Secondly, Prophet's parameters allow for customization in ways that make sense to a non-expert and in a business context, such as the ability to inject S&OP information about how the forecast will likely change, the ability to set caps on possible demand based on experience and market knowledge, and the ability to model irregular holidays like Chinese New Year or Easter.

As a keen Pythonista, one of the best things for me about Prophet is that it can be used in Python and is easily installed from either pip or conda. Generally, R has had the edge over Python for time series regression problems. The auto.arima function in R is hard to beat for ease of use and accuracy of results. R also has some recent additions for dealing with time series problems – CausalImpact from Google which identifies the causal effects of things like marketing campaigns on sales, and AnomalyDetection/BreakoutDetection from Twitter, that help identify anomalies and shifts in trends. Facebook Prophet is therefore a very welcome addition to the Python ecosystem.

What is Facebook Prophet Optimized To Solve?

Prophet was designed by Facebook, so it's well suited to regularly spaced time series observations and works best with at least a year's observations to catch seasonal trends. It has a very useful function for incorporating national holidays, which depending on your business might represent peaks (television ratings during holidays) or dips (stores closed or open half-days on national holidays). It also has a useful parameter called a 'changepoint' which enables you to specify a point after which demand is likely to change, such as the launch of a competitor's product or a major television campaign.

One important note is that Prophet is an additive regression model built up from trend, annual seasonality, weekly seasonality, and a user-specified holiday list. If you're concerned that your model might be inherently multiplicative in nature, then it might be worth log-transforming the data and then exponentiating the predictions to transform them back.

I should also make clear from the start that, as impressive as Prophet is, you can get better results by stacking the model in an ensemble of various techniques if you have the computing power to do so. On my laptop (and my work desktop), it would take several days to fit a very sophisticated ensemble model, whereas Prophet is able to do a reasonable job on all 3,000 SKUs for my current company in a matter of minutes, so there is a trade-off of accuracy against computational/time cost.

Installing Facebook Prophet in Python

Prophet can be installed very easily in Python, either through pip or through conda install. I used the conda installation which also loads all the dependencies and is very convenient: https://anaconda.org/conda-forge/fbprophet


Installing Prophet in Python is straightforward.

Preparing Facebook’s Prophet Datasets

Prophet accepts a primary dataset of time series data and an optional list of holidays. I read these into Python with Pandas' read_csv function, passing parse_dates=True. If you're unfamiliar with the Pandas commands, you can gain a quick understanding with the excellent 10 Minutes to Pandas guide here: https://pandas.pydata.org/pandas-docs/stable/10min.html

The Prophet documentation shows that the variables for the primary dataset should be labelled 'ds' for the time series and 'y' for the variable. The holiday list should also be labelled 'ds' for the time series and 'holiday' for the list of notable events. Time series should be sorted and formatted as a datetime datatype. This is easily done inside the workflow. I've taken a single line of crystal glass tableware as an example, having already sliced the dataframe down to one SKU.
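The original code screenshot is not reproduced here, but a sketch of that preparation step, with hypothetical file, column and SKU names, might look like this:

# Prepare the two Prophet inputs: a history frame with 'ds'/'y' columns and a holiday list
import pandas as pd

sales = pd.read_csv("sales_history.csv")
holidays = pd.read_csv("holidays.csv")

# Slice down to one SKU and rename to the column names Prophet expects
Ndf = sales[sales["sku"] == "CRYSTAL-TUMBLER-01"].copy()
Ndf = Ndf.rename(columns={"invoice_date": "ds", "units": "y"})
Ndf["ds"] = pd.to_datetime(Ndf["ds"])
Ndf = Ndf.sort_values("ds")

# The holiday frame needs 'ds' and 'holiday' columns
holidays = holidays.rename(columns={"date": "ds", "event": "holiday"})
holidays["ds"] = pd.to_datetime(holidays["ds"])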


Setting Up The Process

I tend to import a range of packages to perform basic exploratory data analysis, but the only essential packages for this will be pandas and fbprophet.
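A sketch of those imports (the install command in the comment assumes the fbprophet package name in use at the time of writing):

# Essential packages; fbprophet can be installed with: conda install -c conda-forge fbprophet
import pandas as pd
from fbprophet import Prophet

# Optional extras for exploratory analysis and plotting
import numpy as np
import matplotlib.pyplot as plt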

[Screenshot: importing pandas and fbprophet]

Here’s a quick plot of the time series:

[Image: plot of the SKU's weekly order history]
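The plot itself is a one-liner with pandas and matplotlib, assuming the Ndf dataframe prepared above:

```python
import matplotlib.pyplot as plt

# Quick look at the single SKU's order history
Ndf.plot(x='ds', y='y', figsize=(10, 4), legend=False)
plt.ylabel('Units ordered')
plt.show()
```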

Eyeballing the data, you should notice an increase in both frequency and volume of orders in a regular annual cycle with perhaps a faint downward trend over the three years.

Anyone familiar with scikit-learn's fit_predict/fit_transform methods will find Prophet follows a very similar pattern.

[Screenshot: instantiating and fitting the Prophet model]

Here I instantiate the model with an uncertainty interval of 95% (Prophet defaults to 80%, even though 95% is the standard in many business fields). I feed it my holiday list as a parameter and then fit the model to my filtered dataframe (Ndf). I then project a future dataframe of around three months using Prophet's 'make_future_dataframe' function.

Once the model is fit, all that remains is to predict over the future dates and have a look at the dataframe to sanity-check the results.

[Screenshot: the forecast dataframe of predicted values and bounds]
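Pulling the instantiation, fitting and prediction steps together, here is a sketch of what that code might look like, reusing the holidays and Ndf dataframes from earlier; the 13-week horizon stands in for "around 3 months":

```python
from fbprophet import Prophet

# 95% uncertainty interval rather than Prophet's 80% default, plus the holiday list
m = Prophet(interval_width=0.95, holidays=holidays)
m.fit(Ndf)

# Project roughly three months of weekly future dates and predict over them
future = m.make_future_dataframe(periods=13, freq='W')
forecast = m.predict(future)

# Sanity-check the projected values and their uncertainty bounds
print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail())
```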

Then, using model.plot(forecast), we can have a look at the fit and the projected values:

[Image: Prophet plot of the fit and projected values]

As you can see, the model has done an excellent job of finding the seasonal pattern and has correctly identified the downward trend over the last three years. One of the best features of Prophet is that it will return the model components. Here we can see that the overall trend and the holiday effects have been isolated:

[Images: Prophet component plots of trend and holiday effects]
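Continuing the sketch above, the fitted plot and the component breakdown are each a single call:

```python
# Actuals, fitted values and the projection with its uncertainty band
fig_forecast = m.plot(forecast)

# Trend, holiday effects, and weekly/yearly seasonality, each on its own panel
fig_components = m.plot_components(forecast)
```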

The weekly and annual seasonality analysis is also a very good fit with my experience in the tabletop HORECA trade: annual demand rises slightly at the start of summer and builds to a peak in the lead-up to Christmas, when parties fill up the hotels, restaurants and bars.

As you can also see, we close at weekends, so not many invoices are raised then. The busiest weekday is Wednesday.

Final Thoughts On Facebook Prophet

With very little coding and without setting any of the numerous other hyperparameters, Prophet did an excellent job on this time series despite the large number of outliers in the data, achieving a coefficient of determination of 0.84.
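For anyone wanting to reproduce that kind of score, here is a sketch of the calculation; scikit-learn is an extra dependency the article itself doesn't use, and 0.84 was simply the figure for this particular SKU:

```python
from sklearn.metrics import r2_score

# Join the in-sample fitted values back onto the actuals by date
hist = forecast.merge(Ndf, on='ds', how='inner')

# Coefficient of determination of the fit
print(round(r2_score(hist['y'], hist['yhat']), 2))
```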

There are certainly many advantages for an SME purchaser or demand planner considering forecasting with Prophet:

The software is free to use, open source, and ridiculously easy to deploy.

Prophet is extremely quick, taking only a few seconds even on my now badly outdated laptop. More advanced neural networks are notorious for requiring multiple GPUs or burning out a machine's CPUs after several days of running.

The optional hyperparameters are intuitive, even to the less technically minded demand planner.

The predictions are returned with a confidence interval around the forecast, which can often be more useful than the predicted value itself when making decisions about stock levels.

All in all, R still has the edge when it comes to comprehensively tackling time series regression tasks, but if you’re a Pythonista working in demand planning and you want to upgrade your forecasting accuracy, then I’d strongly recommend Prophet as a tool to consider.

]]>
https://demand-planning.com/2018/03/14/forecasting-with-facebooks-prophet/feed/ 1
How To Use Forecast Value Added Analysis https://demand-planning.com/2018/02/12/what-is-forecast-value-added-analysis/ https://demand-planning.com/2018/02/12/what-is-forecast-value-added-analysis/#comments Mon, 12 Feb 2018 16:49:13 +0000 https://demand-planning.com/?p=6193

Forecast accuracy has always been measured, but now it is becoming a key performance indicator (KPI) for many supply chains. But are we measuring the right thing? Most companies use forecasting performance metrics, such as Mean Absolute Percent Error (MAPE), to determine how good the forecasts are. The problem with metrics such as MAPE is they only communicate the magnitude of error. 

Other metrics, such as Mean Percent Error (MPE), or tracking signals viewed as a trend, can only communicate the direction of the error, or bias. The problem is that neither really reveals the complete picture, nor do they answer the simple question, "is it good enough?" This is where FVA plays a critical role. To see the full picture, we need to add FVA as an additional metric to help gauge the effectiveness of the process or the performance of the forecasting professional.

What Gets Measured Can Be Improved

MAPE gives some measurement of forecast error. This is not a bad thing, and for supply chains it is critical to have visibility and understand the degree of error so that the organization can properly manage it. For most companies, this is used to set inventory targets or understand the risks of their capital investments.

Unfortunately, many of these companies also set arbitrary MAPE targets for what they would like to see the forecast accuracy be in order to hit a subjective inventory target. Because the MAPE targets are arbitrary, companies don’t understand the drivers or their underlying true variability. From a process standpoint, the problem is that one of two things will occur: the company hits the accuracy targets and is satisfied, and then little or no other improvements happen; or, they never hit the targets and become frustrated, never understanding why they can’t get there. Here is another way to look at this: while forecasts and measuring accuracy help mitigate inefficiencies in the supply chain, they do little to reflect how efficiently (or indeed why) we are achieving that forecast accuracy in the first place.

Measuring Forecast Value Added

Measuring FVA leads naturally to managing the forecast process. Forecast Value Added increases visibility into the inputs and provides a better understanding of the sources that contributed to the forecast, so one can manage their impact on the forecast properly. Companies can use this analysis to help determine which forecasting models, inputs, or activities are adding value and which are actually making the forecast worse.

FVA also helps to set targets and understand what accuracy would be if one did nothing, or what it should or could be with a better process. Finally, its objective is efficiency: to identify and eliminate waste in non-value adding activities from the forecasting process, thereby freeing up resources to be utilized for more productive activities.

What Is Forecast Value Added?

FVA can be defined this way: “The change in a performance metric that can be attributed to a particular step or participant in the forecasting process.” Let’s say we have been selling approximately 100 units a month, and sold exactly that many last month. Through the forecasting process and added market intelligence, our consensus forecast for the next month came to 85 units. Actuals for the next month came in at 95 units. In this example, after the management and marketing adjustments, the MAPE was roughly 10%, whereas a naïve forecast (simply repeating last month’s 100 units) would have achieved a MAPE of roughly 5%. We could say in this case that the adjustments have not added value, since the naïve error was lower by about five percentage points. (See Table 1)
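To make the arithmetic concrete, here is the same example as a quick calculation; this is only a sketch, and the exact percentages depend on rounding and on which denominator you use for the percentage error:

```python
actual         = 95
consensus_fcst = 85    # forecast after management and marketing adjustments
naive_fcst     = 100   # random walk: simply repeat last month's sales

mape_consensus = abs(actual - consensus_fcst) / actual * 100   # ~10%
mape_naive     = abs(actual - naive_fcst) / actual * 100       # ~5%

# FVA of the adjustments relative to the naive: negative means value was destroyed
fva = mape_naive - mape_consensus                               # ~ -5 percentage points
print(mape_consensus, mape_naive, fva)
```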

In conducting FVA analysis, we do not need to stop there, and we can make it as simple or complex as needed to evaluate our process. FVA can be utilized to determine the effectiveness of any touch point in the forecasting process. A company might start with a naïve forecast; however, this kind of comparison can be made for each sequential step in the forecasting process. One can compare the statistical forecast to a naïve forecast, or evaluate the value of causal inputs, sales overrides, or the consensus forecasting process.

In our analysis above, we might find, for example, that the statistical forecast is worse than the naïve forecast, driven either by something in the time series data or by tweaks made to the parameters of the models we are using. We may also see that the overall process is adding value by bringing all the inputs together into a consensus, but that the sales and marketing inputs are negatively biased and are impacting the final numbers.

One of the best ways to measure whether the process is adding value is to utilize FVA and determine if the forecast proves to be better. Better than what, though? The most common and fundamental test in FVA analysis is not only to compare the process steps used in forecasting, but also to compare the final forecast against the naïve forecast.

[Image: Forecast Value Added comparison]

What Is The Naive Forecast?

As per the Institute of Business Forecasting (IBF) Glossary, a naïve forecast is something simple to compute, requiring a minimum amount of resources. The key is something simple, and traditional examples are random walk (no change from the prior period where the last observed value becomes the forecast for the current period), or seasonal random walk (“year over year” using the observed value from the prior year’s same period as the forecast for the current period).
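Both of those traditional benchmarks are trivial to compute. Here is a sketch with toy monthly data, using pandas' shift:

```python
import pandas as pd

# Toy monthly demand history for illustration
demand = pd.Series([100, 110, 95, 105, 120, 130, 125, 90, 100, 115, 140, 150,
                    105, 112, 98, 108],
                   index=pd.date_range('2016-01-01', periods=16, freq='MS'))

# Random walk: this period's forecast is simply last period's actual
random_walk = demand.shift(1)

# Seasonal random walk: this period's forecast is the same month last year
seasonal_random_walk = demand.shift(12)

print(pd.DataFrame({'actual': demand,
                    'random_walk': random_walk,
                    'seasonal_rw': seasonal_random_walk}).tail())
```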

Although it seems simple, determining the naive forecast is never that easy. The best way to determine the baseline or naive forecast to measure against is to remember what forecasting's primary task is, and to ask what would happen if the forecasting process did not exist.

We might like to believe that if we, as forecasters, were to suddenly disappear, all the company's activities would come to a halt, paralyzed by not knowing how to plan for the future. The truth is, life will go on without us: items will be produced, inventory will be built, materials will be ordered, and investments will be made. That is the key, and what I like to measure against.

If you did nothing, what numbers would the company use to function? They may not call it a naive forecast, but what you generally find out is, in the absence of an expert forecast signal, a company will go with what they have, what they know, and what is simplest to get. Some may use a moving average of what was sold in the past few months, or even simpler, what was sold last month (random walk). For others who know there is inherent seasonality, they may take the sales from last year and plan against that (seasonal random walk).

Still others have budgets or financials that are locked in and, without a better signal, are what the company would plan to. The goal is to find how a company and its supply chain tend to look at its business. Is it reactionary, seasonal, top down, or an entirely different approach? How does that translate into how they would plan without a forecasting professional or process in place?

One is left with the organization’s naïve forecast. This can be the traditional random walk, or a simple moving average, or a financial projection. I have even seen some companies use the statistical baseline from their forecasting system as the naïve. None of these approaches is wrong. The best answer is the baseline forecast that takes the least amount of effort at little or no cost or resources and, I would add, drives the supply chain without the influences of the forecasting process.

Another very common and overlooked benchmark is how most re-order points and inventory targets are set. Many companies, even with a good forecast, still exclude forecast variation from their calculations and instead look at the coefficient of variation (COV) of historic demand to set policies, in essence using a naïve forecast to set policy. While this is not used as a forecast, it is a lens you can use to compare your overall forecast performance against, via the Demand Variation Index (DVI). The Demand Variation Index uses a calculation similar to the Coefficient of Variation, measuring the ratio of the standard deviation of demand (the inherent variation) to the mean, or average, demand.

The output of the DVI is the normal inherent variation expressed as a percentage, which can be compared to the MAPE figures from your FVA analysis. It is commonly used in forecasting to see whether the forecast error, the variation from actual demand over time, is greater than this normal variation; if your forecast error is lower than the DVI, the process is predicting demand better than simply planning to the inherent variability of history.
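As a sketch of that comparison with toy numbers (the demand and forecast values are invented purely for illustration):

```python
import numpy as np

# Toy monthly demand and the matching forecasts for the same periods
demand   = np.array([100, 110, 95, 105, 120, 130, 125, 90, 100, 115, 140, 150])
forecast = np.array([105, 100, 100, 110, 115, 125, 120, 100, 105, 110, 130, 140])

# Demand Variation Index, calculated like a coefficient of variation:
# standard deviation of demand as a percentage of mean demand
dvi = demand.std() / demand.mean() * 100

# MAPE of the forecasting process over the same periods
mape = np.mean(np.abs(demand - forecast) / demand) * 100

# If MAPE comes in below the DVI, the process is predicting demand better than
# simply planning to the inherent variability of history
print(round(dvi, 1), round(mape, 1))
```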

So now that we have determined the baseline or naive forecast, a reasonable expectation is that our forecasting process (which probably requires considerable effort and significant resources) should result in better forecasts. Until one conducts FVA analysis, it may not be known. Unfortunately, we have seen time and time again that many organizations’ forecasts are worse than if they just used a naïve forecast. In the book by Michael Gilliland, Len Tashman and Udo Sglavo, Business Forecasting: Practical Problems and Solutions, the authors highlight a recent study by Steve Morlidge.

After studying over 300,000 forecasts, they found that a staggering 52% of the forecasts were worse than using a random walk. A growing amount of qualitative evidence points to a similar conclusion: as the systems, inputs, and processes have become more elaborate and complex, the forecasts have not become much more accurate. For all of the collaboration, external data, and fancy modeling, I would not be surprised if half the time we still are not bettering the naive forecast.

What one needs to do is focus on the steps and inputs, and simplify the process to what is working and use the inputs that add value. This way we could better focus our organization’s resources, money, and effort on the primary objective, which is improving forecast accuracy. If only we had quantitative evidence or a way to measure the different steps or inputs in a forecasting process and conclude we were adding value…

Putting FVA To Work

Forecasting is both a science and an art. Companies can employ standard algorithms to help generate a forecast, but it still takes a skilled practitioner to put the numbers together into a coherent form. As we have seen, measuring the effectiveness of that forecast is also a process with both science and art.

Much like the concept of FVA being a “lean” principle that helps identify what is adding value, utilizing FVA is not meant to generate unneeded excess work. Look at a simple approach to measuring and analysing your current forecast processes, and find the best ways to integrate FVA to improve the inputs and process you already have. A great place to start is by mapping each of the main sequential steps in your current forecasting process, and then tracking the results at each of those aggregate steps. A common process could include steps as shown in Figure 1.

[Figure 1: common sequential steps in the forecasting process]

From here, you can incorporate and use FVA in your analysis much like you use any forecast metric. Also, it is important to maintain some of the same principles as other metrics. First, understand that one data point does not make a trend. Just because you have one period with a negative FVA doesn’t mean we should fire our forecaster. Anomalies can occur in processes and inputs. Just as anomalies can happen with data analysis, they also happen with FVA analysis. Like most metrics, it needs to be evaluated over time. The same way we look at forecast accuracy, FVA viewed over time can be used to identify positive or negative trends and bias in inputs or steps. Next, I would recommend looking at sub-processes or inputs in the steps that need the most attention. If the statistical forecast is consistently adding value and it is the overrides that are interjecting variation into the process, then begin with the overrides.

For example, you may find that Sales is attempting to re-forecast the numbers every month instead of providing true inputs or overrides. Using FVA, we have already determined that the statistical baseline is effective, and now the purpose of gathering inputs should not be to validate the statistical model or calculations, but to include selective information that may be available but not reflected in historical data.

In this case, FVA can serve as an effective sales training tool. We don't want Sales to spend their resources regenerating an entire forecast to try to correct it; rather, we want them to improve upon it. We already know and can demonstrate that we have a solid statistical baseline forecast from our system: a baseline that most likely knows more than they do about seasonality, level, trend, and data-driven events.

What we want to capture is what we don't know, so Sales can make minor inputs or overrides, either up or down, to our baseline prediction. The sales training comes from using FVA as a feedback loop on those inputs, helping identify which inputs work or don't work, and the scale of adjustment needed to create value in the forecasting process.

Finally, we need to look at the process as a whole again. In order to determine whether a forecasting step or input is adding value, it is not enough to look at it as an isolated item; rather, it is best to look at it as part of an intelligent combination of inputs and processes. Extending this further, different inputs (or the same ones) combined and aggregated differently can be thought of as different forecasts and, as such, provide different insight.

The final question for us is not whether each of these inputs adds value; rather if each of these inputs can be combined in a meaningful way to create a better forecast that effectively integrates process, inputs, and analytics with the planner’s expertise. At the end of the day, our goal is to make a forecast more accurate and reliable so that it adds value to the business.

The Bottom Line

Increasing forecast accuracy is not an end in itself, but it is important if it helps to improve the rest of the planning process. Reducing forecast error and variability via FVA analysis can have a big impact on service, inventory, and cost for an organization. Every 2 percent of forecast value added means something in dollars. That's why we use FVA analysis: to help measure the process and show the value proposition for any process change being considered.

This article first appeared in the Journal of Business Forecasting (JBF), Spring 2016 issue. To receive the JBF and other benefits, become an IBF member today.

]]>
https://demand-planning.com/2018/02/12/what-is-forecast-value-added-analysis/feed/ 2
How To Use Microsoft Azure https://demand-planning.com/2018/01/29/how-to-make-your-own-powerful-machine-learning-forecasting-models-for-free-without-coding/ https://demand-planning.com/2018/01/29/how-to-make-your-own-powerful-machine-learning-forecasting-models-for-free-without-coding/#comments Mon, 29 Jan 2018 20:12:25 +0000 https://demand-planning.com/?p=6067

If, like me, you work in a small to medium sized enterprise where forecasting is still done with pen and paper, you’d be forgiven for thinking that Machine Learning is the exclusive preserve of big budget corporations. If you thought that, then get ready for a surprise. Not only are advanced data science tools largely accessible to the average user, you can also access them without paying a bean.

If this sounds too good to be true, let me prove it to you with a quick tutorial that will show you just how easy it is to make and deploy a predictive webservice using Microsoft’s Azure Machine Learning (ML) Studio, using real-world (anonymised) data.

What is Azure ML?

To most people the words ‘Microsoft Azure’ conjure up vague ideas of cloud computing and TV adverts with bearded hipsters working in designer industrial lofts, and yet, in my opinion, the Azure Machine Learning Studio is one of the more powerful predictive modelling tools available on the market. And again, it's free. What's more, because it has a graphical user interface, you don't need any advanced coding or mathematical skills to use it; it's all click and drag. In fact, it is entirely possible to build a machine learning model from beginning to end without typing a single line of code. How's that for a piece of gold?

You can make a free account or sign in as a guest here: https://studio.azureml.net. The free account or guest sign-in to the Microsoft Azure Machine Learning Studio gives you complete access to the easy-to-use drag-and-drop graphical user interface that allows you to build, test, and deploy predictive analytics solutions. You don't need much more.

Microsoft Azure Tutorial Time!

I promised you a quick tutorial on how to make a forecast that drives purchasing and other planning decisions in Azure ML, and a quick tutorial you shall have.

If you’re still with me, here are a couple of resources to help you get rolling:

A great hands on lab: https://github.com/Azure-Readiness/hol-azure-machine-learning

Edx courses you can access for free: https://www.edx.org/course/principles-machine-learning-microsoft-dat203-2x-6

https://www.edx.org/course/data-science-essentials-microsoft-dat203-1x-6

Having pointed you in the direction of more expansive and detailed resources, it’s time to get into this quick demo. Here are the basic steps we’ll go through:

  • Uploading datasets
  • Exploring and visualising data
  • Pre-processing and transforming
  • Predictive modelling
  • Publishing a model and using it in Excel

Uploading Datasets To Microsoft Azure

So, you've signed up. Once you're in, you're going to want to upload some data. I'm loading up the weekly sales data of a crystal glass product for the years 2016 and 2017, which I'm going to try to forecast. You can read in a flat file in .csv format by clicking on the 'Datasets' icon and then the big '+ New' button:

Then you're going to want to load up your data from the file location and give it a name you can find easily later. Clicking on the 'flask' icon and hitting the same '+ New' button will open a new experiment. You can drag your uploaded dataset from the 'my datasets' list onto the blank workflow:

Exploring and Visualizing

Right clicking on the workflow module number (1) will give you access to exploratory data analysis tools either through ‘Visualise’, or by opening a Jupyter notebook (Jupyter is an open source web application) in which to explore the data in either Python or R code. If you want to learn how to use and apply Python to your forecasting, practical insights will also be revealed at IBF’s upcoming New Orleans conference on Predictive Business Analytics & Forecasting.

Clicking on the ‘Visualise’ option calls up a view of the data, summary statistics and graphs. A quick look at the histogram of sales quantity shows that the data has some very large outliers. I’ll have to do something about those during the transformation step. You also get some handy summary statistics for each feature. Let’s have a look at the sales quantity column.

I’m guessing that zero will be Christmas week, when the office is closed. The max is likely to be a promotional offer. I can also see that the standard deviation is nearly 12,000 pieces, which is high compared to the mean. You can also compare columns/features to each other to see if there is any correlation:

Looking at a scatter plot comparison of sales quantity to the consumer confidence index value, that really doesn’t seem to be adding anything to the data. I’ll want to get rid of that feature. I’ve also included a quick Python line plot of sales over the two-year period.

As you can see, there is a lot of variability in the data and perhaps a slight downward trend. Without some powerful explanatory variables, this is going to be a challenge to accurately forecast. A lot of tutorials use rich datasets which the Machine Learning systems can predict well to give you a glossy version. I wanted to keep this real. I work in an SME and getting even basic sales data is an epic battle involving about fifty lines of code.

Pre-processing and Transforming

Now it’s time to transform the data. For simplicity, I’ve loaded a dataset with no missing or invalid entries by cleaning up and resampling sales by week with Python, but you can use the ‘scrub missing values’ module or execute a Python/R script in the Azure ML workspace to take care of this kind of problem.

In this case, all I need to do is change the ‘week’ column into a datetime feature (it loaded as a string object) and drop that OECD consumer confidence index feature as it wasn’t helping. I could equally have excluded the column without code using the select columns module:

One of the other things I'm going to do is use another 'Execute Python Script' module to identify and trim outliers from the sales quantity column, so the results are not skewed by rare sales events.
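For reference, here is a sketch of what such a script might contain. Azure ML Studio's Execute Python Script module calls an azureml_main entry point that takes and returns pandas DataFrames; the column name 'Quantity' and the three-standard-deviation cut-off are assumptions for illustration, not the exact code used in this experiment.

```python
def azureml_main(dataframe1=None, dataframe2=None):
    # Assumed column name for weekly sales quantity; adjust to your dataset
    qty = dataframe1['Quantity']

    # Treat anything more than three standard deviations above the mean as an
    # outlier so rare promotional spikes don't skew the model
    upper = qty.mean() + 3 * qty.std()
    trimmed = dataframe1[qty <= upper]

    # The module expects a sequence of DataFrames to be returned
    return trimmed,
```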

Again, I could have accomplished a similar effect by using Azure's inbuilt 'Clip Values' module. You genuinely do not have to be able to write code to use Azure (but it helps).

There are too many possible options within the transformation step to cover in a single article, but I will mention one more important step: you should normalise the data to stop differences in the scale of the features leading to certain features dominating over others. 90% of the work in forecasting is getting and cleaning the data so that it is usable for analysis (Adobe, take note: PDFs are evil and everyone who works with data hates them). Luckily, you can do all your wrangling inside the machine learning experiment, so that when you use the service it will do all the wrangling automatically based on your modules and code.

The Normalize Data module allows you to select columns and choose a method of normalisation, including Z-score and Min-Max.
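For intuition, here is roughly what those two options compute, sketched in pandas rather than the module's own code:

```python
import pandas as pd

# Toy feature column to show what the two normalisation options do
x = pd.Series([120.0, 80.0, 150.0, 300.0, 90.0])

zscore  = (x - x.mean()) / x.std()              # Z-score: centre on 0, scale by std dev
min_max = (x - x.min()) / (x.max() - x.min())   # Min-Max: rescale into the 0-1 range

print(pd.DataFrame({'raw': x, 'zscore': zscore, 'min_max': min_max}))
```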

Predictive Modelling In Microsoft Azure

Having completed the data transformation stage, you're now ready to move on to the fun part: making a Machine Learning model. The first step is to split the data into a training set and a testing set. This should be a familiar practice for anyone working in forecasting; before you let your forecast out into the wild, you want to test how well it performs against the sales history. It's that or face a screaming sales manager wanting to know where his stock is, and I like my life as stress-free as possible. As with nearly everything in Azure ML, data splitting can be achieved by selecting a module. Just click on the search pane and type in what you want to do. I'm going to split my data 70-30.

The next step is to connect the left output of the ‘Split Data’ module to the right input of a ‘Train Model’ module, the right output of the ‘Split Data’ to a ‘Score Model’ module, and a learning model to the right input of the ‘Train model’.

At first this might seem a little complicated, but as you can see, the left output of the 'Split Data' module is the training dataset, which goes through the training model and then outputs the resulting learned function to the 'Score Model' module, where it is tested against the testing dataset coming in through the right data input node. In the 'Train Model' module you must select a single column of interest; in this case it is the quantity of product sold that I want to predict.

Microsoft offers a couple of guides to help you choose the right machine learning algorithm: a broad discussion and, if you're short on time, some lightning-quick guidance. In the above I've opted for a simple Linear Regression module and, for comparison purposes, I've included a Decision Forest Regression by adding connectors to the same 'Split Data' module. One of the great things about Azure ML is that you can very quickly add and compare lots of models during the building and testing phase, and then clear them down before launching your web service.

Azure ML offers a wide array of machine learning algorithms from linear and polynomial regression to powerful adaptive boosted ensemble methods and neural networks. I think the best way to get to know these is to build your own models and try them out. As I have two competing models at work, I’ve added in an ‘Evaluate Model’ module and linked in the two ‘Score Model’ modules so that I can compare the results. I’ve also put in a quick Python script to graph the residuals and plot the forecasts against the results.

Here’s the Decision Forest algorithm predictions against the actual sales quantity:

Clearly something happened around May 2016 that the Decision Forest model is unable to explain, but it seems to do quite well in finding the peaks over the rest of the period 2017. Looking at the Linear Regression model, one can see that it does a better job of finding the peak around May 2016 but is consistently overestimating in the latter half of 2017.

Clicking on the ‘Evaluate Model’ module enables a more detailed statistical view of the comparative accuracy of the two models. The linear regression model is the top row and the decision forest model is the bottom row.

Coefficients of determination of 0.60 and 0.72: the models are explaining between half and three-quarters of the variance in sales, and the Decision Forest scored significantly better overall. As results go, neither brilliant nor terrible. A perfect coefficient of determination of 1 would suggest the model was overfitted and therefore unlikely to perform well on new data. The range of sales was from 0 to nearly 80,000 pieces, so I'll take a mean absolute error of 4,421 pieces without complaint.

It would be ideal if we had a little more information at the feature engineering stage. Adding the ending in-stock inventory value for each week, or customer forecasts from the S&OP process, as features would help accuracy.

One of the benefits of forecasting in this way is you can incorporate features without having to worry about how accurate they are as the model will figure that out for you. I’d recommend having as many as possible and then pruning. I think the next step for this model would be to try incorporating inventory and S&OP pipeline customer forecasts as a feature. Building a model is an iterative process and one can and should keep improving it over time.

Publishing A Model And Consuming It In Excel

Azure ML makes setting up a model as a webservice and using it in Excel very easy. To deploy the model, simply click on the ‘Setup Web Service’ icon at the bottom of the screen.

Once you’ve deployed the webservice, you’ll get an API (Application Programming Interface) key and a Request Response URL link. You’ll need these to access your app in Excel and start predicting beyond your training and testing set. Finally, you’re ready to open good old Excel. Go to the ‘Insert tab’ and select the ‘Store’ icon to download the free Azure add-in for Excel.

Then all you need to do is click the ‘+ Add web service’ button and paste in your Response Request URL and your secure API key, so that only your team can access the service.

After that it’s a simple process to input the new sales weeks to be predicted for the item and the known data for other variables (in this case promotions, holiday days in the week, historic average annual/seasonal sales pattern for the category etc.). You can make this easy by clicking on the ‘Use sample data’ to populate the column headers so you don’t have to remember the order of the columns used in the training set.

Congratulations! You now have a basic predictive webservice built for producing forecasts. By adding in additional features to your dataset and retraining and improving the model, you can rapidly build up a business specific forecasting function using Machine Learning that is secure, shareable and scalable.

Good luck!

If you’re keen to leverage Python and R in your forecasting, we also recommend attending IBF’s upcoming Predictive Analytics, Forecasting & Planning conference in New Orleans where attendees will receive hands-on Python training. For practical and step-by-step insight into applying Machine Learning with R for forecasting in your organization, check out IBF’s Demand Planning & Forecasting Bootcamp w/ Hands-On Data Science & Predictive Business Analytics Workshop in Chicago.

]]>
https://demand-planning.com/2018/01/29/how-to-make-your-own-powerful-machine-learning-forecasting-models-for-free-without-coding/feed/ 1