In 2018, one of the world’s largest beauty brands estimated that it had lost millions of dollars in sales in Asian markets by running out of stock of lipsticks alone. This problem could have been avoided had these lipsticks been produced in higher quantities, or had stock been replenished from locations with excess inventory. However, that would have required a level of foresight traditional forecasting methods cannot provide. Artificial intelligence, and the benefits it can bring, will revolutionize how demand forecasting is used to drive value for businesses around the globe.
Across functions in any company, decisions that rely on estimating future demand are taken daily.
For leaders in procurement or production, proactively aligning supply to upcoming demand is their raison d’être.
For those in charge of the supply chain, the mission is to ensure stock availability so as never to miss a sale, while avoiding having to move goods around the globe to rebalance inventory.
For research & development, the main question is what the potential market opportunity is for the products currently in development.
For CMOs, it is about maximizing the efficiency of investments by supporting the most likely successes of tomorrow.
Whether in finance or PR, leadership will be happy to address sustainability by minimizing wastage and avoiding being forced to slash prices to clear stock.
From out-of-stock to over-production, from skyrocketing shipping costs to money wasted advertising the wrong product, to failed or under-priced product launches: there is great risk in our systemic reliance on accurate and granular demand forecasts, which, when ineffective, can significantly impact both top and bottom lines.
Yet demand sensing has always been particularly challenging: it must react to early signals and distinguish short-term shocks from long-term changes in trend, all while factoring in a broad spectrum of information. And because forecasting is primarily backwards-looking, the difficulty is amplified for new products, for which there is no historical reference.
Despite major leaps in the field of data science, the way large corporations forecast product demand has not fundamentally changed in the last 20 years. We observe two main approaches: expert-driven and tool-driven.
The former focuses on communication with involved parties, such as buyers or product managers, to obtain their best educated guess. This does not mean it cannot involve number crunching, but it is usually decentralized, uncoordinated and not peer-reviewed. Beyond the risk to accuracy, internal consistency is almost impossible to guarantee: predictions will often vary depending on the product or the department involved, owing to differences in the assumptions being made. Consistency over time is another concern, since rotation in such positions rarely comes with proper knowledge transfer of the forecasting methodology.
The latter relies on off-the-shelf software solutions such as FutureMaster. This approach is often adopted when the topic falls into the hands of CIOs or IT teams, as they tend to have a technology-first mindset. They value ease of integration with ERPs and versatility, in order to lower the financial and human costs of set-up; criteria on which plug-and-play solutions shine.
These tools bring more hard science into the process. Most of the big names rely on statistical methods that were state-of-the-art in the early 1990s (e.g. ARIMA), and we see an increasing number of players claiming to have artificial intelligence under the hood. However, they tend to be one-size-fits-all and are designed neither to be adapted to each business's specificities nor to be nourished with human expertise. The inherent uniformity of these solutions also prevents the predictions from accounting for external factors that can be crucial drivers in certain industries, such as changes in the market, online buzz or competitor actions. Finally, it can be difficult for software solutions to understand relationships between products (e.g. cannibalization), which damages predictions at product level, but most crucially, renders them incapable of forecasting demand for new products that have yet to be launched.
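To make the limitation concrete, here is a minimal sketch of the kind of autoregressive baseline (an AR(1) model, a building block of ARIMA) that such tools rely on. The sales series is invented for illustration; a real tool would use far more elaborate machinery, but the core idea is the same: the next value is predicted purely from the series' own past, with no room for external drivers.

```python
# Toy AR(1) baseline: y_t = a + b * y_{t-1}, fitted by ordinary least squares.
# The weekly sales figures below are made up for illustration only.

def fit_ar1(series):
    """Estimate intercept a and slope b from consecutive pairs of the series."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

def forecast_ar1(series, steps, params):
    """Roll the fitted recurrence forward from the last observed value."""
    a, b = params
    out, last = [], series[-1]
    for _ in range(steps):
        last = a + b * last
        out.append(last)
    return out

sales = [120, 132, 128, 141, 150, 147, 158, 163]  # illustrative weekly units
params = fit_ar1(sales)
print(forecast_ar1(sales, 3, params))
```

Note that nothing in this formulation can react to a planned marketing campaign or a competitor's launch; that information simply has no way in, which is exactly the criticism above.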
Both approaches lack accuracy and suffer systemic flaws. The expert-driven approach struggles to maintain consistency and operational efficiency, whilst the tool-driven approach falls short on adaptability and interpretability, and limits the scope to existing products. For that very reason, many organizations adopt hybrid solutions, where a forecasting tool may be used by one department while others do not trust or cannot make sense of it; consequently, they revert to manual forecasting. Cost-effectiveness and forecast reconciliation therefore suffer even further.
We now live in a different era, where data science has emerged out of the tech community and regularly makes global headlines. You would be challenged to find a CEO who is not wondering how to better leverage their company’s data to take advantage of the progress in the field. Sales forecasting is often brought up as a key business case for AI, and rightly so. It can be a game-changer for businesses, with the ability to dramatically improve prediction accuracy, though not without putting up a fight first. Artificial intelligence should be viewed as a solution to the aforementioned issues of demand forecasting.
As opposed to expert-driven or tool-driven sales forecasting, an approach increasingly popular in the realm of data science is the proof of concept. This means starting small and exploring a highly tailored solution on a reduced scope to demonstrate its potential value. This addresses the two main shortcomings of an off-the-shelf product for demand sensing.
First, when governed properly, starting with a small pilot can foster business adoption and better control of costs. Here it is crucial to involve the business from the framing stage and to have end users test the solution as soon as it comes out of the oven, to ensure relevance. Should it fail to satisfy the business, it is back to the drawing board, or there is the option to pull the plug before large amounts of investment have been sunk. And so we iterate. This agile validation loop involving both IT and the business is essential to make the project a success. In practice, though, cross-departmental collaborations such as these can be challenging to conduct in organizations without dedicated data governance in place. To prevent conflicting interests from interfering with the project, a joint steering committee, ideally involving both CIO and COO, has proven to be the most effective route.
Second, a POC’s flexibility is instrumental in maximizing the accuracy of forecasts. Beyond guaranteeing the solution’s perfect alignment with the needs defined by the business, flexibility ensures relevant external factors are being considered in predictions by feeding in product knowledge or accounting for relationships between products.
We can also benefit from the most recent developments in this bustling field and pick the most relevant algorithms for the unique needs of the business. This does not mean it has to be developed from scratch every time. Indeed, there are many machine learning algorithms out there that can be leveraged for demand sensing, and unfortunately, not a single one trumps all the others in all situations, even within a single company and a single market. We observe the best results when combining several algorithms, having them explicitly or implicitly assigned to different cases. Besides, being able to yield the full potential of AI algorithms requires substantial work on feature engineering and parameterization. There is incredible value in learning from other forecasting experiences. That is why we prefer developing a solution based on our proprietary forecasting engine that has been continuously improved and perfected over the years, optimizing it for the intended scope and integrating it into the client’s environment. This not only ensures higher performance but is also more cost-efficient.
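The idea that no single algorithm wins everywhere, and that several should be combined or assigned to different cases, can be sketched very simply. In this illustrative example (toy data and toy models, not a production engine), each candidate model is backtested on a hold-out tail of a product's history, and the most accurate one is selected per product:

```python
# Illustrative model-selection sketch: backtest candidate forecasters on a
# hold-out tail and pick the best one per product. Real systems would use far
# richer models and features; the selection logic is what matters here.

def naive(series, steps):
    """Repeat the last observed value."""
    return [series[-1]] * steps

def moving_average(series, steps, window=3):
    """Repeat the mean of the last `window` observations."""
    return [sum(series[-window:]) / window] * steps

def mae(actual, predicted):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def best_model(series, holdout=2):
    """Score each candidate on the held-out tail; return the winner's name and function."""
    train, test = series[:-holdout], series[-holdout:]
    candidates = {"naive": naive, "moving_average": moving_average}
    scores = {name: mae(test, f(train, holdout)) for name, f in candidates.items()}
    winner = min(scores, key=scores.get)
    return winner, candidates[winner]

# Two made-up product histories: one trending upward, one flat but noisy.
trending = [100, 110, 120, 130, 140, 150]
noisy_flat = [100, 96, 104, 99, 101, 100]
print(best_model(trending)[0])    # → naive
print(best_model(noisy_flat)[0])  # → moving_average
```

Different products end up assigned to different models, which is the behaviour described above; a production engine would do this over many more candidates, with proper cross-validation and feature engineering.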
On paper then, everything seems great with POCs. However, we have seen more fail to be industrialized into company-level assets than succeed. Why? Technical debt is the primary reason.
Developing a small-scale pilot in a non-production environment is a vastly different exercise from building a live program securely plugged into the ERP. Converting one into the other is usually more cumbersome than starting from scratch. That is why we advocate starting small, but with industrialization in mind from the pilot stage. This is sometimes called a Minimum Viable Product, or MVP. It is important to make sure that scaling up will not require fundamental changes to the logic or the programming language. It also means considering resource optimization and the handling of large volumes of data in the technical design from the get-go. Although this may slightly increase the cost of launching a pilot, it will significantly reduce both the time-to-market and the cost of the final industrialized forecasting solution.
Now let’s imagine for a minute that we have it, the AI-driven demand sensing solution live in production, fulfilling everyone’s forecasting needs. It works. It performs. Yet, no one can really explain how. Fully automated AI can be dangerous on two levels.
In 90% of cases, its unprecedented accuracy will be celebrated, and no one will complain about the fact that it is a black box. It is the 10% of cases where it goes wrong that will shed light on its main handicap: interpretability. In business terms, this translates into serious issues of accountability. Forecasts can feed numerous critical business decisions; as such, an erroneous prediction can have a significant business impact. Should it happen, who is responsible, and how do we make sure it will not happen again?
On top of that, in most industries, many important aspects that drive demand are almost impossible for a computerized system to understand. How do you ensure your algorithm will grasp a stylistic trend, account for bad PR buzz or be aware of a change in regulation? Even though technical solutions exist for almost every one of these individual questions, building a system that automatically accounts for an infinite number of drivers is not realistic.
The workaround most of our clients who made it this far in their AI forecasting journey have adopted to mitigate both issues is to add a layer of human control at the end of the process. It is a very natural way to create accountability and to open the door for injecting human expertise into the forecasts. However, according to a study by the Boston Consulting Group, allowing post-machine human intervention to adjust the forecasts that experts deem incorrect actually does more harm than good and lowers overall accuracy. Therefore, the most impactful way to seamlessly combine AI with human judgment and expertise is not as an afterthought to control the output, but as an input to enrich the data feeding the predictions.
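A minimal sketch of what "human expertise as an input" can mean in practice: rather than letting experts override finished forecasts, their knowledge (here, a planned promotion's expected uplift) enters the pipeline before the forecast is consumed. The function names, the naive baseline and the numbers are assumptions made up for this example, not the engine described above.

```python
# Illustrative sketch: expert knowledge enters as a model input, not a
# post-hoc override. The uplift factors are a hypothetical expert estimate
# of a planned campaign's effect; the baseline model is deliberately naive.

def baseline_forecast(series, steps):
    """Naive baseline: repeat the most recent observation."""
    return [series[-1]] * steps

def forecast_with_expert_inputs(series, steps, promo_uplift):
    """promo_uplift: expert-estimated multiplicative effect per future step,
    e.g. 1.3 means a planned campaign is expected to lift demand by 30%."""
    base = baseline_forecast(series, steps)
    return [b * u for b, u in zip(base, promo_uplift)]

sales = [200, 210, 205, 215]        # illustrative weekly units
uplift = [1.0, 1.3, 1.0]            # campaign planned in week 2 (expert input)
print(forecast_with_expert_inputs(sales, 3, uplift))
```

In a real system the expert signal would be one feature among many fed to a learned model, but the principle is the same: the machine sees the marketing plan before producing the number, instead of a human bending the number afterwards.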
Effectively integrating humans into this process is a complex endeavour. It requires developing purpose-built UIs for collecting inputs (such as marketing plans, product similarity or market trends) as well as interpretability modules for better validation and analysis capabilities. Moreover, it requires a more extensive change-management program than fully automatic solutions need.
Navigating all these challenges can seem intimidating, especially if the solution has to serve several business use-cases. That is why we prefer a modular approach, getting traction with early first results, then delivering gradual improvements to ultimately address all the business needs.
There are plenty of quick wins in using AI for forecasting purposes, especially when the best practices listed above are followed. Fully leveraging the power of AI to take product-level demand sensing to another level is a much taller order, but the stakes are high and the business value it can unlock is immense. If it enables your organization to reap high economic rewards through millions more in lipstick sales, leaner supply chains and more efficient marketing investment, whilst also addressing sustainability through reduced wastage and unnecessary transportation, that is a lot of value for your money.