LLM-Enhanced Demand Forecast

Optimize your supply chain with dynamic, Generative AI-driven recommendations based on demand forecast predictions.


Feature Highlights

  • Improve Responsiveness: Speed up purchase and markdown decisions by gaining instant insights from real-time analytics.
  • Increase Revenue: Respond continuously and quickly to product demand to minimize the revenue impact of unfulfilled orders.
  • Empower Teams: Improve communication and collaboration between teams with faster response times and the ability for more people to turn analytics into action — all with full data traceability.

How It Works: Architecture

After creating a demand forecast modeling project in Dataiku, a supply chain analyst interacts with the data through a dedicated interface that lets them select specific SKUs, markets, stores, and timelines. A Generative AI app embedded in this interface gives the analyst a clear understanding of recommended actions, such as a purchase order or a markdown.

For example, when forecast demand for a given SKU surges while stock is limited, the LLM immediately flags it and suggests a purchase order. The supply chain analyst can quickly push this recommendation as an email to trigger a procurement activity, or revisit it as needed thanks to full data access. The sketch below illustrates the kind of stock-versus-forecast signal being surfaced.
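A minimal, purely illustrative sketch of that signal; the function name and thresholds here are hypothetical, not part of the solution:

```python
# Illustrative only: the kind of stock-versus-forecast check whose result
# the LLM turns into a recommended action.
def suggest_action(forecast_units: float, stock_units: float,
                   surge_factor: float = 1.2) -> str:
    """Suggest an action when forecast demand and current stock diverge."""
    if forecast_units > stock_units * surge_factor:
        return "purchase_order"  # demand surge against limited stock
    if forecast_units < stock_units * 0.5:
        return "markdown"        # excess stock relative to demand
    return "no_action"

print(suggest_action(forecast_units=1500, stock_units=1000))  # purchase_order
```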

The analyst’s initial query is sent to an LLM via API, as sketched below, along with:

  1. A template for the type of comments to generate, and
  2. Detailed data on the current stock, forecast, and more.
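
A hedged sketch of that call follows. The endpoint URL, model name, payload fields, and template wording are assumptions for illustration; the solution's actual interface may differ.

```python
import json
import urllib.request

LLM_ENDPOINT = "https://llm.internal.example.com/v1/chat/completions"  # hypothetical URL

COMMENT_TEMPLATE = (
    "You are a supply chain assistant. Given stock and forecast data, "
    "recommend a purchase order or a markdown and justify it in two sentences."
)

def ask_llm(analyst_query: str, context_data: dict) -> str:
    """Send the analyst's query plus (1) the comment template and (2) the data."""
    payload = {
        "model": "demand-assistant",  # assumed model name
        "messages": [
            {"role": "system", "content": COMMENT_TEMPLATE},        # 1. template
            {"role": "user", "content": analyst_query},
            {"role": "user", "content": json.dumps(context_data)},  # 2. stock/forecast data
        ],
    }
    req = urllib.request.Request(
        LLM_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Context the analyst selected in the interface (illustrative values):
context = {"sku": "SKU-1042", "market": "FR", "stock": 1000, "forecast_4w": 1500}
# print(ask_llm("Should we reorder SKU-1042?", context))
```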

Because this data is sensitive to the company and could give competitors valuable insights, using a containerized, self-hosted LLM is advised to avoid any data leakage.
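
For instance, assuming the container exposes an OpenAI-compatible API on infrastructure the company controls (as servers such as vLLM or Ollama do), the sketch above only needs its endpoint changed:

```python
# Point the same client at a self-hosted container so that stock and
# forecast details never leave the company's infrastructure.
LLM_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # containerized LLM
```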

Responsibility Considerations

This project uses an LLM to generate comments based on the outputs of a demand forecast machine learning model, so each of the two models carries its own Responsible AI considerations.

For the demand forecasting model, consider that it uses historical data to optimize provisioning decisions. Recommended actions are to embed explainability for predictions into the model outputs and to monitor the model for robustness and reliability over time.
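
As a rough illustration of both recommendations, the sketch below pairs per-prediction explanations with a simple accuracy-drift check, using a toy scikit-learn model and the SHAP library; all data, names, and thresholds are illustrative, not taken from the solution.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Toy forecasting model on synthetic features (e.g. price, seasonality, promo, trend).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X @ np.array([3.0, 1.5, 2.0, 0.5]) + rng.normal(size=500)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# 1) Explainability embedded in outputs: per-prediction feature attributions
#    show the analyst why demand is forecast high or low.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X[:5])

# 2) Monitoring over time: alert when recent error drifts past a tolerance.
baseline_mae = mean_absolute_error(y, model.predict(X))
recent_mae = mean_absolute_error(y[-50:], model.predict(X[-50:]))
if recent_mae > baseline_mae * 1.25:  # the drift threshold is a policy choice
    print("Alert: forecast accuracy has degraded; investigate or retrain.")
```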

For the Generative AI app, as the LLM is used to generate insights via a chatbot approach, it is important that the insights it provides are marked as AI-generated and that end users know they are interacting with an AI system.

Additionally, the insights should be reviewed in aggregate, and supply chain analysts should have a way to report incorrect or inconsistent insights. Finally, the model's limitations should be documented, and end users should be encouraged to apply their best judgment when working with its outputs.

All of these recommendations should, of course, be underpinned by an overarching Responsible AI policy that enforces consistent practices across AI projects.