Central sales analytics teams are tasked with managing a significant volume of data and analytics projects to help the business better understand sales performance. Often, the sales organization has many questions that go unanswered due to the limited resources and high demand on these teams. With generative AI (more specifically, large language models, or LLMs) and Dataiku, sales leaders can use a simple conversational approach, based on a common data starting point, to derive useful analysis.
- Empower Data-Driven Decisions: Sales leaders can quickly respond to challenges based on trusted data and easily conduct their own analysis.
- Increase Capacity: Empower central sales analytics teams to focus on more impactful insights vs. answering ad-hoc questions.
- Improve Collaboration: Reduce response times and improve communication between teams.
- Gain Collective Visibility: Empower more people to use data to understand where the pipeline sits at any given moment.
How It Works: Architecture
A sales analytics project has been created in Dataiku. The project is connected to a self-service analytics exploration application that allows sales professionals to easily get answers to their questions with simple prompts, for example:
- Give me an overview of the sales for store X over the last three months.
- Give me a ranking of the sales for product Y across the stores.
- Which stores had the best performance?
- Show me the sales of the product FOODS_3_090 over the last year.
This prompt, along with the data schema and possibly a small data sample, is sent to an LLM via API. Sales leaders use the answers to respond to questions on the fly. Because the model's response is grounded in the available data rather than a fixed, precomputed report, answers remain relevant as the underlying data changes and grows.
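The assembly of a question, schema, and data sample into a single prompt can be sketched as below. The function name, prompt wording, and example schema are illustrative assumptions, not the application's actual implementation:

```python
# Sketch: combine a user question, table schema, and a small data
# sample into one prompt to send to an LLM via API.
# All names and wording here are assumptions for illustration.

def build_prompt(question, schema, sample_rows):
    """Assemble an LLM prompt from the question, column schema, and sample rows."""
    schema_lines = "\n".join(f"- {col}: {dtype}" for col, dtype in schema.items())
    sample_lines = "\n".join(str(row) for row in sample_rows)
    return (
        "You are a sales analytics assistant.\n"
        f"Table schema:\n{schema_lines}\n"
        f"Sample rows:\n{sample_lines}\n"
        f"Question: {question}\n"
        "Answer using only the data described above."
    )

prompt = build_prompt(
    "Show me the sales of the product FOODS_3_090 over the last year.",
    {"store_id": "string", "product_id": "string", "date": "date", "sales": "float"},
    [{"store_id": "CA_1", "product_id": "FOODS_3_090",
      "date": "2023-01-02", "sales": 12.0}],
)
```

The resulting string would then be sent as the user message in a chat-completion style API call.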
Considering the limited amount of data sent out and the option to anonymize any sensitive data, the public version of the API could be used. Tighter control over data and inputs could be achieved with a self-hosted, containerized version of the LLM.
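One way to anonymize sensitive fields before any sample rows leave the environment is to replace them with stable, irreversible hashes. This is a minimal sketch; the column names and salt handling are assumptions (in practice the salt would be managed as a secret):

```python
# Sketch: pseudonymize sensitive columns in a row before it is
# included in a prompt sent to a public LLM API.
import hashlib

SENSITIVE_COLUMNS = {"customer_name", "customer_email"}  # assumed column names
SALT = "example-salt"  # assumption: stored as a managed secret in practice

def pseudonymize(row):
    """Replace sensitive values with short, irreversible hash tokens."""
    out = {}
    for col, value in row.items():
        if col in SENSITIVE_COLUMNS:
            digest = hashlib.sha256((SALT + str(value)).encode()).hexdigest()[:10]
            out[col] = f"anon_{digest}"
        else:
            out[col] = value
    return out

safe_row = pseudonymize({"customer_name": "Jane Doe", "store_id": "CA_1", "sales": 42.5})
```

Hashing with a salt keeps tokens consistent across rows (so aggregation still works) while preventing the original values from being recovered from the prompt.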
In addition to an overarching Responsible AI policy to enforce consistent practices across AI projects, other recommendations specific to this use case include:
- As the LLM is used to generate insight via a chatbot approach, it is important that the insights provided by the LLM are marked as AI generated and that end users know they are interacting with an AI system.
- Limitations of the model should be documented, and end users should be encouraged to use best judgment when working with outputs of the model.
- In particular, when dealing with issues that could impact human safety, teams should pay attention to the error rate of the underlying model.
- To address transparency and potential limitations of the LLM/chatbot’s responses, a panel titled “Data Sources Used for the Insights” provides users with an understanding of which columns from which datasets were used to generate the insights.
- As answers are delivered on the fly, retroactive auditability is not enabled, and exact answers may vary with the data and the behavior of the LLM. Training teams on the appropriate scope of usage is strongly recommended.
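The "Data Sources Used for the Insights" panel mentioned above implies tracking which datasets and columns fed each answer. A minimal sketch of that provenance bookkeeping, with assumed names throughout, could look like:

```python
# Sketch: record which datasets and columns were included in the
# prompt, so the UI can display them alongside the generated insight.
# Dataset and column names are illustrative assumptions.

def collect_sources(datasets_used):
    """Build a display-ready provenance summary from {dataset: [columns]}."""
    return [
        {"dataset": name, "columns": sorted(cols)}
        for name, cols in sorted(datasets_used.items())
    ]

panel = collect_sources({
    "sales_daily": ["date", "sales", "store_id"],
    "products": ["product_id", "category"],
})
```

Rendering this summary next to each answer gives end users a concrete basis for judging whether the insight drew on the right data.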