With Dataiku and Generative AI — more specifically large language models (LLMs) — you can speed up customer support and turn readily available contracts and legal information into instant answers for detailed policyholder questions like:
- Can I be reimbursed if I miss my train?
- Does my contract cover me for damages linked to flooding?
- Can I get access to a specific network of partners to repair my car?
The benefits of this approach include:
- Increase Efficiency: Customer support agents spend less time manually searching for answers and can respond to more requests.
- Reduce Costs: Maintain current resources while increasing capacity, potentially reducing future hiring needs.
- Reduce Risk: Improve answer accuracy by pulling directly from trusted documentation, reducing the negative impact and risk of sharing incorrect information.
- Increase Customer Satisfaction: Empower agents to answer questions in the moment and reach helpful resolutions faster.
How It Works: Architecture
A flow built in Dataiku reads documents and splits them into meaningful chunks of a few hundred words. These chunks are then encoded using a sentence embedding transformer.
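The chunking step can be sketched in plain Python. This is a minimal illustration, not the Dataiku flow itself: `chunk_document` is a hypothetical helper that splits on sentence boundaries so each chunk stays meaningful, and in the real flow the resulting chunks would then be encoded with a sentence embedding transformer.

```python
import re

def chunk_document(text, max_words=300):
    """Split a document into chunks of at most max_words words,
    breaking on sentence boundaries so each chunk stays meaningful."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current, count = [], [], 0
    for sentence in sentences:
        words = sentence.split()
        # Start a new chunk when adding this sentence would exceed the limit.
        if current and count + len(words) > max_words:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.extend(words)
        count += len(words)
    if current:
        chunks.append(" ".join(current))
    return chunks

# Illustrative contract text (fabricated for the example).
contract = "If your train is cancelled, you may claim reimbursement. " * 40
chunks = chunk_document(contract, max_words=100)
```

Keeping chunks to a few hundred words each preserves enough local context for retrieval while staying well within an LLM's prompt budget.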
A support agent enters their question into a web application. The application encodes the question and retrieves the 5-10 closest-matching chunks. These relevant document excerpts, together with the prompt, are sent to an LLM, which generates an answer in the web application based on the chunks provided. The sources are then reordered and displayed according to their similarity to the answer. The customer service manager can generate an answer to a new question in a matter of a few clicks.
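The retrieval and prompt-building steps above can be sketched as follows. This is a simplified stand-in: `embed` here uses bag-of-words counts purely for illustration, where the real pipeline would use the same sentence embedding transformer as the indexing step, and `top_chunks` and `build_prompt` are hypothetical helper names.

```python
import math
from collections import Counter

def embed(text):
    # Stand-in encoder: bag-of-words counts. The real pipeline would
    # use a sentence embedding transformer here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(question, chunks, k=5):
    """Rank document chunks by similarity to the question, keep the top k."""
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(question, context_chunks):
    """Assemble the prompt sent to the LLM: retrieved context + question."""
    context = "\n---\n".join(context_chunks)
    return ("Answer the question using only the context below.\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

# Illustrative chunks (fabricated contract excerpts).
chunks = [
    "Missed trains are reimbursed if the delay exceeds two hours.",
    "Flood damage to the ground floor is covered up to 50,000 euros.",
    "Approved repair partners are listed in appendix C of your contract.",
]
question = "Can I be reimbursed if I miss my train?"
prompt = build_prompt(question, top_chunks(question, chunks, k=2))
```

Grounding the prompt in the retrieved excerpts is what lets the answer be traced back to trusted documentation, and the same similarity scores can drive the reordering of displayed sources.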
Using this application on existing contracts does not require strong scrutiny of data privacy considerations. With the right data pre-processing, publicly available APIs can help organizations accelerate such use cases, as long as the application is used by a professional who has access to the underlying documents and can perform the right checks.
To go deeper and allow direct usage by customers, specific retraining of the LLM and deployment in a private environment would be advised.
This use case covers personal documents and provides answers to an end user for review. Care should therefore be taken to ensure that sensitive or protected data points are not made available to unauthorized end users.
If model results are provided directly to a client, documentation should indicate that an AI system is in use to generate answers. Outputs should be reviewed regularly for consistency and to ensure the model does not leak personal data.