IT support teams spend valuable time searching through documentation and responding to support tickets. As operations scale or during periods of high ticket volume, SLAs can suffer as a result of this manual process. With Dataiku, you can use an LLM to search through documents and extract the relevant information, reducing both response time and time spent on mundane tasks.
- Scalable: Tickets are processed in batch by Dataiku, increasing the capacity of service teams at times of higher volume.
- Cost-Effective: Maximize limited IT resources through increased productivity and efficiency.
- Accurate: Base responses on trusted documentation to reduce the risk of miscommunication or human error.
- Controlled: The result is pushed to the support team for human verification, along with a link to the documentation used to answer the question.
How It Works: Architecture
Support tickets are processed in batch by Dataiku as new tickets are received. A first large language model (LLM) call, made through a public API, classifies which tool the ticket is about.
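The classification step can be sketched as a single prompt-and-parse routine. The `call_llm` callable and the tool list below are illustrative placeholders, not Dataiku's actual API:

```python
# Sketch of the ticket-classification step. `call_llm` stands in for
# whichever LLM API call is used; the tool list is an assumption for
# illustration only.

KNOWN_TOOLS = ["VPN", "Email", "SSO", "Laptop", "Printer"]

def build_classification_prompt(ticket_text: str) -> str:
    """Build a prompt asking the LLM which tool a ticket concerns."""
    tools = ", ".join(KNOWN_TOOLS)
    return (
        "You are an IT support triage assistant.\n"
        f"Classify the following ticket as being about one of: {tools}.\n"
        "Answer with the tool name only.\n\n"
        f"Ticket: {ticket_text}"
    )

def classify_ticket(ticket_text: str, call_llm) -> str:
    """Send the prompt to the LLM and normalize its one-word answer."""
    answer = call_llm(build_classification_prompt(ticket_text)).strip()
    # Fall back to a catch-all category if the model answers off-list.
    return answer if answer in KNOWN_TOOLS else "Other"
```

Constraining the model to a fixed list and validating its reply keeps the downstream routing deterministic even when the LLM answers unexpectedly.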
Next, semantic search is performed on publicly available documentation to find text chunks relevant to the query; these chunks are then sent to the LLM along with the question. The LLM drafts an answer based on the retrieved text, including links to the source documentation where it found the answer.
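The retrieval step can be sketched as follows. A real pipeline would embed chunks with a dedicated embedding model and store them in a vector index; here a crude bag-of-words vector stands in so the ranking logic is self-contained and runnable:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase word counts (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k documentation chunks most similar to the question."""
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]
```

Only the top-ranked chunks are passed to the LLM with the question, which keeps the prompt small and grounds the answer in the documentation.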
The result is pushed to the support team via Dataiku Advisor, which is accessible as an app in the browser. With a hook into support team workflow tools such as Slack or Microsoft Teams, agents can be notified of updates instantly. The support agent then simply writes a response in their own words.
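The chat-hook notification can be sketched as an incoming-webhook POST. The payload shape follows Slack's incoming-webhook format (`{"text": ...}`); the ticket fields, webhook URL, and message wording are illustrative assumptions:

```python
import json
from urllib import request

def build_notification(ticket_id: str, draft_answer: str, sources: list) -> dict:
    """Assemble a Slack-style incoming-webhook payload for a drafted answer."""
    links = "\n".join(f"- {url}" for url in sources)
    text = (
        f"New suggested answer for ticket {ticket_id}:\n"
        f"{draft_answer}\n\nSources:\n{links}\n"
        "Please verify before replying to the user."
    )
    return {"text": text}

def post_notification(webhook_url: str, payload: dict) -> None:
    """POST the payload to the incoming-webhook URL."""
    req = request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)  # fire-and-forget; add error handling in production
```

Including the source links in the message lets the agent verify the answer against the documentation before replying, which is the human-control step described above.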
The documents searched relate to product documentation, not personal information, and the results are delivered as a series of suggestions to a human agent.
To ensure responsible use and deployment of this model, make sure the documentation used for text analysis is up to date and correct. The model's results are communicated to a support agent to inform their answer, so it is not necessary to disclose the use of an LLM to the client.