Go beyond simple volume-based patterns and trends in customer reviews with generative AI, specifically large language models (LLMs). Use this technology to extract elements from reviews, from entity recognition metadata down to sentiment analysis on specific product dimensions such as size, fit, quality, fabric, color, or other product-specific values.
In this use case built in Dataiku, users can derive these insights through a self-service analytics generation application once the input is validated and added to a structured data set.
- Uncover Patterns & Proactively Identify Issues: Analyze massive amounts of data with simple queries to gain insights that would previously have been out of reach, such as spikes in complaints about specific product features, and engage in corrective action.
- Increase Collaboration and Efficiency: Different teams across the integrated value chain gain the ability to ask questions most relevant to their roles, increasing collaboration among marketing, product, supply, and sales.
- Limit Data-Related Risks: No personal data is extracted from reviews, reducing data governance requirements.
How It Works: Architecture
A first LLM analyzes the reviews and automatically extracts the relevant information in a structured way. The application provides a user-friendly interface for validating the extracted data, highlighting the corresponding segment of text for each category. The product or customer analyst can confirm or amend the extraction as needed before saving it as structured data.
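The extraction step above can be sketched as follows. This is a minimal illustration, not Dataiku's implementation: the prompt wording, the `call_llm` stub, and the field names are all assumptions standing in for the first LLM and the product dimensions mentioned earlier.

```python
import json

# Illustrative prompt; the real prompt and dimensions would be project-specific.
EXTRACTION_PROMPT = (
    "Extract a JSON object with sentiment for each dimension: "
    "size, fit, quality, fabric, color. "
    'Use "positive", "negative", or null if a dimension is not mentioned. '
    "Review: {review}"
)

DIMENSIONS = ["size", "fit", "quality", "fabric", "color"]
ALLOWED = {"positive", "negative", None}

def call_llm(prompt: str) -> str:
    # Stub standing in for the actual LLM call; returns a canned response
    # so the parsing and validation logic can be demonstrated.
    return ('{"size": "negative", "fit": null, "quality": "positive", '
            '"fabric": null, "color": null}')

def extract_sentiments(review: str) -> dict:
    """Parse the LLM output and keep only expected, valid values."""
    raw = call_llm(EXTRACTION_PROMPT.format(review=review))
    data = json.loads(raw)
    # Validate before the record enters the structured dataset; anything
    # outside the allowed values would be surfaced for human review instead.
    return {dim: data.get(dim) for dim in DIMENSIONS if data.get(dim) in ALLOWED}

record = extract_sentiments("Great quality shirt, but it runs small.")
```

Validating against a fixed schema before saving is what makes the later self-service analytics reliable: every review yields the same columns.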
A self-service analytics application leverages a second LLM to provide answers and insight for any user query. Upon receiving a query, the model generates a set of Dataiku instructions that are executed locally to build the required dashboard, giving the user a tailored response to their question. This approach keeps the model relevant regardless of the size or complexity of the underlying data while preserving the highest level of data privacy, as no actual data values are transmitted to the model during the process.
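The privacy-preserving pattern described above can be sketched as a prompt built only from the dataset schema. The column names and the `build_prompt` helper below are hypothetical; the point is that the LLM sees column names and types, never row values, and its instructions are then executed locally against the data.

```python
# Illustrative schema for the structured review dataset; column names
# are assumptions, not the actual Dataiku dataset definition.
SCHEMA = {
    "review_date": "date",
    "product_id": "string",
    "size_sentiment": "string",
    "quality_sentiment": "string",
}

def build_prompt(question: str, schema: dict) -> str:
    """Build an LLM prompt containing only metadata, never data values."""
    cols = ", ".join(f"{name} ({dtype})" for name, dtype in schema.items())
    return (
        f"Columns available: {cols}\n"
        f"Question: {question}\n"
        "Return dashboard-building instructions only. "
        "No data values are provided or should be echoed."
    )

prompt = build_prompt(
    "Show monthly counts of negative size sentiment by product", SCHEMA
)
```

The instructions the model returns would then be run inside the local environment, so the data itself never leaves it.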
In addition to an overarching Responsible AI policy that enforces consistent practices across AI projects:
- The model should be secured to ensure data privacy.
- Human reviewers should be able to provide feedback on model performance to improve the underlying extraction algorithm.
- Output data should be regularly reviewed to ensure the model does not extract private or personal information.
- Data scientists should review extraction results for consistency and fairness across different subgroups, ensuring no bias is present in how the extraction prioritizes certain information.
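The subgroup review in the last point can be as simple as comparing extraction correctness rates across groups and flagging large gaps. The sketch below is illustrative: the subgroups, the labeled sample, and the 10-point threshold are assumptions, not values from this project.

```python
from collections import defaultdict

# (subgroup, extraction_was_correct) pairs produced by human review;
# sample data is illustrative only.
labeled = [
    ("dresses", True), ("dresses", True), ("dresses", False),
    ("shoes", True), ("shoes", False), ("shoes", False),
]

totals = defaultdict(int)
correct = defaultdict(int)
for group, ok in labeled:
    totals[group] += 1
    correct[group] += ok

# Correctness rate per subgroup, and the largest gap between any two groups.
rates = {g: correct[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
flagged = gap > 0.10  # threshold is an illustrative assumption
```

A flagged gap would not prove bias on its own, but it tells reviewers where to look first.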