Production Quality Data Explorer

Quickly identify defect-related challenges at scale with Generative AI-powered self-service exploration of production quality indicators.


With interactive dashboards, quality control teams can be alerted to changes in key metrics like real-time sensor values, production rates, or defect rates per machine, and respond more proactively to issues as they arise. Now, with Generative AI, and more specifically large language models (LLMs), teams can become even more efficient at identifying root causes and defect patterns. Get instant answers to the questions at hand with simple commands like these:

  • Show me an overview of the data for (X)
  • Compare the temperature between defect and non-defect populations
  • Show me a control chart of the temperature with 180 and 210 as lower and upper bounds (see the sketch below)
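
To make the last command concrete, here is a minimal sketch of the kind of chart it might produce locally, using pandas and matplotlib. The file name and the "temperature" column are illustrative assumptions, not part of the actual solution.

    # Sketch of the control-chart command above; names are hypothetical.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("sensor_readings.csv")  # hypothetical sensor export
    LOWER, UPPER = 180, 210  # bounds taken from the user's request

    fig, ax = plt.subplots()
    ax.plot(df.index, df["temperature"], label="temperature")
    ax.axhline(LOWER, color="red", linestyle="--", label=f"lower bound ({LOWER})")
    ax.axhline(UPPER, color="red", linestyle="--", label=f"upper bound ({UPPER})")
    ax.set_xlabel("reading")
    ax.set_ylabel("temperature")
    ax.set_title("Control chart: temperature")
    ax.legend()
    plt.show()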

Feature Highlights

  • Faster Response Time: Quality control analysts and technicians are alerted to issues and can perform analysis in a fraction of the time, so problems are addressed quickly.
  • Gain Real-Time Insights: Explore operations at scale across all production lines, all in real time.
  • Reduce Cost: With faster response times, minimize the downstream financial impact of defects.
  • Improve Operational Efficiency: With increased visibility, process speed, and faster response times, make your production process more efficient.

How It Works: Architecture

A comprehensive production quality control application is powered by Dataiku, leveraging sensor data to predict product defects. Quality control teams and technicians interact with the project’s results through a chat interface: each request is sent to an LLM via API, and the interface generates the corresponding visuals from the model’s reply. Answers are generated instantly, in the user’s own language.
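
As an illustration of this request/response flow, the sketch below sends a user question together with the dataset schema to a chat-completion API. The OpenAI Python client, the model name, the schema format, and the dataset name are all assumptions made for the example; the solution’s actual prompts and provider are not specified here.

    import json
    from openai import OpenAI  # assumption: any chat-completion API would do

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Only the schema is shared with the model: column names and types,
    # never the data values themselves.
    schema = {
        "dataset": "production_quality_scored",  # hypothetical dataset name
        "columns": {"temperature": "float", "defect": "boolean",
                    "machine_id": "string"},
    }

    system_prompt = (
        "You are a production quality assistant. Given a dataset schema and a "
        "user question, reply in the user's language with a short explanation "
        "and a JSON chart specification, for example: "
        '{"chart": "histogram", "column": "temperature", "split_by": "defect"}'
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": json.dumps(schema)
             + "\nCompare the temperature between defect and non-defect populations"},
        ],
    )
    print(response.choices[0].message.content)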

Upon receiving a query, the model generates a set of Dataiku instructions that is executed locally to build the requested dashboard, giving the user a tailored response to their question. This approach keeps the model’s answers relevant no matter the size or complexity of the underlying data, and it preserves a high level of data privacy: no actual data values are transmitted to the model during the process.
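
Below is a minimal sketch of this local execution step, assuming the model replies with a small JSON chart specification like the one shown earlier; the spec format and column names are illustrative. The point is that the instructions run against the local data, which never leaves the environment.

    import json
    import pandas as pd
    import matplotlib.pyplot as plt

    def render(spec_json: str, df: pd.DataFrame) -> None:
        """Execute a model-generated chart spec locally; the data never leaves."""
        spec = json.loads(spec_json)
        if spec["chart"] != "histogram":
            raise ValueError(f"Unsupported chart type: {spec['chart']}")
        if "split_by" in spec:
            # e.g. overlay defect vs. non-defect temperature distributions
            for value, group in df.groupby(spec["split_by"]):
                plt.hist(group[spec["column"]], alpha=0.5, label=str(value))
            plt.legend(title=spec["split_by"])
        else:
            plt.hist(df[spec["column"]])
        plt.xlabel(spec["column"])
        plt.title(f"Histogram of {spec['column']}")
        plt.show()

    df = pd.read_csv("sensor_readings.csv")  # same hypothetical local dataset
    render('{"chart": "histogram", "column": "temperature", "split_by": "defect"}', df)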

Leveraging publicly available APIs with the right data pre-processing can help organizations accelerate such use cases, provided the application is used by a professional who has access to the underlying data and can perform the right checks. A containerized version of the LLM could offer stricter control over data and inputs.

Responsibility Considerations

This project uses an LLM to support the use of, and the insights generated from, a defect prediction model. The information produced by the model is delivered via a chatbot. It is important that insights provided by the LLM are marked as AI-generated and that end users know they are interacting with an AI system.
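
One lightweight way to meet this requirement, sketched here as an assumption rather than the solution’s actual mechanism, is to label every chatbot reply before it is displayed:

    AI_NOTICE = "AI-generated insight. Verify against source data before acting."

    def label_response(answer: str) -> str:
        """Prepend an AI-generated disclosure to every chatbot answer."""
        return f"{AI_NOTICE}\n\n{answer}"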

In addition to being transparent about the potential limitations of the LLM chatbot’s responses, a panel titled “Data Sources Used for the Insights” gives users an understanding of which columns from which datasets were used to generate the insights.
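
Such a panel can be populated directly from the generated chart specification, since the spec names every column it touches. A small sketch, reusing the illustrative spec format from above:

    def data_sources_used(spec: dict, dataset_name: str) -> str:
        """Summarize which columns of which dataset fed a given insight."""
        columns = {spec.get("column"), spec.get("split_by")} - {None}
        return (f"Data Sources Used for the Insights: "
                f"{dataset_name} ({', '.join(sorted(columns))})")

    spec = {"chart": "histogram", "column": "temperature", "split_by": "defect"}
    print(data_sources_used(spec, "production_quality_scored"))
    # -> Data Sources Used for the Insights: production_quality_scored (defect, temperature)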