Curious how the Dataiku and NVIDIA partnership is advancing healthcare? Check out the blog post: Advancing Healthcare With Dataiku and NVIDIA
| Version | 1.0.0 |
| --- | --- |
| Author | Dataiku |
| Released | 2024-07-17 |
| Last updated | 2025-06-13 |
| License | Apache License |
With this plugin, you can leverage and deploy NVIDIA NIM LLMs.
The plugin provides the following two components:
1. Dataiku LLM Mesh connection
A custom LLM Mesh connection that provides the following capabilities:
The plugin is agnostic regarding the deployment location for the NIM LLM; for example, LLMs can be hosted in NVIDIA Cloud, self-hosted using the deployment macro provided by this plugin, or hosted elsewhere.
2. NIM Deployment Macro
The NIM deployment macro provides the following capabilities:
Note: it is not mandatory to use the macro to deploy the GPU and NIM Operators. In fact, in some instances it is preferable (or even necessary) to deploy the GPU and NIM Operators externally (for example, using the OpenShift OperatorHub) instead of using the provided deployment action.
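Because NIM services expose an OpenAI-compatible HTTP API, the LLM Mesh connection can talk to a NIM the same way regardless of where it is hosted. As a minimal sketch (the model name, port, and endpoint path below are illustrative, not values taken from this plugin):

```python
import json

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible /v1/chat/completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("meta/llama-3.1-8b-instruct", "What is a NIM?")
# A self-hosted NIM would receive this at http://<service>:8000/v1/chat/completions;
# NVIDIA-hosted NIMs additionally require an Authorization: Bearer <API key> header.
print(json.dumps(payload))
```

The same request body works for a NIM on NVIDIA Build, one deployed by the macro below, or one hosted elsewhere; only the base URL and authentication differ.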
Common prerequisites:
NIM Deployment prerequisites:
Install the Plugin using the installation guide. Once installed, configure the appropriate plugin presets.
This preset stores per-user NIM API keys. Use this preset when the NIM LLM endpoints require API key authentication, such as when using NIM hosted on NVIDIA Build.
This preset stores NIM Container Registry and NIM Model Repository credentials. Use this preset when self-hosting NIMs on an attached Kubernetes cluster using the NIM Deployment Macro.
This preset provides a mechanism to override the values of NIM environment variables. It should only be used when self-hosting NIMs on an attached Kubernetes cluster using the NIM Deployment Macro.
Once the setup is complete, you can access NIM models in LLM-powered visual recipes, in Prompt Studios, and through the Python and REST LLM Mesh APIs.
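For the Python route, a hedged sketch using the `dataikuapi` LLM Mesh client (the host, API key, project key, and LLM id below are placeholders; the id for the plugin's NIM connection appears in the project's LLM Mesh listing):

```python
def query_nim_llm(host: str, api_key: str, project_key: str,
                  llm_id: str, prompt: str):
    """Send a completion to a NIM model through the DSS LLM Mesh Python API.

    Requires a reachable DSS instance, so the import is deferred to call time.
    """
    import dataikuapi

    client = dataikuapi.DSSClient(host, api_key)
    llm = client.get_project(project_key).get_llm(llm_id)
    completion = llm.new_completion()
    completion.with_message(prompt)
    resp = completion.execute()
    return resp.text if resp.success else None
```

Inside a DSS notebook or recipe, `dataiku.api_client()` can replace the explicit `DSSClient(host, api_key)` construction.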
The NIM deployment macro is located in the Administration → Clusters → *Cluster name* → Actions tab of the Kubernetes cluster.
The first three macro actions provide the option to list, deploy, and remove the NVIDIA GPU and NIM Operators. If these Operators are not already available on the cluster, they must be deployed prior to deploying your first NIM Service.
The next three macro actions provide the option to list, deploy, and remove NVIDIA NIM Services. Under the hood, Dataiku leverages the NVIDIA NIM Operator, so the options presented in the UI are simply those described in the NIM Operator documentation.
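Concretely, deploying a NIM Service amounts to creating a `NIMService` custom resource that the NIM Operator reconciles. A minimal illustrative manifest (names, namespace, image tag, and sizes are placeholders; consult the NIM Operator documentation for the authoritative schema):

```yaml
apiVersion: apps.nvidia.com/v1alpha1
kind: NIMService
metadata:
  name: llama-3-1-8b-instruct
  namespace: nim-service
spec:
  image:
    repository: nvcr.io/nim/meta/llama-3.1-8b-instruct
    tag: "1.3.3"
    pullSecrets:
      - ngc-secret           # NIM Container Registry credentials
  authSecret: ngc-api-secret # NIM Model Repository credentials
  storage:
    pvc:
      create: true
      size: 50Gi             # model cache volume
  replicas: 1
  resources:
    limits:
      nvidia.com/gpu: 1
  expose:
    service:
      type: ClusterIP
      port: 8000
```

The credentials referenced here correspond to the container registry and model repository secrets configured in the plugin presets.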