ONNX Exporter

The ONNX Exporter plugin lets you export Visual Deep Learning and Keras .h5 models to the ONNX format.

Plugin information

Version 1.2.0
Author Dataiku (Arnaud d’Esquerre, Nicolas Dalsass)
Released 2020-07-15
Last updated 2023-04-26
License Apache 2.0
Source code Github
Reporting issues Github

What is ONNX?

ONNX is an open format built to represent machine learning models for scoring (inference) purposes. ONNX defines a common set of operators – the building blocks of machine learning and deep learning models – and a common file format to enable AI developers to use models with a variety of frameworks, tools, runtimes, and compilers. More info here
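
For example, an exported model can be opened with the onnx Python package to check that it is a valid ONNX file and to list the operators it uses. A minimal sketch, where the file name is a placeholder:

  import onnx

  model = onnx.load("model.onnx")                # parse the .onnx protobuf file
  onnx.checker.check_model(model)                # validate it against the ONNX spec
  # print the set of operators the graph is built from, e.g. ['Conv', 'Gemm', 'Relu']
  print(sorted({node.op_type for node in model.graph.node}))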

Description

This plugin offers an easy way to deploy deep learning models to various systems through the ONNX Runtime.
You can learn more about the languages, architectures and hardware accelerations supported by ONNX Runtime here.
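
As an illustration, scoring an exported model with the onnxruntime Python package takes only a few lines. A minimal sketch, where the file name and input shape are placeholders for your own model:

  import numpy as np
  import onnxruntime as ort

  sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
  input_name = sess.get_inputs()[0].name
  batch = np.random.rand(1, 224, 224, 3).astype(np.float32)  # dummy batch; adapt to your model
  outputs = sess.run(None, {input_name: batch})               # None = return all outputs
  print(outputs[0].shape)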

It offers the conversion of two kinds of models:

  • Visual Deep Learning models trained with DSS
  • Keras .h5 models.

Note that for Visual Deep Learning models, the plugin exports the model itself but not the feature handling defined in DSS.
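
In practice this means the preprocessing applied at training time must be reproduced before data is fed to the exported model. The sketch below is only an assumption (an image model trained on 224x224 RGB images rescaled to [0, 1]); mirror whatever feature handling you actually configured in DSS:

  import numpy as np
  from PIL import Image

  def preprocess(image_path, size=(224, 224)):
      # Assumed feature handling: resize to 224x224, rescale pixels to [0, 1],
      # and add a batch dimension. Replace with your own DSS preprocessing.
      img = Image.open(image_path).convert("RGB").resize(size)
      arr = np.asarray(img, dtype=np.float32) / 255.0
      return arr[np.newaxis, ...]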

This plugin contains one recipe and one macro for each conversion type: one pair for Visual Deep Learning models trained with DSS, and one pair for Keras .h5 models.

Installation Notes

The plugin can be installed from the Plugin Store or via the zip download (see installation panel on the right).

How to use the Macros?

Convert saved model to ONNX macro

  1. Create a DSS Python 3 code environment (if you don’t already have one)
    with the following packages (i.e. the packages needed for Visual Deep Learning with TensorFlow >= 2.8.0):
    tensorflow>=2.8.0,<3.0
    scikit-learn>=0.20,<0.21
    scipy>=1.2,<1.3
    statsmodels==0.12.2
    Jinja2>=2.11,<2.12
    MarkupSafe<2.1.0
    itsdangerous<2.1.0
    flask>=1.0,<1.1
    pillow==8.4.0
    cloudpickle>=1.3,<1.6
    h5py==3.1.0
  2. Train a Visual Deep Learning model in DSS with this code environment (here is a tutorial to help you get started)
  3. Install this plugin
  4. Go to the Flow
  5. Click on the saved model
  6. In the right panel, in the Other actions section, click on Export to ONNX
  7. Fill in the parameters (details below)
  8. Click on RUN MACRO
  9. Click on Download ONNX model to trigger the download. The model is also added to the output folder.

Available parameters

  • Saved model (DSS saved model): Visual Deep Learning model trained in DSS to convert
  • Output folder (DSS managed folder): Folder where the ONNX model will be added
  • Output model path (String): Path where the ONNX model will be stored
  • Overwrite if exists (boolean): Whether the model should overwrite an existing file at the same path
  • Fixed batch size (boolean): Some runtimes do not support a dynamic batch size, so the batch size must be specified at export time
  • Batch size (int) [optional]: Batch size of the model’s input
  • Force input/output to Float (32 bits) (boolean): Some runtimes do not support Double; uncheck this option if your runtime supports Double
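
To check the effect of Fixed batch size and Force input/output to Float (32 bits), you can inspect the first input of the exported graph with the onnx package. A minimal sketch, where the file name is a placeholder:

  import onnx
  from onnx import TensorProto

  model = onnx.load("exported_model.onnx")
  inp = model.graph.input[0]
  # a fixed batch size appears as a concrete first dimension, a dynamic one as a symbolic name
  dims = [d.dim_value if d.HasField("dim_value") else d.dim_param
          for d in inp.type.tensor_type.shape.dim]
  # forcing Float gives elem_type FLOAT instead of DOUBLE
  elem_type = TensorProto.DataType.Name(inp.type.tensor_type.elem_type)
  print(inp.name, dims, elem_type)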

Convert Keras .h5 model to ONNX macro

  1. Put a .h5 model file obtained through Keras’s model.save() method into a DSS managed folder (a minimal sketch follows this list)
  2. Go to the Flow
  3. Click on the folder
  4. In the right panel, in the Other actions section, click on Export .h5 to ONNX
  5. Fill in the parameters (details below)
  6. Click on RUN MACRO
  7. Click on Download ONNX model to trigger the download. The model is also added to the output folder.
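
For step 1, the .h5 file just needs to be produced with Keras’s model.save() and placed in a managed folder, either through the folder’s UI or programmatically from inside DSS. A minimal sketch, where the toy model and the folder name "keras_models" are placeholders:

  import dataiku
  from tensorflow import keras

  model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])  # placeholder model
  model.save("my_model.h5")                          # HDF5 format, as expected by the macro

  folder = dataiku.Folder("keras_models")            # assumed writable managed folder
  folder.upload_file("my_model.h5", "my_model.h5")   # (path in folder, local file path)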

Available parameters

  • Input folder (DSS managed folder): Folder containing the .h5 model
  • Model path (String): Path to the .h5 model to convert
  • Output folder (DSS managed folder) [optional]: Folder where the ONNX model will be added.
    If the Output folder is left empty, the model is added to the input folder
  • Output model path (String): Path where the ONNX model will be stored
  • Overwrite if exists (boolean): Whether the model should overwrite an existing file at the same path
  • Fixed batch size (boolean): Some runtimes do not support a dynamic batch size, so the batch size must be specified at export time
  • Batch size (int) [optional]: Batch size of the model’s input
  • Force input/output to Float (32 bits) (boolean): Some runtimes do not support Double; uncheck this option if your runtime supports Double

The macros are also available in the Macro menu of a project.

How to use the Recipes?

Convert saved model to ONNX recipe

  1. Create a DSS Python 3 code environment (if you don’t already have one)
    with the following packages (i.e. the packages needed for Visual Deep Learning with TensorFlow >= 2.8.0):
    tensorflow>=2.8.0,<3.0
    scikit-learn>=0.20,<0.21
    scipy>=1.2,<1.3
    statsmodels==0.12.2
    Jinja2>=2.11,<2.12
    MarkupSafe<2.1.0
    itsdangerous<2.1.0
    flask>=1.0,<1.1
    pillow==8.4.0
    cloudpickle>=1.3,<1.6
    h5py==3.1.0
  2. Train a Visual Deep Learning model in DSS with this code environment
  3. Create a managed folder by clicking on + DATASET > Folder
  4. Install this plugin
  5. Go to the Flow
  6. Click on the + RECIPE > ONNX exporter button
  7. Click on the Convert saved model option in the modal
  8. Choose the saved model you just trained as input and the folder you created as output
  9. Fill in the parameters on the recipe page (details below)

Available parameters

  • Output model path (String): Path where the ONNX model will be stored
  • Overwrite if exists (boolean): Whether the model should overwrite an existing file at the same path
  • Fixed batch size (boolean): Some runtimes do not support a dynamic batch size, so the batch size must be specified at export time
  • Batch size (int) [optional]: Batch size of the model’s input
  • Force input/output to Float (32 bits) (boolean): Some runtimes do not support Double; uncheck this option if your runtime supports Double
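
Once the recipe has run, the .onnx file sits in the output folder and can be consumed from any Python recipe or notebook whose code environment includes onnxruntime. A sketch, where the folder name, model path and input shape are placeholders:

  import dataiku
  import numpy as np
  import onnxruntime as ort

  folder = dataiku.Folder("onnx_models")                    # assumed output folder
  with folder.get_download_stream("model.onnx") as stream:  # assumed output model path
      model_bytes = stream.read()

  sess = ort.InferenceSession(model_bytes, providers=["CPUExecutionProvider"])
  input_name = sess.get_inputs()[0].name
  batch = np.zeros((1, 224, 224, 3), dtype=np.float32)      # adapt to your model’s input
  print(sess.run(None, {input_name: batch})[0])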

Convert Keras .h5 model to ONNX recipe

  1. Put a .h5 model file obtained through Keras’s model.save() method into a DSS Managed Folder
  2. Go to the Flow
  3. Click on the folder
  4. In the right panel, in the Plugin recipes section, click on ONNX exporter
  5. Click on the Convert Keras .h5 model option in the modal
  6. Fill in the parameters (details below)

Available parameters

  • Model path (String): Path to the .h5 model to convert
  • Output folder (DSS managed folder) [optional]: Folder where the ONNX model will be added.
    If the Output folder is left empty, the model is added to the input folder
  • Output model path (String): Path where the ONNX model will be stored
  • Overwrite if exists (boolean): Whether the model should overwrite an existing file at the same path
  • Fixed batch size (boolean): Some runtimes do not support a dynamic batch size, so the batch size must be specified at export time
  • Batch size (int) [optional]: Batch size of the model’s input
  • Force input/output to Float (32 bits) (boolean): Some runtimes do not support Double; uncheck this option if your runtime supports Double
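
If you need to reproduce the conversion outside DSS, a library such as tf2onnx offers an equivalent path. This is a rough sketch under that assumption (not necessarily what the plugin does internally), with a placeholder file name:

  import onnx
  import tf2onnx
  from tensorflow import keras

  model = keras.models.load_model("my_model.h5")     # the .h5 file produced by Keras
  onnx_model, _ = tf2onnx.convert.from_keras(model)  # returns (ModelProto, tensor storage)
  onnx.save(onnx_model, "my_model.onnx")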
