
Custom Search

June 03, 2016

Building a custom search for support tickets AKA a bug tracker

When submitting a bug report, you want to make sure that no one has already submitted the same report as you, to avoid duplicates as well as mean comments from the community. The problem is that the default search engine for GitHub tickets does not handle synonyms or even plurals, so it's not very effective. So we decided to build another search engine.

To do this, we used the Algolia API, as well as two plugins, in Dataiku DSS of course, because we wanted something quick to set up.

This is actually a project that we use internally at Dataiku to search bug reports and support tickets.

Business Goals

  1. Save time by finding the relevant GitHub ticket right away:

    • find answers to already-asked questions
    • avoid duplicates
    • comment on issues encountered by other users
    • etc.
  2. Visualize stats on those tickets, for fun

How did we do this?

The idea is simple:

  • we download issues from GitHub,
  • we format them so they can be used by the Algolia API,
  • we sync them to Algolia.

To save time, we make sure that every day, only the tickets updated the day before are pushed to Algolia.
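Outside of DSS, the same three steps can be sketched with the PyGithub and algoliasearch Python packages. Everything below (the repository name, the credentials, the record fields) is an assumption chosen for illustration, not the project's actual code:

    # Hedged sketch of the daily sync: fetch yesterday's updated issues and push them.
    from datetime import datetime, timedelta

    from github import Github                              # pip install PyGithub
    from algoliasearch.search_client import SearchClient   # pip install algoliasearch

    GITHUB_TOKEN = "YOUR_GITHUB_TOKEN"      # assumption: a personal access token
    ALGOLIA_APP_ID = "YOUR_APP_ID"          # from the Algolia "API keys" page
    ALGOLIA_ADMIN_KEY = "YOUR_ADMIN_KEY"

    # 1. Download only the issues updated since yesterday.
    since = datetime.utcnow() - timedelta(days=1)
    repo = Github(GITHUB_TOKEN).get_repo("scikit-learn/scikit-learn")  # assumed repository
    issues = repo.get_issues(state="all", since=since)

    # 2. Format them as Algolia records (each record needs an objectID).
    records = [
        {
            "objectID": str(issue.number),
            "title": issue.title,
            "state": issue.state,
            "user": issue.user.login,
            "updated_at_ts": int(issue.updated_at.timestamp()),
        }
        for issue in issues
    ]

    # 3. Sync the records to the "issues" index.
    index = SearchClient.create(ALGOLIA_APP_ID, ALGOLIA_ADMIN_KEY).init_index("issues")
    index.save_objects(records)

In the project itself, these steps are of course split into the DSS datasets and recipes described below.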


Should you want to reuse and adapt this project to your own needs, here are the steps. This project requires:

  • the Python package markdown, which you can install by typing DATA_DIR/bin/pip install markdown
  • the plugins github and algolia, which you can install on the plugins management page.
  • a patch to PyGitHub to avoid exceeding the rate limit: modify DATA_DIR/pyenv/lib/python2.7/site-packages/github/ accordingly (a sketch of checking the remaining rate budget follows this list).
  • a few images for the webapp. Please unzip this archive in DATA_DIR/local/static (create the latter directory if needed. See here for more info).
  • an Algolia setup:
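Regarding the rate-limit item above: the patch itself is not reproduced here, but as a point of reference, this is a minimal sketch of checking the remaining GitHub API budget with a recent version of PyGithub (the token is a placeholder):

    # Minimal sketch, assuming a recent PyGithub and a personal access token;
    # this only checks the remaining budget, it is not the patch mentioned above.
    from github import Github

    g = Github("YOUR_GITHUB_TOKEN")          # hypothetical token
    core = g.get_rate_limit().core           # core REST API quota
    print(core.remaining, "requests left, quota resets at", core.reset)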

Algolia Setup

  • Create an account on Algolia and choose the free plan. Skip the tutorial that offers to import data: this project will import the data.
  • Browse to “API keys” and copy-paste the application ID and Admin API key into the algolia dataset settings. While you're at it, copy-paste the Search-Only API Key into the JS tab of the webapp.
  • In the Algolia dashboard, create a new index “issues”.
  • Push data to Algolia: build the dataset “algolia_unpartitioned_for_initial_import” with build mode “recursive”. Note that downloading issues from GitHub is slow, maybe 5K issues per hour.
  • Optionally, schedule a daily data update: in the scheduler, create a new job schedule on the dataset “algolia” to build the partition “PREVIOUS_DAY”, daily at 3:10, build mode “force-rebuild”.
  • Reload the page to see the fresh data, then configure the index: click “Ranking” and add the following to “Attributes to index”: title, texts, objectID, state, tags, user, milestone, assignee. In “Custom ranking”, add updated_at_ts.
  • Finally, configure the Display tab of Algolia: in “Attributes for faceting”, enter tags, assignee, created_at_ts, milestone, state, updated_at_ts, user. (The same settings can also be applied programmatically; see the sketch below.)
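For reference, here is a hedged sketch of the same index configuration applied with the algoliasearch Python client instead of the dashboard; the credentials are placeholders:

    # Hedged equivalent of the dashboard configuration above (assumed credentials).
    from algoliasearch.search_client import SearchClient

    index = SearchClient.create("YOUR_APP_ID", "YOUR_ADMIN_API_KEY").init_index("issues")
    index.set_settings({
        # "Attributes to index" in the Ranking tab
        "searchableAttributes": [
            "title", "texts", "objectID", "state",
            "tags", "user", "milestone", "assignee",
        ],
        # "Custom ranking" (Algolia requires an explicit direction)
        "customRanking": ["desc(updated_at_ts)"],
        # "Attributes for faceting" in the Display tab
        "attributesForFaceting": [
            "tags", "assignee", "created_at_ts", "milestone",
            "state", "updated_at_ts", "user",
        ],
    })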

Explore this sample project

  • Search Page

    So you can understand what we're trying to build, let's start by exploring the finished project. It's a search page: you can type in a few words to look for issues, and refine your search with the facets on the left. A sketch of the underlying query appears after this section.

  • Flow

    The flow is linear. We started by downloading all the issues from the scikit-learn repository on GitHub. You can obviously choose another repository, even a private one to which you have access. We do this with Dataiku DSS's github plugin, which provides a custom dataset. This is the input dataset at the left of the flow.

    • This dataset does not store the data (just a sample for preview); it merely provides the connection. So a sync recipe copies this data into a managed local dataset.
    • Then a Python recipe formats the comments.
    • The resulting dataset goes through a preparation script to rename some columns and format dates according to Algolia's needs. The result is partitioned thanks to the “Redispatch partitioning according to input columns” option (we partition on the “updated_at_partition_id” column). A standalone sketch of these formatting steps follows this section.
    • This final dataset is uploaded to Algolia by a sync recipe, which every day uploads just the tickets updated the day before. Here as well, the output dataset does not store the data; it only provides the connection.

  • Graphs

    Finally, here are some graphs. We could run many more stats on this dataset; this is just an illustration.

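To make the Search Page section above more concrete, here is a hedged sketch of the kind of query the webapp sends to the “issues” index, written with the Python client rather than the webapp's JavaScript. The query string, facet choices, and credentials are made up for illustration:

    # Hedged sketch of a faceted query against the "issues" index.
    from algoliasearch.search_client import SearchClient

    index = SearchClient.create("YOUR_APP_ID", "YOUR_SEARCH_ONLY_KEY").init_index("issues")
    results = index.search("memory error sparse matrix", {
        "facets": ["state", "tags", "assignee"],   # facet counts for the left column
        "facetFilters": [["state:open"]],          # e.g. the user ticked "open"
        "hitsPerPage": 20,
    })
    for hit in results["hits"]:
        print(hit["objectID"], hit["title"])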
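Similarly, as a companion to the Flow section, here is a minimal standalone sketch of the kind of formatting the Python and preparation steps perform. The file names and column names are assumptions based on the description above, not the project's actual recipes:

    # Standalone sketch of the formatting step (assumed columns and file names).
    import markdown
    import pandas as pd

    df = pd.read_csv("issues.csv")  # stand-in for the DSS input dataset

    # Render the markdown issue body so it can be displayed in the webapp.
    df["texts"] = df["body"].fillna("").apply(markdown.markdown)

    # Algolia ranks and facets on numbers, so dates become Unix timestamps.
    df["updated_at"] = pd.to_datetime(df["updated_at"], utc=True)
    df["updated_at_ts"] = df["updated_at"].apply(lambda d: int(d.timestamp()))

    # Partition column used by the "redispatch partitioning" option of the sync.
    df["updated_at_partition_id"] = df["updated_at"].dt.strftime("%Y-%m-%d")

    df.to_csv("issues_for_algolia.csv", index=False)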

Ready to enter Dataiku DSS?

If you have never used DSS, it might be worth familiarizing yourself with DSS concepts first.


This sample is already available in your DSS!

From your DSS home page, click on "Sample projects".

If your DSS server doesn't have Internet access, you can download this sample and import it manually (click on "Import project").

Don't have Dataiku DSS yet? Try for free now

