Building A Custom Search For Support Tickets A.K.A. A Bug Tracker
When submitting a bug report, you want to make sure that no one has already submitted the same report as you, to avoid duplicates as well as mean comments from the community. The problem is that the default search engine for GitHub tickets does not handle synonyms or even plurals, so it is not very efficient. So we decided to build another search engine.
To do this, we used the Algolia API and two plugins, in Dataiku DSS of course, because we wanted something quick to set up.
This is actually a project that we use internally at Dataiku to search bug reports and support tickets. It serves two purposes:
1. Save time by finding the relevant GitHub ticket right away:
- find answers to already asked questions,
- avoid duplicates,
- comment on issues encountered by another user,
2. Visualize stats on those tickets, for fun!
How Did We Do This?
The steps are simple:
- Download issues from GitHub,
- Format them so they can be used by the Algolia API,
- Sync them to Algolia.
To save time, we make sure that, every day, only the tickets updated the day before are pushed to Algolia.
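As a rough sketch of that daily incremental step, here is how the "updated yesterday" window can be computed, with the actual fetch-and-push shown as a hedged outline (the repository name, index name, and credentials are placeholders, and the PyGithub/Algolia calls are one plausible way to wire it, not the project's exact code):

```python
from datetime import datetime, timedelta, timezone

def yesterday_window(now=None):
    """Return (start, end) UTC datetimes bounding the previous full day."""
    now = now or datetime.now(timezone.utc)
    end = now.replace(hour=0, minute=0, second=0, microsecond=0)
    start = end - timedelta(days=1)
    return start, end

# Hedged outline of the daily push (placeholder names and credentials):
# from github import Github
# from algoliasearch.search_client import SearchClient
#
# repo = Github("GITHUB_TOKEN").get_repo("owner/repository")
# since, _ = yesterday_window()
# updated = repo.get_issues(state="all", since=since)  # only recently updated
# index = SearchClient.create("APP_ID", "ADMIN_API_KEY").init_index("issues")
# index.save_objects([...])  # records built from the updated issues
```

The key point is PyGithub's `since` parameter, which avoids re-downloading the whole repository every night.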
Should you want to reuse and adapt this project to your own needs, here are the steps. This project requires:
- the python package markdown, which you can install by typing
DATA_DIR/bin/pip install markdown
- the plugins Github and Algolia, which you can install on the plugins management page.
- a patch to PyGithub to avoid exceeding the rate limit: modify
DATA_DIR/pyenv/lib/python2.7/site-packages/github/Requester.py according to https://github.com/PyGithub/PyGithub/pull/378/files
- a few images for the webapp. Please unzip this archive in DATA_DIR/local/static (create the latter directory if needed; see here for more info).
- a setup of Algolia:
- Create an account on algolia.com and choose the free plan. Skip the tutorial that offers to import data: this project will import data.
- Browse to “API keys” and copy-paste the Application ID and Admin API Key into the Algolia dataset settings. While you are at it, copy-paste the Search-Only API Key into the JS tab of the webapp.
- On algolia.com, create a new index named “issues”.
- Push data to Algolia: build the dataset “algolia_unpartitioned_for_initial_import” with build mode “recursive”. Note that downloading issues from GitHub is slow, roughly 5,000 issues per hour.
- Optionally, schedule a daily data update: in the scheduler, create a new job schedule on the dataset “algolia” to build the partition “PREVIOUS_DAY”, daily at 3:10, build mode “force-rebuild”.
- Reload the page algolia.com to see the fresh data, then configure the index: click “ranking”, add this in “attribute to index”: title, texts, objectID, state, _tags, user, milestone, assignee. In Custom ranking, add updated_at_ts.
- Finally, configure the Display tab of Algolia: in “Attributes for faceting”, enter _tags, assignee, created_at_ts, milestone, state, updated_at_ts, user.
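To make the attribute names above concrete, here is a minimal sketch of what a record pushed to the “issues” index could look like. The mapping from a raw GitHub issue is our assumption for illustration, not the project's exact recipe:

```python
def issue_to_record(issue):
    """Map a (simplified) GitHub issue dict to an Algolia record carrying
    the attributes configured above. Input field names are assumptions."""
    return {
        "objectID": str(issue["number"]),      # Algolia's mandatory unique id
        "title": issue["title"],
        "texts": issue.get("texts", []),       # body + comments, pre-formatted
        "state": issue["state"],
        "_tags": [label["name"] for label in issue.get("labels", [])],
        "user": issue["user"]["login"],
        "milestone": (issue.get("milestone") or {}).get("title"),
        "assignee": (issue.get("assignee") or {}).get("login"),
        "updated_at_ts": issue["updated_at_ts"],  # numeric, for custom ranking
    }
```

Note that `updated_at_ts` is numeric so Algolia can use it as a custom ranking criterion, while `_tags` and the other listed attributes drive search and faceting.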
Explore This Sample Project
So you can understand what we're trying to build, let's start by exploring the finished project. It's a search page. You can type in a few words to look for issues, and refine your search with the facets on the left.
The flow is linear. We started by downloading all the issues from the Scikit repository on GitHub. You can obviously choose another repository, even a private one to which you have access. We do this with Dataiku DSS's GitHub plugin, which provides a custom dataset. This is the input dataset at the left of the flow.
- This dataset does not store the data (just a sample for preview); it merely provides the connection. So a sync recipe copies this data into a managed local dataset.
- Then a Python recipe formats the comments.
- The resulting dataset goes through a preparation script to rename some columns and format dates according to Algolia's needs. The result is partitioned thanks to the “Redispatch partitioning according to input columns” option. (We partition according to the “updated_at_partition_id” column).
- This final dataset is uploaded to Algolia by a sync recipe, which every day uploads just the tickets updated the day before. Here as well, the output dataset does not store the data, it only provides the connection.
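The redispatch step above hinges on the “updated_at_partition_id” column. As an illustration, it can be derived from GitHub's ISO-8601 timestamps like this (the “YYYY-MM-DD” partition id format is our assumption for a daily-partitioned dataset):

```python
from datetime import datetime

def updated_at_partition_id(updated_at):
    """Turn a GitHub ISO-8601 timestamp ('2016-05-01T12:34:56Z') into a
    daily partition id; the 'YYYY-MM-DD' format is an assumption."""
    return datetime.strptime(updated_at, "%Y-%m-%dT%H:%M:%SZ").strftime("%Y-%m-%d")
```

With such a column in place, the “Redispatch partitioning according to input columns” option routes each row to its day's partition, so the daily sync only has to rebuild and upload the “PREVIOUS_DAY” partition.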
Finally, here are some graphs. We could compute many more statistics on this dataset; this is just an illustration.