Tutorial: From Lab to Flow


Now that you have prepared Haiku T-Shirt’s order logs, you are ready to join the customer data with it.

In this tutorial, we will join Haiku T-Shirt’s customer data with the prepared orders data. We will then enrich the combined data using the interactive Lab for further analysis.

Create your project

From the Dataiku homepage, click +New Project, select DSS Tutorials from the list, and select 102: From Lab to Flow (Tutorial).

Click on Go to Flow.

In the Flow, you can see the steps used in the previous tutorial to create and prepare the orders dataset. There is also a new dataset, customers, which we will describe in the next section.

Alternatively, you can continue in the same project you started in the Basics tutorial, by downloading a copy of the customers.csv file and uploading it to the project.

Use the Join recipe to enrich customers with orders data

Open the customers dataset by double-clicking on its icon in the Flow.

Each row in this dataset represents a separate customer, and records:

  • the unique customer ID,
  • the customer’s gender,
  • the customer’s birthdate,
  • the user agent most commonly used by the customer,
  • the customer’s IP address,
  • whether the customer is part of Haiku T-Shirts’ marketing campaign.

We are now ready to enrich the customers dataset with information about the aggregate orders customers have made. From the Actions menu choose Join with… from the list of visual recipes.

Select orders_by_customer as the second input dataset. Change the name of the output dataset to customers_orders_joined. Click Create Recipe.

The Join recipe has several steps (shown in the left navigation bar). The core step is the Join step, where you choose how to match rows between the datasets. In this case, we want to match rows from customers and orders_by_customer that have the same value of customerID and customer_id. Note that Dataiku DSS has automatically discovered the join key, even though the columns have different names.

By default, the Join recipe performs a left join, which retains all rows in the left dataset, even if there is no matching information in the right. Since we only want to work with customers who have made at least one order, let us modify the join type.

  • Click on the “Left join” indicator
  • The join details open. Click on “Join type”
  • Change the join type to an “inner join”. An inner join only retains rows of the datasets if they match. This will retain only the customers who have made an order, and remove the others from this analysis.

We want to carry over all columns from both datasets into the output dataset, with the exception of customer_id (since the customerID column from the customers dataset is sufficient).

  • Click on the Selected columns step
  • Uncheck the customer_id column in the orders_by_customer dataset

Click Run to execute the recipe. Since you removed a column, DSS warns that a schema update is required. Accept the schema change. The recipe runs. When it is done, click to explore the dataset customers_orders_joined.
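Outside DSS, the same operation can be sketched in pandas (toy data and values invented purely for illustration):

```python
import pandas as pd

# Toy stand-ins for the two datasets (invented values)
customers = pd.DataFrame({
    "customerID": ["c1", "c2", "c3"],
    "gender": ["F", "M", "F"],
})
orders_by_customer = pd.DataFrame({
    "customer_id": ["c1", "c3"],
    "total_sum": [120.0, 450.0],
})

# Inner join on differently named keys, then drop the redundant key column
customers_orders_joined = customers.merge(
    orders_by_customer,
    left_on="customerID",
    right_on="customer_id",
    how="inner",
).drop(columns="customer_id")
# c2 made no order, so the inner join drops that row
```

Note how the inner join silently drops customer c2, who has no matching order, which is exactly the behavior we chose above.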

Types of joins

There are multiple methods for joining two datasets; the method you choose depends on your data and your analysis goals.

  • Left join keeps all rows of the left dataset and adds information from the right dataset when there is a match. This is useful when you need to retain all the information in the rows of the left dataset, and the right dataset is providing extra, possibly incomplete, information.
  • Inner join keeps only the rows that match in both datasets. This is useful when only rows with complete information from both datasets will be useful downstream.
  • Outer join keeps all rows from both datasets, combining rows where there is a match. This is useful when you need to retain all the information in both datasets.
  • Right join is similar to a left join, but keeps all rows of the right dataset and adds information from the left dataset when there is a match.
  • Cross join is a Cartesian product that matches all rows of the left dataset with all rows of the right dataset. This is useful when you need to compare every row in one dataset to every row of another.
  • Advanced join provides custom options for row selection and deduplication for when none of the other options are suitable.

By default, the Join recipe performs a Left join.
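To make the differences concrete, here is a small pandas sketch (invented toy data) showing how the row counts vary by join type:

```python
import pandas as pd

left = pd.DataFrame({"key": [1, 2], "l": ["a", "b"]})
right = pd.DataFrame({"key": [2, 3], "r": ["x", "y"]})

left_j  = left.merge(right, on="key", how="left")   # keys 1, 2    -> 2 rows
inner_j = left.merge(right, on="key", how="inner")  # key 2 only   -> 1 row
outer_j = left.merge(right, on="key", how="outer")  # keys 1, 2, 3 -> 3 rows
right_j = left.merge(right, on="key", how="right")  # keys 2, 3    -> 2 rows
cross_j = left.merge(right, how="cross")            # 2 x 2 pairs  -> 4 rows
```

Only key 2 appears in both inputs, so the inner join keeps a single row while the outer join keeps all three keys, filling the gaps with missing values.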

Discover the Lab

So far you have learned how datasets are created with recipes, and how this builds a data pipeline in the Flow. In this tutorial, you are going to see how to perform preliminary work on data outside the Flow, in a dedicated environment called the Lab.

Let us see which tools are available in the Lab. Open the customers_orders_joined dataset and click Lab. The Lab window opens.

Key concept: Lab

The Lab is a place for drafting your work, whether it is preliminary data exploration and cleansing or machine learning model creation. The Lab environment contains:

  • the Visual Analysis tool to let you draft data preparation, charts, and machine learning models
  • the Code Notebooks to let you explore your data interactively in the language of your choice

Note that some tasks can be performed both in the Lab environment and using recipes in the Flow. Here are the main differences, and how to use them complementarily:

  • A Lab environment is attached to a dataset in the Flow, allowing you to organize your drafts and preliminary work easily without overcrowding the Flow with unnecessary items. The Flow is mostly meant to hold stable work that will be reused in the future by you or your colleagues.
  • When working in the Lab, the original dataset is never modified and no new dataset is created. Instead, you interactively visualize the results of the changes you perform on the data (most of the time on a sample). The speed of this interactivity gives you a comfortable space to quickly assess what your data contains.
  • Once you're satisfied with your labwork, you can deploy it to the Flow as a code or visual recipe. The newly created recipe and the associated output dataset are appended to the original dataset's pipeline, making all your labwork available for future data reconstruction or automation.

In this tutorial, we are going to use the Visual analysis tool of the Lab.

Click on the New button below Visual analysis. You will be prompted to specify a name for your analysis. Let’s leave the default name Analyze customers_orders_joined for now.

The Visual analysis has three main tabs:

  • Script for interactive data preparation
  • Charts for creating charts
  • Models for creating machine learning models.

In this tutorial we are going to cover the first two. Modeling will be the topic of the next tutorial.

Interactively prepare your data

First, let’s parse the birthdate column. We’ve done this before, so it’s easy: open the column dropdown and select Parse date, then clear the Output column in the script step so that the parsed date simply replaces the original birthdate column.

With a customer’s birthdate, and the date on which they made their first order, we can compute their age on the date of their first order. From the birthdate column dropdown, choose Compute time since. This creates a new Compute time difference step in the Prepare script, and we just need to make a couple edits.

  • Choose “until” to be “Another date column”
  • Choose the time until to be the first_order_date column
  • Change the output time unit to years
  • Then edit the output column name to age_first_order.
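Under the hood, this amounts to a date difference expressed in years; an equivalent pandas sketch (with invented dates) could be:

```python
import pandas as pd

# Invented sample values for illustration
df = pd.DataFrame({
    "birthdate": ["1980-06-15", "1995-01-02"],
    "first_order_date": ["2015-06-20", "2016-03-01"],
})
df["birthdate"] = pd.to_datetime(df["birthdate"])
df["first_order_date"] = pd.to_datetime(df["first_order_date"])

# Time difference in years (365.25-day years, a common simple convention)
df["age_first_order"] = (
    (df["first_order_date"] - df["birthdate"]).dt.days / 365.25
)
```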

From the new column age_first_order header dropdown, select Analyze in order to see if the distribution of ages looks okay. As it turns out, there are a number of outliers with ages well over 120. These are indicative of bad data. Within the Analyze dialog, choose to clear values outside 1.5 IQR (interquartile range). This will set those values to missing. Now the distribution looks more reasonable, but there are still a few suspicious values over 100. Let’s alter the script step to change the upper bound to 100.
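The 1.5 IQR rule used by Analyze can be reproduced in pandas as a rough sketch (invented sample values; DSS applies this for you in the script step):

```python
import pandas as pd

ages = pd.Series([22, 28, 30, 33, 35, 38, 41, 350])  # 350 is obviously bad data

# Bounds at 1.5 interquartile ranges beyond the quartiles
q1, q3 = ages.quantile(0.25), ages.quantile(0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Set outliers to missing rather than dropping the rows
cleaned = ages.where(ages.between(lower, upper))
# Then tighten the upper bound to 100 by hand, as in the tutorial
cleaned = cleaned.where(cleaned <= 100)
```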

Lastly, now that we’ve computed age_first_order, we won’t need birthdate or first_order_date anymore, so let’s remove them in the script. Open the column dropdown and select Delete. This creates a new Remove step in the Prepare script.

Let us now enrich the data by processing the user_agent and ip_address columns.

Leveraging the user agent

The user_agent column contains information about the browser and operating system, and we want to pull this information out into separate columns so that it’s possible to use it in further analyses.

Dataiku recognizes that the user_agent column carries information about the User Agent, so when you open the dropdown on the column heading, you can simply select Classify User Agent. This adds a new step to the Prepare script and 7 new columns to the dataset. For this tutorial, we are only interested in the user_agent_brand, which specifies the browser, and the user_agent_os, which specifies the operating system, so we will remove the columns we don’t need. The column view makes it easy to remove several columns at once.
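DSS's Classify User Agent processor does this parsing for you; purely as an illustration of the idea (and not the actual DSS logic), a naive substring-based classifier might look like:

```python
def classify_user_agent(ua: str) -> dict:
    """Naive brand/OS extraction from a user agent string (illustrative only)."""
    ua_lower = ua.lower()
    # Order matters: a Chrome UA also contains "Safari", and Edge contains "Chrome"
    if "edg" in ua_lower:
        brand = "Edge"
    elif "chrome" in ua_lower:
        brand = "Chrome"
    elif "firefox" in ua_lower:
        brand = "Firefox"
    elif "safari" in ua_lower:
        brand = "Safari"
    else:
        brand = "Other"
    if "windows" in ua_lower:
        os = "Windows"
    elif "mac os" in ua_lower or "macintosh" in ua_lower:
        os = "MacOS"
    elif "linux" in ua_lower:
        os = "Linux"
    else:
        os = "Other"
    return {"user_agent_brand": brand, "user_agent_os": os}

ua = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 Chrome/45.0 Safari/537.36"
result = classify_user_agent(ua)
```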

Leveraging the IP address

Dataiku recognizes the ip_address column as containing values that are IP addresses, so when you open the dropdown on the column heading, you can select Resolve GeoIP. This adds a new step to the script and 7 new columns to the dataset that tell us about the general geographic location of each IP address. For this tutorial, we are only interested in the country and GeoPoint (approximate longitude and latitude of the IP address), so in the script step, deselect Extract country code, Extract region, and Extract city. Finally, delete ip_address.

Now we want to flag the customers generating a lot of revenue. Let’s say that customers with a total order value in excess of 300 are considered “high revenue” customers. Add a new Formula step to the script. Type high_revenue as the output column name. Click the Edit button to open the expression editor, and then type if(total_sum>300, "True", "False") as the expression. Dataiku validates the expression. Hit Save.
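For reference, the DSS formula above behaves like this pandas expression (invented sample values):

```python
import pandas as pd

df = pd.DataFrame({"total_sum": [120.0, 450.0, 300.0]})

# Mirror the DSS formula if(total_sum > 300, "True", "False"):
# strictly greater than 300, mapped to string labels
df["high_revenue"] = df["total_sum"].gt(300).map({True: "True", False: "False"})
```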

Visualize your data with charts

Visualization is often the key to exploring your data and getting valuable insights, so we will now build some charts on top of the enriched data.

The visualization screen to build charts is available by clicking on the Charts tab of the Analysis.

Key concept: Charts in analysis

We have already used charts on a dataset in the first tutorial. When you create charts in a visual analysis, the charts actually use the preparation script that is being defined as part of the visual analysis.

In other words, you can create new columns or clean data in the Script tab, and immediately start graphing this new or cleaned data in the Charts tab. This provides a very productive and efficient loop to view the results of your preparation steps.

Since we extracted the browsers used by customers from user_agent, it’s natural to want to know which browsers are most popular. A common way to visualize this is with a pie or donut chart.

Click the chart type tool and select Donut. Click and drag user_agent_brand to the By box, and Count of records to the Show box.

This shows that nearly 3/4 of customers who have placed orders use Chrome. While the donut chart does a nice job of showing the relative share of each browser to the total, we’d also like to include the OS in the visualization.

Click the chart type tool and select Stacked bars chart. Count of records and user_agent_brand are automatically carried over into the bar chart. Click and drag user_agent_os to the And box.

Adding OS gives us further insight into the data. As expected, IE and Edge are only available on Windows and Safari only on MacOS; what is enlightening is that there are approximately double the number of customers using Chrome on MacOS as Safari and Firefox combined. There is a similar relationship between use of Chrome versus Firefox on Linux.

Sales by age and campaign

There are a number of insights we can glean from the combined Haiku T-shirts data that we couldn’t from the individual datasets. For a start, let’s see if there is a relationship between a customer’s age, whether that customer was part of a Haiku T-Shirt campaign, and how much they spend.

Click +Chart at the bottom center of the screen. From the chart type tool, select the scatter category and the Scatter Plot chart. Select age_first_order as the X column, and total_sum as the Y column. Select campaign as the column to color bubbles in the plot. Select count as the column to set the size of bubbles.

The default bubble size is too large, and the bubbles overlap. From the size dropdown, change the Base radius from 5 to 1.

The scatterplot shows that older customers, and those who are part of the campaign, tend to have spent the most. The bubble sizes show that some of the moderately valued customers are those who have made a lot of small purchases, while others have made a few larger purchases.

Sales by geography

Since we extracted locations from ip_address, it’s also natural to want to know where Haiku T-Shirt’s customers come from. We can visualize this with a map.

Click +Chart. Select the geographical category and the Scatter Map plot. Select ip_address_geopoint as the Geo field.

This gives us an initial view of where orders come from. We can gain more knowledge about the data by adding information about order totals and whether the customer is part of Haiku T-Shirts’ marketing campaign to the map.

Select campaign as the column to color bubbles by. Select total_sum as the column to set the size of bubbles.

The default bubble size is too large, and the bubbles overlap. From the size dropdown, change the Base radius from 5 to 2. This looks much better, and you can quickly get a feel for which customers are located where.

If we then want to focus on the largest sales, drag total_sum to the Filters box, then type 300 as the lower bound. This filters all customers who have spent less than 300 from the map.

While you work in charts in an Analysis, keep in mind that these are built on a sample of your data. You can change the sample in the “Sampling and Engine” tab, but since DSS has to reapply the latest preparation each time, it will not be very efficient for very large datasets.

Also, charts in an analysis cannot be shared with your team through the dashboard; only charts built on datasets can be shared there. If you want business insights to share with your team, you will first need to deploy your script.

Deploy the labwork in the Flow

The labwork you have done so far (the cleaning of the data and the charts) was performed on a sample of the data, which allowed Dataiku DSS to remain highly interactive. It is now time to deploy the work done in the Lab to the Flow so that:

  • Dataiku applies all the data preparation to the whole input dataset and saves the resulting data in a new dataset.
  • The newly created dataset (and the recipe that builds it) are added to the lineage of the Flow, so that you can easily reconstruct it in the future (for example, when the input data changes).

To do this, go to the top right corner of the screen and click Deploy Script.

A popup appears, offering to deploy the script as a Preparation recipe. Note that, by default, charts created in the Lab are carried over to the new dataset so that you can view them on the whole output data rather than on a sample. Rename the output dataset to customers_labeled.

Click Deploy to create the recipe and the output dataset. Save the recipe and go to the flow. You can see the Preparation recipe you just deployed from the Lab, and the new output dataset.

Open the dataset and see that it is empty; this is because you have not yet run the recipe to build the full output dataset. In the Actions dropdown on the top right, click Build; this opens a dialog that asks whether you want to build just this dataset (Non-recursive) or reconstruct the datasets leading to it (Recursive). Since the input dataset is up to date, Non-recursive is sufficient. Click Build Dataset.

While the job executes, you are taken to the detailed log. When the job completes, click Explore output dataset to view it. The preparation script has now been applied to all the data.

Let’s configure the stacked bar chart to use the entire dataset. Click Sampling & Engine. Uncheck Use same sample as Explore. Select No sampling (whole data) as the sampling method.

Chart Engines and Sampling

To display charts, DSS needs to compute the values to display. Computation is performed by an engine. Depending on the kind of dataset, several different engines can be used.

By default, DSS uses a built-in engine that preprocesses the data for high visualization performance. This built-in engine is efficient for datasets up to a few million records.

When working with huge datasets, it is advisable to store datasets so that DSS can push down all these computations to an external processing engine. This is the case when the dataset is stored in a SQL database, or when you are able to use Impala or Hive. See Sampled vs. Complete data for more information on in-database chart creation.

Learn more

Congratulations, now that the orders and customers datasets are joined, cleaned, and prepared, you’re ready to build a model to predict customer value!

Proceed to Tutorial: Machine Learning to learn how.