ML-Fundamentals - Decision Trees

Introduction

In this exercise you will train and evaluate a decision tree. In contrast to the previous exercises on univariate linear regression, multivariate linear regression, logistic regression and the bias-variance tradeoff, you will not implement the algorithm from scratch using numpy. Instead you will use two python packages built on top of numpy, namely pandas and scikit-learn, both of which are widely used in the machine learning community. The steps can be broken down into:

  1. Load the data from a csv file into a pandas DataFrame object
  2. Preprocess the data
  3. Train a decision tree and visualize it
  4. Do hyperparameter optimization by dividing the dataset into a fixed training set and a fixed validation set
  5. Do hyperparameter optimization by crossvalidation
  6. Do hyperparameter optimization by gridsearch

Afterwards, show that you understand what computations the scikit-learn package performs for you by manually computing:

  1. The entropy for one node
  2. The information gain for one node

Note:

As this exercise heavily focuses on the usage of the high-level APIs pandas and scikit-learn, reading their documentation (just the parts corresponding to the current task) is strongly recommended.

Requirements

Knowledge

You should have a basic knowledge of:

  • Decision Trees
  • Entropy
  • Information gain
  • Crossvalidation

Python Modules

By deep.TEACHING convention, all python modules needed to run the notebook are loaded centrally at the beginning.

import numpy as np
import pandas as pd

# from sklearn.preprocessing import Imputer  # removed in newer scikit-learn versions; use instead:
from sklearn.impute import SimpleImputer as Imputer

from sklearn import tree

#old: from sklearn.externals.six import StringIO
from six import StringIO

from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from graphviz import Source

Teaching Content

Decision Trees for Classification

In a decision tree the data is split at each node according to a decision rule. This corresponds to nested if-then-else rules. In the if-part of such a rule a decision is made based on a feature of the data record.

We will use the scikit-learn implementation. For this implementation the features must be binary or have (at least) an ordinal characteristic. If a feature is, e.g., nominal with many values, it must be converted to a set of binary (one-hot-coded) features.

The splitting rules in the scikit-learn implementation are binary and based on a threshold, e.g.:

  • if $ x_6 \leq 2.14 $ then left subbranch, else right subbranch.
  • binary features must be coded as 0/1, so the rule becomes: if $ x_2 \leq 0.5 $ then left subbranch, else right subbranch.
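For example, pandas can be used to one-hot-code a nominal feature into such 0/1 indicator features. A minimal sketch with a made-up column (hypothetical data, not part of the exercise):

import pandas as pd

toy_df = pd.DataFrame({'Embarked': ['S', 'C', 'Q', 'S']})  # made-up nominal feature
# get_dummies creates one 0/1 indicator column per nominal value
print(pd.get_dummies(toy_df, columns=['Embarked']).astype(int))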

In the leaves of the tree the (class) predictions are made. There are two possibilities for such an inference:

  • hard assignment: For the data records which end up in a leaf, predict the majority class of the training data that ended up in that leaf.
  • soft assignment: Assign class probabilities according to the distribution of the classes among the training data which ended up in that leaf.
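In scikit-learn these two kinds of inference correspond to the predict and predict_proba methods of a trained classifier. A minimal sketch on a made-up toy data set (not the Titanic data):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

toy_x = np.array([[0.], [1.], [2.], [3.], [4.], [5.]])  # made-up single feature
toy_y = np.array([0, 0, 0, 1, 1, 1])                    # made-up binary target

toy_clf = DecisionTreeClassifier(max_depth=1).fit(toy_x, toy_y)
print(toy_clf.predict([[1.5]]))        # hard assignment: majority class of the leaf
print(toy_clf.predict_proba([[1.5]]))  # soft assignment: class distribution of the leaf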

As an example of a decision tree we will learn the following tree from the titanic data set:

A full explanation of the tree will be given later. Here just look at the decision rules (first line of the inner nodes) and at the last line of the leaves. In each leaf you see an array (value) with counts of the different targets for the training data: [number_died, number_survivors].

Learning

Finding a tree that splits the training data optimally is NP-hard. Therefore, a greedy strategy is often used:

To build up a decision tree the algorithm starts at the root of the tree. The feature and the threshold that split the training data best (with respect to the classes) are chosen. The whole tree is then built up iteratively by such splitting rules.

There are different criteria for measuring the "separation (split) quality". The most important ones are:

  • Gini Impurity
  • Information Gain

In this tutorial we concentrate on the information gain.

Information Gain as Splitting Criterion

The entropy with respect to the target class variable $ y $ of a training data set $ \mathcal D $ is defined as:

$ H(y, \mathcal D) = - \sum_{y \in \mathcal Y} p(y|\mathcal D) \log_2 p(y|\mathcal D) $ with the domain of the target values $ \mathcal Y = \{t_1, t_2, \dots \} $.

The probabilities are estimated by $ p(y=t_i|\mathcal D) = |\mathcal D^{(y=t_i)}| / |\mathcal D| $

with the total number of training records $ |\mathcal D| $ and the number of training records $ |\mathcal D^{(y=t_i)}| $ with target label $ t_i $.
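As a quick numerical illustration with made-up counts (not the Titanic data): a data set with 3 records of one class and 5 of the other has entropy $ -\frac{3}{8}\log_2\frac{3}{8} - \frac{5}{8}\log_2\frac{5}{8} \approx 0.954 $. The same computation with numpy:

import numpy as np

counts = np.array([3., 5.])        # made-up class counts
p = counts / counts.sum()          # estimated class probabilities p(y=t_i|D)
print(-np.sum(p * np.log2(p)))     # entropy, approx. 0.954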

At a node, a (binary) split on a feature $ x_k $ is made by the split rule $ x_k \leq v $. As a result there are two data sets $ \mathcal D_0 $ and $ \mathcal D_1 $ for the left and the right branch, respectively.

The feature $ x_k $ and the split value $ v $ are chosen such that they maximize the 'reduction of the entropy', measured by the information gain $ I $:

$ I(y; x_k) = H(y, \mathcal D) - H(y|x_k) = H(y, \mathcal D) - \sum_{j=0}^1 p_j H(y, \mathcal D_j) = H(y, \mathcal D) + \sum_{j=0}^1 \sum_{y \in \mathcal Y} \frac{|\mathcal D_j|}{|\mathcal D|} p(y|\mathcal D_j) \log_2 p(y|\mathcal D_j) $

Note that $ p_{j=0} $ is the estimated probability that a random data record of $ \mathcal D $ has feature value $ x_k \leq v $, which can be estimated by $ {|\mathcal D_0|}/{|\mathcal D|} $ (analogously for $ j=1 $). $ p(y=t_i|\mathcal D_0) $ can also be estimated by the fraction of the counts $ {|\mathcal D_0^{(y=t_i)}|}/{|\mathcal D_0|} $. So the information gain can be computed just with counts:

$ I(y; x_k) = - \sum_{t_i \in \mathcal Y} \frac{|\mathcal D^{(y=t_i)}|}{|\mathcal D|} \log_2 \frac{|\mathcal D^{(y=t_i)}|}{|\mathcal D|} + \sum_{j=0}^1 \sum_{t_i \in \mathcal Y} \frac{|\mathcal D_j^{(y=t_i)}|}{|\mathcal D|} \log_2 \frac{|\mathcal D_j^{(y=t_i)}|}{|\mathcal D_j|} $
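As a quick check of the formula with made-up counts: a node with 8 training records, 4 per class, has entropy $ H(y, \mathcal D) = 1 $. A split into $ \mathcal D_0 $ with class counts $ [3, 1] $ and $ \mathcal D_1 $ with class counts $ [1, 3] $ gives $ H(y, \mathcal D_0) = H(y, \mathcal D_1) = -\frac{3}{4}\log_2\frac{3}{4} - \frac{1}{4}\log_2\frac{1}{4} \approx 0.811 $, so the information gain is $ I(y; x_k) \approx 1 - \frac{4}{8} \cdot 0.811 - \frac{4}{8} \cdot 0.811 \approx 0.189 $.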


Overfitting

Deep decision trees often generalize poorly. The following remedies reduce overfitting:

  • Limitation of the maximal depth of the tree.
  • Pruning with a validation set, either during training (pre-pruning) or after training (post-pruning).
  • Dimensionality reduction (reducing the number of features before training).

Combining decision trees into an ensemble (decision forests) is also often used against overfitting.
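In scikit-learn the first remedy corresponds, e.g., to the max_depth and min_samples_leaf parameters of DecisionTreeClassifier, and tree ensembles are available as RandomForestClassifier. A minimal sketch on made-up data (the parameter values are only placeholders, not a recommendation):

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
toy_x = rng.rand(100, 3)                     # made-up features
toy_y = (toy_x[:, 0] > 0.5).astype(int)      # made-up target

# limit tree complexity to reduce overfitting
shallow_tree = DecisionTreeClassifier(criterion="entropy", max_depth=3,
                                      min_samples_leaf=5).fit(toy_x, toy_y)
# ensemble of trees (decision forest)
forest = RandomForestClassifier(n_estimators=100, max_depth=3).fit(toy_x, toy_y)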

Exercises

Loading the Data

First we read in the titanic data set with pandas.

Task:

  1. Complete the code to load the dataset as pandas DataFrame object.
  2. Either download the titanic-train.csv or alternatively read the url directly into a pandas DataFrame.
### Exercise: Load the csv file as pandas DataFrame
url = 'https://gitlab.com/deep.TEACHING/educational-materials/raw/master/notebooks/data/titanic-train.csv'
train_df = None

The DataFrame class implements (and overwrites) a lot of (standard) methods. Some of them are:

### Not an exercise, just execute the cell

print(train_df.ndim)
print(train_df.shape)
2
(891, 12)
### Not an exercise, just execute the cell

### To view the dataset as pretty table (when using jupyter) 
train_df
PassengerId Survived Pclass Name Sex Age SibSp Parch Ticket Fare Cabin Embarked
0 1 0 3 Braund, Mr. Owen Harris male 22.0 1 0 A/5 21171 7.2500 NaN S
1 2 1 1 Cumings, Mrs. John Bradley (Florence Briggs Th... female 38.0 1 0 PC 17599 71.2833 C85 C
2 3 1 3 Heikkinen, Miss. Laina female 26.0 0 0 STON/O2. 3101282 7.9250 NaN S
3 4 1 1 Futrelle, Mrs. Jacques Heath (Lily May Peel) female 35.0 1 0 113803 53.1000 C123 S
4 5 0 3 Allen, Mr. William Henry male 35.0 0 0 373450 8.0500 NaN S
... ... ... ... ... ... ... ... ... ... ... ... ...
886 887 0 2 Montvila, Rev. Juozas male 27.0 0 0 211536 13.0000 NaN S
887 888 1 1 Graham, Miss. Margaret Edith female 19.0 0 0 112053 30.0000 B42 S
888 889 0 3 Johnston, Miss. Catherine Helen "Carrie" female NaN 1 2 W./C. 6607 23.4500 NaN S
889 890 1 1 Behr, Mr. Karl Howell male 26.0 0 0 111369 30.0000 C148 C
890 891 0 3 Dooley, Mr. Patrick male 32.0 0 0 370376 7.7500 NaN Q

891 rows × 12 columns

### Not an exercise, just execute the cell

### To return the data as numpy array:
train_df.values
### Not an exercise, just execute the cell

### To return just certain features (columns)
train_df[['PassengerId', 'Survived', 'Sex']].values

Data Preprocessing

Feature Transformation

Scikit-learn's decision trees can handle only numeric data. So we must convert the nominal Sex feature.

Task:

Convert the nominal feature into a binary feature. Convert 'male' to 0 and 'female' to 1.

### Exercise: Convert to binary feature
train_df["Sex"]
assert train_df["Sex"].values.sum() == 314

Feature Selection

Survived is the target that we want to predict from the values of the other columns.
But not all of the other columns are helpful for classification. So we choose a feature set by hand and convert the features into a numpy array for scikit-learn.

Tasks:

  1. Query the values of the feature we have chosen as target feature (class) and save them as a numpy array.
  2. Query the values of the features we want to use for training ("Fare", "Pclass", "Sex", "Age", "SibSp") and save them as a numpy array.
columns = ["Fare", "Pclass", "Sex", "Age", "SibSp"]
y = None # Exercise: Extract target feature "Survived" as 1D array
x = None # Exercise: Extract the features we want to use for training as 2D array

Missing Values

There are missing values (NaN) for some examples in the "Age" column. Use the scikit-learn Imputer class (imported above) to replace them by the mean of the column.

print("-----------First 5 with nan BEFORE----------")
nanMask = np.argwhere(np.isnan(x))
print(x[nanMask[0:5,0]])
-----------First 5 with nan BEFORE----------
[[ 8.4583  3.      0.         nan  0.    ]
 [13.      2.      0.         nan  0.    ]
 [ 7.225   3.      1.         nan  0.    ]
 [ 7.225   3.      0.         nan  0.    ]
 [ 7.8792  3.      1.         nan  0.    ]]
# TODO as Exercise
print("-----------First 5 with nan AFTER----------")
print(x[nanMask[0:5,0]])
-----------First 5 with nan AFTER----------
[[ 8.4583  3.      0.         nan  0.    ]
 [13.      2.      0.         nan  0.    ]
 [ 7.225   3.      1.         nan  0.    ]
 [ 7.225   3.      0.         nan  0.    ]
 [ 7.8792  3.      1.         nan  0.    ]]

Training and Visualization

Now we are ready to define and learn a decision tree with the criterion 'entropy', and we restrict the depth of the tree to 3. We use the scikit-learn decision tree module.

Task:

Define and train the model.

clf = None # Exercise: Define and train the classifier
assert clf.criterion == "entropy"
assert clf.max_depth == 3

clf is an instance of a trained decision tree classifier.

The decision tree can be visualized (after training). For this we export it in the graphviz dot format.

graph = Source(tree.export_graphviz(clf, out_file=None
   , feature_names=columns, class_names=['died', 'survived'] 
   , filled = True))
graph
[Text rendering of the tree; the left branch of each node contains the records for which the condition is true]
node 0: Sex <= 0.5 | entropy = 0.961 | samples = 891 | value = [549, 342] | class = died
├─ True:  node 1: Fare <= 26.269 | entropy = 0.699 | samples = 577 | value = [468, 109] | class = died
│   ├─ True:  node 2: Age <= 13.5 | entropy = 0.558 | samples = 415 | value = [361, 54] | class = died
│   │   ├─ True:  leaf 3: entropy = 0.567 | samples = 15 | value = [2, 13] | class = survived
│   │   └─ False: leaf 4: entropy = 0.477 | samples = 400 | value = [359, 41] | class = died
│   └─ False: node 5: SibSp <= 2.5 | entropy = 0.924 | samples = 162 | value = [107, 55] | class = died
│       ├─ True:  leaf 6: entropy = 0.964 | samples = 139 | value = [85, 54] | class = died
│       └─ False: leaf 7: entropy = 0.258 | samples = 23 | value = [22, 1] | class = died
└─ False: node 8: Pclass <= 2.5 | entropy = 0.824 | samples = 314 | value = [81, 233] | class = survived
    ├─ True:  node 9: Fare <= 28.856 | entropy = 0.299 | samples = 170 | value = [9, 161] | class = survived
    │   ├─ True:  leaf 10: entropy = 0.469 | samples = 70 | value = [7, 63] | class = survived
    │   └─ False: leaf 11: entropy = 0.141 | samples = 100 | value = [2, 98] | class = survived
    └─ False: node 12: Fare <= 23.35 | entropy = 1.0 | samples = 144 | value = [72, 72] | class = died
        ├─ True:  leaf 13: entropy = 0.977 | samples = 117 | value = [48, 69] | class = survived
        └─ False: leaf 14: entropy = 0.503 | samples = 27 | value = [24, 3] | class = died

According to the decision tree the main criterion (root node) for survival is the sex of the passenger (if your implementation is correct so far). In the left subtree are the male passengers (sex = 0), in the right subtree the female passengers (sex = 1).

In the leaves the class information is given by a value array. The second value is the number of survivors in that leaf.

For example, the leftmost leaf represents passengers that are male (sex = 0) with fare <= 26.2687 and age <= 13.5: 13 of these boys survived and 2 of them died.

The entropy $ - \sum p_i \log_2 (p_i) $ is also displayed at each node (splitting criterion).

Prediction / Validation

To make predictions with scikit-learn, we must convert the data in the same way as we did with the training data. Then we can use the method:

clf.predict(validation_data)

The depth of the decision tree is a hyperparameter. So the depth should be determined with the help of a validation set or by cross validation.

Task:

  1. Manually split your training data into 75% for training and 25% for validation.
  2. Repeat the training process several times for different tree depths (define a new model each time).
  3. Use clf.predict(validation_data) to predict your validation data.
  4. Manually calculate the accuracy on the validation set each time. Accuracy is defined as $ \frac{\text{number of correct predictions}}{\text{number of all predictions}} $.
  5. From your results: which tree depth seems to be the best?
### Exercise: Your code below to divide x and y
### Exercise: Your code below to train and predict trees with different depths

for depth in range(1,20):
    # Your code here
    pass

Crossvalidation

With a dataset of less than 900 examples (we have 891), the chance of an unlucky split into training and validation set is pretty high. Also, "wasting" 25% of such a small data set for validation is not ideal.

Task:

Now do not manually divide your data and do not manually compute the accuracy. Instead use the function sklearn.model_selection.cross_val_score to search for the best tree depth. Use 10-fold crossvalidation.

### Exercise: Your code below for crossvalidation

for depth in range(1,20):
    # Your code here
    pass

Gridsearch

This already works pretty well for determining one hyperparameter. But imagine you had several hyperparameters. Since the computation (even with 10-fold crossvalidation) is very fast for decision trees, it is no problem at all to try out every possible combination of hyperparameters. You could implement another for-loop for each hyperparameter, but for convenience you should know that scikit-learn also has a built-in function for that.

Task:

Use sklearn.model_selection.GridSearchCV to do a grid search for the best hyperparameters. In addition to the max_depth parameter in the range [1, 20], add the criterion as another hyperparameter with the possible values "entropy" and "gini".

Further useful attributes after the grid search: clf.best_score_ and clf.best_params_

### Exercise: Your code below for grid search

Splitting Criterion Entropy / Information Gain

As you might have noticed, packages like scikit-learn can be very convenient, since they do most of the work for you as long as you provide the data in the right format. Still, it is good advice to know what happens inside such a black box.

Task 1:

Recompute the entropy of the root node, either with pen & paper or with just a few lines of code here using basic mathematical operations.

# Code below
np.testing.assert_almost_equal(entropy_before_split, 0.9607079018756469, decimal=5)

Task 2:

Compute the information gain of the first split (root node). Use the entropy values and numbers of data records (samples) shown in the tree. Again, either with pen & paper or with just a few lines of code here using basic mathematical operations.

# Code below
np.testing.assert_almost_equal(information_gain_split_root_node, 0.2176601066606142, decimal=5)

Task 3:

Compute the information gain of the following split table:

               class 0   class 1
  feature <= v       2        13
  feature > v      359        41

The numbers are the counts of the corresponding data records, e.g. there are 13 data records with target class 1 and feature $ \leq v $.

Write a python function that computes the information gain. The data is given by a numpy array:

As always, you are free to write as many helper functions as you want.

data = np.array([[2.,13.],[359., 41.]])
def information_gain(data):
    raise NotImplementedError()
np.testing.assert_almost_equal(information_gain(data), 0.07765787804281093)
print("Information Gain:", information_gain(data))
Information Gain: 0.07765787804281093

Summary and Outlook

Machine learning is not exclusively about neural networks. In this notebook you trained a decision tree, which chains a set of if-then-else decision rules to make a prediction. You employed two popular APIs, scikit-learn and pandas. You also calculated entropy and information gain, measures that are widely applicable in a host of statistics tasks.


Licenses

Notebook License (CC-BY-SA 4.0)

The following license applies to the complete notebook, including code cells. It does however not apply to any referenced external media (e.g., images).

Exercise: Decision Trees
by Christian Herta, Klaus Strohmenger
is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Based on a work at https://gitlab.com/deep.TEACHING.

Code License (MIT)

The following license only applies to code cells of the notebook.

Copyright 2018 Christian Herta, Klaus Strohmenger

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.