Explaining AI: Interpreting Models Using SHAP
The possibilities of machine learning models are almost limitless, ranging from predicting customer churn and determining the right discount to keeping your machines running with predictive maintenance. The outcome of a machine learning model is often straightforward: a zero or one in a classification problem, a continuous number in a regression problem. However, companies might also be interested in understanding how ‘the number’ was obtained. Depending on your choice of machine learning algorithm, this can be easy or very hard. A distinction is typically made between white box models and black box models. While white box models are often easily interpretable, we need additional tools to explain the black box models. The tool I will discuss today is SHAP, a state-of-the-art explainable AI tool that combines parts of several other well-established tools and borrows its theoretical background from cooperative game theory.
White vs Black Box Models
When working with machine learning algorithms, a distinction is often made between white box and black box models. We separate these two types using the interpretability of the model: the degree to which a human being can understand the decisions made by the algorithm/model. Among the white box models we have the “simpler” algorithms such as decision trees, logistic or linear regression and nearest neighbors. These algorithms are relatively simple: most people can interpret a simple decision tree or a nearest-neighbor model. If these algorithms are used, it is easy to spot which variables have a significant influence on the prediction, and also in what way each variable influences it. Below is a simple decision tree for the price of strawberries.

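To illustrate how readable a white box model can be, here is a minimal sketch that trains a shallow decision tree and prints its rules. It uses scikit-learn’s built-in diabetes dataset purely as a stand-in, since the strawberry data from the figure above is not available.

```python
from sklearn.datasets import load_diabetes
from sklearn.tree import DecisionTreeRegressor, export_text

# Stand-in data: scikit-learn's diabetes dataset (the strawberry data is not available).
data = load_diabetes(as_frame=True)
X, y = data.data, data.target

# A shallow tree stays human-readable: every prediction is a short chain of if/else rules.
tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))
```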
While the interpretability of such models is a big plus, their accuracy sometimes falls short. To increase predictive performance, more complex algorithms were developed. An example of such a black box model is Random Forest, which, as the name suggests, is closely tied to decision trees: it generates a multitude of decision trees and combines them to get a better prediction. Interpreting one decision tree is fairly simple; interpreting 1000 different decision trees and assimilating all those interpretations into one conclusion is a lot harder. Fortunately, researchers have developed numerous tools to interpret black box models (Partial Dependence Plots, Feature Importance, ICE plots, …). The method I will be discussing is SHAP (SHapley Additive exPlanations).
What is SHAP?
SHAP is a method that leans heavily on cooperative game theory. The basis of this technique are the Shapley values. These values measure the contribution of each variable given a certain coalition (a subset of the variables). An example might clarify this. Suppose we have a model to predict the probability that a football player will play in the upcoming match, with the following variables: number of goals in the last 5 matches, intensity at training last week and whether the coach likes the player or not. We now want to see what the influence of the training intensity is. We pick one entry where the player scored 5 goals in the last 5 matches, the coach likes him and his intensity was low. When we predict the probability using these variables, we get a probability of 60%. We now replace the intensity with a value from another entry, giving the following coalition: he scored 5 goals, the intensity was high and the coach likes him, which results in a probability of 80% that the player will play. We can see that the intensity played a part in the increase in probability.

This was an example for one coalition. To obtain the Shapley values, we have to do this for all possible coalitions and average out the contributions, so with Shapley values you get one value per feature. SHAP uses these values as the building blocks of an additive (linear) explanation model, allowing for a lot more interpretable plots, which I will now demonstrate.
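To make the averaging over coalitions concrete, here is a minimal sketch that computes exact Shapley values for a toy version of the football example. The feature names, the hand-made scoring function and all the numbers in it are hypothetical illustrations, not taken from a real model.

```python
from itertools import permutations

FEATURES = ["goals_last_5", "training_intensity", "coach_likes_player"]

# Hypothetical toy "model": probability that the player plays, given which
# features take the instance's (favourable) value instead of the baseline.
def predict(active_features):
    prob = 0.30
    if "goals_last_5" in active_features:        prob += 0.25
    if "training_intensity" in active_features:  prob += 0.20
    if "coach_likes_player" in active_features:  prob += 0.15
    return prob

# Exact Shapley values: average each feature's marginal contribution over
# every possible order in which the features can join the coalition.
shapley = {f: 0.0 for f in FEATURES}
orderings = list(permutations(FEATURES))
for order in orderings:
    coalition = set()
    for feature in order:
        shapley[feature] += predict(coalition | {feature}) - predict(coalition)
        coalition.add(feature)
shapley = {f: total / len(orderings) for f, total in shapley.items()}

print(shapley)
# The contributions add up to predict(all features) - predict(empty coalition),
# which is the additive property SHAP builds on.
```

Because this toy model is purely additive, each feature’s Shapley value simply equals its own increment; for a real model the marginal contribution of a feature depends on which other features are already in the coalition, which is exactly what the averaging accounts for.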
SHAP in Action
The Python package “shap” makes it fairly easy to use SHAP values in Python. Documentation on this package can be found here. We will now look at some of the plots from the package, based on the California Housing dataset which is available in the scikit-learn library. The dependent variable is the house price, so we are dealing with a regression problem. To get the SHAP values, you fit a machine learning model to your data and then fit a SHAP explainer to that model, which calculates the SHAP values. In this example, a RandomForestRegressor was used as the machine learning model. As mentioned before, this model generates a multitude of decision trees and combines their outcomes to get a prediction. For the code I refer to this GitHub repository.
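The full code lives in the repository linked above; the snippet below is only a minimal sketch of the setup described there, with hyperparameters (n_estimators, the sample size) chosen purely for illustration.

```python
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

# Load the California Housing data as a DataFrame so feature names show up in the plots.
data = fetch_california_housing(as_frame=True)
X, y = data.data, data.target

# Fit the black box model we want to explain.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# For tree-based models, shap.Explainer dispatches to the fast TreeExplainer.
explainer = shap.Explainer(model)

# Explaining a sample keeps the computation quick; the full dataset also works, just slower.
shap_values = explainer(X.sample(1000, random_state=0))
```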
A great plus of using SHAP values is that you can check global explanations as well as local explanations. With global explanations, we check the influence of a variable at the level of the whole dataset, while with local explanations, we investigate the influence of a variable on one specific instance of the dataset. For the dataset used here, local explanations look at a single house, while global explanations look at all the houses. We will first tackle the global explanation. The picture below shows a bar plot with the mean absolute SHAP values for the different variables.

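Assuming the shap_values object from the sketch above, a global bar plot like this one is a single call:

```python
# Global view: mean absolute SHAP value per feature across the explained sample.
shap.plots.bar(shap_values)
```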
We can clearly see that MedInc, Latitude and Longitude have a lot of influence on the outcome of the predictions. However, this does not yet show us in which direction each variable influences the prediction. For this, we can use the summary plot.

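With the same shap_values object, the summary plot (called a beeswarm plot in the package’s newer plotting API) is also a single call:

```python
# Summary / beeswarm plot: one dot per observation per feature,
# position on the x-axis = SHAP value, colour = the feature's own value.
shap.plots.beeswarm(shap_values)
```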
The summary plot gives us more information about how each feature influences the dependent variable. Let’s take the MedInc variable as an example. While the x-axis depicts how strongly the feature influences the prediction, the colors indicate for what value of the variable the prediction gets influenced. We can see that a high value for MedInc (red) has a positive influence on the prediction. If we look at Latitude and Longitude, high values for these variables have a negative influence on the prediction. To make things a bit more concrete: if we have a high value for MedInc and low values for Latitude and Longitude, the prediction will be higher than if we had a low MedInc value and high values for Latitude and Longitude.
As mentioned, we can also use SHAP values to inspect single observations. This is called a local explanation. Below you can find the bar plot for a single observation.

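For a single observation, the same bar plot function can be applied to one row of the shap_values object; which row you pick is of course arbitrary here, and a waterfall plot is a common alternative view.

```python
# Local view: SHAP values for a single observation (here simply the first explained row).
shap.plots.bar(shap_values[0])

# Alternative local view that also shows the base value and the final prediction.
shap.plots.waterfall(shap_values[0])
```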
Here we can see that Latitude played the biggest part in determining the prediction for this single instance. Using this technique, we can investigate “strange” or unexpected predictions and better understand why the model predicted them that way. It is useful for checking anomalies and for getting a better understanding of the data in general.
Conclusion
SHAP is an extremely useful tool to interpret your machine learning models. Using this tool, the tradeoff between interpretability and accuracy becomes less important, since we can interpret even the most complex black box models. Furthermore, it provides a broad range of plots to fully grasp the way your machine learning model acts. You can check global trends as well as investigate specific data points.

Jarne Demunter
Jarne is a data analytics consultant who likes to code ETL pipelines in Spark and build machine learning models, and who is also passionate about sports and music.