Interpretability of ML models – application to domain categorization

When training machine learning models, we often want not only to obtain a model with high accuracy but also to know how important the individual features are for the model's predictions.

There are several reasons why this is of interest. Let us say that we are building a regression ML model which predicts office prices as a function of various features, such as location, area, etc. If, after training, it turns out that a certain feature A affects the price the most, this can give important information to real estate development companies: it tells them which features most drive customer price expectations in a given area or class of offices.

Feature importance can thus allow us to better understand the underlying problem.

The second reason for determining feature importance is feature engineering for machine learning models. Each feature that we include in our machine learning model increases both the memory footprint of the model and its inference latency (the time the model needs to produce a prediction for a given instance).

To build an efficient and fast ML model, we thus want to use only the features that matter for the model's predictions.

The permutation importance method can help us better understand why an ML model makes a specific classification, here in the context of a domain categorization model.

Domain categorization is the task of assigning classes or categories to domains based on the texts of their webpages. It is also known as the website classification problem.
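
To make the task concrete, here is a tiny illustrative sketch of domain categorization as text classification with scikit-learn. The texts, categories and model choice are invented for illustration and are not the pipeline used in this project:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy data: a few webpage texts and their categories
texts = [
    "buy shoes online free shipping and easy returns",
    "latest football scores and match reports",
    "cloud hosting and managed servers for developers",
]
labels = ["e-commerce", "sports", "technology"]

# TF-IDF features over the webpage text, followed by a simple classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

print(model.predict(["cheap shoes with free shipping"]))  # likely "e-commerce" on this toy data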

Interpretability of ML models

When dealing with the interpretability of ML models, there are two groups of approaches. The first approach is to use an ML model which is naturally interpretable. Examples of such naturally interpretable ML models are linear regression, logistic regression and decision trees. In linear regression, for example, the absolute values of the feature coefficients (on standardized features) provide information about their importance.
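
As a minimal sketch of this first approach (synthetic data and hypothetical feature names, not taken from the project), one can fit a linear regression and read the importance directly from the coefficient magnitudes:

from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Synthetic regression data; the feature names below are made up for illustration
X, y = make_regression(n_samples=500, n_features=4, noise=10.0, random_state=0)
feature_names = ["location_score", "area_m2", "floor", "age_years"]

# Standardize features so the coefficient magnitudes are comparable
X_scaled = StandardScaler().fit_transform(X)

model = LinearRegression().fit(X_scaled, y)
for name, coef in zip(feature_names, model.coef_):
    print(f"{name}: |coef| = {abs(coef):.3f}")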

The second approach to ML interpretability is to use whatever ML model is appropriate for the problem at hand, regardless of its natural interpretability, and to leave interpretability to special methods designed just for this purpose. These so-called model-agnostic interpretation methods are highly flexible: you do not have to worry about the specifics of each model you use, and they also allow you to easily compare several models that one may consider for the ML problem.

A great article on model-agnostic methods is the following one (one of its authors is, by the way, C. Guestrin, who was also one of the founders of the ML library Turi Create that I mentioned for the recommender project):

https://arxiv.org/abs/1606.05386

As we are using a very complex XGBoost ML model, I will focus here on model-agnostic methods.

One method often used for this purpose is the mean decrease in impurity (Gini importance), although strictly speaking it is specific to tree-based models rather than model-agnostic. In scikit-learn it is implemented for RandomForestClassifier and exposed through the feature_importances_ attribute. But it has been known for a long time that this approach has several problems, especially when dealing with features that differ by orders of magnitude or in their number of categories. This is an excellent article describing the problems:

https://link.springer.com/article/10.1186%2F1471-2105-8-25
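
For reference, this is how the impurity-based importances look in practice. The sketch below uses scikit-learn's built-in wine dataset purely as a stand-in, not the data from this project:

from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier

data = load_wine()
X, y = data.data, data.target

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Mean decrease in impurity (Gini) importances; these are the values that can be
# biased toward high-cardinality or high-variance features, as the article above discusses
for name, imp in sorted(zip(data.feature_names, clf.feature_importances_),
                        key=lambda t: t[1], reverse=True)[:5]:
    print(f"{name}: {imp:.3f}")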

A better approach than the above is the so-called permutation importance method, which is the one that I used.
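
The idea behind permutation importance is to shuffle one feature at a time on a held-out set and measure how much the model's score drops. Below is a minimal sketch using scikit-learn's permutation_importance on the same stand-in dataset as above; it is not my exact pipeline, and the same call works for any fitted estimator with a scikit-learn compatible interface, including an XGBoost classifier:

from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_wine()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# n_repeats controls how many times each feature is shuffled;
# the importance is the mean drop in the test score across the repeats
result = permutation_importance(clf, X_test, y_test, n_repeats=10, random_state=0)

for name, mean, std in sorted(zip(data.feature_names,
                                  result.importances_mean,
                                  result.importances_std),
                              key=lambda t: t[1], reverse=True)[:5]:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")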