Using scikit-learn to determine the contribution of each feature to a specific class prediction

Posted 2019-01-13 13:56

I am using a scikit extra trees classifier:

model = ExtraTreesClassifier(n_estimators=10000, n_jobs=-1, random_state=0)

Once the model is fitted and used to predict classes, I would like to find out the contributions of each feature to a specific class prediction. How do I do that in scikit-learn? Is it possible with the extra trees classifier, or do I need to use some other model?

4 Answers
爱情/是我丢掉的垃圾
#2 · 2019-01-13 14:27

Update

Being more knowledgeable about ML today than I was 2.5 years ago, I will now say this approach only works for highly linear decision problems. If you carelessly apply it to a non-linear problem you will have trouble.

Example: Imagine a feature for which neither very large nor very small values predict a class, but values in some intermediate interval do. That could be water intake to predict dehydration. But water intake probably interacts with salt intake, since eating more salt allows for a greater water intake. Now you have an interaction between two non-linear features: the decision boundary meanders through your feature space to model this non-linearity, and asking only how much one of the features influences the risk of dehydration is simply the wrong question.

Alternative: Another, more meaningful question you could ask is: if I didn't have this information (if I left out this feature), how much would my prediction of a given label suffer? To answer it, you simply leave out a feature, train a model, and look at how much precision and recall drop for each of your classes; a rough sketch follows below. This still informs about feature importance, but it makes no assumptions about linearity.
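For instance, here is a minimal sketch of that ablation loop (iris stands in for your data; per_class_scores is my own helper, not a library function):

import numpy as np
from sklearn import datasets
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split

iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=0)

def per_class_scores(columns):
    # fit on the given feature columns only; return per-class precision and recall
    model = ExtraTreesClassifier(n_estimators=100, random_state=0)
    model.fit(X_train[:, columns], y_train)
    y_pred = model.predict(X_test[:, columns])
    precision, recall, _, _ = precision_recall_fscore_support(y_test, y_pred)
    return precision, recall

all_features = list(range(X_train.shape[1]))
p_full, r_full = per_class_scores(all_features)

for j in all_features:
    p, r = per_class_scores([c for c in all_features if c != j])
    # how much each class's precision/recall suffers without feature j
    print("feature", j, "precision drop:", p_full - p, "recall drop:", r_full - r)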

Below is the old answer.


I worked through a similar problem a while back and posted the same question on Cross Validated. The short answer is that there is no implementation in sklearn that does all of what you want.

However, what you are trying to achieve is really quite simple: multiply the standardised mean value of each feature within each class by the corresponding element of the model.feature_importances_ array. You can write a simple function that standardises your dataset, computes the mean of each feature across the predictions of each class, and does element-wise multiplication with model.feature_importances_. The greater the absolute resulting values, the more important the feature is to its predicted class, and better yet, the sign tells you whether it is small or large values that are important.

Here's a super simple implementation that takes a data matrix X, a list of predictions Y, and an array of feature importances, and outputs a dict of dicts (printed as JSON below) describing the importance of each feature to each class.

def class_feature_importance(X, Y, feature_importances):
    N, M = X.shape  # N samples, M features
    X = scale(X)    # standardise each feature to zero mean and unit variance

    out = {}
    for c in set(Y):
        # mean standardised value of each feature within class c,
        # weighted element-wise by the global feature importances;
        # cast c to a plain int so json.dumps accepts the keys
        out[int(c)] = dict(
            zip(range(M), np.mean(X[Y == c, :], axis=0) * feature_importances)
        )

    return out

Example:

import numpy as np
import json
from sklearn.preprocessing import scale

X = np.array([[ 2,  2,  2,  0,  3, -1],
              [ 2,  1,  2, -1,  2,  1],
              [ 0, -3,  0,  1, -2,  0],
              [-1, -1,  1,  1, -1, -1],
              [-1,  0,  0,  2, -3,  1],
              [ 2,  2,  2,  0,  3,  0]], dtype=float)

Y = np.array([0, 0, 1, 1, 1, 0])
feature_importances = np.array([0.1, 0.2, 0.3, 0.2, 0.1, 0.1])
#feature_importances = model.feature_importances_

result = class_feature_importance(X, Y, feature_importances)

print(json.dumps(result, indent=4))

{
    "0": {
        "0": 0.097014250014533204, 
        "1": 0.16932975630904751, 
        "2": 0.27854300726557774, 
        "3": -0.17407765595569782, 
        "4": 0.0961523947640823, 
        "5": 0.0
    }, 
    "1": {
        "0": -0.097014250014533177, 
        "1": -0.16932975630904754, 
        "2": -0.27854300726557779, 
        "3": 0.17407765595569782, 
        "4": -0.0961523947640823, 
        "5": 0.0
    }
}

The first level of keys in result are class labels, and the second level of keys are column indices, i.e. feature indices. Recall that large absolute values correspond to importance, and the sign tells you whether it's small (possibly negative) or large values that matter.
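If you prefer a table to nested JSON, the dict converts directly to a pandas DataFrame (assuming pandas is available):

import pandas as pd

# columns are class labels, rows are feature indices
print(pd.DataFrame(result))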

姐就是有狂的资本
#3 · 2019-01-13 14:35

So far I have been checking eli5 and treeinterpreter (both mentioned in other answers), and I think eli5 will be the most helpful, because it has more options, is more generic, and is more actively updated.

Nevertheless, after some time I applied eli5 to a particular case and could not obtain negative contributions for ExtraTreesClassifier. Researching a little more, I realised I was obtaining the importance or weight, as seen here. Because I was more interested in something like a contribution, as the title of this question says: I understand some features could have a negative effect, but when measuring importance the sign does not matter, so features with positive and negative effects are lumped together (a quick sketch of the distinction follows).
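To make that distinction concrete, eli5 exposes two separate entry points (model and X as fitted in the code further down):

import eli5

# global importances: always non-negative, the sign of the effect is lost
eli5.explain_weights(model)

# per-instance contributions: signed, which is what this question is after
eli5.explain_prediction(model, X[0])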

Because I was very interested in the sign, I did the following: 1) obtain the contributions for all cases, 2) aggregate all the results so the signs remain distinguishable. Not a very elegant solution; there is probably something better out there, but I post it here in case it helps.

I reproduce the same setup as the previous post.

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import (ExtraTreesClassifier, RandomForestClassifier,
                              AdaBoostClassifier, GradientBoostingClassifier)
import eli5


iris = datasets.load_iris()  # sample data
X, y = iris.data, iris.target
# split into training and test
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0)

# fit the model on the training set
#model = DecisionTreeClassifier(random_state=0)
model = ExtraTreesClassifier(n_estimators=100)

model.fit(X_train, y_train)


aux1 = eli5.sklearn.explain_prediction.explain_prediction_tree_classifier(
    model, X[0], top=X.shape[1])

aux1

With output: (screenshot of the rendered eli5 explanation omitted)

The previous result works for one case; I want to run it for all cases and compute an average:

This is what a dataframe with the results looks like:

aux1 = eli5.sklearn.explain_prediction.explain_prediction_tree_classifier(
    model, X[0], top=X.shape[0])
aux1 = eli5.format_as_dataframe(aux1)
# aux1.index = aux1['feature']
# del aux1['target']
aux1


target  feature weight  value
0   0   <BIAS>  0.340000    1.0
1   0   x3  0.285764    0.2
2   0   x2  0.267080    1.4
3   0   x1  0.058208    3.5
4   0   x0  0.048949    5.1
5   1   <BIAS>  0.310000    1.0
6   1   x0  -0.004606   5.1
7   1   x1  -0.048211   3.5
8   1   x2  -0.111974   1.4
9   1   x3  -0.145209   0.2
10  2   <BIAS>  0.350000    1.0
11  2   x1  -0.009997   3.5
12  2   x0  -0.044343   5.1
13  2   x3  -0.140554   0.2
14  2   x2  -0.155106   1.4

So I create a function to combine tables of this kind:

import pandas as pd

def concat_average_dfs(aux2, aux3):
    # Put both frames on the same (feature, target) index.
    # The try/except is there because this function is used recursively
    # and may receive dataframes that are already indexed this way.
    # Not the best way, but it works.
    try:
        aux2.set_index(['feature', 'target'], inplace=True)
    except KeyError:
        pass
    try:
        aux3.set_index(['feature', 'target'], inplace=True)
    except KeyError:
        pass
    # Concatenate and take the mean per (feature, target) pair
    aux = pd.DataFrame(pd.concat([aux2['weight'], aux3['weight']])
                       .groupby(level=[0, 1]).mean())
    # Return in order
    #return aux.sort_values(['weight'], ascending=[False], inplace=True)
    return aux

aux2 = aux1.copy(deep=True)
aux3 = aux1.copy(deep=True)

concat_average_dfs(aux3, aux2)

(screenshot of the averaged dataframe omitted)

Now I only have to apply the previous function to all the examples I wish. I will take the whole population, not only the training set, to check the average effect across all real cases:

for i in range(X.shape[0]):
    aux1 = eli5.sklearn.explain_prediction.explain_prediction_tree_classifier(
        model, X[i], top=X.shape[0])
    aux1 = eli5.format_as_dataframe(aux1)

    # accumulate a running average across all explained instances
    if 'aux_total' in locals():
        aux_total = concat_average_dfs(aux1, aux_total)
    else:
        aux_total = aux1

With the result: (screenshot of the final averaged table omitted)

The last table shows the average effect of each feature across my whole real population.
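For example, assuming pandas is imported as pd, the averaged table can be inspected sorted by effect size:

# strongest positive effects first
print(aux_total.sort_values('weight', ascending=False))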

A companion notebook is on my GitHub.

仙女界的扛把子
#4 · 2019-01-13 14:38

This is modified from the docs:

from sklearn import datasets
from sklearn.ensemble import ExtraTreesClassifier

iris = datasets.load_iris()  # sample data
X, y = iris.data, iris.target

model = ExtraTreesClassifier(n_estimators=10000, n_jobs=-1, random_state=0)
model.fit(X, y)  # fit the model to the data

I think feature_importances_ is what you're looking for:

In [13]: model.feature_importances_
Out[13]: array([ 0.09523045,  0.05767901,  0.40150422,  0.44558631])
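To make these numbers readable, you can pair them with the feature names, for example:

# sort features by importance, most important first
for name, score in sorted(zip(iris.feature_names, model.feature_importances_),
                          key=lambda t: t[1], reverse=True):
    print(name, score)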

EDIT

Maybe I misunderstood the first time (pre-bounty), sorry, this may be more along the lines of what you are looking for. There is a python library called treeinterpreter that produces the information I think you are looking for. You'll have to use the basic DecisionTreeClassifier (or Regressor). Following along from this blog post, you can directly access the feature contributions in the prediction of each instance:

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

from treeinterpreter import treeinterpreter as ti

iris = datasets.load_iris()  #sample data
X, y = iris.data, iris.target
#split into training and test 
X_train, X_test, y_train, y_test = train_test_split( 
    X, y, test_size=0.33, random_state=0)

# fit the model on the training set
model = DecisionTreeClassifier(random_state=0)
model.fit(X_train,y_train)

I'll just iterate through each sample in X_test for illustrative purposes; this almost exactly mimics the blog post above:

for test_sample in range(len(X_test)):
    prediction, bias, contributions = ti.predict(model, X_test[test_sample].reshape(1, -1))
    print("Class Prediction", prediction)
    print("Bias (trainset prior)", bias)

    # now extract contributions for each instance
    for c, feature in zip(contributions[0], iris.feature_names):
        print(feature, c)

    print('\n')

The first iteration of the loop yields:

Class Prediction [[ 0.  0.  1.]]
Bias (trainset prior) [[ 0.34  0.31  0.35]]
sepal length (cm) [ 0.  0.  0.]
sepal width (cm) [ 0.  0.  0.]
petal length (cm) [ 0.         -0.43939394  0.43939394]
petal width (cm) [-0.34        0.12939394  0.21060606]

Interpreting this output, it seems as though petal length and petal width were the most important contributors to the prediction of the third class (for the first sample).
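A useful sanity check on treeinterpreter's output is that each prediction decomposes exactly into the bias plus the summed contributions:

import numpy as np

prediction, bias, contributions = ti.predict(model, X_test[0].reshape(1, -1))
# contributions has shape (n_samples, n_features, n_classes);
# summing over the feature axis recovers the prediction
assert np.allclose(prediction, bias + contributions.sum(axis=1))

Hope this helps.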

甜甜的少女心
#5 · 2019-01-13 14:47

The paper "Why Should I Trust You?": Explaining the Predictions of Any Classifier was submitted 9 days after this question, providing an algorithm for a general solution to this problem! :-)

In short, it is called LIME for "local interpretable model-agnostic explanations", and works by fitting a simpler, local model around the prediction(s) you want to understand.

What's more, they have made a Python implementation (https://github.com/marcotcr/lime) with pretty detailed examples of how to use it with sklearn. For instance, this one is on a two-class random forest problem on text data, and this one is on continuous and categorical features. They can all be found via the README on GitHub.
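As a minimal sketch of lime's tabular API on iris (argument names taken from the repository's examples; treat the details as an assumption if the API has since changed):

import lime.lime_tabular
from sklearn import datasets
from sklearn.ensemble import ExtraTreesClassifier

iris = datasets.load_iris()
model = ExtraTreesClassifier(n_estimators=100, random_state=0)
model.fit(iris.data, iris.target)

explainer = lime.lime_tabular.LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=iris.target_names,
    discretize_continuous=True)

# fit a local linear model around one prediction and read off
# the signed per-feature contributions
exp = explainer.explain_instance(iris.data[0], model.predict_proba, num_features=4)
print(exp.as_list())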

The authors had a very productive 2016 in this field, so if you like reading papers, their follow-up papers are a good place to start.
