I am using a scikit-learn extra trees classifier:
model = ExtraTreesClassifier(n_estimators=10000, n_jobs=-1, random_state=0)
Once the model is fitted and used to predict classes, I would like to find out the contributions of each feature to a specific class prediction. How do I do that in scikit-learn? Is it possible with an extra trees classifier, or do I need to use some other model?
Update
Being more knowledgeable about ML today than I was 2.5 years ago, I will now say this approach only works for highly linear decision problems. If you carelessly apply it to a non-linear problem you will have trouble.
Example: imagine a feature for which neither very large nor very small values predict a class, but values in some intermediate interval do. That could be water intake to predict dehydration. But water intake probably interacts with salt intake, as eating more salt allows for a greater water intake. Now you have an interaction between two non-linear features. The decision boundary meanders around your feature space to model this non-linearity, and asking only how much one of the features influences the risk of dehydration ignores that interaction. It is simply not the right question.
Alternative: Another, more meaningful, question you could ask is: if I didn't have this information (if I left out this feature), how much would my prediction of a given label suffer? To do this you simply leave out a feature, train a model, and look at how much precision and recall drop for each of your classes. It still informs you about feature importance, but it makes no assumptions about linearity.
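For example, a minimal sketch of that leave-one-feature-out comparison (hypothetical, using the iris data and per-class precision/recall from scikit-learn) could look like this:

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.ensemble import ExtraTreesClassifier
    from sklearn.metrics import precision_recall_fscore_support
    from sklearn.model_selection import train_test_split

    # Hypothetical example: drop one feature at a time and compare
    # per-class precision/recall against the model trained on all features.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def per_class_scores(cols):
        model = ExtraTreesClassifier(n_estimators=100, n_jobs=-1, random_state=0)
        model.fit(X_train[:, cols], y_train)
        pred = model.predict(X_test[:, cols])
        precision, recall, _, _ = precision_recall_fscore_support(y_test, pred)
        return precision, recall

    all_cols = list(range(X.shape[1]))
    base_p, base_r = per_class_scores(all_cols)
    for dropped in all_cols:
        p, r = per_class_scores([c for c in all_cols if c != dropped])
        print(f"without feature {dropped}: precision drop {base_p - p}, recall drop {base_r - r}")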
Below is the old answer.
I worked through a similar problem a while back and posted the same question on Cross Validated. The short answer is that there is no implementation in sklearn that does all of what you want.

However, what you are trying to achieve is really quite simple, and can be done by multiplying the mean standardised value of each feature, split on each class, with the corresponding model.feature_importances_ array element. You can write a simple function that standardises your dataset, computes the mean of each feature split across class predictions, and does an element-wise multiplication with the model.feature_importances_ array. The greater the absolute resulting values are, the more important the features are to their predicted class, and better yet, the sign tells you whether it is small or large values that are important.

Here's a super simple implementation that takes a data matrix X, a list of predictions Y and an array of feature importances, and outputs a JSON describing the importance of each feature to each class.
The first level of keys in result are class labels, and the second level of keys are column indices, i.e. feature indices. Recall that large absolute values correspond to importance, and the sign tells you whether it's small (possibly negative) or large values that matter.

So far I have been checking eli5 and treeinterpreter (both have been mentioned before) and I think eli5 will be the most helpful, because I think it has more options and is more generic and up to date.
Nevertheless, after some time I applied eli5 to a particular case and I could not obtain negative contributions for ExtraTreesClassifier. Researching a little bit more, I realised I was obtaining the importance or weight, as seen here. Because I was more interested in something like a contribution, as mentioned in the title of this question: I understand a feature could have a negative effect, but when measuring importance the sign does not matter, so features with positive and negative effects are put together.
Because I was very interested in the sign, I did as follows: 1) obtain the contributions for all cases, 2) aggregate all the results so the sign is preserved. It is not a very elegant solution and there is probably something better out there, but I post it here in case it helps.
I reproduce the same setup as in the previous post.
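Roughly like this (a sketch using the iris data as a stand-in for my real problem, with eli5's explain_prediction and format_as_text helpers):

    import eli5
    from sklearn.datasets import load_iris
    from sklearn.ensemble import ExtraTreesClassifier

    # Stand-in data and model; replace with your own.
    iris = load_iris()
    X, y = iris.data, iris.target
    model = ExtraTreesClassifier(n_estimators=100, n_jobs=-1, random_state=0)
    model.fit(X, y)

    # Signed, per-feature contributions for a single prediction.
    explanation = eli5.explain_prediction(model, X[0], feature_names=iris.feature_names)
    print(eli5.format_as_text(explanation))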
With output:
The previous result works for a single case; now I want to run it for all cases and compute an average:
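Something along these lines (a sketch that assumes the model and data from the previous snippet, and eli5's explain_prediction_df helper, available in recent eli5 versions, for getting each explanation as a table):

    import pandas as pd
    from eli5 import explain_prediction_df

    # One contribution table per sample, tagged with the sample index.
    tables = []
    for i in range(len(X)):
        df = explain_prediction_df(model, X[i], feature_names=iris.feature_names)
        df["sample"] = i
        tables.append(df)

    print(pd.concat(tables, ignore_index=True).head())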
This is what a dataframe with the results looks like:
So I create a function to combine that kind of table:
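A sketch of such a function, assuming each table has the target, feature and weight columns produced above:

    def combine_tables(tables):
        """Average the signed weights over all explained samples,
        keeping one value per (target, feature) pair."""
        combined = pd.concat(tables, ignore_index=True)
        return (combined.groupby(["target", "feature"])["weight"]
                        .mean()
                        .unstack("target"))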
So now I only have to apply the previous function to all the examples I wish. I will take the whole population, not only the training set, and check the average effect over all real cases:
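With the function above that is just (assuming tables now covers the whole population, as in the loop sketched earlier):

    # Average signed contribution of each feature, per class, over the whole population.
    average_effects = combine_tables(tables)
    print(average_effects)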
With result:
The last table shows the average effect of each feature over my whole real population.
A companion notebook is on my GitHub.
This is modified from the docs. I think feature_importances_ is what you're looking for:
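A minimal sketch in the spirit of the scikit-learn forest-importances example (the iris data here is just for illustration):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import ExtraTreesClassifier

    iris = load_iris()
    model = ExtraTreesClassifier(n_estimators=250, random_state=0)
    model.fit(iris.data, iris.target)

    # Global, per-feature importances (no per-class sign or direction).
    for name, importance in zip(iris.feature_names, model.feature_importances_):
        print(f"{name}: {importance:.3f}")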
EDIT

Maybe I misunderstood the first time (pre-bounty), sorry; this may be more along the lines of what you are looking for. There is a Python library called treeinterpreter that produces the information I think you are looking for. You'll have to use the basic DecisionTreeClassifier (or Regressor). Following along from this blog post, you can discretely access the feature contributions in the prediction of each instance. I'll just iterate through each sample in X_test for illustrative purposes; this almost exactly mimics the blog post above, and the first iteration of the loop yields the output interpreted below.
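A sketch of that loop, assuming the iris data split into train/test (my stand-in, not the exact code from the blog post):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from treeinterpreter import treeinterpreter as ti

    iris = load_iris()
    X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=0)
    model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

    for sample in X_test:
        prediction, bias, contributions = ti.predict(model, sample.reshape(1, -1))
        print("prediction:", prediction[0])
        print("bias (training set prior):", bias[0])
        # one signed contribution per (feature, class) pair
        for name, contribution in zip(iris.feature_names, contributions[0]):
            print(name, contribution)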
Interpreting this output, it seems as though petal length and petal width were the most important contributors to the prediction of the third class (for the first sample). Hope this helps.
The paper "Why Should I Trust You?": Explaining the Predictions of Any Classifier was submitted 9 days after this question, providing an algorithm for a general solution to this problem! :-)
In short, it is called LIME for "local interpretable model-agnostic explanations", and works by fitting a simpler, local model around the prediction(s) you want to understand.
What's more, they have made a Python implementation (https://github.com/marcotcr/lime) with pretty detailed examples on how to use it with sklearn. For instance, this one is on a two-class random forest problem on text data, and this one is on continuous and categorical features. They are all to be found via the README on GitHub.
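As a rough sketch of the tabular workflow (iris and an ExtraTreesClassifier assumed here; the calls follow LIME's documented LimeTabularExplainer API):

    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_iris
    from sklearn.ensemble import ExtraTreesClassifier
    from sklearn.model_selection import train_test_split

    iris = load_iris()
    X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=0)
    model = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    explainer = LimeTabularExplainer(X_train,
                                     feature_names=iris.feature_names,
                                     class_names=iris.target_names,
                                     discretize_continuous=True)

    # Signed, per-feature weights of a local surrogate model around one prediction.
    explanation = explainer.explain_instance(X_test[0], model.predict_proba, num_features=4)
    print(explanation.as_list())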
The authors had a very productive year in 2016 concerning this field, so if you like reading papers, here's a starter: