In machine learning, ensemble tree models such as random forests are common. These models consist of an ensemble of so-called decision tree models. How can we analyse, however, what those models have specifically learned?
You cannot, in the sense of just plotting it the way you would plot a single decision tree. Only extremely simple models can be investigated directly. More complex methods require more complex tools, which are only approximations, general ideas of what to look for. So for ensembles you can look at some expected property aggregated over the individual models. For example, you can compute feature importance measures, which show you to what extent each feature is used to make predictions. You will not get a simple if/else structure, that is simply impossible, but rather a fuzzy overall picture. For a random forest you can extract feature importances, which are, roughly, the expected fraction of samples that actually "hit" a decision node splitting on a particular feature.
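A minimal sketch of both approaches, assuming scikit-learn (the dataset and hyperparameters here are just illustrative): the forest's aggregated feature importances, and the full if/else structure of one individual tree from the ensemble.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import export_text

data = load_iris()
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(data.data, data.target)

# Impurity-based importances, aggregated over all trees: how much each
# feature contributes to the splits, normalized to sum to 1.
for name, importance in zip(data.feature_names, forest.feature_importances_):
    print(f"{name}: {importance:.3f}")

# You can still print (or plot) any single tree from the ensemble,
# but it describes only that one member, not the forest as a whole.
print(export_text(forest.estimators_[0], feature_names=data.feature_names))
```

Note that impurity-based importances can be biased toward high-cardinality features; permutation importance is a common alternative when that matters.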