I am trying to implement an application that uses the AdaBoost algorithm. I know that AdaBoost uses a set of weak classifiers, but I don't know what these weak classifiers are. Can you explain them to me with an example, and tell me whether I have to create my own weak classifiers or I'm supposed to use some existing algorithm?
When I used AdaBoost, my weak classifiers were basically thresholds on individual data attributes. Each threshold classifier needs to achieve better than 50% (weighted) accuracy; otherwise it is no better than random guessing.
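Here is a minimal sketch of what such a threshold learner could look like, assuming labels in {-1, +1} and sample weights that sum to 1; the function name `train_threshold_stump` and its return format are my own invention, not part of any library:

```python
import numpy as np

def train_threshold_stump(X, y, weights):
    """Scan every feature and candidate threshold, keeping the split
    with the lowest weighted error. Assumes y contains -1/+1 labels
    and weights sum to 1 (as maintained by AdaBoost)."""
    n_samples, n_features = X.shape
    best = {"error": np.inf}
    for feature in range(n_features):
        for threshold in np.unique(X[:, feature]):
            for polarity in (1, -1):
                # Predict -1 on one side of the threshold, +1 on the other;
                # polarity flips which side is which.
                pred = np.where(polarity * X[:, feature] < polarity * threshold, -1, 1)
                error = np.sum(weights[pred != y])  # weighted misclassification
                if error < best["error"]:
                    best = {"feature": feature, "threshold": threshold,
                            "polarity": polarity, "error": error}
    return best
```

A stump returned with `error < 0.5` satisfies the better-than-random requirement; AdaBoost then reweights the samples and trains the next one.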
Here is a good presentation about AdaBoost and how to compute those weak classifiers: http://www.cse.cuhk.edu.hk/~lyu/seminar/07spring/Hongbo.ppt
Weak classifiers (or weak learners) are classifiers that perform only slightly better than a random classifier. They have some ability to predict the right labels, but far less than strong classifiers such as Naive Bayes, Neural Networks, or SVMs.
One of the simplest weak classifiers is the Decision Stump, which is a one-level Decision Tree: it selects a threshold on a single feature and splits the data at that threshold. AdaBoost then trains an army of these Decision Stumps, each focusing on one part of the characteristics of the data, as sketched below.
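So to answer the question: you usually don't need to write the weak learners yourself. A minimal sketch using scikit-learn, where a depth-1 `DecisionTreeClassifier` serves as the stump (the keyword is `estimator` in recent scikit-learn versions; older releases call it `base_estimator`); the dataset here is synthetic, just for illustration:

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic binary classification data for demonstration.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A decision stump: a decision tree limited to a single split.
stump = DecisionTreeClassifier(max_depth=1)

# Boost 50 stumps; each round reweights the samples the previous
# stumps got wrong, so later stumps focus on the hard cases.
clf = AdaBoostClassifier(estimator=stump, n_estimators=50, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

Note that a depth-1 tree is already AdaBoostClassifier's default base estimator, so passing it explicitly is mainly to make the weak-learner choice visible.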