Hi, I am performing SVM classification using SMO with an RBF kernel. I now want to select the C and sigma values using grid search and cross-validation. I am new to kernel functions, so could someone please walk me through the process step by step?
You can also use Uniform Design model selection, which reduces the number of (C, sigma) pairs you need to evaluate. The method is explained in the paper "Model selection for support vector machines via uniform design" by Chien-Ming Huang et al. A Python implementation exists in ssvm 0.2.
Read A Practical Guide to Support Vector Classification by Chih-Wei Hsu, Chih-Chung Chang, and Chih-Jen Lin. They address this exact issue and explain how to perform a grid search for parameter selection: http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf
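Here is a minimal sketch of the cross-validated grid search that the guide describes, assuming you can use scikit-learn (whose SVC solves the SVM problem with an SMO-type algorithm). Note that scikit-learn parameterizes the RBF kernel with gamma rather than sigma (gamma = 1 / (2 * sigma^2)), so the sigma grid is expressed through gamma. The dataset and the grid values below are placeholders, not part of the original answer:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder dataset; substitute your own features and labels.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Scale features first; the RBF kernel is sensitive to feature scales.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("svm", SVC(kernel="rbf")),
])

# Exponentially spaced grid, in the spirit of the guide's recommendation
# (roughly C = 2^-5 ... 2^15, gamma = 2^-15 ... 2^3).
param_grid = {
    "svm__C": 2.0 ** np.arange(-5, 16, 2),
    "svm__gamma": 2.0 ** np.arange(-15, 4, 2),
}

# 5-fold cross-validated grid search over (C, gamma).
search = GridSearchCV(pipe, param_grid, cv=5, n_jobs=-1)
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
print("Cross-validated accuracy:", search.best_score_)
print("Held-out test accuracy:", search.score(X_test, y_test))
```

The guide also suggests first running a coarse grid like the one above and then a finer grid around the best (C, gamma) found.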
I will just add a little bit of explanation to larsmans' answer.
The C parameter is a regularization/slack parameter. Smaller values force the weights to be small; as C grows, the allowed range of weights widens. Consequently, larger C values increase the penalty for misclassification and thus reduce the classification error rate on the training data, which may lead to over-fitting. Training time typically grows as you increase C, and the margin becomes harder, so fewer points are allowed to violate it.
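As a small illustration of this trade-off (my own sketch, not part of the original answer), the following fits an RBF SVC on a noisy toy dataset for a few values of C and reports how tightly each model fits the training data:

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Toy data with ~10% label noise so that over-fitting is possible.
X, y = make_classification(n_samples=300, n_features=5, flip_y=0.1,
                           random_state=0)

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="rbf", C=C, gamma="scale").fit(X, y)
    n_sv = clf.n_support_.sum()          # total number of support vectors
    train_acc = clf.score(X, y)          # accuracy on the training set
    print(f"C={C:>7}: training accuracy={train_acc:.3f}, "
          f"support vectors={n_sv}")
```

With very large C the training accuracy is usually highest, which is exactly the regime where cross-validation is needed to detect over-fitting.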
You may also find it useful to read Extending SVM to a Soft Margin Classifier by K.K. Chin.