I am running into the problem that the hyperparameter ranges for my svm.SVC() are so wide that the GridSearchCV() never completes! One idea is to use RandomizedSearchCV() instead, but again, my dataset is relatively big, so 500 iterations take about 1 hour!
My question is: what is a good set-up (in terms of the range of values for each hyperparameter) for GridSearchCV (or RandomizedSearchCV) so that I stop wasting resources? In other words, how do I decide whether, e.g., C values above 100 make sense, and whether a step of 1 is neither too big nor too small? Any help is very much appreciated. This is the set-up I am currently using:
import numpy as np
from sklearn import svm
from sklearn.model_selection import RandomizedSearchCV

parameters = {
    'C': np.arange( 1, 100+1, 1 ).tolist(),
    'kernel': ['linear', 'rbf'],  # 'precomputed', 'poly', 'sigmoid'
    'degree': np.arange( 0, 100+0, 1 ).tolist(),
    'gamma': np.arange( 0.0, 10.0+0.0, 0.1 ).tolist(),
    'coef0': np.arange( 0.0, 10.0+0.0, 0.1 ).tolist(),
    'shrinking': [True],
    'probability': [False],
    'tol': np.arange( 0.001, 0.01+0.001, 0.001 ).tolist(),
    'cache_size': [2000],
    'class_weight': [None],
    'verbose': [False],
    'max_iter': [-1],
    'random_state': [None],
}
model = RandomizedSearchCV( estimator = svm.SVC(),
                            param_distributions = parameters,
                            n_iter = 500,
                            n_jobs = 4,
                            refit = True,
                            cv = 5,
                            verbose = 1,
                            pre_dispatch = '2*n_jobs'
                          ) # scoring = 'accuracy'
model.fit( train_X, train_Y )
print( model.best_estimator_ )
print( model.best_score_ )
print( model.best_params_ )
To search for hyperparameters sensibly, it is always better to understand what each of them actually does...
You should try changing C by orders of magnitude (e.g. 0.01, 0.1, 1, 10, 100) and perhaps then refine the search between the best-performing magnitudes, but I don't think that will improve your model much.
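As a minimal sketch, such an order-of-magnitude grid can be built with np.logspace (the bounds -2 and 2 here are an assumption; widen them if your best value lands on an edge):

import numpy as np

# One candidate per order of magnitude: [0.01, 0.1, 1.0, 10.0, 100.0]
# The bounds are illustrative, not a recommendation for your data.
C_grid = np.logspace(-2, 2, num=5).tolist()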
You should also change the way you are doing your grid search, because, as the documentation says, degree is only used by the polynomial kernel, so you will waste time trying every degree while using the 'rbf' kernel. A further point is that a very high degree will just overfit your data. Use something like (1, 2, 3, 4, 5) here.
The same remark applies to coef0, because it is only used by the 'poly' and 'sigmoid' kernels.
As for tol, I would not touch it; your range of values doesn't really make any sense.
I'm not that familiar with the gamma parameter.
So use this representation instead of yours, with one parameter grid per kernel, as in the documentation (http://scikit-learn.org/stable/modules/grid_search.html#exhaustive-grid-search):
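A sketch of that list-of-dicts form (the concrete values below are placeholders, not recommendations):

parameters = [
    {'kernel': ['linear'], 'C': [0.1, 1, 10, 100]},
    {'kernel': ['rbf'], 'C': [0.1, 1, 10, 100], 'gamma': [0.001, 0.01, 0.1, 1]},
    {'kernel': ['poly'], 'C': [0.1, 1, 10, 100], 'degree': [1, 2, 3, 4, 5]},
]

GridSearchCV accepts such a list as param_grid directly and searches each dict separately, so degree is only combined with 'poly' and gamma only with 'rbf', instead of multiplying every parameter against every kernel.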
And try to understand what each of these parameters means:
http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf
http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html
Which kernel works best depends a lot on your data. How many samples and dimensions do you have, and what kind of data is it? For the parameter ranges to be comparable, you need to normalize your data; often StandardScaler, which scales to zero mean and unit variance, is a good idea. If your data is non-negative, you might try MinMaxScaler.
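A minimal sketch of putting the scaler in front of the SVC with a Pipeline, so that each CV fold of the search is scaled using only its own training portion (the step names and grid bounds are illustrative):

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Scaling is fit inside the CV loop, avoiding leakage from the test fold.
pipe = Pipeline([('scale', StandardScaler()), ('svc', SVC())])

# Step-name prefixes ('svc__') route the parameters to the SVC step.
param_grid = {'svc__C': np.logspace(-2, 2, 5).tolist(),
              'svc__gamma': np.logspace(-3, 1, 5).tolist()}

search = GridSearchCV(pipe, param_grid, cv=5)
# search.fit(train_X, train_Y)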
For kernel="rbf", I usually do

{'C': np.logspace(-3, 2, 6), 'gamma': np.logspace(-3, 2, 6)}

which is based on nothing but has served me well the last couple of years. I would strongly advise against non-logarithmic grids, and even more so against randomized search using discrete parameters. One of the main advantages of randomized search is that you can actually search continuous parameters using continuous distributions (see the docs).
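A sketch of that continuous approach, assuming a recent scipy (scipy.stats.loguniform draws values uniformly on a log scale, matching the advice above; the bounds are illustrative):

from scipy.stats import loguniform
from sklearn.svm import SVC
from sklearn.model_selection import RandomizedSearchCV

# Continuous log-uniform distributions instead of discrete lists:
# every draw is a fresh value, so no resolution is lost to a fixed step.
param_distributions = {
    'kernel': ['rbf'],
    'C': loguniform(1e-3, 1e2),
    'gamma': loguniform(1e-3, 1e2),
}

search = RandomizedSearchCV(SVC(), param_distributions,
                            n_iter=50, cv=5, random_state=0)
# search.fit(train_X, train_Y)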