Before scikit-learn 0.20 we could use result.grid_scores_[result.best_index_] to get the standard deviation. (It returned, for example: mean: 0.76172, std: 0.05225, params: {'n_neighbors': 21}.)
What's the best way in scikit-learn 0.20 to get the standard deviation of the best score?
In newer versions, grid_scores_ has been renamed to cv_results_. According to the documentation, you need this:
best_index_ : int
    The index (of the cv_results_ arrays) which corresponds to the best candidate parameter setting.
    The dict at search.cv_results_['params'][search.best_index_] gives the parameter setting for the best model, that gives the highest mean score (search.best_score_).
So in your case, you need:

    search.cv_results_['std_test_score'][search.best_index_]
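For a self-contained illustration, here is a minimal sketch; the dataset, estimator, and grid are illustrative stand-ins (only the 'n_neighbors' parameter comes from your question):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)

    # Fit a small grid search; the estimator and grid values are
    # illustrative, chosen to match the question's 'n_neighbors' setup.
    search = GridSearchCV(KNeighborsClassifier(),
                          param_grid={'n_neighbors': [5, 11, 21]},
                          cv=5)
    search.fit(X, y)

    # cv_results_ is a dict of arrays; index each array with best_index_
    # to read off the winning candidate's statistics.
    best = search.best_index_
    print('mean:', search.cv_results_['mean_test_score'][best])  # equals search.best_score_
    print('std:', search.cv_results_['std_test_score'][best])
    print('params:', search.cv_results_['params'][best])

This prints the same triple (mean, std, params) that the old grid_scores_ entry bundled together, just read from the separate cv_results_ arrays.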