Model help using Scikit-learn when using GridSearchCV

Posted 2020-01-29 07:05

Question:

As part of the Enron project, I built the model shown below; a summary of the steps follows the two code snippets.

The model below gives near-perfect scores:

from sklearn.model_selection import StratifiedShuffleSplit, GridSearchCV

cv = StratifiedShuffleSplit(n_splits=100, test_size=0.2, random_state=42)
gcv = GridSearchCV(pipe, clf_params, cv=cv)

gcv.fit(features, labels)  # fit on the full dataset

for train_ind, test_ind in cv.split(features, labels):
    x_train, x_test = features[train_ind], features[test_ind]
    y_train, y_test = labels[train_ind], labels[test_ind]

    # predict with the best estimator, without refitting it on the training split
    gcv.best_estimator_.predict(x_test)

The model below gives more reasonable, but lower, scores:

cv = StratifiedShuffleSplit(n_splits=100, test_size=0.2, random_state=42)
gcv = GridSearchCV(pipe, clf_params, cv=cv)

gcv.fit(features, labels)  # fit on the full dataset

for train_ind, test_ind in cv.split(features, labels):
    x_train, x_test = features[train_ind], features[test_ind]
    y_train, y_test = labels[train_ind], labels[test_ind]

    # refit the best estimator on the training split only, then predict on the test split
    gcv.best_estimator_.fit(x_train, y_train)
    gcv.best_estimator_.predict(x_test)
Summary of the steps:

  1. Used SelectKBest to compute feature scores, sorted the features, and tried combinations of higher- and lower-scoring ones.

  2. Used an SVM inside a GridSearchCV with a StratifiedShuffleSplit (a sketch of the assumed pipeline setup follows this list).

  3. Used the best_estimator_ to predict and to calculate precision and recall.
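The pipe and clf_params objects referenced in the snippets are never shown in the question. The following is only a minimal sketch of what such a setup might look like, assuming a SelectKBest + SVC pipeline; the step names and parameter values are illustrative assumptions, not the asker's actual code.

from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest
from sklearn.svm import SVC

# Hypothetical pipeline: select the k best features, then fit an SVM.
pipe = Pipeline([
    ('kbest', SelectKBest()),
    ('svc', SVC()),
])

# Hypothetical parameter grid; the double-underscore keys target steps inside the pipeline.
clf_params = {
    'kbest__k': [5, 10, 15],
    'svc__C': [0.1, 1, 10, 100],
    'svc__gamma': [0.01, 0.1, 1],
    'svc__kernel': ['rbf'],
}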

The problem is that the estimator is producing perfect scores, in some cases 1.0.

But when I refit the best classifier on the training data and then run it on the test data, it gives reasonable scores.

My doubt/question is: what exactly does GridSearchCV do with the test portion of each split produced by the ShuffleSplit object we pass in? I assumed it would not fit anything on the test data; if that is true, then when I predict on that same test data it should not give such high scores, right? Since I used a random_state value, the StratifiedShuffleSplit should have produced the same splits for the grid search fit and for my prediction loop.
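(To double-check that last assumption, here is a quick sketch with arbitrary toy data showing that a splitter with a fixed random_state returns the same indices every time split() is called.)

import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

X = np.random.rand(100, 3)        # toy features, only to exercise the splitter
y = np.array([0, 1] * 50)         # balanced toy labels

cv = StratifiedShuffleSplit(n_splits=100, test_size=0.2, random_state=42)

first = [test for _, test in cv.split(X, y)]
second = [test for _, test in cv.split(X, y)]

# With a fixed random_state, repeated calls to split() yield identical test indices.
print(all(np.array_equal(a, b) for a, b in zip(first, second)))  # True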

So, is it wrong to use the same ShuffleSplit for both?

Answer 1:

Basically the grid search will:

  • Try every combination in your parameter grid
  • For each combination, run cross-validation with the cv splitter you passed in (here, the StratifiedShuffleSplit)
  • Select the combination with the best average score

So your second case is the correct one. Otherwise you are predicting on data that the model was trained with (which is not the case in the second option, where you only keep the best parameters from your grid search and refit on the training split).
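A common leak-free pattern, sketched here under the assumption that pipe and clf_params are defined as in the question, is to hold out a test set before the grid search and evaluate the refit best_estimator_ only on that held-out set:

from sklearn.model_selection import train_test_split, StratifiedShuffleSplit, GridSearchCV
from sklearn.metrics import precision_score, recall_score

# Hold out a test set that GridSearchCV never sees.
x_train, x_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=42)

cv = StratifiedShuffleSplit(n_splits=100, test_size=0.2, random_state=42)
gcv = GridSearchCV(pipe, clf_params, cv=cv)

gcv.fit(x_train, y_train)  # grid search and the final refit happen on training data only

pred = gcv.best_estimator_.predict(x_test)  # evaluate on genuinely unseen data
print(precision_score(y_test, pred), recall_score(y_test, pred))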



Answer 2:

As @Gauthier Feuillen said, GridSearchCV is used to search for the best parameters of an estimator on the given data. A description of what GridSearchCV does:

  1. gcv = GridSearchCV(pipe, clf_params, cv=cv)
  2. gcv.fit(features, labels)
  3. clf_params is expanded into all possible parameter combinations using ParameterGrid.
  4. features is split into features_train and features_test using cv; the same is done for labels.
  5. The grid search estimator (pipe) is trained on features_train and labels_train and scored on features_test and labels_test.
  6. For each parameter combination from step 3, steps 4 and 5 are repeated for every cv iteration. The scores are averaged across the cv iterations and assigned to that parameter combination; these results can be inspected through the cv_results_ attribute of the grid search.
  7. For the parameters that give the best score, the internal estimator is re-initialized with those parameters and refit on the whole data supplied to it (features and labels). A sketch of what steps 3-7 amount to is shown after this list.
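A minimal sketch of what steps 3-7 amount to internally. This is a simplified equivalent, not GridSearchCV's actual source; it assumes pipe, clf_params, features, labels, and cv are defined as in the question and uses the estimator's default score:

import numpy as np
from sklearn.base import clone
from sklearn.model_selection import ParameterGrid

results = {}
for params in ParameterGrid(clf_params):                     # step 3: every combination
    scores = []
    for train_ind, test_ind in cv.split(features, labels):   # step 4: split with cv
        est = clone(pipe).set_params(**params)
        est.fit(features[train_ind], labels[train_ind])      # step 5: train on the train split
        scores.append(est.score(features[test_ind], labels[test_ind]))  # score on the test split
    results[tuple(sorted(params.items()))] = np.mean(scores)  # step 6: average over cv iterations

best_params = dict(max(results, key=results.get))            # parameters with the best mean score
best_estimator = clone(pipe).set_params(**best_params)
best_estimator.fit(features, labels)                          # step 7: refit on ALL the data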

Because of this last step, you get different scores in the first and second approaches. In the first approach, all of the data is used for training (via the final refit) and you are then predicting on that same data; the second approach predicts with an estimator refit only on the training split, so the test data is genuinely unseen.
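A quick way to see this, assuming gcv has been fit on the full dataset as in the question's first snippet, is to compare the cross-validated score GridSearchCV reports with the score of the refit best_estimator_ on the data it was refit on:

# Honest estimate: mean score over the held-out folds during the grid search.
print("cross-validated score:", gcv.best_score_)

# Optimistic: best_estimator_ was refit on ALL of features/labels in step 7,
# so scoring it on any subset of that same data leaks training information.
print("score on already-seen data:", gcv.best_estimator_.score(features, labels))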