I'd like to use scikit-learn's GridSearchCV to determine some hyperparameters for a random forest model. My data is time-dependent and looks something like:
import pandas as pd
train = pd.DataFrame({'date': pd.DatetimeIndex(['2012-1-1', '2012-9-30', '2013-4-3',
                                                '2014-8-16', '2015-3-20', '2015-6-30']),
                      'feature1': [1.2, 3.3, 2.7, 4.0, 8.2, 6.5],
                      'feature2': [4, 4, 10, 3, 10, 9],
                      'target': [1, 2, 1, 3, 2, 2]})
>>> train
        date  feature1  feature2  target
0 2012-01-01       1.2         4       1
1 2012-09-30       3.3         4       2
2 2013-04-03       2.7        10       1
3 2014-08-16       4.0         3       3
4 2015-03-20       8.2        10       2
5 2015-06-30       6.5         9       2
How can I implement the following cross validation folding technique?
train:(2012, 2013) - test:(2014)
train:(2013, 2014) - test:(2015)
That is, I want to use 2 years of historic observations to train a model and then test its accuracy in the subsequent year.
There's a standard sklearn approach to that, using GroupShuffleSplit. Per the docs, it provides randomized train/test indices such that all samples belonging to the same group end up on the same side of each split, which is very convenient for your use case (with the year as the group). Here is how it looks:
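A minimal sketch of that approach (the estimator, the parameter grid, and the choice of the calendar year as the group label are illustrative assumptions, not part of the question):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, GroupShuffleSplit

train = pd.DataFrame({'date': pd.DatetimeIndex(['2012-1-1', '2012-9-30', '2013-4-3',
                                                '2014-8-16', '2015-3-20', '2015-6-30']),
                      'feature1': [1.2, 3.3, 2.7, 4.0, 8.2, 6.5],
                      'feature2': [4, 4, 10, 3, 10, 9],
                      'target': [1, 2, 1, 3, 2, 2]})

X = train[['feature1', 'feature2']]
y = train['target']
groups = train['date'].dt.year            # one group per calendar year

# Each of the 2 splits holds out one of the four years as the test set.
gss = GroupShuffleSplit(n_splits=2, test_size=0.25, random_state=0)

grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid={'n_estimators': [10, 50]},
                    cv=gss)
grid.fit(X, y, groups=groups)             # groups are forwarded to the splitter
print(grid.best_params_)
```

One caveat: GroupShuffleSplit draws the held-out groups at random, so it keeps each year's observations together but does not by itself guarantee that the test year comes after the training years; for strictly forward-in-time folds, the custom list of splits described further down is safer.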
And you can pass that splitter to GridSearchCV through its cv argument, just as before.

There is also the TimeSeriesSplit class in sklearn, which splits time-series data (i.e. data sampled at fixed time intervals) into train/test sets. Note that, unlike standard cross-validation methods, successive training sets are supersets of those that come before them: in each split, the test indices must be higher than in the previous one, so shuffling within the cross-validator is inappropriate.

Alternatively, you can just pass an iterable with the splits to GridSearchCV as cv. This iterable should have the following format:
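For example, a hand-written iterable for the six-row frame above would look like this (the indices are row positions; the comments map each pair to the two folds from the question):

```python
import numpy as np

# one (train_indices, test_indices) pair per fold
custom_cv = [
    (np.array([0, 1, 2]), np.array([3])),     # train 2012-2013, test 2014
    (np.array([2, 3]), np.array([4, 5])),     # train 2013-2014, test 2015
]
```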
To get the idxs you can do something like the following, which yields one (train, test) index pair per fold:
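One way to build those pairs from the date column (assuming, as in the question, that each fold trains on the two years preceding its test year):

```python
import numpy as np
import pandas as pd

train = pd.DataFrame({'date': pd.DatetimeIndex(['2012-1-1', '2012-9-30', '2013-4-3',
                                                '2014-8-16', '2015-3-20', '2015-6-30']),
                      'feature1': [1.2, 3.3, 2.7, 4.0, 8.2, 6.5],
                      'feature2': [4, 4, 10, 3, 10, 9],
                      'target': [1, 2, 1, 3, 2, 2]})

years = train['date'].dt.year
custom_cv = []
for test_year in (2014, 2015):                            # one fold per test year
    train_idx = np.where(years.isin([test_year - 2, test_year - 1]))[0]
    test_idx = np.where(years == test_year)[0]
    custom_cv.append((train_idx, test_idx))

for tr, te in custom_cv:
    print(tr, te)
# [0 1 2] [3]
# [2 3] [4 5]
```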
Then you can do:
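A self-contained sketch putting it together (the estimator and parameter grid are again illustrative placeholders):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

train = pd.DataFrame({'date': pd.DatetimeIndex(['2012-1-1', '2012-9-30', '2013-4-3',
                                                '2014-8-16', '2015-3-20', '2015-6-30']),
                      'feature1': [1.2, 3.3, 2.7, 4.0, 8.2, 6.5],
                      'feature2': [4, 4, 10, 3, 10, 9],
                      'target': [1, 2, 1, 3, 2, 2]})

X = train[['feature1', 'feature2']]
y = train['target']
years = train['date'].dt.year

# train on the two years before each test year: (2012, 2013)->2014, (2013, 2014)->2015
custom_cv = [(np.where(years.isin([ty - 2, ty - 1]))[0],
              np.where(years == ty)[0]) for ty in (2014, 2015)]

grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid={'n_estimators': [10, 50]},
                    cv=custom_cv)     # any iterable of (train, test) splits is accepted
grid.fit(X, y)
print(grid.best_params_)
```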