scikit-learn cross validation custom splits for time series data

Posted 2019-03-16 10:54

I'd like to use scikit-learn's GridSearchCV to determine some hyperparameters for a random forest model. My data is time-dependent and looks something like this:

import pandas as pd

train = pd.DataFrame({'date': pd.DatetimeIndex(['2012-1-1', '2012-9-30', '2013-4-3',
                                                '2014-8-16', '2015-3-20', '2015-6-30']),
                      'feature1': [1.2, 3.3, 2.7, 4.0, 8.2, 6.5],
                      'feature2': [4, 4, 10, 3, 10, 9],
                      'target': [1, 2, 1, 3, 2, 2]})

>>> train
        date  feature1  feature2  target
0 2012-01-01       1.2         4       1
1 2012-09-30       3.3         4       2
2 2013-04-03       2.7        10       1
3 2014-08-16       4.0         3       3
4 2015-03-20       8.2        10       2
5 2015-06-30       6.5         9       2

How can I implement the following cross validation folding technique?

train:(2012, 2013) - test:(2014)
train:(2013, 2014) - test:(2015)

That is, I want to use 2 years of historic observations to train a model and then test its accuracy in the subsequent year.

3 answers

爷、活的狠高调
#2 · 2019-03-16 11:32

There's a standard sklearn approach to this, using GroupShuffleSplit. From the docs:

Provides randomized train/test indices to split data according to a third-party provided group. This group information can be used to encode arbitrary domain specific stratifications of the samples as integers.

For instance the groups could be the year of collection of the samples and thus allow for cross-validation against time-based splits.

This is very convenient for your use case. Here's how it looks:

from sklearn.model_selection import GroupShuffleSplit

cv = GroupShuffleSplit().split(X, y, groups)

Then pass that to GridSearchCV as usual:

GridSearchCV(estimator, param_grid, cv=cv, ...)
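
For the data in the question, a minimal sketch of that approach might look like the following (the estimator and parameter grid are placeholders, and each calendar year is used as a group):

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, GroupShuffleSplit

X = train[['feature1', 'feature2']]
y = train['target']
groups = train['date'].dt.year  # one group per calendar year

# each split holds out one whole year as the test set
cv = GroupShuffleSplit(n_splits=3, test_size=1, random_state=0).split(X, y, groups)

search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid={'n_estimators': [10, 50]},  # placeholder grid
                      cv=cv)
search.fit(X, y)

Keep in mind that GroupShuffleSplit picks the held-out years at random, so it keeps each year intact but does not guarantee that the test year comes after the training years.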
Juvenile、少年°
#3 · 2019-03-16 11:38

There is also TimeSeriesSplit in sklearn, which splits time-series data (i.e. data with fixed time intervals) into train/test sets. Note that, unlike standard cross-validation methods, successive training sets are supersets of those that come before them, i.e. in each split the test indices must be higher than before, so shuffling in the cross-validator is inappropriate.
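
A rough sketch of the default behaviour on the question's train frame (already sorted by date); note that TimeSeriesSplit works on row positions rather than calendar years, so this only approximates the year-based folds asked for:

from sklearn.model_selection import TimeSeriesSplit

X = train[['feature1', 'feature2']]
y = train['target']

# with 6 rows and n_splits=3, each test fold is a single later row
for train_idx, test_idx in TimeSeriesSplit(n_splits=3).split(X):
    print(train_idx, test_idx)
# [0 1 2] [3]
# [0 1 2 3] [4]
# [0 1 2 3 4] [5]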

你好瞎i
#4 · 2019-03-16 11:41

You just have to pass an iterable with the splits to GridSearchCV. The splits should have the following format:

[
 (split1_train_idxs, split1_test_idxs),
 (split2_train_idxs, split2_test_idxs),
 (split3_train_idxs, split3_test_idxs),
 ...
]

To get the indices, you can do something like this (using the train frame from the question and converting each group to a plain list so that + concatenates them):

groups = train.groupby(train.date.dt.year).groups
# {2012: [0, 1], 2013: [2], 2014: [3], 2015: [4, 5]}
sorted_groups = [list(value) for (key, value) in sorted(groups.items())]
# [[0, 1], [2], [3], [4, 5]]

# train on two consecutive years, test on the year that follows
cv = [(sorted_groups[i] + sorted_groups[i + 1], sorted_groups[i + 2])
      for i in range(len(sorted_groups) - 2)]

The result looks like this:

[([0, 1, 2], [3]),  # idxs of first split as (train, test) tuple
 ([2, 3], [4, 5])]  # idxs of second split as (train, test) tuple

Then you can do:

GridSearchCV(estimator, param_grid, cv=cv, ...)
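
Putting it together with the question's data, a minimal sketch (the estimator and parameter grid are placeholders) might be:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X = train[['feature1', 'feature2']]
y = train['target']

# cv is the list of (train_idxs, test_idxs) tuples built above
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid={'max_depth': [2, 4]},  # placeholder grid
                      cv=cv)
search.fit(X, y)
print(search.best_params_)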