Is it possible to use Keras's scikit-learn API together with the fit_generator() method? Or is there another way to yield batches for training? I'm using SciPy sparse matrices, which must be converted to NumPy arrays before being fed to Keras, but I can't convert them all at once because of high memory consumption. Here is my function for yielding batches:
def batch_generator(X, y, batch_size):
    n_splits = len(X) // (batch_size - 1)

    X = np.array_split(X, n_splits)
    y = np.array_split(y, n_splits)

    while True:
        for i in range(len(X)):
            X_batch = []
            y_batch = []
            for ii in range(len(X[i])):
                X_batch.append(X[i][ii].toarray().astype(np.int8))  # conversion sparse matrix -> np.array
                y_batch.append(y[i][ii])

            yield (np.array(X_batch), np.array(y_batch))
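For context, outside of the scikit-learn wrapper the generator can be plugged into fit_generator() directly, along these lines (a minimal sketch only, assuming model is an already-compiled Keras model, X is a sequence of single-row sparse matrices, and y the matching labels; argument names follow Keras 1.x):

# Sketch, not my actual training code: plain Keras training with the generator above.
model.fit_generator(
    batch_generator(X, y, batch_size=32),
    samples_per_epoch=len(X),  # steps_per_epoch=len(X) // 32 in Keras 2
    nb_epoch=50                # epochs=50 in Keras 2
)

This works fine on its own, but it bypasses the scikit-learn wrapper entirely, which is exactly what I would like to avoid.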
and here is example code with cross-validation:
from sklearn.model_selection import StratifiedKFold, GridSearchCV
from sklearn import datasets
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras.wrappers.scikit_learn import KerasClassifier
import numpy as np


def build_model(n_hidden=32):
    model = Sequential([
        Dense(n_hidden, input_dim=4),
        Activation("relu"),
        Dense(n_hidden),
        Activation("relu"),
        Dense(3),
        Activation("sigmoid")
    ])
    model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
    return model
iris = datasets.load_iris()
X = iris["data"]
y = iris["target"].flatten()
param_grid = {
    "n_hidden": np.array([4, 8, 16]),
    "nb_epoch": np.array(range(50, 61, 5))
}
model = KerasClassifier(build_fn=build_model, verbose=0)
skf = StratifiedKFold(n_splits=5).split(X, y) # this yields (train_indices, test_indices)
grid = GridSearchCV(model, param_grid, cv=skf, verbose=2, n_jobs=4)
grid.fit(X, y)
print(grid.best_score_)
print(grid.cv_results_["params"][grid.best_index_])
To explain further: the code builds a model for every possible combination of hyper-parameters in param_grid. Each model is then trained and evaluated, one by one, on the train-test splits (folds) provided by StratifiedKFold. The final score for a given combination is the mean score across all folds.
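In other words, the grid search above is roughly equivalent to this manual loop (a simplified sketch that ignores refitting on the full data and parallelism):

from itertools import product

# Simplified sketch of what GridSearchCV does with the StratifiedKFold splits:
# every hyper-parameter combination is scored on every fold, and the mean
# fold score decides which combination wins.
results = {}
for n_hidden, nb_epoch in product([4, 8, 16], range(50, 61, 5)):
    fold_scores = []
    for train_idx, test_idx in StratifiedKFold(n_splits=5).split(X, y):
        clf = KerasClassifier(build_fn=build_model, n_hidden=n_hidden,
                              nb_epoch=nb_epoch, verbose=0)
        clf.fit(X[train_idx], y[train_idx])
        fold_scores.append(clf.score(X[test_idx], y[test_idx]))
    results[(n_hidden, nb_epoch)] = np.mean(fold_scores)

best_params = max(results, key=results.get)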
So, is it somehow possible to insert a preprocessing step into the code above that converts the data (sparse matrices) to dense arrays right before the actual fitting? I know I could write my own cross-validation generator, but it has to yield indices, not the actual data!
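To make that last point concrete, by "cross-validation generator" I mean something like the sketch below (the name is just for illustration); as far as I can tell, it can only hand GridSearchCV index arrays, so there is no obvious place in it to densify the sparse rows:

# Illustration of the constraint: a custom CV generator for GridSearchCV
# yields (train_indices, test_indices) pairs, never the converted batches.
def my_cv(X, y, n_splits=5):
    for train_idx, test_idx in StratifiedKFold(n_splits=n_splits).split(X, y):
        yield train_idx, test_idx  # indices only, not X[train_idx].toarray()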