What if we specify a batch size of 15 and a sample size of 1000, which is not divisible by 15, when training a Keras model? Should it still be able to train?
I have also looked into this answer, but it does not help with my question.
Can anybody please explain this? Thank you.
Hi guys, I found the answer to this.
In this case, Keras will put the remaining 10 samples into the last step of the epoch.
E.g.: 15 x 66 + 10 = 1000
That means it takes 66 batches of size 15, and in the final step it takes only the remaining 10 samples.
However, this only works with input_shape; if we use batch_input_shape, it will raise an error, because then we are fixing the batch size at the graph level.
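For illustration, here is a minimal sketch of that behaviour; the model, the feature count, and the dummy data are assumptions made up for the example, not taken from the question:

import numpy as np
from tensorflow import keras

x = np.random.rand(1000, 8)             # 1000 dummy samples with 8 features
y = np.random.randint(0, 2, size=1000)  # dummy binary labels

model = keras.Sequential([
    # input_shape leaves the batch dimension flexible, so a partial final batch is fine
    keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Keras runs ceil(1000 / 15) = 67 steps per epoch; the last step has only 10 samples
model.fit(x, y, batch_size=15, epochs=1)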
This is no problem for your training and validation data. The generator will take care of this. Hence you can simply use:
STEPS = train_generator.n // train_generator.batch_size
VALID_STEPS = validation_generator.n // validation_generator.batch_size
history = model.fit_generator(
    train_generator,
    steps_per_epoch=STEPS,
    epochs=100,
    validation_data=validation_generator,
    validation_steps=VALID_STEPS)
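Note that floor division leaves the leftover samples out of each epoch. If you want every sample to be seen, a ceiling division also works, because these generators yield a smaller final batch; this is a small variation, not part of the original answer:

import math

STEPS = math.ceil(train_generator.n / train_generator.batch_size)
VALID_STEPS = math.ceil(validation_generator.n / validation_generator.batch_size)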
However, for your test set, make sure that the batch size divides the data evenly; otherwise you run the risk of your predictions not matching your true labels when comparing the two (please check this article, which highlights the issue: https://medium.com/difference-engine-ai/keras-a-thing-you-should-know-about-keras-if-you-plan-to-train-a-deep-learning-model-on-a-large-fdd63ce66bd2). You can ensure that the batch size fits your data by using a loop, for example:
div = 1
for i in range(1, 160):
    if len(test_data) % i == 0:
        div = i
batch_size = div  # largest divisor of len(test_data) below 160
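For example, with a batch size chosen this way the predictions line up one-to-one with the true labels (assuming test_data is a NumPy array of samples and model is the trained model; both are placeholders here):

preds = model.predict(test_data, batch_size=batch_size)
assert len(preds) == len(test_data)  # exactly one prediction per test sample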