I have checked all the solutions, but I am still facing the same error. My training images have shape (26721, 32, 32, 1), which I believe is 4-dimensional, but I don't know why the error says it is 5-dimensional.
model = Sequential()
model.add(Convolution2D(16, 5, 5, border_mode='same', input_shape=input_shape))
And this is how I am calling model.fit_generator:
model.fit_generator(train_dataset, train_labels, nb_epoch=epochs, verbose=1,validation_data=(valid_dataset, valid_labels), nb_val_samples=valid_dataset.shape[0],callbacks=model_callbacks)
The problem is input_shape. It should contain only 3 dimensions; internally, Keras will add the batch dimension, making it 4. Since you probably used input_shape with 4 dimensions (batch included), Keras is adding the 5th. You should use input_shape=(32,32,1).
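For reference, a minimal sketch of a corrected model definition (assuming 32x32 grayscale inputs and the same Keras 1.x layer call as in your snippet):

from keras.models import Sequential
from keras.layers import Convolution2D

input_shape = (32, 32, 1)  # height, width, channels -- no batch dimension here

model = Sequential()
# Keras prepends the batch dimension itself, so the layer effectively sees (None, 32, 32, 1)
model.add(Convolution2D(16, 5, 5, border_mode='same', input_shape=input_shape))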
The problem is with input_shape. Try adding an extra dimension/channel to let Keras know that you are working on a grayscale image, i.e. 1:
input_shape=(56,56,1) (in your case, (32,32,1)).
A plain fully connected deep learning model probably won't raise this issue, but a ConvNet does.
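As a rough sketch (assuming train_dataset and valid_dataset are plain NumPy arrays that are still missing the channel axis, e.g. shape (num_samples, 32, 32)), you could add that dimension like this:

import numpy as np

# add a trailing channel axis for grayscale: (num_samples, 32, 32) -> (num_samples, 32, 32, 1)
train_dataset = np.expand_dims(train_dataset, axis=-1)
valid_dataset = np.expand_dims(valid_dataset, axis=-1)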
Whenever a 2D CNN is used, you need to check the image data format ("channels_first" vs "channels_last") and reshape your train and test data accordingly:

from keras import backend as K

if K.image_data_format() == 'channels_first':                   # check for channels_first
    train_img = train_img.reshape(train_img.shape[0], 1, x, x)  # reshape returns a new array, so assign it
    input_shape = (1, x, x)                                     # in your case x is 32
else:
    train_img = train_img.reshape(train_img.shape[0], x, x, 1)
    input_shape = (x, x, 1)
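Either way, the input_shape computed above is what you then pass to the first Convolution2D layer; train_img keeps the batch dimension as its first axis, while input_shape deliberately leaves it out.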