import keras

input0 = keras.layers.Input((32, 32, 3), name='Input0')
flatten = keras.layers.Flatten(name='Flatten')(input0)
relu1 = keras.layers.Dense(256, activation='relu', name='ReLU1')(flatten)
dropout = keras.layers.Dropout(1., name='Dropout')(relu1)
softmax2 = keras.layers.Dense(10, activation='softmax', name='Softmax2')(dropout)
model = keras.models.Model(inputs=input0, outputs=softmax2, name='cifar')
Just to test whether dropout is working, I set the dropout rate to 1.0. With every hidden unit dropped, the parameters should stay frozen in each epoch, with no tuning at all. However, the accuracy keeps growing, even though I drop all the hidden nodes. What's wrong?
Nice catch!
It would seem that the issue linked in the comment above by Dennis Soemers, Keras Dropout layer changes results with dropout=0.0, has not been fully resolved, and Keras somehow blunders when faced with a dropout rate of 1.0 [see UPDATE at the end of the post]. Modifying the model shown in the Keras MNIST MLP example accordingly:
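A minimal sketch of that modification, assuming the standard two-hidden-layer MNIST MLP from the Keras examples with both dropout rates set to 1.0:

from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(1.0))  # rate=1.0: every unit dropped
model.add(Dense(512, activation='relu'))
model.add(Dropout(1.0))  # rate=1.0: every unit dropped
model.add(Dense(10, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])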
indeed gives a model that keeps training, despite all neurons being dropped, just as you report.
Nevertheless, if you try a dropout rate of 0.99, i.e. replace the two dropout layers in the above model as sketched below,
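A sketch of the change, with each of the two dropout calls now using a rate of 0.99 instead of 1.0:

model.add(Dropout(0.99))  # used in place of both Dropout(1.0) layers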
then there is effectively no training taking place, as should be the case.
UPDATE (after the comment by Yu-Yang in the OP): It seems to be a design choice not to do anything when the dropout rate is equal to either 0 or 1; the Dropout layer becomes effective only when the rate lies strictly between 0 and 1.
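Judging from the Keras 2 source, the check looks roughly like this (a sketch of the relevant part of Dropout.call, not the verbatim source):

# Sketch: dropout is applied only when 0 < rate < 1; otherwise the
# layer simply returns its inputs unchanged.
def call(self, inputs, training=None):
    if 0. < self.rate < 1.:
        noise_shape = self._get_noise_shape(inputs)

        def dropped_inputs():
            return K.dropout(inputs, self.rate, noise_shape, seed=self.seed)

        return K.in_train_phase(dropped_inputs, inputs, training=training)
    return inputs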
Nevertheless, as already commented, a warning message in such cases (and a relevant note in the documentation) would arguably be a good idea.