How to obtain the runtime batch size of a Keras model


Question:

Based on this post, I need some basic implementation help. Below you can see my model, which uses a Dropout layer. When the noise_shape parameter is set, the last batch of an epoch may be smaller than the fixed batch size, which raises an error (see the other post).

Original model:

from keras.models import Sequential
from keras.layers import Masking, Dropout, Dense, LSTM
from keras.constraints import max_norm

def LSTM_model(X_train, Y_train, dropout, hidden_units, MaskWert, batchsize):
    model = Sequential()
    model.add(Masking(mask_value=MaskWert, input_shape=(X_train.shape[1], X_train.shape[2])))
    # fixed batch size in noise_shape -- fails when the last batch is smaller
    model.add(Dropout(dropout, noise_shape=(batchsize, 1, X_train.shape[2])))
    model.add(Dense(hidden_units, activation='sigmoid', kernel_constraint=max_norm(max_value=4.)))
    model.add(LSTM(hidden_units, return_sequences=True, dropout=dropout, recurrent_dropout=dropout))

Now, Alexandre Passos suggested obtaining the runtime batch size with tf.shape. I tried to implement the runtime batch size idea in Keras in several ways, but none of them worked.

import keras.backend as K

def backend_shape(x):
    # K.shape returns a symbolic shape tensor, not Python integers
    return K.shape(x)

def LSTM_model(X_train, Y_train, dropout, hidden_units, MaskWert, batchsize):
    batchsize = backend_shape(X_train)
    model = Sequential()
    ...
    model.add(Dropout(dropout, noise_shape=(batchsize[0], 1, X_train.shape[2])))
    ...

But that just gave me the static shape of the input, not the runtime shape of the input tensor.
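For illustration, this is the difference I am running into (a small sketch, assuming standalone Keras with the TensorFlow backend):

import keras.backend as K
from keras.layers import Input

x = Input(shape=(10, 3))  # the batch axis is unknown when the graph is built
print(K.int_shape(x))     # (None, 10, 3) -- static shape, batch size is None
print(K.shape(x))         # a symbolic tensor, resolved only at runtime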

I also tried to use a Lambda layer:

def output_of_lambda(input_shape):
    return input_shape

def LSTM_model_2(X_train, Y_train, dropout, hidden_units, MaskWert, batchsize):
    model = Sequential()
    model.add(Lambda(output_of_lambda, output_shape=output_of_lambda))
    ...
    # outputshape is not defined at this point, which is part of the problem
    model.add(Dropout(dropout, noise_shape=(outputshape[0], 1, X_train.shape[2])))

I tried different variants as well, but as you have already guessed, none of them worked. Is the model definition actually the right place for this? Could you give me a tip, or better, just tell me how to obtain the runtime batch size of a Keras model? Thanks so much.

Answer 1:

The current implementation does adjust the noise shape according to the runtime batch size. From the Dropout layer's implementation code:

symbolic_shape = K.shape(inputs)
noise_shape = [symbolic_shape[axis] if shape is None else shape
               for axis, shape in enumerate(self.noise_shape)]

So if you pass noise_shape=(None, 1, features), the None entry is replaced with the corresponding symbolic dimension, and the effective noise shape becomes (runtime_batchsize, 1, features), following the code above.
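For example, applied to the model from the question (a minimal sketch; the imports are my assumption for standalone Keras, and only the noise_shape argument changes):

from keras.models import Sequential
from keras.layers import Masking, Dropout, Dense, LSTM
from keras.constraints import max_norm

def LSTM_model(X_train, Y_train, dropout, hidden_units, MaskWert, batchsize):
    model = Sequential()
    model.add(Masking(mask_value=MaskWert, input_shape=(X_train.shape[1], X_train.shape[2])))
    # None in the batch axis is replaced with the runtime batch size,
    # so a smaller final batch no longer causes a shape mismatch
    model.add(Dropout(dropout, noise_shape=(None, 1, X_train.shape[2])))
    model.add(Dense(hidden_units, activation='sigmoid', kernel_constraint=max_norm(max_value=4.)))
    model.add(LSTM(hidden_units, return_sequences=True, dropout=dropout, recurrent_dropout=dropout))
    return model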