I'm trying to train an LSTM model with return_sequences=True, so that it returns the hidden state output for each input time step, to solve a regression problem.
My data shape is (31, 2720, 16), i.e. 31 sequences of 2720 timesteps with 16 features each.
My target shape is (31, 2720, 1), i.e. one target value per timestep of each sequence.
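For reference, placeholder arrays with exactly these shapes can be built like so (the np.random stand-ins are purely illustrative; my real arrays come from my own preprocessing):

import numpy as np

train_x = np.random.rand(31, 2720, 16).astype("float32")  # (sequences, timesteps, features)
train_y = np.random.rand(31, 2720, 1).astype("float32")   # one target value per timestep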
I've built the following model:
import os
import datetime

from tensorflow.keras import metrics
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, Dropout, LSTM, Masking
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam

model = Sequential()
opt = Adam(learning_rate=0.0001, clipnorm=1)
num_timesteps = train_x.shape[1]  # 2720 timesteps per sequence
num_features = train_x.shape[2]   # 16 features per timestep
model.add(Masking(mask_value=-10., input_shape=(num_timesteps, num_features)))
model.add(LSTM(32, return_sequences=True, stateful=False, activation='tanh'))
model.add(Dropout(0.3))
# this is the last LSTM layer, use return_sequences=False
model.add(LSTM(16, return_sequences=False, stateful=False, activation='tanh'))
model.add(Dropout(0.3))
model.add(Dense(16, activation='tanh'))
model.add(Dense(8, activation='tanh'))
model.add(Dense(1, activation='sigmoid'))
# use the Adam instance defined above rather than the default 'adam'
model.compile(loss='mse', optimizer=opt, metrics=[metrics.mean_absolute_error, metrics.mean_squared_error])
# logs_base_dir is defined earlier in my notebook
logdir = os.path.join(logs_base_dir, datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = TensorBoard(log_dir=logdir, update_freq=1)
model.summary()
summary:
Model: "sequential_33"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
masking_24 (Masking) (None, 2720, 16) 0
_________________________________________________________________
lstm_61 (LSTM) (None, 2720, 32) 6272
_________________________________________________________________
dropout_51 (Dropout) (None, 2720, 32) 0
_________________________________________________________________
lstm_62 (LSTM) (None, 16) 3136
_________________________________________________________________
dropout_52 (Dropout) (None, 16) 0
_________________________________________________________________
dense_67 (Dense) (None, 16) 272
_________________________________________________________________
dense_68 (Dense) (None, 8) 136
_________________________________________________________________
dense_69 (Dense) (None, 1) 9
=================================================================
Total params: 9,825
Trainable params: 9,825
Non-trainable params: 0
_________________________________________________________________
When trying to fit the model, I get the following error:
ValueError Traceback (most recent call last)
<ipython-input-354-afdba8dea179> in <module>()
----> 1 model.fit(train_x, train_y, epochs=1000, batch_size=128,validation_split = 0.2, callbacks=[tensorboard_callback,checkpoint])
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_utils.py in check_loss_and_target_compatibility(targets, loss_fns, output_shapes)
808 raise ValueError('A target array with shape ' + str(y.shape) +
809 ' was passed for an output of shape ' + str(shape) +
--> 810 ' while using as loss `' + loss_name + '`. '
811 'This loss expects targets to have the same shape '
812 'as the output.')
I'm trying to grasp the right way to structure the data. What am I missing?
Your target is of shape (31, 2720, 1), while the output of your current model is of shape (31, 1). The error in this case is self-explanatory: the loss expects the targets to have the same shape as the model output. You can solve this in one of two ways:

1. Looking at your model, I'm guessing you only want the loss with respect to the last timestep. In this case, slice the targets when you call model.fit (first sketch below).

2. If you want to compute the loss across all timesteps, add return_sequences=True to the second LSTM layer (second sketch below).
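A minimal sketch of option 1, assuming the last timestep of each target sequence is the value you want to predict; train_y[:, -1, :] has shape (31, 1), which matches the model output:

model.fit(
    train_x,
    train_y[:, -1, :],   # keep only the last timestep of each target sequence
    epochs=1000,
    batch_size=128,
    validation_split=0.2,
    callbacks=[tensorboard_callback, checkpoint],
)

For option 2, the only change is the second LSTM layer:

model.add(LSTM(16, return_sequences=True, stateful=False, activation='tanh'))

With return_sequences=True, the Dense layers that follow are applied to every timestep, so the model output becomes (None, 2720, 1) and matches your target.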
Your desired target shape is inconsistent with the model output.
Change this line:
model.add(LSTM(16, return_sequences=False, stateful=False, activation='tanh'))
to:
model.add(LSTM(16, return_sequences=True, stateful=False, activation='tanh'))
so that the temporal dimension is preserved. The Dense layers act on the last axis, so each timestep gets its own prediction and the output shape matches your target.
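To sanity-check the change, inspect the prediction shape (a quick sketch using train_x from the question):

preds = model.predict(train_x)
print(preds.shape)  # (31, 2720, 1) -- now consistent with train_y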