Recurrent convolutional BLSTM neural network - arbitrary sequence lengths

Posted 2019-04-09 13:17

Using Keras + Theano, I successfully built a recurrent bidirectional LSTM neural network that can train on and classify DNA sequences of arbitrary length, using the following model (for the fully working code see: http://pastebin.com/jBLv8B72):

# Keras 1.x functional API (Convolution1D / MaxPooling1D are used in the variant further down)
from keras.models import Model
from keras.layers import Input, Dropout, LSTM, TimeDistributed, Dense, merge
from keras.layers import Convolution1D, MaxPooling1D

sequence = Input(shape=(None, ONE_HOT_DIMENSION), dtype='float32')
dropout = Dropout(0.2)(sequence)

# bidirectional LSTM
forward_lstm = LSTM(
    output_dim=50, init='uniform', inner_init='uniform', forget_bias_init='one', return_sequences=True,
    activation='tanh', inner_activation='sigmoid',
)(dropout)
backward_lstm = LSTM(
    output_dim=50, init='uniform', inner_init='uniform', forget_bias_init='one', return_sequences=True,
    activation='tanh', inner_activation='sigmoid', go_backwards=True,
)(dropout)
blstm = merge([forward_lstm, backward_lstm], mode='concat', concat_axis=-1)

dense = TimeDistributed(Dense(NUM_CLASSES))(blstm)

self.model = Model(input=sequence, output=dense)
self.model.compile(
    loss='binary_crossentropy',
    optimizer='adam',
    metrics=['accuracy']
)
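For reference, variable-length sequences can be fed to such a model one sequence per batch, e.g. along these lines. This is only a placeholder sketch, not the code from the pastebin: the one_hot_encode helper and the train_sequences / train_labels containers are hypothetical, and it assumes ONE_HOT_DIMENSION covers the four bases.

import numpy as np

BASE_INDEX = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

def one_hot_encode(seq):
    # placeholder: DNA string -> (length, ONE_HOT_DIMENSION) one-hot matrix
    encoded = np.zeros((len(seq), ONE_HOT_DIMENSION), dtype='float32')
    for i, base in enumerate(seq):
        encoded[i, BASE_INDEX[base]] = 1.0
    return encoded

# every sequence is its own batch: x is (1, length, ONE_HOT_DIMENSION) and
# y is (1, length, NUM_CLASSES), so the length may differ from call to call
for seq, labels in zip(train_sequences, train_labels):  # placeholder containers
    x = one_hot_encode(seq)[np.newaxis, ...]
    y = labels[np.newaxis, ...]
    self.model.train_on_batch(x, y)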

To improve the model's performance I want to add further layers, preferably convolution and max-pooling layers. I have made several attempts, but each one failed. For example, replacing the `dropout = Dropout(0.2)(sequence)` line in the model above with the following three lines:

convolution = Convolution1D(filter_length=6, nb_filter=10)(sequence)
max_pooling = MaxPooling1D(pool_length=2)(convolution)
dropout = Dropout(0.2)(max_pooling)

The model compiles, but training throws the following error:

ValueError: Input dimension mis-match. (input[0].shape[1] = 111, input[1].shape[1] = 53)
Apply node that caused the error: Elemwise{Composite{((i0 * log(i1)) + (i2 * log(i3)))}}(timedistributed_1_target, Elemwise{clip,no_inplace}.0, Elemwise{sub,no_inplace}.0, Elemwise{sub,no_inplace}.0)
Toposort index: 546
Inputs types: [TensorType(float32, 3D), TensorType(float32, 3D), TensorType(float32, 3D), TensorType(float32, 3D)]
Inputs shapes: [(1L, 111L, 2L), (1L, 53L, 2L), (1L, 111L, 2L), (1L, 53L, 2L)]
Inputs strides: [(888L, 8L, 4L), (424L, 8L, 4L), (888L, 8L, 4L), (424L, 8L, 4L)]
Inputs values: ['not shown', 'not shown', 'not shown', 'not shown']
Outputs clients: [[Sum{axis=[1, 2], acc_dtype=float64}(Elemwise{Composite{((i0 * log(i1)) + (i2 * log(i3)))}}.0)]]

Obviously there is a problem with the dimensions: the convolution and the max-pooling shorten the time axis, while the per-timestep targets still have the original sequence length. I have experimented with Reshape layers, but without success.
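To make the mismatch concrete, a quick back-of-the-envelope on the time axis (assuming the Keras 1.x defaults: border_mode='valid' for the convolution and a stride equal to pool_length for the pooling):

seq_len = 111                        # time steps in the failing batch (see the error above)
conv_len = seq_len - 6 + 1           # Convolution1D, filter_length=6, 'valid' padding -> 106
pool_len = conv_len // 2             # MaxPooling1D, pool_length=2 -> 53
print(seq_len, conv_len, pool_len)   # 111 106 53 -- but the targets still have 111 steps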

Is it even possible to use convolutional and/or max-pooling layers in the context of arbitrary sequence lengths?

Any help on this matter would be greatly appreciated!
