Activation function error in a 1D CNN in Keras

Asked 2019-04-14 04:13

I'm creating a model to classify whether the input waveform contains a rising edge of the SDA line of an I2C bus.

Each input has 20000 data points, and I have 100 training samples.

I initially found an answer about the input shape here: Keras 1D CNN: How to specify dimension correctly?

However, I'm getting an error in the activation function:

ValueError: Error when checking target: expected activation_1 to have 3 dimensions, but got array with shape (100, 1)

My model is:

model.add(Conv1D(filters=n_filter,
                 kernel_size=input_filter_length,
                 strides=1,
                 activation='relu',
                 input_shape=(20000, 1)))
model.add(BatchNormalization())
model.add(MaxPooling1D(pool_size=4, strides=None))

model.add(Dense(1))
model.add(Activation("sigmoid"))

adam = Adam(lr=learning_rate)

model.compile(optimizer= adam, loss='binary_crossentropy', metrics=['accuracy'])

model.fit(train_data, train_label,
          nb_epoch=10,
          batch_size=batch_size, shuffle=True)

score = np.asarray(model.evaluate(test_new_data, test_label, batch_size=batch_size))*100.0

I can't figure out the problem here, or why the activation function expects a 3-dimensional target.

2 Answers
贼婆χ · 2019-04-14 04:50

Conv1D outputs a 3D tensor (and the data will stay 3D until it reaches the Dense layer).

Conv output: (BatchSize, Length, Filters)

For the Dense layer to output only one result per sample, you need to add a Flatten() or Reshape((shape)) layer first, to collapse the output to 2D: (BatchSize, Length*Filters).

If you call model.summary(), you will see exactly what shape each layer outputs. You have to adjust the model so that its final output has exactly the same shape as the array you pass as targets. The None that appears in those shapes is the batch size and can be ignored.
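
A minimal sketch of that fix, assuming n_filter and input_filter_length are defined as in the question (the shape comments are symbolic; the exact length L depends on the kernel size):

from keras.models import Sequential
from keras.layers import Conv1D, BatchNormalization, MaxPooling1D
from keras.layers import Flatten, Dense, Activation

model = Sequential()
model.add(Conv1D(filters=n_filter,
                 kernel_size=input_filter_length,
                 strides=1,
                 activation='relu',
                 input_shape=(20000, 1)))
model.add(BatchNormalization())        # (None, L, n_filter), still 3D
model.add(MaxPooling1D(pool_size=4))   # (None, L // 4, n_filter)
model.add(Flatten())                   # (None, (L // 4) * n_filter), now 2D
model.add(Dense(1))
model.add(Activation("sigmoid"))       # (None, 1), matches targets of shape (100, 1)

model.summary()  # prints the output shape of every layer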


About your model: I think you need more convolution layers, gradually reducing the number of filters, because condensing so much data in a single Dense layer does not usually bring good results.
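
A rough sketch of what that could look like; the filter counts and kernel sizes below are arbitrary placeholders, not tuned values:

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

model = Sequential()
model.add(Conv1D(64, 11, activation='relu', input_shape=(20000, 1)))
model.add(MaxPooling1D(4))
model.add(Conv1D(32, 11, activation='relu'))  # fewer filters as the sequence shrinks
model.add(MaxPooling1D(4))
model.add(Conv1D(16, 11, activation='relu'))
model.add(MaxPooling1D(4))                    # roughly 300 steps left at this point
model.add(Flatten())                          # a much smaller input for the Dense layer
model.add(Dense(1, activation='sigmoid'))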

About dimensions: Keras layers tutorial and samples

倾城 Initia · 2019-04-14 04:59

The problem is that, starting from Keras 2.0, a Dense layer applied to a sequence is applied to each time step separately, so given a sequence it produces a sequence. Your Dense layer is therefore producing a sequence of 1-element vectors, which causes the error (your target is not a sequence).
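
You can see this behaviour in isolation with a toy model; only the shapes matter here:

from keras.models import Sequential
from keras.layers import Dense

m = Sequential()
m.add(Dense(1, input_shape=(5, 3)))  # a length-5 sequence of 3-dim vectors
print(m.output_shape)                # (None, 5, 1): still a sequence, not (None, 1)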

There are several ways to reduce a sequence to a vector so that a Dense layer can be applied to it; whichever you choose, the sanity check sketched after the list lets you verify the fix:

  1. GlobalPooling:

    You may use global pooling layers such as GlobalAveragePooling1D or GlobalMaxPooling1D, e.g.:

    model.add(Conv1D(filters=n_filter,
                     kernel_size=input_filter_length,
                     strides=1,
                     activation='relu',
                     input_shape=(20000, 1)))
    model.add(BatchNormalization())
    model.add(GlobalMaxPooling1D())  # global pooling takes no pool_size/strides arguments
    
    model.add(Dense(1))
    model.add(Activation("sigmoid"))
    
  2. Flattening:

    You can collapse the whole sequence to a single vector using a Flatten layer:

    model.add(Conv1D(filters=n_filter,
                     kernel_size=input_filter_length,
                     strides=1,
                     activation='relu',
                     input_shape=(20000, 1)))
    model.add(BatchNormalization())
    model.add(MaxPooling1D(pool_size=4, strides=None))
    model.add(Flatten())
    
    model.add(Dense(1))
    model.add(Activation("sigmoid"))
    
  3. RNN Postprocessing:

    You could also add a recurrent layer on top of your sequence and make it return only its last output:

    model.add(Conv1D(filters=n_filter,
                     kernel_size=input_filter_length,
                     strides=1,
                     activation='relu',
                     input_shape=(20000, 1)))
    model.add(BatchNormalization())
    model.add(MaxPooling1D(pool_size=4, strides=None))
    model.add(SimpleRNN(10, return_sequences=False))
    
    model.add(Dense(1))
    model.add(Activation("sigmoid"))
    
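Whichever variant you pick, here is a quick sanity check before training, assuming train_data has shape (100, 20000, 1) and train_label has shape (100, 1) as in the question:

print(model.output_shape)  # should now be (None, 1), matching the targets
model.compile(optimizer=adam, loss='binary_crossentropy', metrics=['accuracy'])
model.fit(train_data, train_label,
          epochs=10,  # `epochs` is the Keras 2 name for `nb_epoch`
          batch_size=batch_size, shuffle=True)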