I am currently working on a CNN in which I want to apply a 2D kernel to an image, but it only has to perform a 1D convolution, meaning it only moves along one axis (the x-axis in this case).
The kernel spans the full y-axis of the image, i.e. its height equals the image height. The number of filters applied is not a concern at the moment.
An example: given an image of size (6, 3, 3) = (rows, cols, color_channels).
How should I perform a 1D convolution with a 2D filter?
I tried what was suggested by @Marcin Możejko:
from keras.models import Sequential
from keras.layers import Conv2D, Reshape

dim_x = 3
dim_y = 6
color_channels = 3

model = Sequential()
# model.add(ZeroPadding2D((6, 4), input_shape=(6, 3, 3)))
model.add(Conv2D(filters=32, kernel_size=(dim_y, 1), activation='linear',
                 input_shape=(6, 3, 3)))
print(model.output_shape)
model.add(Reshape((dim_x, color_channels)))  # this line triggers the error below
Error:
The total size of the new array must be unchanged
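For context, the size mismatch behind this error can be seen by checking the convolution's output shape (a minimal sketch, not from the original post): a (6, 1) kernel with 32 filters on a (6, 3, 3) input produces (None, 1, 3, 32), i.e. 1 * 3 * 32 = 96 values per sample, while Reshape((dim_x, color_channels)) asks for only 3 * 3 = 9.

from keras.models import Sequential
from keras.layers import Conv2D

model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(6, 1), activation='linear',
                 input_shape=(6, 3, 3)))
print(model.output_shape)  # (None, 1, 3, 32) -> 96 values per sample, not 9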
Assuming that your image has shape=(dim_x, dim_y, img_channels), you can obtain a 1D convolution by setting:

Conv2D(filters=output_channels, kernel_size=(1, dim_y))

Remember that the output from this layer would have shape (dim_x, 1, output_channels). If you want your input to be sequential, you may use the Reshape layer by setting:

Reshape((dim_x, output_channels))

This would produce output with shape (dim_x, output_channels). An interesting fact is that this is exactly the way Conv1D works in Keras with the tf backend.
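Putting this together for the (6, 3, 3) image from the question is sketched below (my own illustration, assuming Keras with the TensorFlow backend; output_channels is just a name for the filter count). Because the question's image is laid out as (rows, cols, channels), the full-size kernel dimension comes first, so kernel_size=(dim_y, 1) and the window slides along the column (x) axis:

from keras.models import Sequential
from keras.layers import Conv2D, Reshape

dim_x = 3            # columns: the axis the kernel slides along
dim_y = 6            # rows: the kernel covers the full image height
color_channels = 3
output_channels = 32

model = Sequential()
# Full-height kernel -> effectively a 1D convolution along the x-axis
model.add(Conv2D(filters=output_channels, kernel_size=(dim_y, 1),
                 activation='linear',
                 input_shape=(dim_y, dim_x, color_channels)))
print(model.output_shape)   # (None, 1, 3, 32)
# Drop the collapsed spatial dimension to get a (steps, channels) sequence
model.add(Reshape((dim_x, output_channels)))
print(model.output_shape)   # (None, 3, 32)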