I would like to develop a convolutional network architecture in which the first layer (Conv1D in this case) contains some portion of prespecified, untrainable fixed filters, while also having several trainable filters that the model can learn. Is this possible, and how would it be done?
My intuition is that I can make two separate Conv1D layers - one trainable and one untrainable - and then somehow concatenate them, but I'm not sure what this would look like in code. Also, for the untrainable filters, how do I prespecify the weights?
All Keras layers have a `set_weights` method (https://keras.io/layers/about-keras-layers/). You can freeze the `Conv1D` layer using `trainable=False` (https://keras.io/getting-started/faq/#how-can-i-freeze-keras-layers). Concatenate the trainable `Conv1D` and the non-trainable `Conv1D` using the `Concatenate` layer (https://keras.io/layers/merge/). This is quite easy with the functional API:
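Here is a minimal sketch of that approach. The input shape, filter counts, kernel size, and the random values used for the fixed kernel are all placeholders you would replace with your own:

```python
import numpy as np
from keras.layers import Input, Conv1D, Concatenate
from keras.models import Model

# Placeholder sizes -- replace with your own.
timesteps, channels = 128, 1
n_fixed, n_trainable, kernel_size = 4, 8, 5

inp = Input(shape=(timesteps, channels))

# Trainable branch: filters learned as usual.
trainable_conv = Conv1D(n_trainable, kernel_size, padding='same')(inp)

# Non-trainable branch: frozen, weights set by hand below.
fixed_layer = Conv1D(n_fixed, kernel_size, padding='same', trainable=False)
fixed_conv = fixed_layer(inp)

# Concatenate the two sets of feature maps along the channel axis.
out = Concatenate()([trainable_conv, fixed_conv])

model = Model(inp, out)

# Prespecify the fixed filters after the layer is built:
# kernel shape is (kernel_size, channels, n_fixed), bias shape is (n_fixed,).
# The random values here are placeholders for your actual fixed filters.
fixed_kernel = np.random.randn(kernel_size, channels, n_fixed)
fixed_bias = np.zeros(n_fixed)
fixed_layer.set_weights([fixed_kernel, fixed_bias])

model.compile(optimizer='adam', loss='mse')
model.summary()
```

Note that `set_weights` can only be called once the layer has been built, i.e. after it has been called on an input tensor, which is why the weights are assigned after the model is constructed.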
I haven't tried the code, but it should work after filling in the small details.