I am defining a Lambda layer with a function that uses the Conv2D layer:

def lambda_func(x, k):
    y = Conv2D(k, (3, 3), padding='same')(x)
    return y
And I call it like this:

k = 64
x = Conv2D(k, (3, 3), data_format='channels_last', padding='same', name='block1_conv1')(inputs)
y = Lambda(lambda_func, arguments={'k': k}, name='block1_conv1_loc')(x)
But in model.summary(), the Lambda layer shows no parameters!
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 224, 224, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
block1_conv1_loc (Lambda) (None, 224, 224, 64) 0
_________________________________________________________________
activation_1 (Activation) (None, 224, 224, 64) 0
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
flatten (Flatten) (None, 802816) 0
_________________________________________________________________
(There is a Dense layer under it, and a softmax 2-class classifier under that.) How can I ensure the Conv2D parameters of the Lambda layer show up in the summary and are trainable? I have also tried passing trainable=True inside the Lambda function:
def lambda_func(x, k):
    y = Conv2D(k, (3, 3), padding='same', trainable=True)(x)
    return y
But that did not make any difference.
Lambda layers don't have parameters.
The "Param #" column in the summary counts trainable variables, i.e. weights the model can "learn". Lambda layers never learn; they simply wrap a function you define. The Conv2D you create inside lambda_func is not tracked by the model: its weights are never registered, so they neither appear in the summary nor get trained.
If you do intend to use a "Convolutional Layer", use it outside of the lambda layer.
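For instance, the question's model could drop the Lambda entirely and apply the second Conv2D directly (a sketch reusing the question's layer names, with tf.keras imports assumed):

```python
from tensorflow.keras.layers import Input, Conv2D
from tensorflow.keras.models import Model

k = 64
inputs = Input(shape=(224, 224, 3))
x = Conv2D(k, (3, 3), padding='same', name='block1_conv1')(inputs)
# Use Conv2D as a regular layer instead of wrapping it in a Lambda;
# its weights are now tracked by the model and trainable.
y = Conv2D(k, (3, 3), padding='same', name='block1_conv1_loc')(x)
model = Model(inputs, y)
model.summary()  # block1_conv1_loc now reports its parameters
```

With this change, block1_conv1_loc shows 3*3*64*64 + 64 = 36928 parameters in the summary.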
Now, if you want a "convolution operation", you can use it inside the lambda layer, but there will be no learnable parameters; you define the filters yourself.
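A sketch of that case: a fixed, hand-defined kernel applied with the backend convolution op inside a Lambda (the 3x3 averaging kernel here is my own illustrative choice):

```python
import numpy as np
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Lambda
from tensorflow.keras.models import Model

# A fixed 3x3 averaging kernel, shape (3, 3, in_channels, out_channels).
# It is a constant, not a variable, so nothing is learned.
kernel = K.constant(np.full((3, 3, 1, 1), 1.0 / 9.0, dtype='float32'))

def fixed_conv(x):
    # K.conv2d applies the convolution op; it has no weights of its own
    return K.conv2d(x, kernel, padding='same')

inputs = Input(shape=(8, 8, 1))
y = Lambda(fixed_conv)(inputs)
model = Model(inputs, y)
# model.count_params() is 0: the filter is fixed, exactly as described above
```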
If you want to create a special layer that learns in a different way, then create a custom layer.
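A minimal sketch of such a custom layer: a 3x3 'same' convolution whose kernel is registered with add_weight in build(), so it is counted in the summary and trained like any other weight (the class name MyConv is illustrative, not from the question):

```python
import tensorflow as tf
from tensorflow.keras.layers import Layer, Input
from tensorflow.keras.models import Model

class MyConv(Layer):
    def __init__(self, filters, **kwargs):
        super().__init__(**kwargs)
        self.filters = filters

    def build(self, input_shape):
        in_ch = int(input_shape[-1])
        # add_weight registers a trainable variable, so it shows
        # up under "Param #" and receives gradient updates
        self.kernel = self.add_weight(
            name='kernel',
            shape=(3, 3, in_ch, self.filters),
            initializer='glorot_uniform',
            trainable=True)
        super().build(input_shape)

    def call(self, x):
        return tf.nn.conv2d(x, self.kernel, strides=1, padding='SAME')

inputs = Input(shape=(8, 8, 3))
y = MyConv(4, name='my_conv')(inputs)
model = Model(inputs, y)
# my_conv reports 3*3*3*4 = 108 trainable parameters in model.summary()
```

Inside call() you are free to implement whatever learning rule or operation you need, which is the right tool when a plain Conv2D doesn't fit.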