I want to manipulate the activations of the previous layer with a custom Keras layer. The layer below simply multiplies the activations of the previous layer by a number.
import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.layers import Layer, Input, Dense
from keras.models import Model

class myLayer(Layer):
    def __init__(self, **kwargs):
        super(myLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # input_shape is a list because the layer is called on a list of inputs
        self.output_dim = input_shape[0][1]
        super(myLayer, self).build(input_shape)

    def call(self, inputs, **kwargs):
        if not isinstance(inputs, list):
            raise ValueError('This layer should be called on a list of inputs.')
        mainInput = inputs[0]    # activations of the previous layer
        nInput = inputs[1]       # the number to multiply with
        changed = tf.multiply(mainInput, nInput)
        forTest = changed        # scaled activations in the test phase
        forTrain = inputs[0]     # unchanged activations in the training phase
        return K.in_train_phase(forTrain, forTest)

    def compute_output_shape(self, input_shape):
        print(input_shape)
        return (input_shape[0][0], self.output_dim)
I am creating the model as follows:
inputTensor = Input((5,))
out = Dense(units, input_shape=(5,), activation='relu')(inputTensor)

n = K.placeholder(shape=(1,))
auxInput = Input(tensor=n)

out = myLayer()([out, auxInput])
out = Dense(units, activation='relu')(out)
out = Dense(3, activation='softmax')(out)
model = Model(inputs=[inputTensor, auxInput], outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
I get this error when I try to use
model.fit(X_train, Y_train, epochs=epochs, verbose=1)
Error
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_3' with dtype float and shape [1]
And when I try to provide a value for the placeholder like this:
model.fit([X_train, np.array([3])], Y_train, epochs=epochs, verbose=1)
I get:
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 arrays but instead got the following list of 2 arrays:
How should I initialize this placeholder? My goal is to use model.evaluate to test the effect of different values of n on the model during inference. Thanks.
You can use Input(shape=(1,)) instead of a placeholder. Also, there's no need to provide input_shape to Dense, since Input(shape=(5,)) already handles it. Repeat the value n when feeding it into the model, for example:
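A rough sketch of this approach (assuming the same units, inputTensor, X_train, Y_train and epochs as in the question, and an arbitrary value of 3 for n):

n = Input(shape=(1,))                     # ordinary input instead of a placeholder
out = Dense(units, activation='relu')(inputTensor)
out = myLayer()([out, n])
out = Dense(units, activation='relu')(out)
out = Dense(3, activation='softmax')(out)
model = Model(inputs=[inputTensor, n], outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])

# repeat the scalar so there is one copy of n per sample
model.fit([X_train, np.full((len(X_train), 1), 3.0)], Y_train, epochs=epochs, verbose=1)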
Edit:
What's been described above is just a quick hack. If you want to provide multiple parameters to the layer, you can initialize a K.variable in the constructor __init__(). For example:
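Something along these lines (a sketch rather than the exact original code; the attribute name scale and the layer name 'my_layer' are chosen to match the set_value call below):

class MyScaleLayer(Layer):
    def __init__(self, scale=1.0, **kwargs):
        # store the multiplier as a backend variable so it can be changed later
        self.scale = K.variable(scale)
        super(MyScaleLayer, self).__init__(**kwargs)

    def call(self, inputs, **kwargs):
        return inputs * self.scale

    def compute_output_shape(self, input_shape):
        return input_shape

out = MyScaleLayer(scale=1.0, name='my_layer')(out)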
By assigning a name to this layer, it'll be easier to get the variables and modify the value in the test phase, e.g. K.set_value(model.get_layer('my_layer').scale, 5).
I found a solution avoiding the use of an array for n. Instead of using a placeholder, use a K.variable:
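For instance (a sketch; the initial value 3.0 is arbitrary):

n = K.variable([3.0])        # instead of K.placeholder(shape=(1,))
auxInput = Input(tensor=n)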
Then you can set the value of n like this at any time, even after compiling the model:
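For example (the value 5 is just an illustration):

K.set_value(n, [5.0])        # takes effect immediately, no recompilation needed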
This allows you to keep training without having to recompile the model, and without passing n to the fit method. If you are working with several values like that, you can pack them all into a single variable:
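A sketch with four arbitrary values:

n = K.variable([1.0, 2.0, 3.0, 4.0])     # four parameters in one auxiliary tensor
auxInput = Input(tensor=n)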
And inside the layers, the second input, which is the tensor for n, will have 4 elements:
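For example, the layer's call could use the individual elements like this (how the elements are combined here is purely illustrative):

def call(self, inputs, **kwargs):
    mainInput = inputs[0]
    nInput = inputs[1]                       # tensor with 4 elements
    # e.g. use the first value as a multiplier and the second as a bias
    return mainInput * nInput[0] + nInput[1]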