I'm building a model in Keras and using some TensorFlow functions (reduce_sum and l2_normalize) in the last layer, and I ran into this problem. I have searched for a solution, but everything I found relates to "Keras tensor".
Here is my code:
import tensorflow as tf
from tensorflow.python.keras import backend as K
from tensorflow.python.keras.applications import VGG16
from tensorflow.python.keras.layers import MaxPooling2D, Conv2D, Dropout, Activation
from tensorflow.python.keras.models import Model

# input_shape and extract_layer_from_model are defined elsewhere in my code
vgg16_model = VGG16(weights = 'imagenet', include_top = False, input_shape = input_shape)
fire8 = extract_layer_from_model(vgg16_model, layer_name = 'block4_pool')

pool8 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool8')(fire8.output)

fc1 = Conv2D(64, (6, 6), strides = (1, 1), padding = 'same', name = 'fc1')(pool8)
fc1 = Dropout(rate = 0.5)(fc1)

fc2 = Conv2D(3, (1, 1), strides = (1, 1), padding = 'same', name = 'fc2')(fc1)
fc2 = Activation('relu')(fc2)
fc2 = Conv2D(3, (15, 15), padding = 'valid', name = 'fc_pooling')(fc2)

# raw backend / tensorflow ops applied to the layer output
fc2_norm = K.l2_normalize(fc2, axis = 3)
est = tf.reduce_sum(fc2_norm, axis = (1, 2))
est = K.l2_normalize(est)

FC_model = Model(inputs = vgg16_model.input, outputs = est)
and then the error:
ValueError: Output tensors to a Model must be the output of a TensorFlow Layer
(thus holding past layer metadata). Found: Tensor("l2_normalize_3:0", shape=(?, 3), dtype=float32)
I noticed that if I don't pass fc2 through these functions, building the model works fine:
FC_model = Model(inputs = vgg16_model.input, outputs = fc2)
Can someone please explain this problem and suggest how to fix it?
I found a workaround that solves the problem. For anyone who runs into the same issue: you can use the Lambda layer to wrap your tensorflow operations; that is what I did.
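A minimal sketch of that wrapping, applied to the tail of the model from the question (the Lambda layer names here are only illustrative):

from tensorflow.python.keras.layers import Lambda

# wrap each raw backend / tensorflow op in a Lambda layer so that its output
# carries the Keras layer metadata that the Model constructor expects
fc2_norm = Lambda(lambda x: K.l2_normalize(x, axis = 3), name = 'fc2_norm')(fc2)
est = Lambda(lambda x: tf.reduce_sum(x, axis = (1, 2)), name = 'est_pool')(fc2_norm)
est = Lambda(lambda x: K.l2_normalize(x), name = 'est_norm')(est)

FC_model = Model(inputs = vgg16_model.input, outputs = est)

Each Lambda output is a proper Keras tensor, so Model accepts it as an output.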
I had this issue because somewhere in my model I was adding two tensors with plain x1 + x2 instead of using Add()([x1, x2]). Switching to the Add layer solved the problem.
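For reference, a minimal self-contained sketch of the difference (the toy model here is just for illustration):

from tensorflow.python.keras.layers import Input, Dense, Add
from tensorflow.python.keras.models import Model

inp = Input(shape = (8,))
x1 = Dense(4)(inp)
x2 = Dense(4)(inp)

# bad = x1 + x2            # plain tensor arithmetic; the result is not a layer output
out = Add()([x1, x2])      # the Add layer output carries the required layer metadata

model = Model(inputs = inp, outputs = out)

Depending on the Keras/TensorFlow version, using the x1 + x2 result as a model output can raise the same ValueError as in the question.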