I am trying to create a custom loss function for my deep-learning model, and I keep running into an error.
The example below is not the loss I actually want to use, but if I can understand how to make this small loss function work, I think I'll be able to make my long loss function work too. So I am really asking for help to make the following function work. Here it is:
import tensorflow as tf

# Inner function: receives concrete tensors via tf.py_function
def try_3_loss(y_pred, pic):
    return tf.reduce_mean(pic)

# Factory that closes over `pic` and returns a Keras-compatible loss
def try_loss(pic):
    def try_2_loss(y_true, y_pred):
        return tf.py_function(func=try_3_loss, inp=[y_pred, pic], Tout=tf.float32)
    return try_2_loss

model.compile(optimizer='rmsprop', loss=try_loss(pic_try), metrics=['accuracy'])
Now I want to know the following:
1. Does the pic that I pass into my model.compile line need to be a tensor, or can it be a numpy array?
2. In my try_3_loss function, can I replace tf.reduce_mean with np.mean?
3. In my try_3_loss function, can I use normal numpy commands on y_pred, such as np.mean(y_pred)?
My main thing is that I want to use as many numpy commands as possible.
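For context, one way to keep the computation in numpy is to do all the numpy work inside the function that tf.py_function wraps, since its arguments arrive there as eager tensors. This is only a minimal sketch under that assumption (the names np_loss and wrapped_loss are hypothetical, not from the question):

```python
import numpy as np
import tensorflow as tf

def np_loss(y_true, y_pred):
    # Inside tf.py_function the arguments arrive as eager tensors,
    # so .numpy() and plain numpy operations both work here.
    return np.mean(np.square(y_true.numpy() - y_pred.numpy())).astype(np.float32)

def wrapped_loss(y_true, y_pred):
    loss = tf.py_function(func=np_loss, inp=[y_true, y_pred], Tout=tf.float32)
    loss.set_shape(())  # py_function drops static shape info; restore it for Keras
    return loss

y_true = tf.constant([[1.0, 2.0]])
y_pred = tf.constant([[1.5, 2.5]])
print(float(wrapped_loss(y_true, y_pred)))  # 0.25
```

The set_shape(()) call declares the loss as a scalar, which Keras needs when it later asks for the tensor's rank.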
I have tried all sorts of things: making pic a numpy array and using np.mean(pic) in my try_3_loss function; making pic a tensor object and using tf.reduce_mean in try_3_loss; and running sess.run(pic) before running the model.compile line. In all of the above situations I got the following error:
TypeError                                 Traceback (most recent call last)
<ipython-input-75-ff45de7120bc> in <module>()
----> 1 model.compile(optimizer='rmsprop', loss=try_loss(pic_try), metrics=['accuracy'])

1 frames
/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, **kwargs)
    340                 with K.name_scope(self.output_names[i] + '_loss'):
    341                     output_loss = weighted_loss(y_true, y_pred,
--> 342                                                 sample_weight, mask)
    343                 if len(self.outputs) > 1:
    344                     self.metrics_tensors.append(output_loss)

/usr/local/lib/python3.6/dist-packages/keras/engine/training_utils.py in weighted(y_true, y_pred, weights, mask)
    418             weight_ndim = K.ndim(weights)
    419             score_array = K.mean(score_array,
--> 420                                  axis=list(range(weight_ndim, ndim)))
    421             score_array *= weights
    422             score_array /= K.mean(K.cast(K.not_equal(weights, 0), K.floatx()))

TypeError: 'NoneType' object cannot be interpreted as an integer
Some test code:
Test code to invoke the model:
Update: Thank you so much for your help! I actually decided to switch to TF 2.0, and writing functions there is MUCH easier. Although it is a bit expensive in terms of efficiency, I can very easily convert numpy arrays to tensors and back, so I just wrote everything in numpy array format and converted at the boundaries: the inputs and outputs of all my functions are tensors, but inside each function I convert to numpy arrays, and before returning I convert back to tensors. However, I still have an error. The code goes like this:
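(The actual code isn't reproduced above, but the round-trip pattern described — tensors at the boundaries, numpy inside — might look roughly like the sketch below; numpy_style_loss is a hypothetical name. Note that .numpy() only works eagerly, so with Keras this pattern would need model.compile(..., run_eagerly=True).)

```python
import numpy as np
import tensorflow as tf

def numpy_style_loss(y_true, y_pred):
    # .numpy() is only available in eager mode; during training this
    # requires compiling the model with run_eagerly=True.
    true_np = y_true.numpy()
    pred_np = y_pred.numpy()
    loss_np = np.mean(np.abs(true_np - pred_np))   # all-numpy computation
    return tf.constant(loss_np, dtype=tf.float32)  # convert back to a tensor

print(float(numpy_style_loss(tf.constant([[1.0, 3.0]]),
                             tf.constant([[2.0, 5.0]]))))  # 1.5
```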
And when I actually run the loss functions directly (not through model.compile), like this:
I get the following:
So I do get a tensor containing the loss that I wanted (the printed values are there to help understand the error). HOWEVER, when I try to run the compile command I get this:
It's as if the compiler doesn't understand that y_pred will have the shape of my model's output.
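That impression matches how Keras traces the loss: at compile time y_pred is a symbolic tensor whose batch dimension is unknown (None), so code that assumes concrete sizes breaks. A hedged sketch of writing a loss that tolerates this, using tf.shape for dynamic dimensions (batch_aware_loss is a hypothetical name, not from the post):

```python
import tensorflow as tf

def batch_aware_loss(y_true, y_pred):
    # tensor.shape is static and its batch entry may be None during tracing;
    # tf.shape(tensor) is dynamic and always yields concrete values at run time.
    batch = tf.cast(tf.shape(y_pred)[0], tf.float32)
    per_example = tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)
    return tf.reduce_sum(per_example) / batch

y_true = tf.constant([[0.0, 0.0], [2.0, 2.0]])
y_pred = tf.constant([[1.0, 1.0], [3.0, 3.0]])
print(float(batch_aware_loss(y_true, y_pred)))  # 1.0
```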
My model:
Any ideas how to fix it? I will also look at the test code you sent me to get an idea.
Thank you!