I am applying an ML model to an experimental setup to optimise a driving signal. The driving signal itself is what is being optimised, but its quality is evaluated indirectly: it is applied to the experimental setup, which produces a different signal.
I am able to run the experiment and collect data from it via Python functions.
I would like to set up an ML model with a custom loss function that invokes the experiment driver functions with the optimised signal to obtain the error used for backpropagation.
I have looked into using Keras; however, the restriction of having to use Keras backend functions exclusively means that I cannot call my driver functions inside the loss function.
I would like to know whether there is a way to do what I want using TensorFlow without the Keras front end, and also whether a different ML API allows this.
Thanks.
If I understood the question correctly, you want to be able to generate the loss from code that you run when the model evaluates the loss function.
Here is an example:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras import backend as K
FACTORS = np.array([[0.5, 2.0, 4.0]])

def ext_function(inputs):
    """ This can be an arbitrary python function of the inputs.
    inputs is a tf.EagerTensor which can be converted into a numpy array.
    """
    r = np.dot(inputs, FACTORS.T)
    return r

class LossFunction(object):
    def __init__(self, model):
        # Use model to obtain the inputs
        self.model = model

    def __call__(self, y_true, y_pred, sample_weight=None):
        """ ignore y_true value from fit params and compute it instead using
        ext_function
        """
        y_true = tf.py_function(ext_function, [self.model.inputs[0]], Tout=tf.float32)
        v = keras.losses.mean_squared_error(y_true, y_pred)
        return K.mean(v)

def make_model():
    inp = Input(shape=(3,))
    out = Dense(1, use_bias=False)(inp)
    model = Model(inp, out)
    model.compile('adam', LossFunction(model))
    return model
model = make_model()
model.summary()
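The key point is that the loss object keeps a reference to the model so that, inside __call__, it can read model.inputs[0] and pass it to arbitrary Python code via tf.py_function. Note that in this example the external call only produces the target (y_true), so gradients still flow through y_pred as usual; nothing is backpropagated through ext_function itself.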
Test:
import numpy as np
N_SAMPLES=100
X = np.random.rand(N_SAMPLES, 3)
Y_dummy = np.random.rand(N_SAMPLES)
history = model.fit(X, Y_dummy, epochs=1000, verbose=False)
print(history.history['loss'][-1])
And it actually learns something; after training, the kernel of the Dense layer should be close to FACTORS.T (i.e. roughly [0.5, 2.0, 4.0]):
model.layers[1].get_weights()
Please note that it would be much simpler to just generate the correct values of Y up front and pass them to fit as targets. I don't know the exact conditions of your problem, but if at all possible, pre-generate Y rather than using the example above.
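For example, here is a minimal sketch of the pre-generated approach, reusing ext_function and the X data from above (in your case the external call would be your experiment driver):

# Compute the targets once with the external function (a plain numpy call,
# no tf.py_function needed), then train with the built-in 'mse' loss.
Y = ext_function(X)

def make_simple_model():
    inp = Input(shape=(3,))
    out = Dense(1, use_bias=False)(inp)
    model = Model(inp, out)
    model.compile('adam', 'mse')
    return model

simple_model = make_simple_model()
simple_model.fit(X, Y, epochs=1000, verbose=False)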
I've used the trick above to create custom metrics that are weighted by class, i.e. in scenarios where one of the input parameters is a class label and the desired loss is a weighted per-class average of the losses; a rough sketch of that follows.
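This sketch assumes the class id is passed as a second model input; the class weights, input layout, and names here are purely illustrative:

# Illustrative per-class weights (assumes 3 classes, indexed 0..2).
CLASS_WEIGHTS = tf.constant([1.0, 2.0, 0.5])

class ClassWeightedMSE(object):
    def __init__(self, model):
        # Keep a handle on the model so the class-id input can be read.
        self.model = model

    def __call__(self, y_true, y_pred, sample_weight=None):
        # Assumes model.inputs[1] carries one integer class id per sample.
        class_id = tf.cast(tf.reshape(self.model.inputs[1], [-1]), tf.int32)
        w = tf.gather(CLASS_WEIGHTS, class_id)
        se = tf.square(tf.reshape(y_true, [-1]) - tf.reshape(y_pred, [-1]))
        # Weighted average of the per-sample squared errors.
        return tf.reduce_sum(w * se) / tf.reduce_sum(w)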