I am applying an ML model to an experimental setup to optimise a driving signal. The driving signal itself is the thing being optimised, but its quality is evaluated indirectly (it is applied to the experimental setup, which produces a different output signal that is measured).
I am able to run the experiment and collect data from it via functions in Python.
I would like to set up an ML model with a custom loss function that invokes the experiment driver functions with the optimised signal to obtain the error used for back-propagation.
I have looked into using Keras; however, the restriction of having to use the Keras backend functions exclusively means that I cannot call my driver functions inside the loss function.
I would like to know whether there is a way to do what I want using TensorFlow without the Keras front end, and also whether a different ML API allows this.
Thanks.
If I understood the question correctly, you want to be able to generate the loss from code that you run when the model evaluates the loss function.
This would be an example:
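A minimal sketch, assuming TF 2.x with Keras; `generate_target` is a hypothetical stand-in for whatever external code you need to run, and `tf.py_function` is the escape hatch that lets plain Python execute inside the loss:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

def generate_target(y_true):
    # Arbitrary Python runs here: tf.py_function hands us eager tensors,
    # so .numpy(), file IO, instrument drivers etc. are all allowed.
    print("external code called")                     # proof it executes
    return (y_true.numpy() * 2.0).astype(np.float32)  # stand-in computation

def custom_loss(y_true, y_pred):
    # Wrap the Python call so it can sit inside the (graph-mode) loss.
    target = tf.py_function(generate_target, [y_true], tf.float32)
    # Gradients do not flow through the Python call, so the comparison
    # against y_pred must be done with TF ops.
    return tf.reduce_mean(tf.square(y_pred - target))
```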
Test:
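A quick test on a toy model (shapes and data here are made up):

```python
model = keras.Sequential([keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer="adam", loss=custom_loss)

x = np.random.rand(32, 3).astype("float32")
y = np.random.rand(32, 1).astype("float32")
# "external code called" is printed once per batch during training.
model.fit(x, y, epochs=1, batch_size=8)
```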
And it actually does something:
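To confirm the external call both executes and still leaves usable gradients with respect to the prediction, a direct eager check:

```python
pred = tf.Variable(np.random.rand(4, 1).astype("float32"))
true = tf.constant(np.random.rand(4, 1).astype("float32"))

with tf.GradientTape() as tape:
    loss = custom_loss(true, pred)          # external code runs here too

grad = tape.gradient(loss, pred)
print(loss.numpy(), grad.numpy().shape)     # non-None gradient -> back-prop works
```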
Please note that it will be much simpler to generate the correct value of Y up front and feed it in as input. I don't know the exact conditions of your problem, but if at all possible, pre-generate Y rather than using the example above.
I've used the trick above to create custom metrics that are weighted by class, i.e. in scenarios where one of the input parameters is a class label and the desired loss function is a weighted per-class average of the losses.
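For reference, a rough sketch of that per-class weighting variant (packing the class id into the last column of `y_true` is my assumption, just one way to smuggle it in):

```python
# Hypothetical class weights, indexed by class id.
CLASS_WEIGHTS = tf.constant([1.0, 2.5, 0.5], dtype=tf.float32)

def class_weighted_loss(y_true, y_pred):
    # Assumed layout: real targets in the leading columns, class id last.
    targets = y_true[:, :-1]
    class_id = tf.cast(y_true[:, -1], tf.int32)
    weights = tf.gather(CLASS_WEIGHTS, class_id)      # per-sample weight
    per_sample = tf.reduce_mean(tf.square(y_pred - targets), axis=-1)
    return tf.reduce_mean(weights * per_sample)
```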