Is it possible to minimise a loss function by changing only some elements of a variable? In other words, if I have a variable `X` of length 2, how can I minimise my loss function by changing `X[0]` and keeping `X[1]` constant?

Hopefully this code I have attempted will describe my problem:
```python
import tensorflow as tf
import tensorflow.contrib.opt as opt

X = tf.Variable([1.0, 2.0])
X0 = tf.Variable([3.0])
Y = tf.constant([2.0, -3.0])

scatter = tf.scatter_update(X, [0], X0)
with tf.control_dependencies([scatter]):
    loss = tf.reduce_sum(tf.squared_difference(X, Y))

opt = opt.ScipyOptimizerInterface(loss, [X0])

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    opt.minimize(sess)

    print("X: {}".format(X.eval()))
    print("X0: {}".format(X0.eval()))
```
which outputs:

```
INFO:tensorflow:Optimization terminated with:
  Message: b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
  Objective function value: 26.000000
  Number of iterations: 0
  Number of functions evaluations: 1
X: [3. 2.]
X0: [3.]
```
where I would like to find the optimal value of `X0 = 2` and thus `X = [2, 2]`.

Edit:
Motivation for doing this: I would like to import a trained graph/model and then tweak various elements of some of the variables depending on some new data I have.
I'm not sure if it is possible with the SciPy optimizer interface, but using one of the regular `tf.train.Optimizer` subclasses you can do something like that by calling `compute_gradients` first, then masking the gradients, and then calling `apply_gradients`, instead of calling `minimize` (which, as the docs say, basically calls the previous ones).
You can use this trick to restrict the gradient calculation to one index:
`part_X` becomes the value you want to change in a one-hot vector of the same shape as `X`. `part_X + tf.stop_gradient(-part_X + X)` is the same as `X` in the forward pass, since `part_X - part_X` is 0. However, in the backward pass `tf.stop_gradient` prevents all unnecessary gradient calculations.

This should be pretty easy to do by using the `var_list` parameter of the `minimize` function. You should note that by convention all trainable variables are added to the TensorFlow default collection `GraphKeys.TRAINABLE_VARIABLES`
, so you can get a list of all trainable variables using `tf.trainable_variables()`. This is just a list of variables which you can manipulate as you see fit and use as the `var_list` parameter.

As a tangent to your question, if you ever want to take customizing the optimization process a step further, you can also compute the gradients manually using `grads = tf.gradients(loss, var_list)`, manipulate the gradients as you see fit, and then call `tf.train.GradientDescentOptimizer(...).apply_gradients(grads_and_vars_as_list_of_tuples)`. Under the hood, `minimize` is just doing these two steps for you.

Also note that you are perfectly free to create different optimizers for different collections of variables. You could create an SGD optimizer with learning rate 1e-4 for some variables, and another Adam optimizer with learning rate 1e-2 for another set of variables. Not that there's any specific use case for this; I'm just pointing out the flexibility you now have.