How to use hidden layer activations to construct loss functions

Posted 2020-07-23 05:43

Assume I have a model like this. M1 and M2 are two layers linking the left and right sides of the model.

[Figure: the example model; red lines indicate backprop directions]

During training, I hope M1 can learn a mapping from L2_left activation to L2_right activation. Similarly, M2 can learn a mapping from L3_right activation to L3_left activation. The model also needs to learn the relationship between two inputs and the output. Therefore, I should have three loss functions for M1, M2, and L3_left respectively.

I probably can use:

model.compile(optimizer='rmsprop',
              loss={'M1': 'mean_squared_error',
                    'M2': 'mean_squared_error',
                    'L3_left': 'mean_squared_error'})

But during training, we need to provide y_true, for example:

model.fit([input_1,input_2], y_true)

In this case, the y_true values are the hidden layer activations, not values from a dataset. Is it possible to build this model and train it using its hidden layer activations?

Tags: keras
2 Answers
够拽才男人 · 2020-07-23 06:03

Trying to answer the last part: how to make gradients affect only one side of the model.

...well.... at first that sounds unfeasible to me. But if it amounts to "train only a part of the model", then it's totally doable: define models that only go up to a certain point and make part of the layers untrainable.

By doing that, nothing will affect those layers. If that's what you want, then you can do it:

#using the previous vars to define other models
modelM1 = Model([inLef, inRig], diffM1L1Rig)

This model ends at diffM1L1Rig. Before compiling it, you must set L2Rig untrainable:

modelM1.layers[??].trainable = False
#to find which layer is the right one, you may define them using the "name" parameter, or check modelM1.summary() for the shapes, types, etc.

modelM1.compile(.....)
modelM1.fit([input_1, input_2], yM1)
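For instance, a minimal sketch of freezing by name, assuming the layer was created with name='L2_right' when you built the graph (the name itself is just an assumption here):

#the layer must have been created like: L2Rig = SomeLayer(..., name='L2_right')(conc2Rig)
modelM1.get_layer('L2_right').trainable = False   #freeze that layer before compiling
modelM1.compile(optimizer='rmsprop', loss='mse')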

This approach lets you train only a single part of the model. You can repeat the procedure for M2, locking the layers you need before compiling.

You can also define a full model containing all the layers and lock only the ones you want. But you won't be able (I think) to make half of the gradients flow through one side and half through the other.

So I suggest you keep three models, the fullModel, modelM1, and modelM2, and cycle through them in training. One epoch each, maybe....

That should be tested....
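A rough sketch of that cycling idea, assuming fullModel, modelM1, and modelM2 are all compiled, fullModel has the three outputs described in the other answer, and yM1, yM2, y_true are the corresponding targets (num_cycles is a hypothetical count):

#alternate between the three compiled models, one epoch each (untested sketch)
for cycle in range(num_cycles):
    modelM1.fit([input_1, input_2], yM1, epochs=1)
    modelM2.fit([input_1, input_2], yM2, epochs=1)
    fullModel.fit([input_1, input_2], [yM1, yM2, y_true], epochs=1)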

Juvenile、少年° · 2020-07-23 06:07

If you have only one output, you must have only one loss function.

If you want three loss functions, you must have three outputs, and, of course, three Y vectors for training.

If you want loss functions in the middle of the model, you must take outputs from those layers.

Creating the graph of your model: (if the model is already defined, see the end of this answer)

#Here, all "SomeLayer(...)" could be replaced by a "SomeModel" if necessary
    #Example of using a layer or a model:
        #M1 = SomeLayer(...)(L12)
        #M1 = SomeModel(L12)

from keras.models import Model
from keras.layers import *

inLef = Input((shape1))
inRig = Input((shape2))

L1Lef = SomeLayer(...)(inLef)
L2Lef = SomeLayer(...)(L1Lef)
M1 = SomeLayer(...)(L2Lef) #this is an output

L1Rig = SomeLayer(...)(inRig)

conc2Rig = Concatenate(axis=?)([L1Rig,M1]) #Or Add, or Multiply, however you're joining the models
L2Rig = SomeLayer(...)(conc2Rig)
L3Rig = SomeLayer(...)(L2Rig)

M2 = SomeLayer(...)(L3Rig) #this is an output

conc3Lef = Concatenate(axis=?)([L2Lef,M2])
L3Lef = SomeLayer(...)(conc3Lef) #this is an output

Creating your model with three outputs:

Now that your graph is ready and you know what the outputs are, you create the model:

model = Model([inLef,inRig], [M1,M2,L3Lef])
model.compile(loss='mse', optimizer='rmsprop')

If you want different losses for each output, then you create a list:

import keras

#example of a custom loss function, if necessary
def lossM1(yTrue, yPred):
    return keras.backend.sum(keras.backend.abs(yTrue - yPred))

#compiling with three different loss functions
model.compile(loss=[lossM1, 'mse', 'binary_crossentropy'], optimizer=??)

But you've got to have three different yTraining too, for training with:

model.fit([input_1,input_2], [yTrainM1,yTrainM2,y_true], ....)

If your model is already defined and you don't create its graph like I did:

Then you have to find which entries of yourModel.layers[i] are M1 and M2, and create a new model like this:

M1 = yourModel.layers[indexForM1].output
M2 = yourModel.layers[indexForM2].output
newModel = Model([inLef,inRig], [M1,M2,yourModel.output])
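If you gave those layers names when they were created, a sketch using get_layer instead of guessing the indices (the names 'M1' and 'M2' are assumptions):

#same idea, but looking the layers up by name instead of by index
M1 = yourModel.get_layer('M1').output
M2 = yourModel.get_layer('M2').output
newModel = Model(yourModel.inputs, [M1, M2, yourModel.output])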

If you want two outputs to be equal:

In this case, just subtract the two outputs in a Lambda layer, and make that Lambda layer an output of your model, with expected values = 0.

Using the exact same vars as before, we'll just create two additional layers to subtract the outputs:

diffM1L1Rig = Lambda(lambda x: x[0] - x[1])([L1Rig,M1])
diffM2L2Lef = Lambda(lambda x: x[0] - x[1])([L2Lef,M2])

Now your model should be:

newModel = Model([inLef,inRig],[diffM1L1Rig,diffM2L2Lef,L3Lef])

And training will expect those two differences to be zero:

yM1 = np.zeros((shapeOfM1Output))
yM2 = np.zeros((shapeOfM2Output))
newModel.fit([input_1,input_2], [yM1,yM2,y_true], ...)
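Note that fit expects one target row per training sample, so those zero arrays need a leading batch dimension; a minimal sketch, assuming input_1 holds the training samples and m1_units / m2_units (hypothetical names) are the widths of the two difference outputs:

import numpy as np

num_samples = input_1.shape[0]              #one target row per training sample
yM1 = np.zeros((num_samples, m1_units))     #m1_units: size of diffM1L1Rig (assumption)
yM2 = np.zeros((num_samples, m2_units))     #m2_units: size of diffM2L2Lef (assumption)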