I'm keen to make use of the architecture proposed in the recent paper "Unsupervised Domain Adaptation by Backpropagation" in the Lasagne/Theano framework.
What makes this paper a bit unusual is that it incorporates a 'gradient reversal layer', which inverts the gradient during backpropagation:
(The arrows along the bottom of the image are the backpropagation paths whose gradients are inverted.)
In the paper the authors claim that the approach "can be implemented using any deep learning package", and indeed they provide a Caffe implementation.
However, I'm using the Lasagne/Theano framework, for various reasons.
Is it possible to create such a gradient reversal layer in Lasagne/Theano, and if so, can I do it by writing a custom Lasagne layer? I haven't seen any examples of applying a custom scalar transform to gradients like this.
Here's a sketch implementation using plain Theano. This can be integrated into Lasagne easily enough.
You need to create a custom operation which acts as the identity in the forward pass but reverses the gradient (scaling it by the paper's λ) in the backward pass. Here's a suggestion for how that could be implemented. It is not tested and I'm not 100% certain I've understood everything correctly, but you should be able to verify and fix it as required.
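A minimal sketch using a custom `theano.gof.Op` (the class name `ReverseGradient` and the hyperparameter name `hp_lambda` for the paper's λ are my own choices):

```python
import theano
import theano.tensor as T

class ReverseGradient(theano.gof.Op):
    """Identity in the forward pass; scales the gradient by -hp_lambda
    (the paper's lambda) in the backward pass."""

    view_map = {0: [0]}           # the output is a view of the input
    __props__ = ('hp_lambda',)    # makes ops with equal lambda compare equal

    def __init__(self, hp_lambda):
        super(ReverseGradient, self).__init__()
        self.hp_lambda = hp_lambda

    def make_node(self, x):
        x = T.as_tensor_variable(x)
        return theano.Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        # Forward pass: pass the input through untouched.
        xin, = inputs
        xout, = output_storage
        xout[0] = xin

    def grad(self, inputs, output_gradients):
        # Backward pass: negate and scale the incoming gradient.
        return [-self.hp_lambda * output_gradients[0]]

    def infer_shape(self, node, input_shapes):
        return input_shapes
```

You can sanity-check the backward behaviour with `theano.grad`:

```python
x = T.scalar('x')
y = ReverseGradient(2.0)(x)
print(theano.grad(y, x).eval({x: 1.0}))   # prints -2.0
```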
Using the paper's notation and naming conventions, here's a simple Theano implementation of the complete general model they propose:
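A sketch, again untested, assuming the `ReverseGradient` op above. `theta_f`, `theta_y`, and `theta_d` parameterise the feature extractor G_f, label predictor G_y, and domain classifier G_d from the paper; the layer sizes, the learning rate `hp_mu` (the paper's μ), and the helper names `mlp_params`, `mlp`, and `build_train` are placeholder choices of mine. Domain labels are 0 for source and 1 for target samples:

```python
import numpy as np
import theano
import theano.tensor as T

rng = np.random.RandomState(0)

def mlp_params(sizes):
    # One (weight, bias) shared-variable pair per layer.
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        w = theano.shared(rng.randn(n_in, n_out).astype(theano.config.floatX))
        b = theano.shared(np.zeros(n_out, dtype=theano.config.floatX))
        params.append((w, b))
    return params

def mlp(z, params, out_activation):
    # tanh hidden layers, then the given output activation.
    for w, b in params[:-1]:
        z = T.tanh(T.dot(z, w) + b)
    w, b = params[-1]
    return out_activation(T.dot(z, w) + b)

def build_train(input_size, n_classes, hp_lambda, hp_mu):
    r = ReverseGradient(hp_lambda)

    theta_f = mlp_params([input_size, 20])   # feature extractor G_f
    theta_y = mlp_params([20, n_classes])    # label predictor G_y
    theta_d = mlp_params([20, 1])            # domain classifier G_d

    xs = T.matrix('xs')    # source minibatch
    xt = T.matrix('xt')    # target minibatch
    ys = T.ivector('ys')   # source labels

    fs = mlp(xs, theta_f, T.tanh)
    ft = mlp(xt, theta_f, T.tanh)

    # Label loss L_y on source data.
    l_y = T.nnet.categorical_crossentropy(
        mlp(fs, theta_y, T.nnet.softmax), ys).mean()

    # Domain loss L_d on both domains. The features reach G_d through the
    # reversal op, so plain gradient descent on the total cost trains the
    # domain classifier while pushing theta_f towards domain confusion.
    ds = mlp(r(fs), theta_d, T.nnet.sigmoid)
    dt = mlp(r(ft), theta_d, T.nnet.sigmoid)
    l_d = (T.nnet.binary_crossentropy(ds, T.zeros_like(ds)).mean()
           + T.nnet.binary_crossentropy(dt, T.ones_like(dt)).mean())

    cost = l_y + l_d
    params = [p for pair in theta_f + theta_y + theta_d for p in pair]
    updates = [(p, p - hp_mu * theano.grad(cost, p)) for p in params]
    return theano.function([xs, xt, ys], cost, updates=updates)

train = build_train(input_size=50, n_classes=10, hp_lambda=1.0, hp_mu=0.01)
```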
This is also untested, but the following may allow the custom op to be used as a Lasagne layer:
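A thin wrapper around `lasagne.layers.Layer` should suffice (the name `ReverseGradientLayer` is my own):

```python
import lasagne

class ReverseGradientLayer(lasagne.layers.Layer):
    def __init__(self, incoming, hp_lambda, **kwargs):
        super(ReverseGradientLayer, self).__init__(incoming, **kwargs)
        self.op = ReverseGradient(hp_lambda)

    def get_output_for(self, input, **kwargs):
        # The op is the identity here; it only alters the gradient.
        return self.op(input)
```

You would then place it between the feature layers and the domain classifier, for example:

```python
l_in = lasagne.layers.InputLayer(shape=(None, 50))
l_f = lasagne.layers.DenseLayer(l_in, num_units=20)       # G_f
l_rev = ReverseGradientLayer(l_f, hp_lambda=1.0)
l_d = lasagne.layers.DenseLayer(
    l_rev, num_units=2,
    nonlinearity=lasagne.nonlinearities.softmax)          # G_d
```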