I have been trying to implement the softmax version of the triplet loss in Caffe described in
Hoffer and Ailon, Deep Metric Learning Using Triplet Network, ICLR 2015.
I have tried this, but I am finding it hard to calculate the gradient, as the L2 norm in the exponent is not squared.
Can someone please help me here?
Implementing the L2 norm using existing layers of caffe can save you all the hassle.
Here's one way to compute ||x1 - x2||_2 in caffe for "bottom"s x1 and x2, assuming x1 and x2 are B-by-C blobs, so that you compute B norms of C-dimensional diffs.
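A minimal prototxt sketch of this computation (layer and top names here are illustrative, not prescribed): subtract with an "Eltwise" layer, square and take the root with "Power" layers, and sum over the channel axis with a "Reduction" layer.

```
layer {
  name: "diff"
  type: "Eltwise"
  bottom: "x1"
  bottom: "x2"
  top: "diff"
  eltwise_param { operation: SUM coeff: 1 coeff: -1 }  # diff = x1 - x2
}
layer {
  name: "diff_sq"
  type: "Power"
  bottom: "diff"
  top: "diff_sq"
  power_param { power: 2 }  # element-wise square
}
layer {
  name: "sum_sq"
  type: "Reduction"
  bottom: "diff_sq"
  top: "sum_sq"
  reduction_param { operation: SUM axis: 1 }  # sum over C: B-by-C -> B
}
layer {
  name: "dist"
  type: "Power"
  bottom: "sum_sq"
  top: "dist"
  power_param { power: 0.5 }  # element-wise sqrt: B L2 norms
}
```

caffe back-propagates through all four layers for you.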
For the triplet loss defined in the paper, you need to compute this L2 norm twice: once for x - x+ and once for x - x-. Concat these two blobs and feed the concat blob to a "Softmax" layer. No need for dirty gradient computations.
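Concretely, assuming you duplicated the four layers above to produce tops "dist_pos" (from x and x+) and "dist_neg" (from x and x-), something along these lines should work; the "Reduction" tops are 1-D, so they are reshaped to B-by-1 before concatenation (again, all names are illustrative):

```
layer {
  name: "dist_pos_2d"
  type: "Reshape"
  bottom: "dist_pos"
  top: "dist_pos_2d"
  reshape_param { shape { dim: -1 dim: 1 } }  # B -> B-by-1
}
layer {
  name: "dist_neg_2d"
  type: "Reshape"
  bottom: "dist_neg"
  top: "dist_neg_2d"
  reshape_param { shape { dim: -1 dim: 1 } }  # B -> B-by-1
}
layer {
  name: "dists"
  type: "Concat"
  bottom: "dist_pos_2d"
  bottom: "dist_neg_2d"
  top: "dists"
  concat_param { axis: 1 }  # B-by-2: one (d+, d-) pair per batch item
}
layer {
  name: "probs"
  type: "Softmax"
  bottom: "dists"
  top: "probs"  # softmax over axis 1, i.e. over the two distances
}
```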
This is really a math question, but here it goes. The first gradient below is the squared case you are used to; the second is what you get when the L2 norm is not squared.
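Writing both out in LaTeX for the difference x1 - x2:

```latex
\frac{\partial}{\partial x_1}\,\lVert x_1 - x_2\rVert_2^{2} = 2\,(x_1 - x_2),
\qquad
\frac{\partial}{\partial x_1}\,\lVert x_1 - x_2\rVert_2
  = \frac{x_1 - x_2}{\lVert x_1 - x_2\rVert_2}.
```

The second follows from the first by the chain rule applied to the square root, and note it is undefined at x1 = x2; that is one more reason to let caffe's layers handle the backward pass, as suggested above.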