I am trying to learn how to use a custom loss function with MXNet.
Below is a minimal (not) working example of linear regression. When I set use_custom = False everything works fine, but with the custom loss it won't converge. What am I doing wrong?
import mxnet as mx
import logging
logging.basicConfig(level='DEBUG')
use_custom = False
mx.random.seed(1)
A = mx.nd.random.uniform(-1, 1, (5, 1))
B = mx.nd.random.uniform(-1, 1)
X = mx.nd.random.uniform(-1, 1, (100, 5))
y = mx.nd.dot(X, A) + B
iter = mx.io.NDArrayIter(data=X, label=y, data_name='data', label_name='label', batch_size=20, shuffle=True)
data = mx.sym.Variable('data')
label = mx.sym.Variable('label')
net = mx.sym.FullyConnected(data, num_hidden=1)
if use_custom:
    net = mx.sym.MakeLoss(mx.sym.square(net - label))
else:
    net = mx.sym.LinearRegressionOutput(net, label=label)
mod = mx.mod.Module(net, label_names=('label',))
mod.fit(iter, num_epoch=50, eval_metric='mse', optimizer='adam')
Question answered here: https://discuss.mxnet.io/t/cannot-implement-customized-loss-function/797
Your custom loss is working as expected; you think it is not converging because the
eval_metric
is using the output of your network (which is now the loss, not the prediction) and comparing it with the label. In your case I would use a custom evaluation metric: the identity function. This gives you this:
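A minimal sketch of such a metric, assuming the MakeLoss setup from the question: since the symbol's forward output is already the per-sample loss, the metric can ignore the label and simply average the output. The function name identity_feval is my own choice, not from the original answer; the mxnet wiring is shown commented out.

```python
def identity_feval(label, pred):
    # With MakeLoss the forward output already holds the per-sample
    # loss values, so ignore `label` and just report their mean.
    return pred.mean()

# Hook it into training (sketch, assuming the `mod` and `iter`
# objects defined in the question):
# eval_metric = mx.metric.create(identity_feval)
# mod.fit(iter, num_epoch=50, eval_metric=eval_metric, optimizer='adam')
```

The reported value is then the average loss itself, which should decrease over epochs, instead of the meaningless MSE between the loss values and the labels.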