It is not clear to me what the difference is between the loss function and the metrics in Keras. The documentation was not helpful to me.
The loss function is used to optimize your model. This is the function that will get minimized by the optimizer.
A metric is used to judge the performance of your model. This is only for you to look at and has nothing to do with the optimization process.
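As a minimal sketch (the model shape and choice of loss/metric are made up for illustration), both are passed to `model.compile`, but only the loss drives the optimizer:

```python
from tensorflow import keras

# Small regression model: MSE is optimized, MAE is only reported.
model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    keras.layers.Dense(1),
])

model.compile(
    optimizer="adam",
    loss="mse",        # minimized by the optimizer
    metrics=["mae"],   # only reported during training/evaluation, never optimized
)
```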
The loss function is the argument you pass to Keras `model.compile` that is actually optimized while training the model; it is generally minimized.
Unlike the loss function, the metrics are another argument passed to `model.compile` and are used only to judge the performance of the model.
For example, you may want to minimize the MSE loss of a regression model while also checking its AUC. In that case the MSE is the loss function and the AUC is the metric. A metric is a performance measure you can watch while the model evaluates itself on the validation set after each epoch of training. Metrics also matter for Keras callbacks such as EarlyStopping, which stops training when the monitored metric hasn't improved for a certain number of epochs, as sketched below.
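A sketch of that setup (the model architecture and the data placeholders in the commented-out `fit` call are assumptions, not part of the question):

```python
from tensorflow import keras

# Hypothetical binary classifier compiled with an AUC metric.
model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer="adam",
    loss="binary_crossentropy",               # optimized
    metrics=[keras.metrics.AUC(name="auc")],  # judged, not optimized
)

# Stop training when the validation AUC has not improved for 5 epochs.
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_auc", mode="max", patience=5, restore_best_weights=True
)

# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=[early_stop])
```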
I have a contrived example in mind: think about linear regression on a 2D plane. In this case the loss function would be the mean squared error, and the fitted line minimizes this error.
However, suppose for some reason we are very interested in the area under our fitted line from 0 to 1; that could be one of the metrics. We monitor this quantity while the model minimizes the mean-squared-error loss.
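A rough sketch of that contrived example, with made-up data. Since a built-in Keras metric only sees `y_true` and `y_pred`, the area under the fitted line (the integral of `w*x + b` over [0, 1], i.e. `w/2 + b`) is easier to monitor with a small callback than with a `metrics=` entry:

```python
import numpy as np
from tensorflow import keras

# Made-up data roughly following y = 3x + 1.
x = np.random.uniform(0, 1, size=(200, 1))
y = 3.0 * x + 1.0 + np.random.normal(scale=0.1, size=(200, 1))

# One Dense unit on one input: plain linear regression, trained with MSE.
model = keras.Sequential([keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer="sgd", loss="mse")  # MSE is what gets minimized

class AreaUnderLine(keras.callbacks.Callback):
    """Log the area under the fitted line on [0, 1] after each epoch."""
    def on_epoch_end(self, epoch, logs=None):
        w, b = self.model.layers[0].get_weights()
        area = float(w[0, 0]) / 2.0 + float(b[0])  # integral of w*x + b over [0, 1]
        print(f"epoch {epoch}: area under fitted line on [0, 1] = {area:.3f}")

model.fit(x, y, epochs=5, verbose=0, callbacks=[AreaUnderLine()])
```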