Is my training and validation code (TensorFlow) right?

Published 2019-09-21 04:13

Question:

This is my code:

for it in range(EPOCH*24410//BATCH_SIZE):
    # Training step: fetch a training batch, then run the loss together with
    # the optimizer op (fw_op), which updates the weights.
    tr_pa, tr_sp = sess.run([tr_para, tr_spec])
    train_loss, _ = sess.run([loss, fw_op], feed_dict={x: tr_pa, y: tr_sp})
    train_loss_.append(train_loss)
    it_tr.append(it)

    # Validation step: fetch a validation batch and evaluate the loss only,
    # without running the optimizer op.
    va_pa, va_sp = sess.run([va_para, va_spec])
    validate_loss = sess.run(loss, feed_dict={x: va_pa, y: va_sp})
    validate_loss_.append(validate_loss)

This is the plot of my training loss and validation loss:

My question is whether my validation code is right or not. And does this model overfit?

Answer 1:

The telltale signature of overfitting is when your validation loss starts increasing, while your training loss continues decreasing, i.e.:

(Image adapted from Wikipedia entry on overfitting)

Here are some other plots indicating overfitting (source):

See also the SO thread How to know if underfitting or overfitting is occurring?
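
If you ever want to flag that signature programmatically rather than by eye, a minimal sketch (a hypothetical helper, not part of your code) is to keep appending to validate_loss_ as you already do, and stop once the best value has not improved for a number of consecutive evaluations:

def should_stop(validate_loss_, patience=5):
    """Return True once the validation loss has stopped improving."""
    if len(validate_loss_) <= patience:
        return False
    best_index = validate_loss_.index(min(validate_loss_))
    # If the best (lowest) value occurred more than `patience` evaluations
    # ago, the validation loss has been flat or rising ever since.
    return best_index < len(validate_loss_) - patience

You would call it at the end of each iteration, e.g. if should_stop(validate_loss_): break.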

Clearly, your plot does not exhibit such behavior, hence you are not overfitting.

Your code looks OK, keeping in mind that you don't show exactly what goes on inside your session sess.
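
For reference, here is a minimal, self-contained sketch of the same pattern using tf.compat.v1 and synthetic data (the model and data here are placeholders I made up, not your actual graph): the training step runs both the loss and the optimizer op, while the validation step evaluates the loss only, so the validation data never updates the weights.

import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Toy graph standing in for whatever x, y, loss and fw_op are in your code.
x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
pred = tf.matmul(x, w) + b
loss = tf.reduce_mean(tf.square(pred - y))                 # MSE
fw_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

# Synthetic data standing in for tr_para/tr_spec and va_para/va_spec.
rng = np.random.RandomState(0)
tr_x, tr_y = rng.rand(100, 1), rng.rand(100, 1)
va_x, va_y = rng.rand(20, 1), rng.rand(20, 1)

train_loss_, validate_loss_ = [], []
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for it in range(100):
        # Training step: fetches the loss and the optimizer op, so the
        # weights are updated.
        train_loss, _ = sess.run([loss, fw_op], feed_dict={x: tr_x, y: tr_y})
        train_loss_.append(train_loss)

        # Validation step: fetches the loss only; no optimizer op is run.
        validate_loss = sess.run(loss, feed_dict={x: va_x, y: va_y})
        validate_loss_.append(validate_loss)

The key point is that fw_op appears only in the training sess.run call; the validation run fetches loss alone, which is exactly what your loop does.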