What is the difference between epoch and iteration when training a multi-layer perceptron?
To my understanding, when you need to train a NN, you need a large dataset that involves many data items. While the NN is being trained, the data items go into the NN one by one, and that is called an iteration; when the whole dataset has gone through, it is called an epoch.
An epoch contains a number of iterations; that is really all an 'epoch' is: one complete run of iterations over the data set while training the neural network.
You have training data, which you shuffle and pick mini-batches from. When you adjust your weights and biases using one mini-batch, you have completed one iteration. Once you run out of mini-batches, you have completed an epoch. Then you shuffle your training data again, pick your mini-batches again, and iterate through all of them again. That would be your second epoch.
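A minimal sketch of that loop in Python, assuming a toy linear model with a mean-squared-error loss just so there is something concrete to update (the model and learning rate are illustrative, not part of the answer above):

```python
import numpy as np

def train(X, y, w, batch_size, num_epochs, lr=0.01):
    """Illustrative mini-batch gradient descent on a linear model."""
    n = X.shape[0]
    for epoch in range(num_epochs):               # one epoch = one full pass over the data
        perm = np.random.permutation(n)           # shuffle the training data
        for start in range(0, n, batch_size):     # each mini-batch = one iteration
            idx = perm[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            grad = 2 * Xb.T @ (Xb @ w - yb) / len(idx)  # MSE gradient on this mini-batch
            w -= lr * grad                              # one weight update per iteration
    return w

# 1000 examples with batch size 100 -> 10 iterations per epoch
X = np.random.randn(1000, 3)
y = X @ np.array([1.0, -2.0, 0.5])
w = train(X, y, w=np.zeros(3), batch_size=100, num_epochs=5)
```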
In the neural network terminology:
- one epoch = one forward pass and one backward pass of all the training examples
- batch size = the number of training examples in one forward/backward pass (the higher the batch size, the more memory you need)
- number of iterations = the number of passes, each pass using [batch size] training examples; one pass = one forward pass + one backward pass
Example: if you have 1000 training examples, and your batch size is 500, then it will take 2 iterations to complete 1 epoch.
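As a quick sanity check, the number of iterations per epoch is just the dataset size divided by the batch size (rounded up when it does not divide evenly):

```python
import math

num_examples = 1000
batch_size = 500
iterations_per_epoch = math.ceil(num_examples / batch_size)
print(iterations_per_epoch)  # 2
```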
FYI: Tradeoff batch size vs. number of iterations to train a neural network
The term "batch" is ambiguous: some people use it to designate the entire training set, and some people use it to refer to the number of training examples in one forward/backward pass (as I did in this answer). To avoid that ambiguity and make clear that batch corresponds to the number of training examples in one forward/backward pass, one can use the term mini-batch.
To understand the difference between these you must understand the Gradient Descent Algorithm and its Variants.
Before I start with the actual answer, I would like to build some background.
A batch is the complete dataset. Its size is the total number of training examples in the available dataset.
Mini-batch size is the number of examples the learning algorithm processes in a single pass (forward and backward).
A Mini-batch is a small part of the dataset of given mini-batch size.
The number of iterations is the number of batches of data the algorithm has seen (or, put simply, the number of passes the algorithm has made over the data).
The number of epochs is the number of times the learning algorithm sees the complete dataset. This may not be equal to the number of iterations, because the dataset can also be processed in mini-batches; in that case a single pass (iteration) processes only part of the dataset, and the number of iterations is greater than the number of epochs.
In the case of batch gradient descent, the whole dataset (the batch) is processed on each training pass. Therefore, the optimizer converges more smoothly than with mini-batch gradient descent, but each update takes more time. For a convex loss, batch gradient descent is guaranteed to converge to the optimum if one exists; otherwise it converges to a local optimum.
Stochastic gradient descent is a special case of mini-batch gradient descent in which the mini-batch size is 1.
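To make the three variants concrete, here is a small sketch (using a hypothetical dataset of 1000 examples) of how the choice of batch size alone determines the number of weight updates per epoch:

```python
import math

num_examples = 1000
for name, batch_size in [("batch GD", 1000), ("mini-batch GD", 100), ("SGD", 1)]:
    updates_per_epoch = math.ceil(num_examples / batch_size)
    print(f"{name:>13}: batch size {batch_size:>4} -> {updates_per_epoch} updates per epoch")
```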
An epoch is one iteration over the training samples (processed in subsets, i.e. mini-batches), for example in the gradient descent algorithm for a neural network. A good reference is: http://neuralnetworksanddeeplearning.com/chap1.html
Note that the page has code for the gradient descent algorithm that uses epochs.
Look at the code: for each epoch, the training data is shuffled and split into random mini-batches that are fed to the gradient descent algorithm. Why epochs are effective is also explained on that page. Please take a look.