I'm using keras to construct a simple neural network as follows:
import keras
from keras.models import Sequential
from keras.layers import Dense
classifier = Sequential()
classifier.add(Dense(10, kernel_initializer='uniform', activation='relu', input_dim=2))
...
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
classifier.fit(X_train, y_train, batch_size=10, epochs=100)
The code works fine and gets about 90% accuracy the first time I run it in a Jupyter notebook. But when I rerun it, the accuracy drops dramatically to 50% and does not improve during training. The same thing happens if I construct another NN in the same notebook.
What should I do to get consistent results when I rerun the code, or when I build another NN in the same notebook?
PS: I'm using the TensorFlow backend.
Edit: results differ mostly because of weight initialization and batching. But fixing the seed is not enough for full reproducibility, see:
Previous answer:
Neural network training gives random results due to random weight initialization and random shuffling of the data into batches.
For example, the same training code can give a different result on each run.
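A minimal NumPy-only sketch of this effect (a hypothetical tiny 2-2-1 ReLU network trained on XOR, not the original example): different random starting weights lead to different final losses, while reusing the same seed reproduces the same loss.

```python
import numpy as np

def train_xor(seed=None):
    """Train a tiny 2-2-1 ReLU net on XOR from a random start; return final MSE."""
    rng = np.random.default_rng(seed)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])
    W1 = rng.normal(size=(2, 2)); b1 = np.zeros((1, 2))   # random weight init
    W2 = rng.normal(size=(2, 1)); b2 = np.zeros((1, 1))
    for _ in range(500):
        h = np.maximum(0, X @ W1 + b1)        # ReLU hidden layer
        err = h @ W2 + b2 - y
        # gradient descent on the MSE (constant factors folded into the rate)
        gW2 = h.T @ err / 4; gb2 = err.mean(0, keepdims=True)
        dh = (err @ W2.T) * (h > 0)           # backprop through ReLU
        gW1 = X.T @ dh / 4;  gb1 = dh.mean(0, keepdims=True)
        W1 -= 0.1 * gW1; b1 -= 0.1 * gb1
        W2 -= 0.1 * gW2; b2 -= 0.1 * gb2
    return float(np.mean((np.maximum(0, X @ W1 + b1) @ W2 + b2 - y) ** 2))

print(train_xor(), train_xor())  # unseeded: losses typically differ run to run
```

With an explicit seed the run is repeatable; without one, each call starts from different weights and can land in a different minimum.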
You can fix the seed of Keras's random generator (which is NumPy's) for reproducibility:
https://github.com/keras-team/keras/issues/2743#issuecomment-219777627
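A sketch of that seed fixing (the `seed_everything` name is ours; the TensorFlow calls are shown as comments because full determinism also depends on the backend and, on GPU, on deterministic ops):

```python
import random as python_random
import numpy as np

def seed_everything(seed=42):
    """Fix the seeds Keras draws from before building/fitting the model."""
    python_random.seed(seed)      # Python's own RNG
    np.random.seed(seed)          # Keras weight initializers use NumPy
    # With the TensorFlow backend, also:
    # import tensorflow as tf
    # tf.random.set_seed(seed)    # tf.set_random_seed(seed) in TF 1.x

seed_everything(42)
first = np.random.normal(size=3)   # stands in for initial layer weights
seed_everything(42)
second = np.random.normal(size=3)  # identical draw after re-seeding
```

Call `seed_everything` at the top of the notebook cell, before constructing the model, so every rerun starts from the same weights and the same batch order.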
P.S. The code may give very different results if there are problems with the data or the model (as in this MNIST example, with too little data and too simple a model). 90% could just be overfitting; check the classifier on independent test data.
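For instance, a hypothetical hold-out split in plain NumPy (sklearn's `train_test_split` does the same), so accuracy is measured on data the model never saw:

```python
import numpy as np

# Toy data standing in for the real X, y (assumption: 2 features, binary labels)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Shuffle indices, then hold out 20% as an independent test set
idx = rng.permutation(len(X))
split = int(0.8 * len(X))
train_idx, test_idx = idx[:split], idx[split:]
X_train, y_train = X[train_idx], y[train_idx]
X_test,  y_test  = X[test_idx],  y[test_idx]

# classifier.fit(X_train, y_train, batch_size=10, epochs=100)
# loss, acc = classifier.evaluate(X_test, y_test)  # accuracy on unseen data
```

If test accuracy is far below training accuracy, the 90% is memorization rather than generalization.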