I am pretty new to using Python and neurolab, and I have a problem with the training of my feed-forward neural network. I have built the net as follows:
import neurolab as nl

# 64 inputs in [-1, 1], one hidden layer of 60 neurons, 1 output
net = nl.net.newff([[-1, 1]] * 64, [60, 1])
net.init()
testerr = net.train(InputT, TargetT, epochs=100, show=1)
and my target output is a vector of values between 0 and 4.
When I use nl.train.train_bfgs, I see the following in the console:
testerr = net.train(InputT, TargetT, epochs=100, show=1)
Epoch: 1; Error: 55670.4462766;
Epoch: 2; Error: 55649.5;
As you can see, I set the number of epochs to 100, but it stops at the second epoch, and after testing the net with Netresults = net.sim(InputCross) I get as test output a vector of ones (totally wrong).
If I use the other training functions, I get the same test output vector full of ones, but in that case the training does run for the full number of epochs I set; the displayed error just doesn't change.
The same happens if the target output vector is scaled to between -1 and 1.
Any suggestion?
Thank you very much!
Finally, after a few hours with the same problem, I more or less solved it.
Here is what is happening: neurolab uses train_bfgs as its default training algorithm. train_bfgs runs fmin_bfgs from scipy.optimize and passes it a function, epochf, that MUST be run after each iteration of training in order for neurolab to exit properly. Sadly, fmin_bfgs fails to do this when the optimization terminates successfully (you can set self.kwargs['disp'] = 1 in /neurolab/train/spo.py to see the output from scipy). I have not investigated further why fmin_bfgs reports "Optimization terminated successfully", but it has to do with the error converging.
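To see the mechanism, here is a minimal sketch (not neurolab's actual code) of how a per-iteration callback is wired into fmin_bfgs; the names loss and on_epoch are made up for this example. Once fmin_bfgs decides it has converged, it simply returns, so the callback is not guaranteed to fire for every epoch the wrapper expected:

import numpy as np
from scipy.optimize import fmin_bfgs

def loss(w):
    # toy quadratic standing in for the network error
    return float(np.sum((w - 3.0) ** 2))

epochs_seen = [0]

def on_epoch(w):
    # called once per BFGS iteration, like neurolab's epochf
    epochs_seen[0] += 1
    print("Epoch: %d; Error: %f;" % (epochs_seen[0], loss(w)))

# converges after a couple of iterations, prints
# "Optimization terminated successfully." and returns early
w_opt = fmin_bfgs(loss, np.zeros(2), callback=on_epoch, disp=1)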
I have tried Python 2.7 and Python 3 with SciPy versions 0.12 to 0.15 without this behavior changing (as this suggested).
My solution is to simply switch from train_bfgs to regular train_gd (gradient descent), but I guess any other training algorithm would be fine:
net = nl.net.newff(inputNodes, [hidden, output])
# change training function
net.trainf = nl.train.train_gd
For completeness, the code I tested on was:
import neurolab as nl

hidden = 10
output = 1
test = [[0], [0], [0], [1], [1]]

# the default trainf (train_bfgs) sometimes stops after a few epochs here
net = nl.net.newff([[0, 1]], [hidden, output])
err = net.train(test, test, epochs=500, show=1)
The problem only occurs sometimes, so repeated tests are needed.
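For reference, here is the same toy test with the workaround applied (a sketch combining the two snippets above; exact error values will vary between runs):

import neurolab as nl

hidden = 10
output = 1
test = [[0], [0], [0], [1], [1]]

net = nl.net.newff([[0, 1]], [hidden, output])
net.trainf = nl.train.train_gd  # work around the train_bfgs early exit
err = net.train(test, test, epochs=500, show=100)
print(net.sim(test))  # outputs should no longer be stuck at all ones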
Edit: the problem is also described at https://github.com/zueve/neurolab/issues/25
Good luck!