Multiple objects somehow interfering with each other [original version]

Posted 2019-07-31 09:06

I have a neural network (NN) that works perfectly when applied to a single data set. However, if I run the NN on one set of data and then create a new instance of the NN to run on a different set of data (or even on the same set again), the new instance produces completely incorrect predictions.

For example, training on the XOR pattern:

    test=[[0,0],[0,1],[1,0],[1,1]]
    data = [[[0,0], [0]],[[0,1], [0]],[[1,0], [0]],[[1,1], [1]]]

    n = NN(2, 3, 1) # Create a neural network with 2 input, 3 hidden and 1 output nodes
    n.train(data,500,0.5,0) # Train it for 500 iterations with learning rate 0.5 and momentum 0

    prediction = np.zeros((len(test)))
    for row in range(len(test)):
        prediction[row] = n.runNetwork(test[row])[0]

    print(prediction)

    #
    # Now do the same thing again but with a new instance and new version of the data.
    #

    test2=[[0,0],[0,1],[1,0],[1,1]]
    data2 = [[[0,0], [0]],[[0,1], [0]],[[1,0], [0]],[[1,1], [1]]]

    p = NN(2, 3, 1)
    p.train(data2,500,0.5,0)

    prediction2 = np.zeros((len(test2)))
    for row in range(len(test2)):
        prediction2[row] = p.runNetwork(test2[row])[0]

    print(prediction2)

This outputs:

    [-0.01 -0.   -0.06  0.97]
    [ 0.  0.  1.  1.]

Note that the first prediction is quite good, whereas the second is completely wrong, and I can't see anything wrong with the class:

    import math
    import random
    import itertools
    import numpy as np

    random.seed(0)

    def rand(a, b):
        return (b-a)*random.random() + a

    def sigmoid(x):
        # despite the name, tanh is used as the activation function
        return math.tanh(x)

    def dsigmoid(y):
        # derivative of tanh, expressed in terms of the output y = tanh(x)
        return 1.0 - y**2

    class NN:
        def __init__(self, ni, nh, no):
            # number of input, hidden, and output nodes
            self.ni = ni + 1 # +1 for bias node
            self.nh = nh + 1 # +1 for bias node
            self.no = no

            # activations for nodes
            self.ai = [1.0]*self.ni
            self.ah = [1.0]*self.nh
            self.ao = [1.0]*self.no

            # create weights (rows=number of features, columns=number of processing nodes)
            self.wi = np.zeros((self.ni, self.nh))
            self.wo = np.zeros((self.nh, self.no))
            # set them to random values
            for i in range(self.ni):
                for j in range(self.nh):
                    self.wi[i][j] = rand(-5, 5)
            for j in range(self.nh):
                for k in range(self.no):
                    self.wo[j][k] = rand(-5, 5)

            # last change in weights for momentum   
            self.ci = np.zeros((self.ni, self.nh))
            self.co = np.zeros((self.nh, self.no))


        def runNetwork(self, inputs):
            if len(inputs) != self.ni-1:
                raise ValueError('wrong number of inputs')

            # input activations
            for i in range(self.ni-1):
                #self.ai[i] = sigmoid(inputs[i])
                self.ai[i] = inputs[i]

            # hidden activations   
            for j in range(self.nh-1):
                total = 0.0
                for i in range(self.ni):
                    total = total + self.ai[i] * self.wi[i][j]
                self.ah[j] = sigmoid(total)

            # output activations
            for k in range(self.no):
                total = 0.0
                for j in range(self.nh):
                    total = total + self.ah[j] * self.wo[j][k]
                self.ao[k] = sigmoid(total)

            ao_simplified = [round(a,2) for a in self.ao[:]]
            return ao_simplified  


        def backPropagate(self, targets, N, M):
            if len(targets) != self.no:
                raise ValueError('wrong number of target values')

            # calculate error terms for output
            output_deltas = [0.0] * self.no
            for k in range(self.no):
                error = targets[k]-self.ao[k]
                output_deltas[k] = dsigmoid(self.ao[k]) * error

            # calculate error terms for hidden
            hidden_deltas = [0.0] * self.nh
            for j in range(self.nh):
                error = 0.0
                for k in range(self.no):
                    error = error + output_deltas[k]*self.wo[j][k]
                hidden_deltas[j] = dsigmoid(self.ah[j]) * error

            # update output weights
            for j in range(self.nh):
                for k in range(self.no):
                    change = output_deltas[k]*self.ah[j]
                    self.wo[j][k] = self.wo[j][k] + N*change + M*self.co[j][k]
                    self.co[j][k] = change
                    #print N*change, M*self.co[j][k]

            # update input weights
            for i in range(self.ni):
                for j in range(self.nh):
                    change = hidden_deltas[j]*self.ai[i]
                    self.wi[i][j] = self.wi[i][j] + N*change + M*self.ci[i][j]
                    self.ci[i][j] = change

            # calculate error
            error = 0.0
            for k in range(len(targets)):
                error = error + 0.5*(targets[k]-self.ao[k])**2
            return error

        def train(self, patterns, iterations=1000, N=0.5, M=0.1):
            # N: learning rate
            # M: momentum factor
            for i in range(iterations):
                error = 0.0
                for p in patterns:
                    inputs = p[0]
                    targets = p[1]
                    self.runNetwork(inputs)
                    error = error + self.backPropagate(targets, N, M)
                if i % 100 == 0: # Prints error every 100 iterations
                    print('error %-.5f' % error)

Any help would be greatly appreciated!

Answer 1:

Your bug, if there is one, has nothing to do with the class. As @Daniel Roseman suggested, the natural guess would be a class/instance variable issue, or perhaps a mutable default argument, or list multiplication, or the like: the most common causes of mysterious behaviour.
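
For instance, the mutable-default-argument trap produces exactly this kind of cross-instance "interference" (a minimal illustration, unrelated to your NN code):

    # Classic mutable-default-argument trap: the default list is created once
    # at function definition time and silently shared by every instance.
    class Accumulator:
        def __init__(self, items=[]):  # bug: one list object shared across calls
            self.items = items

    a = Accumulator()
    b = Accumulator()
    a.items.append(1)
    print(b.items)  # prints [1] -- b "sees" a's data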

Here, though, you are simply getting different results because you use different random numbers each time. If you call random.seed(0) before each call to NN(2, 3, 1), you get exactly the same results:

error 2.68110
error 0.44049
error 0.39256
error 0.26315
error 0.00584
[ 0.01  0.01  0.07  0.97]
error 2.68110
error 0.44049
error 0.39256
error 0.26315
error 0.00584
[ 0.01  0.01  0.07  0.97]
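
Concretely, the fix is just to re-seed before each construction (a minimal sketch reusing the variables defined in the question):

    random.seed(0)  # reset the RNG so the first network starts from known weights
    n = NN(2, 3, 1)
    n.train(data, 500, 0.5, 0)

    random.seed(0)  # re-seed, so the second instance draws the same numbers
    p = NN(2, 3, 1)
    p.train(data2, 500, 0.5, 0)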

I can't judge whether your algorithm is correct. Incidentally, I think your rand function is reinventing random.uniform.
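
For reference, random.uniform(a, b) from the standard library already returns a float drawn uniformly from [a, b], so the hand-rolled helper can be replaced directly:

    import random

    # hand-rolled version from the question
    def rand(a, b):
        return (b - a) * random.random() + a

    x = rand(-5, 5)
    y = random.uniform(-5, 5)  # equivalent standard-library call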


