I have a neural network model I'm working on in Python. However, backpropagation doesn't seem to work, and I've been fiddling with it for a while. After a series of training runs, even with enough data, the output always averages out to about 0.5. Here is the backpropagation code and the data, which is just a simple AND gate.

Data:

data = [[[1, 1], 1],
        [[1, 0], 0],
        [[0, 1], 0],
        [[0, 0], 0]]

backprop:

def backpropagate(self, input, output, learning_rate=0.2):
    prediction = self.feed_forward(input)   # actual network output
    state = self.feed_full(input)           # activations of every layer
    error = output - prediction             # target minus prediction
    delta = error * self.activation_function(prediction, True)
    for weight_layer in reversed(range(len(self.weights))):
        error = delta.dot(self.weights[weight_layer].T)  # propagate error back a layer
        delta = error * self.activation_function(state[weight_layer], True)  # delta for this layer's input
        self.weights[weight_layer] += np.transpose(state[weight_layer]).dot(delta) * learning_rate
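For comparison, here is one common ordering of the per-layer loop, written as a standalone sketch (not the original class): compute the gradient for a layer from the *current* delta first, then propagate the delta back through that layer's old weights, then apply the update. The `sigmoid` helper and per-sample 1-D arrays are assumptions to keep the sketch self-contained.

```python
import numpy as np

def sigmoid(x, deriv=False):
    # When deriv=True, x is assumed to already be sigmoid(z),
    # so the derivative is simply x * (1 - x).
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

def backprop_step(weights, state, target, learning_rate=0.2):
    """One gradient step for a single sample.

    state[i] is the 1-D activation vector entering weights[i];
    state[-1] is the network output (as produced by feed_full)."""
    delta = (target - state[-1]) * sigmoid(state[-1], True)
    for i in reversed(range(len(weights))):
        grad = np.outer(state[i], delta)                       # gradient for this layer
        delta = delta.dot(weights[i].T) * sigmoid(state[i], True)  # propagate through old weights
        weights[i] += learning_rate * grad                     # update last
    return weights
```

Note the difference from the loop above: there, `delta` is overwritten with the back-propagated value *before* the layer's weights are updated, so the output-layer delta is never actually used for the output-layer update.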

Feed-forward, for the final output and for all layer states:

def feed_forward(self, x):
    ret = x
    for weight_layer in self.weights:
        ret = self.activation_function(np.dot(ret, weight_layer))
    return ret

def feed_full(self, x):
    state = x
    activations = [x]
    for weight_layer in self.weights:
        state = self.activation_function(np.dot(state, weight_layer))
        activations.append(state)
    return activations
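The snippets call `self.activation_function` with a derivative flag but don't show its definition; a common choice (an assumption here, not shown in the question) is the sigmoid, where the derivative is computed from an already-activated value:

```python
import numpy as np

def activation_function(x, derivative=False):
    """Sigmoid activation.

    When derivative=True, x is expected to be an already-activated
    value sigmoid(z), so the derivative is x * (1 - x)."""
    if derivative:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))
```

This convention matters: passing a raw pre-activation with `derivative=True` would silently give the wrong gradient.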

The shape of the net is [2, 3, 1], and I'm trying to design it so the shape is extensible, so I can reuse it for other projects. It's only the backprop part that needs fixing. Thanks.
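For context, here is a minimal end-to-end sketch (standalone functions rather than the original class) of a [2, 3, 1] net trained on the AND data, with the gradient computed before the delta is propagated back; the `sigmoid` helper, seed, learning rate, and epoch count are all assumptions.

```python
import numpy as np

def sigmoid(x, deriv=False):
    # With deriv=True, x is assumed to already be sigmoid(z).
    return x * (1 - x) if deriv else 1 / (1 + np.exp(-x))

data = [[[1, 1], 1], [[1, 0], 0], [[0, 1], 0], [[0, 0], 0]]

rng = np.random.default_rng(42)
# Shape [2, 3, 1]: weights[i] maps layer i activations to layer i + 1.
weights = [rng.standard_normal((2, 3)), rng.standard_normal((3, 1))]

def feed_full(x):
    state = [np.asarray(x, dtype=float)]
    for w in weights:
        state.append(sigmoid(state[-1].dot(w)))
    return state

for epoch in range(10000):
    for x, y in data:
        state = feed_full(x)
        delta = (y - state[-1]) * sigmoid(state[-1], True)
        for i in reversed(range(len(weights))):
            grad = np.outer(state[i], delta)           # gradient from current delta
            delta = delta.dot(weights[i].T) * sigmoid(state[i], True)
            weights[i] += 0.2 * grad                   # update after propagating

for x, y in data:
    print(x, feed_full(x)[-1][0])
```

Because the weight list drives both the forward pass and the loop over layers, the same code extends to any shape, e.g. [2, 4, 4, 1], by adding weight matrices.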