Rprop implementation


I'm trying to implement Rprop, using my old backprop code as a base. I'm working with a perceptron that has a single hidden layer. The Rprop algorithm is fairly simple, but I haven't figured everything out yet. Here is my code:

for (j = 1; j <= nnh; j++) // forward pass: hidden layer
{
    network.input2[j] = network.w12[0][j]; // bias weight stored at index 0
    for (i = 1; i <= nni; i++)
        network.input2[j] += network.input[i] * network.w12[i][j];
    network.output2[j] = (float)(1.0 / (1.0 + Math.Pow(Math.E, beta * -network.input2[j])));
}
for (k = 1; k <= nno; k++) // forward pass: output layer, error, output deltas
{
    network.input3[k] = network.w23[0][k]; // bias weight stored at index 0
    for (j = 1; j <= nnh; j++)
        network.input3[k] += network.output2[j] * network.w23[j][k];
    network.output[k] = (float)(1.0 / (1.0 + Math.Pow(Math.E, beta * -network.input3[k])));
    error += (float)(0.5 * (t[k - 1] - network.output[k]) * (t[k - 1] - network.output[k]));
    derivativeO[k] = (float)(t[k - 1] - network.output[k]) * network.output[k] * (1 - network.output[k]);
}
for (j = 1; j <= nnh; j++) // propagate the output deltas back to the hidden layer
{
    saw[j] = 0;
    for (k = 1; k <= nno; k++)
        saw[j] += derivativeO[k] * network.output2[j];
    derivativeH[j] = saw[j] * network.output2[j] * (1 - network.output2[j]);
}
for (j = 1; j <= nnh; j++) // number of neurons in hidden layer
{
    for (i = 1; i <= nni; i++) // number of inputs
    {
        network.gradientH[i][j] = network.input[i] * derivativeH[j];
        if (network.gradientH[i][j] * network.gradientHPrev[i][j] > 0) // same sign: grow the step and update
        {
            network.deltaH[i][j] = Math.Min(network.deltaH[i][j] * npos, dmax);
            network.w12d[i][j] = -Math.Sign(network.gradientH[i][j]) * network.deltaH[i][j];
            network.w12[i][j] += network.w12d[i][j];
            network.gradientHPrev[i][j] = network.gradientH[i][j];
        }
        else if (network.gradientH[i][j] * network.gradientHPrev[i][j] < 0) // sign changed: shrink the step, skip the update
        {
            network.deltaH[i][j] = Math.Max(network.deltaH[i][j] * nneg, dmin);
            network.gradientHPrev[i][j] = 0;
        }
        else if (network.gradientH[i][j] * network.gradientHPrev[i][j] == 0) // previous gradient was zero: plain step
        {
            network.w12d[i][j] = -Math.Sign(network.gradientH[i][j]) * network.deltaH[i][j];
            network.w12[i][j] += network.w12d[i][j];
            network.gradientHPrev[i][j] = network.gradientH[i][j];
        }
    }
}
for (k = 1; k <= nno; k++) // number of outputs
{
    for (j = 1; j <= nnh; j++) // number of neurons in hidden layer
    {
        network.gradientO[j][k] = network.output2[j] * derivativeO[k];
        if (network.gradientOPrev[j][k] * network.gradientO[j][k] > 0) // same sign: grow the step and update
        {
            network.deltaO[j][k] = Math.Min(network.deltaO[j][k] * npos, dmax);
            network.w23d[j][k] = -Math.Sign(network.gradientO[j][k]) * network.deltaO[j][k];
            network.w23[j][k] += network.w23d[j][k];
            network.gradientOPrev[j][k] = network.gradientO[j][k];
        }
        else if (network.gradientOPrev[j][k] * network.gradientO[j][k] < 0) // sign changed: shrink the step, skip the update
        {
            network.deltaO[j][k] = Math.Max(network.deltaO[j][k] * nneg, dmin);
            network.gradientOPrev[j][k] = 0;
        }
        else if (network.gradientOPrev[j][k] * network.gradientO[j][k] == 0) // previous gradient was zero: plain step
        {
            network.w23d[j][k] = -Math.Sign(network.gradientO[j][k]) * network.deltaO[j][k];
            network.w23[j][k] += network.w23d[j][k];
            network.gradientOPrev[j][k] = network.gradientO[j][k];
        }
    }
}

The first three for loops are the same ones I use in backprop, and that part of the code works fine. The problem shows up during the weight update, assuming I am computing the partial derivatives correctly. The network sometimes converges and sometimes behaves randomly. I think everything else is right. Any ideas?
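For comparison, here is a minimal sketch of the per-weight iRPROP- step as it is usually described in the literature, written against a flat weight array; the names (RpropStep, w, grad, gradPrev, delta) are illustrative and not part of the code above. It assumes grad holds ∂E/∂w, so the weight moves by -Sign(grad) * delta; a derivative stored with the opposite sign, such as a (t - output)-style delta, would flip which direction that step takes.

// Sketch only: one iRPROP- update over a flat weight vector. All names are illustrative.
static void RpropStep(double[] w, double[] grad, double[] gradPrev, double[] delta,
                      double npos, double nneg, double dmin, double dmax)
{
    for (int i = 0; i < w.Length; i++)
    {
        double change = grad[i] * gradPrev[i];
        if (change > 0)
        {
            // gradient kept its sign: grow the step and move against the gradient
            delta[i] = Math.Min(delta[i] * npos, dmax);
            w[i] += -Math.Sign(grad[i]) * delta[i];
            gradPrev[i] = grad[i];
        }
        else if (change < 0)
        {
            // gradient changed sign: shrink the step, take no step this pass,
            // and zero the stored gradient so the next pass uses the "== 0" branch
            delta[i] = Math.Max(delta[i] * nneg, dmin);
            gradPrev[i] = 0;
        }
        else
        {
            // previous gradient was zero (first pass or right after a sign change)
            w[i] += -Math.Sign(grad[i]) * delta[i];
            gradPrev[i] = grad[i];
        }
    }
}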

The for loops start at 1 because, in the earlier backprop implementation, the bias values are stored in the first element of the weight matrices. Here is the earlier backprop weight update, which works well; maybe it will make things clearer:

for (j = 1; j <= nnh; j++)
{
network.w12d[0][j] = learningRate * derivativeH[j] + momentum * network.w12d[0][j]; // bias weight (index 0); its input is implicitly 1
    network.w12[0][j] += network.w12d[0][j];
    for (i = 1; i <= nni; i++)
    {
        network.w12d[i][j] = learningRate * network.input[i] * derivativeH[j] + momentum * network.w12d[i][j];
        network.w12[i][j] += network.w12d[i][j];
    }
}
for (k = 1; k <= nno; k++)
{
network.w23d[0][k] = learningRate * derivativeO[k] + momentum * network.w23d[0][k]; // bias weight (index 0); its input is implicitly 1
    network.w23[0][k] += network.w23d[0][k];
    for (j = 1; j <= nnh; j++)
    {
        network.w23d[j][k] = learningRate * network.output2[j] * derivativeO[k] + momentum * network.w23d[j][k];
        network.w23[j][k] += network.w23d[j][k];
    }
}
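One thing worth noting when comparing the two snippets: the backprop version updates the bias weights at index 0 (w12d[0][j] and w23d[0][k]), while the Rprop loops above start at i = 1 and j = 1 and leave that column unchanged. Below is a sketch of what the same Rprop step could look like for the hidden-layer bias column, assuming network.deltaH[0][j] and network.gradientHPrev[0][j] are allocated and initialized like the other entries (the bias input is implicitly 1):

for (j = 1; j <= nnh; j++)
{
    // sketch only: the bias input is 1, so the gradient entry is just the hidden delta
    network.gradientH[0][j] = derivativeH[j];
    if (network.gradientH[0][j] * network.gradientHPrev[0][j] > 0)
    {
        network.deltaH[0][j] = Math.Min(network.deltaH[0][j] * npos, dmax);
        network.w12d[0][j] = -Math.Sign(network.gradientH[0][j]) * network.deltaH[0][j];
        network.w12[0][j] += network.w12d[0][j];
        network.gradientHPrev[0][j] = network.gradientH[0][j];
    }
    else if (network.gradientH[0][j] * network.gradientHPrev[0][j] < 0)
    {
        network.deltaH[0][j] = Math.Max(network.deltaH[0][j] * nneg, dmin);
        network.gradientHPrev[0][j] = 0;
    }
    else
    {
        network.w12d[0][j] = -Math.Sign(network.gradientH[0][j]) * network.deltaH[0][j];
        network.w12[0][j] += network.w12d[0][j];
        network.gradientHPrev[0][j] = network.gradientH[0][j];
    }
}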


The Encog RPROP implementation works, and it is MIT licensed. Have a look at their implementation for comparison:

https://github.com/encog/encog-dotnet-core/blob/master/encog-core-cs/Neural/Networks/Training/Propagation/Resilient/ResilientPropagation.cs
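Not Encog's actual code, but one thing to look for when comparing: the classic RPROP+ rule also backtracks when the gradient changes sign, undoing the previous weight step rather than only resetting the stored gradient. Here is a minimal sketch of that rule for a single weight, with purely illustrative names (lastChange is extra state that the code in the question does not keep):

// Sketch of one RPROP+ style step for a single weight (Riedmiller and Braun), not Encog's code.
// lastChange stores the step that was applied to this weight on the previous pass.
static double RpropPlusStep(double grad, ref double gradPrev, ref double delta,
                            ref double lastChange,
                            double npos, double nneg, double dmin, double dmax)
{
    double step;
    if (grad * gradPrev > 0)
    {
        delta = Math.Min(delta * npos, dmax);
        step = -Math.Sign(grad) * delta;
        gradPrev = grad;
    }
    else if (grad * gradPrev < 0)
    {
        delta = Math.Max(delta * nneg, dmin);
        step = -lastChange;   // undo the previous weight change (the backtracking part)
        gradPrev = 0;         // forces the "== 0" branch on the next pass
    }
    else
    {
        step = -Math.Sign(grad) * delta;
        gradPrev = grad;
    }
    lastChange = step;
    return step;              // caller adds this to the weight
}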