We have been using a TensorFlow implementation of AlexNet based on the following repository:

https://github.com/SidHard/tfAlexNet

We have been trying to train the model on the ILSVRC2012 training data, which contains 1000 image classes. However, during training the accuracy is almost always reported as zero. After about 600 iterations, the loss stops decreasing and plateaus at ~7.xx. Below is sample output from 1000 iterations:

lr 0.001 Iter 20, Minibatch Loss = 32345.199219, Training Accuracy = 0.00000
lr 0.001 Iter 40, Minibatch Loss = 19276.070312, Training Accuracy = 0.00000
lr 0.001 Iter 60, Minibatch Loss = 10468.625977, Training Accuracy = 0.00000
lr 0.001 Iter 80, Minibatch Loss = 8523.987305, Training Accuracy = 0.00000
lr 0.001 Iter 100, Minibatch Loss = 6895.601562, Training Accuracy = 0.00000
lr 0.001 Iter 120, Minibatch Loss = 5296.706055, Training Accuracy = 0.00000
lr 0.001 Iter 140, Minibatch Loss = 4541.700195, Training Accuracy = 0.00000
lr 0.001 Iter 160, Minibatch Loss = 3658.005371, Training Accuracy = 0.01562
lr 0.001 Iter 180, Minibatch Loss = 3368.450195, Training Accuracy = 0.00000
lr 0.001 Iter 200, Minibatch Loss = 2641.639160, Training Accuracy = 0.00000
lr 0.001 Iter 220, Minibatch Loss = 2349.733154, Training Accuracy = 0.00000
lr 0.001 Iter 240, Minibatch Loss = 2258.051270, Training Accuracy = 0.00000
lr 0.001 Iter 260, Minibatch Loss = 2042.907471, Training Accuracy = 0.00000
lr 0.001 Iter 280, Minibatch Loss = 1863.766602, Training Accuracy = 0.01562
lr 0.001 Iter 300, Minibatch Loss = 1761.209717, Training Accuracy = 0.00000
lr 0.001 Iter 320, Minibatch Loss = 1295.963623, Training Accuracy = 0.00000
lr 0.001 Iter 340, Minibatch Loss = 1295.362793, Training Accuracy = 0.00000
lr 0.001 Iter 360, Minibatch Loss = 1384.312988, Training Accuracy = 0.00000
lr 0.001 Iter 380, Minibatch Loss = 1054.358154, Training Accuracy = 0.00000
lr 0.001 Iter 400, Minibatch Loss = 1121.298584, Training Accuracy = 0.00000
lr 0.001 Iter 420, Minibatch Loss = 874.211853, Training Accuracy = 0.00000
lr 0.001 Iter 440, Minibatch Loss = 643.881165, Training Accuracy = 0.00000
lr 0.001 Iter 460, Minibatch Loss = 521.504211, Training Accuracy = 0.00000
lr 0.001 Iter 480, Minibatch Loss = 312.603027, Training Accuracy = 0.00000
lr 0.001 Iter 500, Minibatch Loss = 227.839951, Training Accuracy = 0.00000
lr 0.001 Iter 520, Minibatch Loss = 87.130402, Training Accuracy = 0.00000
lr 0.001 Iter 540, Minibatch Loss = 8.717880, Training Accuracy = 0.03125
lr 0.001 Iter 560, Minibatch Loss = 7.396675, Training Accuracy = 0.00000
lr 0.001 Iter 580, Minibatch Loss = 7.604724, Training Accuracy = 0.00000
lr 0.001 Iter 600, Minibatch Loss = 7.333892, Training Accuracy = 0.00000
lr 0.001 Iter 620, Minibatch Loss = 7.501148, Training Accuracy = 0.00000
lr 0.001 Iter 640, Minibatch Loss = 7.662533, Training Accuracy = 0.00000
lr 0.001 Iter 660, Minibatch Loss = 7.267312, Training Accuracy = 0.00000
lr 0.001 Iter 680, Minibatch Loss = 7.307860, Training Accuracy = 0.00000
lr 0.001 Iter 700, Minibatch Loss = 7.554455, Training Accuracy = 0.00000
lr 0.001 Iter 720, Minibatch Loss = 7.164557, Training Accuracy = 0.00000
lr 0.001 Iter 740, Minibatch Loss = 7.410737, Training Accuracy = 0.00000
lr 0.001 Iter 760, Minibatch Loss = 7.430292, Training Accuracy = 0.00000
lr 0.001 Iter 780, Minibatch Loss = 7.378484, Training Accuracy = 0.00000
lr 0.001 Iter 800, Minibatch Loss = 7.348798, Training Accuracy = 0.00000
lr 0.001 Iter 820, Minibatch Loss = 7.194911, Training Accuracy = 0.00000
lr 0.001 Iter 840, Minibatch Loss = 7.180418, Training Accuracy = 0.00000
lr 0.001 Iter 860, Minibatch Loss = 7.523817, Training Accuracy = 0.00000
lr 0.001 Iter 880, Minibatch Loss = 7.398925, Training Accuracy = 0.00000
lr 0.001 Iter 900, Minibatch Loss = 7.427280, Training Accuracy = 0.00000
lr 0.001 Iter 920, Minibatch Loss = 7.344418, Training Accuracy = 0.00000
lr 0.001 Iter 940, Minibatch Loss = 7.317279, Training Accuracy = 0.00000
lr 0.001 Iter 960, Minibatch Loss = 7.290085, Training Accuracy = 0.01562
lr 0.001 Iter 980, Minibatch Loss = 7.403413, Training Accuracy = 0.00000
lr 0.0001 Iter 1000, Minibatch Loss = 7.329219, Training Accuracy = 0.00000

We have trained for roughly 10,000 iterations; neither the loss nor the accuracy changed after about 600 iterations.
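One observation on our part (an assumption, not a confirmed diagnosis): the plateau value of ~7.3 is close to the cross-entropy loss of a model that assigns uniform probability to all 1000 classes, ln(1000) ≈ 6.91, which would suggest the network has collapsed to near-random predictions. A quick sanity check:

```python
import math

# Cross-entropy loss when the model predicts a uniform probability
# of 1/1000 for each of the 1000 ILSVRC2012 classes.
chance_loss = -math.log(1.0 / 1000.0)
print(chance_loss)  # ≈ 6.91, close to the ~7.3 plateau seen in our logs
```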

Does anyone have any thoughts or ideas about this training behavior?

Also, has anyone been able to set up and train a TensorFlow AlexNet model using the ILSVRC2012 dataset?

Thanks.