
ResNet: 100% accuracy during training, but 33% prediction accuracy with the same data


I'm new to machine learning and deep learning, and for learning purposes I tried playing with ResNet. I tried to overfit on tiny data (3 different images) and see whether I could get almost 0 loss and 1.0 accuracy - and I did.

The problem is that the predictions on the training images (i.e., the same 3 images that were used for training) are not correct.

Training images: image 1, image 2, image 3 (images not shown)

Image labels

[1,0,0], [0,1,0], [0,0,1]

My Python code:

import os

import numpy as np
from PIL import Image
from keras.applications.resnet50 import ResNet50

# loading the 3 images and resizing them to 197x197 grayscale arrays
imgs = np.array([np.array(Image.open("./Images/train/" + fname)
                          .resize((197, 197), Image.ANTIALIAS)) for fname in
                 os.listdir("./Images/train/")]).reshape(-1, 197, 197, 1)
# creating one-hot labels
y = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
# creating a ResNet50 model from scratch (no pretrained weights)
model = ResNet50(input_shape=(197, 197, 1), classes=3, weights=None)

# compile & fit model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])

model.fit(imgs, y, epochs=5, shuffle=True)

# predict on the training data
print(model.predict(imgs))

The model does overfit the data:

Epoch 1/5
3/3 [==============================] - 22s - loss: 1.3229 - acc: 0.0000e+00
Epoch 2/5
3/3 [==============================] - 0s - loss: 0.1474 - acc: 1.0000
Epoch 3/5
3/3 [==============================] - 0s - loss: 0.0057 - acc: 1.0000
Epoch 4/5
3/3 [==============================] - 0s - loss: 0.0107 - acc: 1.0000
Epoch 5/5
3/3 [==============================] - 0s - loss: 1.3815e-04 - acc: 1.0000

But the predictions are:

[[  1.05677405e-08   9.99999642e-01   3.95520459e-07]
 [  1.11955103e-08   9.99999642e-01   4.14905685e-07]
 [  1.02637095e-07   9.99997497e-01   2.43751242e-06]]

That means that all the images got label=[0,1,0].

Why? And how can that happen?

2 Answers

  • 8

    It's because of the batch normalization layers.

    During the training phase, a batch is normalized w.r.t. its own mean and variance. During the testing phase, however, a batch is normalized w.r.t. a moving average of the previously observed means and variances.

    Now, this is a problem when the number of observed batches is small (e.g., 5 in your example), because in the BatchNormalization layer, moving_mean is initialized to 0 and moving_variance is initialized to 1 by default.

    Given that the default momentum is 0.99, you need to update the moving averages quite a few times before they converge to the "real" mean and variance.

    That's why the prediction is wrong in the early stage, but correct after 1000 epochs.
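
    To make the arithmetic concrete, here is a minimal sketch of the moving-average update rule that Keras applies; the batch statistic below is a made-up value for illustration:

    # Sketch of the BatchNormalization moving-average update rule:
    #   moving = momentum * moving + (1 - momentum) * batch_stat
    moving_mean = 0.0   # Keras default initial value
    batch_mean = 10.0   # hypothetical "true" mean of the training batch
    momentum = 0.99     # Keras default momentum

    for _ in range(5):  # 5 epochs of a single batch = 5 updates
        moving_mean = momentum * moving_mean + (1 - momentum) * batch_mean

    print(moving_mean)  # ~0.49 -- still nowhere near the true mean of 10.0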


    You can verify this by forcing the BatchNormalization layers to operate in "training mode".

    During training, the accuracy is 1 and the loss is close to zero:

    model.fit(imgs,y,epochs=5,shuffle=True)
    Epoch 1/5
    3/3 [==============================] - 19s 6s/step - loss: 1.4624 - acc: 0.3333
    Epoch 2/5
    3/3 [==============================] - 0s 63ms/step - loss: 0.6051 - acc: 0.6667
    Epoch 3/5
    3/3 [==============================] - 0s 57ms/step - loss: 0.2168 - acc: 1.0000
    Epoch 4/5
    3/3 [==============================] - 0s 56ms/step - loss: 1.1921e-07 - acc: 1.0000
    Epoch 5/5
    3/3 [==============================] - 0s 53ms/step - loss: 1.1921e-07 - acc: 1.0000
    

    Now if we evaluate the model, we'll observe a high loss and low accuracy, because after only 5 updates the moving averages are still very close to their initial values:

    model.evaluate(imgs,y)
    3/3 [==============================] - 3s 890ms/step
    [10.745396614074707, 0.3333333432674408]
    

    However, if we manually specify the "learning phase" variable and let the BatchNormalization layers use the "real" batch mean and variance, the result becomes the same as what is observed in fit().

    sample_weights = np.ones(3)
    learning_phase = 1  # 1 means "training"
    ins = [imgs, y, sample_weights, learning_phase]
    model.test_function(ins)
    [1.192093e-07, 1.0]
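
    An equivalent check, assuming the Keras 2 backend API (the function and variable names here are just illustrative): build a backend function that runs the forward pass with the learning phase fixed to 1, so the batch statistics are used instead of the moving averages:

    from keras import backend as K

    # forward pass with an explicit learning-phase flag
    predict_in_train_mode = K.function(
        [model.input, K.learning_phase()],
        [model.output])

    preds = predict_in_train_mode([imgs, 1])[0]  # 1 = training mode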
    

    It can also be verified by changing the momentum to a smaller value.

    For example, by setting momentum=0.01 on all the batch norm layers in ResNet50 (one way to do this is sketched after the output below), the prediction after 20 epochs is:

    model.predict(imgs)
    array([[  1.00000000e+00,   1.34882026e-08,   3.92139575e-22],
           [  0.00000000e+00,   1.00000000e+00,   0.00000000e+00],
           [  8.70998792e-06,   5.31159838e-10,   9.99991298e-01]], dtype=float32)
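
    A minimal sketch of one way to apply momentum=0.01 to every batch norm layer, assuming the Keras 2 JSON config format; the model is rebuilt so the new value is actually baked into the update ops:

    import json
    from keras.applications.resnet50 import ResNet50
    from keras.models import model_from_json

    model = ResNet50(input_shape=(197, 197, 1), classes=3, weights=None)

    # patch the momentum of every BatchNormalization layer in the
    # serialized config, then rebuild (and recompile) the model
    config = json.loads(model.to_json())
    for layer in config['config']['layers']:
        if layer['class_name'] == 'BatchNormalization':
            layer['config']['momentum'] = 0.01
    model = model_from_json(json.dumps(config))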
    
  • 0

    I ran into a similar problem before, and the solution turned out to be very simple: you need to increase the number of epochs. Here is the output after 1000 epochs:

    [[  9.99999881e-01   8.58182432e-08   9.54004670e-12]
     [  8.58779623e-20   9.99999881e-01   6.76907632e-08]
     [  2.12900631e-26   4.09224481e-34   1.00000000e+00]]
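
    For reference, the only change from the question's code (same model and data) is the epoch count, which gives the batch norm moving averages time to converge:

    model.fit(imgs, y, epochs=1000, shuffle=True)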
    

    And here is the training log:

    Epoch 1/1000
    3/3 [==============================] - 13s - loss: 2.4340 - acc: 0.3333
    Epoch 2/1000
    3/3 [==============================] - 0s - loss: 0.1069 - acc: 1.0000
    Epoch 3/1000
    3/3 [==============================] - 0s - loss: 1.5478e-05 - acc: 1.0000
    Epoch 4/1000
    3/3 [==============================] - 0s - loss: 2.1458e-06 - acc: 1.0000
    Epoch 5/1000
    3/3 [==============================] - 0s - loss: 5.9605e-07 - acc: 1.0000
    Epoch 6/1000
    3/3 [==============================] - 0s - loss: 3.7750e-07 - acc: 1.0000
    Epoch 7/1000
    3/3 [==============================] - 0s - loss: 2.7816e-07 - acc: 1.0000
    Epoch 8/1000
    3/3 [==============================] - 0s - loss: 3.7750e-07 - acc: 1.0000
    Epoch 9/1000
    3/3 [==============================] - 0s - loss: 2.9802e-07 - acc: 1.0000
    Epoch 10/1000
    ...
    Epoch 990/1000
    3/3 [==============================] - 0s - loss: 1.1921e-07 - acc: 1.0000
    Epoch 991/1000
    3/3 [==============================] - 0s - loss: 1.1921e-07 - acc: 1.0000
    Epoch 992/1000
    3/3 [==============================] - 0s - loss: 1.1921e-07 - acc: 1.0000
    Epoch 993/1000
    3/3 [==============================] - 0s - loss: 1.1921e-07 - acc: 1.0000
    Epoch 994/1000
    3/3 [==============================] - 0s - loss: 1.1921e-07 - acc: 1.0000
    Epoch 995/1000
    3/3 [==============================] - 0s - loss: 1.1921e-07 - acc: 1.0000
    Epoch 996/1000
    3/3 [==============================] - 0s - loss: 1.1921e-07 - acc: 1.0000
    Epoch 997/1000
    3/3 [==============================] - 0s - loss: 1.1921e-07 - acc: 1.0000
    Epoch 998/1000
    3/3 [==============================] - 0s - loss: 1.1921e-07 - acc: 1.0000
    Epoch 999/1000
    3/3 [==============================] - 0s - loss: 1.1921e-07 - acc: 1.0000
    Epoch 1000/1000
    3/3 [==============================] - 0s - loss: 1.1921e-07 - acc: 1.0000
