
The loss does not change regardless of the learning rate


I built a deep learning model loosely modeled on the VGG network, using Keras with the TensorFlow backend. The model is defined as follows:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dropout, Dense

model = Sequential()
# Input: 180x320 RGB images
model.add(Conv2D(64, 3, padding='same', activation='relu', input_shape=(180, 320, 3)))
model.add(Conv2D(64, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model.add(Conv2D(64, 3, padding='same', activation='relu'))
model.add(Conv2D(64, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model.add(Conv2D(128, 3, padding='same', activation='relu'))
model.add(Conv2D(128, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model.add(Conv2D(128, 3, padding='same', activation='relu'))
model.add(Conv2D(128, 3, padding='same', activation='relu'))
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
# Output: 9 real values
model.add(Dense(9, activation='relu'))

I have tried different combinations of optimizers (SGD, Adam, etc.), loss functions (MSE, MAE, etc.), and batch sizes (32 and 64). I have even tried learning rates ranging from 0.001 up to 10000. Yet even after 20 epochs, the validation loss stays exactly the same for every loss function I use, and the training loss changes only negligibly. What am I doing wrong?
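
A rough sketch of one configuration from this grid (the compile/fit calls aren't shown in the question, so `X_train`, `y_train`, and the validation split here are placeholders):

from keras.optimizers import Adam

# One configuration out of the grid of experiments; MAE matches the metric
# reported in the log below.
model.compile(optimizer=Adam(lr=0.001),  # also tried SGD, and lr up to 10000
              loss='mae',
              metrics=['mean_absolute_error'])

model.fit(X_train, y_train,      # X_train: (N, 180, 320, 3), y_train: (N, 9)
          batch_size=32,         # also tried 64
          epochs=100,
          validation_split=0.2)  # a held-out validation set is assumed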

The task I am training the network for: given an input image, the network needs to predict a set of 9 real values that can be derived from that image.

Terminal output during training:

    Epoch 1/100
    4800/4800 [==============================] - 96s 20ms/step - loss: 133.6534 - mean_absolute_error: 133.6534 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 2/100
    4800/4800 [==============================] - 49s 10ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 3/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 4/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 5/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 6/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 7/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 8/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 9/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 10/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 11/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 12/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 13/100
    4800/4800 [==============================] - 50s 10ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 14/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 15/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 16/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 17/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 18/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 19/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 20/100
    4800/4800 [==============================] - 51s 11ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744
    Epoch 21/100
    4800/4800 [==============================] - 50s 10ms/step - loss: 132.8033 - mean_absolute_error: 132.8033 - val_loss: 132.3744 - val_mean_absolute_error: 132.3744

1 Answer


    relu

    Don't just use relu everywhere! It has a constant zero region with no gradient, so it is completely normal for training to get stuck there. That would also explain the log above: if the final relu layer outputs a constant 0 for every sample, the MAE reduces to the mean absolute value of the targets and never moves.
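
    A tiny numeric illustration of that failure mode (the target values below are made up, only to mimic the scale in the log):

    import numpy as np

    # If the last layer's pre-activations are all negative, relu outputs 0 for
    # every sample and passes back a zero gradient, so the layer cannot recover.
    y_true = np.random.uniform(0, 265, size=(4800, 9))  # hypothetical targets
    y_pred = np.zeros_like(y_true)                      # a "dead" relu output

    # MAE collapses to mean(|y_true|) and stays constant across epochs.
    print(np.abs(y_true - y_pred).mean())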

    • The worst offender is the last layer (see the sketch after the code below).

      • If you want outputs from 0 to infinity, use 'softplus'.

      • If you want outputs between 0 and 1, use 'sigmoid'.

      • If you want outputs between -1 and 1, use 'tanh'.

    • Your learning rate is gigantic. With relu you need a small learning rate:

      • Go down to 0.00001 and below.

      • Or try other activations that don't get stuck.

    • Try adding batch normalization before each activation (so you can be sure that at least something will be above zero):

      • This also lets you get away with a higher learning rate.


    from keras.layers import BatchNormalization, Activation

    model.add(Conv2D(....., activation='linear'))  # no nonlinearity inside the conv layer
    model.add(BatchNormalization())                # normalize before the activation
    model.add(Activation('relu'))                  # relu applied after batch norm
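
    Putting these pieces together, a minimal sketch of a revised network tail and compile step (the 'softplus' output assumes your 9 targets are non-negative; swap in 'linear', 'sigmoid', or 'tanh' to match their actual range, and the MAE loss and learning rate are just one reasonable starting point):

    from keras.layers import Dense, Dropout, Flatten, BatchNormalization, Activation
    from keras.optimizers import Adam

    # ... convolutional blocks as before ...
    model.add(Flatten())
    model.add(Dropout(0.5))
    model.add(Dense(1024))
    model.add(BatchNormalization())   # keep the dense pre-activations well-scaled
    model.add(Activation('relu'))
    model.add(Dropout(0.5))
    model.add(Dense(9, activation='softplus'))  # output layer: no plain relu here

    # A much smaller learning rate, as suggested above.
    model.compile(optimizer=Adam(lr=1e-5), loss='mae', metrics=['mae'])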
    
