I'm a bit suspicious of the following log. I'm training a deep neural network on regression targets between -1.0 and 1.0, with a learning rate of 0.001 and 19200/4800 training/validation samples:
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
cropping2d_1 (Cropping2D) (None, 138, 320, 3) 0 cropping2d_input_1[0][0]
____________________________________________________________________________________________________
lambda_1 (Lambda) (None, 66, 200, 3) 0 cropping2d_1[0][0]
____________________________________________________________________________________________________
lambda_2 (Lambda) (None, 66, 200, 3) 0 lambda_1[0][0]
____________________________________________________________________________________________________
convolution2d_1 (Convolution2D) (None, 31, 98, 24) 1824 lambda_2[0][0]
____________________________________________________________________________________________________
spatialdropout2d_1 (SpatialDropo (None, 31, 98, 24) 0 convolution2d_1[0][0]
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D) (None, 14, 47, 36) 21636 spatialdropout2d_1[0][0]
____________________________________________________________________________________________________
spatialdropout2d_2 (SpatialDropo (None, 14, 47, 36) 0 convolution2d_2[0][0]
____________________________________________________________________________________________________
convolution2d_3 (Convolution2D) (None, 5, 22, 48) 43248 spatialdropout2d_2[0][0]
____________________________________________________________________________________________________
spatialdropout2d_3 (SpatialDropo (None, 5, 22, 48) 0 convolution2d_3[0][0]
____________________________________________________________________________________________________
convolution2d_4 (Convolution2D) (None, 3, 20, 64) 27712 spatialdropout2d_3[0][0]
____________________________________________________________________________________________________
spatialdropout2d_4 (SpatialDropo (None, 3, 20, 64) 0 convolution2d_4[0][0]
____________________________________________________________________________________________________
convolution2d_5 (Convolution2D) (None, 1, 18, 64) 36928 spatialdropout2d_4[0][0]
____________________________________________________________________________________________________
spatialdropout2d_5 (SpatialDropo (None, 1, 18, 64) 0 convolution2d_5[0][0]
____________________________________________________________________________________________________
flatten_1 (Flatten) (None, 1152) 0 spatialdropout2d_5[0][0]
____________________________________________________________________________________________________
dropout_1 (Dropout) (None, 1152) 0 flatten_1[0][0]
____________________________________________________________________________________________________
activation_1 (Activation) (None, 1152) 0 dropout_1[0][0]
____________________________________________________________________________________________________
dense_1 (Dense) (None, 100) 115300 activation_1[0][0]
____________________________________________________________________________________________________
dropout_2 (Dropout) (None, 100) 0 dense_1[0][0]
____________________________________________________________________________________________________
dense_2 (Dense) (None, 50) 5050 dropout_2[0][0]
____________________________________________________________________________________________________
dense_3 (Dense) (None, 10) 510 dense_2[0][0]
____________________________________________________________________________________________________
dropout_3 (Dropout) (None, 10) 0 dense_3[0][0]
____________________________________________________________________________________________________
dense_4 (Dense) (None, 1) 11 dropout_3[0][0]
====================================================================================================
Total params: 252,219
Trainable params: 252,219
Non-trainable params: 0
____________________________________________________________________________________________________
None
Epoch 1/5
19200/19200 [==============================] - 795s - loss: 0.0292 - val_loss: 0.0128
Epoch 2/5
19200/19200 [==============================] - 754s - loss: 0.0169 - val_loss: 0.0120
Epoch 3/5
19200/19200 [==============================] - 753s - loss: 0.0161 - val_loss: 0.0114
Epoch 4/5
19200/19200 [==============================] - 723s - loss: 0.0154 - val_loss: 0.0100
Epoch 5/5
19200/19200 [==============================] - 1597s - loss: 0.0151 - val_loss: 0.0098
Both training and validation loss are decreasing, which looks like good news at first sight. But how can the training loss already be so low in the first epoch? And how can the validation loss be even lower? Is this an indication of a systematic error somewhere in my model or training setup?
1 Answer
Actually - a validation loss smaller than the training loss is not as rare as one might think. It can happen, for example, when every example in your validation data is well covered by examples in your training set, and your network has simply learned the actual structure of the dataset.
It happens frequently when the structure of the data is not very complex. In fact - the surprisingly small loss after the first epoch may be a clue that this is what is happening in your case.
As for the small loss - you haven't stated what your loss function is, but since your task is regression I guess it's `mse`. In that case, a mean squared error at the 0.01 level means that the average Euclidean distance between predicted and true values is about 0.1, which is 5% of the diameter of the [-1, 1] target range. So - is this error really that small? You also haven't specified how many batches are processed in one epoch. Maybe if the structure of your data is not that complex and the batches are small - one epoch is enough for the network to learn the data well.
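The back-of-envelope arithmetic above can be checked directly; this is just the answer's reasoning written out, not part of the original training code:

```python
import math

# MSE reported in the log is around 0.01 on targets in [-1, 1].
mse = 0.01

# RMSE gives the typical prediction error in the original units.
rmse = math.sqrt(mse)             # ≈ 0.1

# The targets span the interval [-1, 1], a diameter of 2.0.
diameter = 1.0 - (-1.0)

# Typical error as a fraction of the target range.
relative_error = rmse / diameter  # 0.1 / 2.0 = 0.05, i.e. 5%

print(f"RMSE = {rmse}, relative error = {relative_error:.0%}")
```

Whether 5% of the target range is "good" depends entirely on the application, which is the answer's point.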
To check whether your model is well trained, I recommend making a correlation plot with `y_pred` on the X axis and `y_true` on the Y axis. Then you will actually see how well your model has been trained.
EDIT: As Neil mentioned - there may be more reasons for a small validation error, e.g. the cases not being well separated. I would also add that - since 5 epochs took no more than 90 minutes - it might be worth checking your model's results with a classic cross-validation scheme, e.g. 5-fold. That could reassure you that your model performs well on your dataset.