
Loss increasing while validation accuracy is also increasing


I am training a CNN with Keras and TensorFlow for binary image classification (15k samples per class).

Here is my model:

# imports needed to make this snippet runnable
from keras.models import Sequential
from keras.layers import (Conv2D, BatchNormalization, Activation,
                          MaxPooling2D, Dropout, Flatten, Dense)
from keras import regularizers

# input layer : first conv layer
model = Sequential()
model.add(Conv2D(filters=32,
                 kernel_size=(5,5),
                 input_shape=(256,256,3),
                 padding='same',
                 kernel_regularizer=regularizers.l2(0.0001)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.1))

# second conv layer
model.add(Conv2D(filters=64,
                 kernel_size=(5,5),
                 padding='same',
                 kernel_regularizer=regularizers.l2(0.0001)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.2))
# third layer
model.add(Conv2D(filters=128,
                 kernel_size=(5,5),
                 padding='same',
                 kernel_regularizer=regularizers.l2(0.0001)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.3))
# fourth layer : FC layer
model.add(Flatten())
model.add(Dense(128,kernel_regularizer=regularizers.l2(0.0001)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.5))
# prediction layer
model.add(Dense(2,activation='softmax',name='prediction',kernel_regularizer=regularizers.l2(0.0001)))

I am using Adam (with the default values given in the Keras documentation) as the optimizer. When I started training the model, it began to behave strangely.

Epoch 14/180 191s - loss: 0.7426 - acc: 0.7976 - val_loss: 0.7306 - val_acc: 0.7739

Epoch 15/180 191s - loss: 0.7442 - acc: 0.8034 - val_loss: 0.7284 - val_acc: 0.8018

Epoch 16/180 192s - loss: 0.7439 - acc: 0.8187 - val_loss: 0.7516 - val_acc: 0.8103

Epoch 17/180 191s - loss: 0.7401 - acc: 0.8323 - val_loss: 0.7966 - val_acc: 0.7945

Epoch 18/180 192s - loss: 0.7451 - acc: 0.8392 - val_loss: 0.7601 - val_acc: 0.8328

Epoch 19/180 191s - loss: 0.7653 - acc: 0.8471 - val_loss: 0.7776 - val_acc: 0.8243

Epoch 20/180 191s - loss: 0.7514 - acc: 0.8553 - val_loss: 0.8367 - val_acc: 0.8170

Epoch 21/180 191s - loss: 0.7580 - acc: 0.8601 - val_loss: 0.8336 - val_acc: 0.8219

Epoch 22/180 192s - loss: 0.7639 - acc: 0.8676 - val_loss: 0.8226 - val_acc: 0.8438

Epoch 23/180 191s - loss: 0.7599 - acc: 0.8767 - val_loss: 0.8618 - val_acc: 0.8280

Epoch 24/180 191s - loss: 0.7632 - acc: 0.8761 - val_loss: 0.8367 - val_acc: 0.8426

Epoch 25/180 191s - loss: 0.7651 - acc: 0.8769 - val_loss: 0.8520 - val_acc: 0.8365

Epoch 26/180 191s - loss: 0.7713 - acc: 0.8815 - val_loss: 0.8770 - val_acc: 0.8316

and so on...

The loss is increasing and the accuracy is also increasing (both training and validation).
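(For context, this combination is not contradictory by itself: accuracy only checks whether the predicted probability crosses 0.5, while cross-entropy heavily penalizes confident mistakes. A toy illustration with made-up probabilities, not taken from the training run:)

```python
import math

def mean_xent(probs_true):
    # mean cross-entropy, given each sample's predicted probability
    # of its TRUE class
    return sum(-math.log(p) for p in probs_true) / len(probs_true)

def accuracy(probs_true):
    # a sample counts as correct when the true class gets > 0.5
    return sum(p > 0.5 for p in probs_true) / len(probs_true)

# Hypothetical predicted probabilities of the true class, two epochs:
epoch_a = [0.45, 0.45, 0.90]   # one confident hit, two narrow misses
epoch_b = [0.55, 0.55, 0.05]   # two narrow hits, one confident miss

print(accuracy(epoch_a), round(mean_xent(epoch_a), 2))  # 0.333... 0.57
print(accuracy(epoch_b), round(mean_xent(epoch_b), 2))  # 0.666... 1.4
```

Between these two "epochs" accuracy rises from 1/3 to 2/3 while the mean loss rises from 0.57 to 1.40, because the single confident miss dominates the loss.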

As I am using a softmax classifier, it is logical for the starting loss to be ~0.69 (-ln(0.5)), but here the loss is higher than that.
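(The chance-level figure checks out; note also that the loss Keras reports is not pure cross-entropy once regularizers are attached:)

```python
import math

# With balanced binary classes, a model that outputs 0.5 for every sample
# has a cross-entropy of -ln(0.5) = ln(2) per sample: the chance level.
chance_loss = -math.log(0.5)
print(round(chance_loss, 4))  # 0.6931

# The "loss" Keras prints is this cross-entropy term PLUS the L2 penalties
# contributed by every kernel_regularizer in the model. That offset can
# put the reported loss above 0.69 from the start, and it can grow as the
# weights grow even while the cross-entropy term and accuracy improve.
```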

I am confused about whether this is overfitting. Can anyone tell me what is happening here?

Thanks in advance :)

1 Answer


    For binary classification, you can try changing your prediction layer to this:

    model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
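    One practical note on this suggestion (an assumption about the questioner's setup, since the labels are not shown): a 2-unit softmax with `categorical_crossentropy` expects one-hot labels, whereas a single sigmoid unit with `binary_crossentropy` expects a single 0/1 column, so the labels need converting, e.g.:

```python
# Converting one-hot labels (used with Dense(2) + softmax) into the
# single 0/1 column that Dense(1) + sigmoid / binary_crossentropy
# expects. The four labels below are made-up illustrative values.
one_hot_labels = [[1, 0], [0, 1], [0, 1], [1, 0]]
binary_labels = [row[1] for row in one_hot_labels]  # 1 iff class "1" is hot
print(binary_labels)  # [0, 1, 1, 0]
```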
    
