
Keras CNN accuracy and loss stay constant


I'm building a Keras CNN model using transfer learning with ResNet50. For some reason, my accuracy and loss are exactly the same every epoch. Strangely, I saw the same behavior with similar code that used VGG19, which leads me to believe the problem isn't in the actual model code but somewhere in the preprocessing. I've tried adjusting the learning rate, changing the optimizer, changing the image resolution, freezing layers (a rough sketch of the freezing step follows the P.S.), and so on, and the scores don't change. I went into my image directories and checked whether my two classes were mixed together, but they're not. What could be wrong? Thanks in advance.

P.S. I'm training on roughly 2000 images and have two classes.
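
In case it matters, the layer-freezing attempt looked roughly like this (a sketch rather than the exact code I ran; base_model and model are the ones defined in the full code below):

for layer in base_model.layers:
    layer.trainable = False  # freeze all pretrained ResNet50 weights
# recompile so the change to trainable takes effect
model.compile(loss='binary_crossentropy', optimizer=optimizers.SGD(lr=0.1),
              metrics=['accuracy'])

The full code: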

import numpy as np
import pandas as pd

import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from keras.models import Sequential, Model, load_model
from keras.layers import Conv2D, GlobalAveragePooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import applications
from keras import optimizers

img_height, img_width, img_channel = 400, 400, 3 #change channels to 1 instead of 3 since the images are black and white

base_model = applications.ResNet50(weights='imagenet', include_top=False, input_shape=(img_height, img_width, img_channel))

# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(512, activation='relu',name='fc-1')(x)
#x = Dropout(0.5)(x)
x = Dense(256, activation='relu',name='fc-2')(x)
#x = Dropout(0.5)(x)
# and a logistic layer -- let's say we have 2 classes
predictions = Dense(1, activation='softmax', name='output_layer')(x)

model = Model(inputs=base_model.input, outputs=predictions)
model.compile(loss='binary_crossentropy', optimizer=optimizers.SGD(lr=0.1),
              metrics=['accuracy'])

model.summary()

from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint

batch_size = 6

# prepare data augmentation configuration
train_datagen = ImageDataGenerator(
        rescale=1./255,
        rotation_range=20,
        width_shift_range=0.1,
        height_shift_range=0.1,
        shear_range=0.1,
        zoom_range=0.1,
        horizontal_flip=True,
        vertical_flip=True)


test_datagen = ImageDataGenerator(rescale=1./255)

#possibly resize the images here
train_generator = train_datagen.flow_from_directory(
        "../Train/",
        target_size=(img_height, img_width),
        batch_size=batch_size,
        class_mode='binary',
        shuffle=True
)

validation_generator = test_datagen.flow_from_directory(
        "../Test/",
        target_size=(img_height, img_width),
        batch_size=batch_size,
        class_mode='binary',
        shuffle=True)

epochs = 10

history = model.fit_generator(
        train_generator,
        steps_per_epoch=2046 // batch_size,
        epochs=epochs,
        validation_data=validation_generator,
        validation_steps=512 // batch_size,
        callbacks=[ModelCheckpoint('snapshots/ResNet50-transferlearning.model', monitor='val_acc', save_best_only=True)])

Here is the output that Keras gives:

Epoch 1/10
341/341 [==============================] - 59s 172ms/step - loss: 7.0517 - acc: 0.5577 - val_loss: 7.0334 - val_acc: 0.5588
Epoch 2/10
341/341 [==============================] - 57s 168ms/step - loss: 7.0517 - acc: 0.5577 - val_loss: 7.0334 - val_acc: 0.5588
Epoch 3/10
341/341 [==============================] - 56s 165ms/step - loss: 7.0517 - acc: 0.5577 - val_loss: 7.0334 - val_acc: 0.5588
Epoch 4/10
341/341 [==============================] - 57s 168ms/step - loss: 7.0517 - acc: 0.5577 - val_loss: 7.0334 - val_acc: 0.5588
Epoch 5/10
341/341 [==============================] - 57s 167ms/step - loss: 7.0517 - acc: 0.5577 - val_loss: 7.0334 - val_acc: 0.5588

2 Answers

  • 1

    The last layer should have a 'sigmoid' activation instead of softmax, since this is binary classification.
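
    A minimal sketch of that change, keeping everything else from the question's code as-is:

    # one output unit + sigmoid pairs with binary_crossentropy and class_mode='binary'
    predictions = Dense(1, activation='sigmoid', name='output_layer')(x)

    model = Model(inputs=base_model.input, outputs=predictions)
    model.compile(loss='binary_crossentropy', optimizer=optimizers.SGD(lr=0.1),
                  metrics=['accuracy'])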

  • 1
    predictions = Dense(1, activation='softmax', name='output_layer')(x)
    

    The size of the Dense layer should be the number of different classes you want to classify, so for binary classification you need 2, whereas you have written 1.

    So change that line to:

    predictions = Dense(2, activation='softmax', name='output_layer')(x)
    

    Just a note: always try to keep a variable that holds the number of classes, like

    predictions = Dense(num_classes, activation='softmax', name='output_layer')(x)
    

    and define num_classes at the beginning of your code, for better flexibility and readability.

    You can check the documentation for the Dense layer here: https://faroit.github.io/keras-docs/2.0.0/layers/core/
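
    Note that if you switch to two softmax units, the loss and the label format have to match as well; otherwise the shapes won't line up with the binary_crossentropy loss and class_mode='binary' generators from the question. A sketch of the pieces that would change (based on the question's code, untested here):

    num_classes = 2
    predictions = Dense(num_classes, activation='softmax', name='output_layer')(x)

    model = Model(inputs=base_model.input, outputs=predictions)
    # a 2-way softmax pairs with a categorical loss and one-hot labels
    model.compile(loss='categorical_crossentropy',
                  optimizer=optimizers.SGD(lr=0.1),
                  metrics=['accuracy'])

    # the generators must then emit one-hot labels instead of a single 0/1 column
    train_generator = train_datagen.flow_from_directory(
            "../Train/",
            target_size=(img_height, img_width),
            batch_size=batch_size,
            class_mode='categorical',
            shuffle=True)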
