
Deep learning: good results on the training set, poor results on the validation set


I am facing a problem and I am having trouble understanding why it behaves this way.

I am trying to do binary image classification with a pre-trained ResNet50 (Keras) model, and I also built a simple CNN. I have about 8k balanced RGB images of roughly 200x200 pixels, which I split into three subsets (training 70%, validation 15%, test 15%).
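The post does not show how the split was done; a minimal numpy-only sketch of a 70/15/15 split over a single shuffled index array could look like this:

```python
import numpy as np

def split_indices(n, train=0.70, val=0.15, seed=42):
    """Shuffle indices once, then cut them into train/validation/test
    parts (70/15/15 here). Returns three disjoint index arrays."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(n * train)
    n_val = int(n * val)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

tr, va, te = split_indices(8000)
print(len(tr), len(va), len(te))  # → 5600 1200 1200
```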

I built a generator based on keras.utils.Sequence to feed the data to my model.

The problem I run into is that my models tend to learn the training set but score poorly on the validation set, with both the pre-trained ResNet50 and the simple CNN. I tried several things to solve this, with no improvement at all:

  • With and without data augmentation (rotations) on the training set

  • Normalizing the images to [0, 1]

  • With and without a regularizer

  • Varying the learning rate
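The first two items can be sketched with plain numpy (a toy stand-in: np.rot90 rotates by a random multiple of 90°, whereas Keras' random_rotation uses arbitrary angles):

```python
import numpy as np

def normalize(img):
    """Scale pixel values into [0, 1] using the image's own maximum."""
    return img / np.max(img)

def augment(img, rng):
    """Toy rotation augmentation: rotate by a random multiple of 90 degrees.
    (Keras' random_rotation rotates by arbitrary angles instead.)"""
    k = int(rng.integers(0, 4))  # 0, 90, 180 or 270 degrees
    return np.rot90(img, k)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(200, 200, 3)).astype("float32")
x = normalize(augment(img, rng))
print(x.shape)  # → (200, 200, 3); values now lie in [0, 1]
```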

Here is an example of the results obtained:

Epoch 1/200
716/716 [==============================] - 320s 447ms/step - loss: 8.6096 - acc: 0.4728 - val_loss: 8.6140 - val_acc: 0.5335

Epoch 00001: val_loss improved from inf to 8.61396, saving model to ../models_saved/resnet_adam_best.h5
Epoch 2/200
716/716 [==============================] - 287s 401ms/step - loss: 8.1217 - acc: 0.5906 - val_loss: 10.9314 - val_acc: 0.4632

Epoch 00002: val_loss did not improve from 8.61396
Epoch 3/200
716/716 [==============================] - 249s 348ms/step - loss: 7.5357 - acc: 0.6695 - val_loss: 11.1432 - val_acc: 0.4657

Epoch 00003: val_loss did not improve from 8.61396
Epoch 4/200
716/716 [==============================] - 284s 397ms/step - loss: 7.5092 - acc: 0.6828 - val_loss: 10.0665 - val_acc: 0.5351

Epoch 00004: val_loss did not improve from 8.61396
Epoch 5/200
716/716 [==============================] - 261s 365ms/step - loss: 7.0679 - acc: 0.7102 - val_loss: 4.2205 - val_acc: 0.5351

Epoch 00005: val_loss improved from 8.61396 to 4.22050, saving model to ../models_saved/resnet_adam_best.h5
Epoch 6/200
716/716 [==============================] - 285s 398ms/step - loss: 6.9945 - acc: 0.7161 - val_loss: 10.2276 - val_acc: 0.5335
....

Here is the class used to load the data into the model.

import random

import numpy as np
import keras
from keras.preprocessing.image import load_img, img_to_array, random_rotation
from keras.utils import to_categorical


class DataGenerator(keras.utils.Sequence):

    def __init__(self, inputs,
                 labels, img_size,
                 input_shape,
                 batch_size, num_classes,
                 validation=False):

        self.inputs = inputs
        self.labels = labels
        self.img_size = img_size
        self.input_shape = input_shape
        self.batch_size = batch_size
        self.num_classes = num_classes
        self.validation = validation
        self.indexes = np.arange(len(self.inputs))
        self.inc = 0

    def __getitem__(self, index):
        """Generate one batch of data

        Parameters
        ----------
        index :the index from which batch will be taken

        Returns
        -------
        out : a tuple that contains (inputs and labels associated)
        """
        batch_inputs = np.zeros((self.batch_size, *self.input_shape))
        batch_labels = np.zeros((self.batch_size, self.num_classes))

        # Generate data
        for i in range(self.batch_size):
            # choose random index in features

            if self.validation:
                index = self.indexes[self.inc]
                self.inc += 1
                if self.inc == len(self.inputs):
                    self.inc = 0
            else:
                index = random.randint(0, len(self.inputs) - 1)

            batch_inputs[i] = self.rgb_processing(self.inputs[index])
            batch_labels[i] = to_categorical(self.labels[index], num_classes=self.num_classes)
        return batch_inputs, batch_labels

    def __len__(self):
        """Denotes the number of batches per epoch

        Returns
        -------
        out : number of batches per epochs

        """
        return int(np.floor(len(self.inputs) / self.batch_size))


    def rgb_processing(self, path):
        # load_img returns a PIL image; img_to_array converts it to an
        # (H, W, C) numpy array
        img = load_img(path)
        rgb = img_to_array(img)

        if not self.validation:
            if random.choice([True, False]):
                # rg=20 degrees is an assumed rotation range; the axis
                # arguments match the (H, W, C) array layout
                rgb = random_rotation(rgb, rg=20, row_axis=0, col_axis=1,
                                      channel_axis=2)

        # scale by the image's own maximum so values fall in [0, 1]
        return rgb / np.max(rgb)


from keras.models import Sequential, Model
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense


class Models:

    def __init__(self, input_shape, classes):
        self.input_shape = input_shape
        self.classes = classes

    def simpleCNN(self, optimizer):
        model = Sequential()
        model.add(Conv2D(32, kernel_size=(3, 3),
                         activation='relu',
                         input_shape=self.input_shape))
        model.add(Conv2D(64, (3, 3), activation='relu'))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))
        model.add(Flatten())
        model.add(Dense(128, activation='relu'))
        model.add(Dropout(0.5))
        model.add(Dense(len(self.classes), activation='softmax'))

        model.compile(loss=keras.losses.binary_crossentropy,
                      optimizer=optimizer,
                      metrics=['accuracy'])

        return model

    def resnet50(self, optimizer):
        model = keras.applications.resnet50.ResNet50(include_top=False,
                                                     input_shape=self.input_shape,
                                                     weights='imagenet')
        model.summary()
        model.layers.pop()
        model.summary()

        for layer in model.layers:
            layer.trainable = False
        output = Flatten()(model.output)
        # I also tried adding dropout layers here with batch normalization,
        # but it does not change the results
        output = Dense(len(self.classes), activation='softmax')(output)

        finetuned_model = Model(inputs=model.input,
                                outputs=output)

        finetuned_model.compile(optimizer=optimizer,
                                loss=keras.losses.binary_crossentropy,
                                metrics=['accuracy'])

        return finetuned_model

And this is how these functions are called:

train_batches = DataGenerator(inputs=train.X.values,
                              labels=train.y.values,
                              img_size=img_size,
                              input_shape=input_shape,
                              batch_size=batch_size,
                              num_classes=len(CLASSES))

validate_batches = DataGenerator(inputs=validate.X.values,
                                 labels=validate.y.values,
                                 img_size=img_size,
                                 input_shape=input_shape,
                                 batch_size=batch_size,
                                 num_classes=len(CLASSES),
                                 validation=True)

from keras.optimizers import Adam
from keras.callbacks import EarlyStopping, ModelCheckpoint

if model_name == "cnn":
    model = models.simpleCNN(optimizer=Adam(lr=0.0001))
elif model_name == "resnet":
    model = models.resnet50(optimizer=Adam(lr=0.0001))

early_stopping = EarlyStopping(patience=15)
checkpointer = ModelCheckpoint(output_name + '_best.h5', verbose=1, save_best_only=True)

history = model.fit_generator(train_batches, steps_per_epoch=num_train_steps, epochs=epochs,
                              callbacks=[early_stopping, checkpointer], validation_data=validate_batches,
                              validation_steps=num_valid_steps)
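num_train_steps and num_valid_steps are not defined in the snippet; a common choice (an assumption here) is one full pass over each set per epoch, i.e. the same floor division that DataGenerator.__len__ computes:

```python
import math

def steps_for_one_pass(n_samples, batch_size):
    """One full pass over the data per epoch; the same floor division
    that DataGenerator.__len__ uses."""
    return math.floor(n_samples / batch_size)

# Hypothetical numbers: 70% of 8000 images, batch size 8.
print(steps_for_one_pass(5600, 8))  # → 700
```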

1 Answer

I finally found the main factor causing this overfitting. Since I was using a pre-trained model, I had set its layers as non-trainable:

    for layer in model.layers:
        layer.trainable = False

Making them trainable instead seems to solve the problem. My hypothesis is that my images are too different from the data the model was originally trained on.

I also added some dropout and batch normalization at the end of the resnet model.
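A minimal sketch of this fix, assuming tensorflow.keras. weights=None and the 0.5 dropout rate are placeholder choices to keep the sketch offline and self-contained (the post loads weights='imagenet'), and categorical_crossentropy is used here since it matches a 2-unit softmax with one-hot labels:

```python
from tensorflow.keras import Model, layers
from tensorflow.keras.applications.resnet50 import ResNet50

# weights=None keeps this sketch offline; the post loads weights='imagenet'.
base = ResNet50(include_top=False, input_shape=(200, 200, 3), weights=None)

# The fix: leave the backbone trainable instead of freezing it.
for layer in base.layers:
    layer.trainable = True

# New head with the dropout / batch normalization mentioned above.
x = layers.Flatten()(base.output)
x = layers.BatchNormalization()(x)
x = layers.Dropout(0.5)(x)
output = layers.Dense(2, activation='softmax')(x)

finetuned = Model(inputs=base.input, outputs=output)
finetuned.compile(optimizer='adam',
                  loss='categorical_crossentropy',  # matches 2-way softmax + one-hot labels
                  metrics=['accuracy'])
```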
