
Freezing layers when using multi_gpu_model in Keras


I am trying to fine-tune a modified InceptionV3 model in Keras.

I followed the example "Fine-tune InceptionV3 on a new set of classes" on this page.

So I first trained the top dense layers that I added on top of the InceptionV3 base model, using the following code:

model = Model(inputs=base_model.input, outputs=predictions)

# freeze all layers of the InceptionV3 base so only the new dense top layers are trained
for layer in base_model.layers:
    layer.trainable = False

# wrap the model for data-parallel training on 2 GPUs
parallel_model = multi_gpu_model(model, gpus=2)

parallel_model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

history = parallel_model.fit_generator(generate_batches(path), steps_per_epoch=num_images // batch_size, epochs=num_epochs)

After that, I tried to fine-tune the top 2 inception blocks of InceptionV3. According to the example, what I should do is:

for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True

model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy')

model.fit_generator(...)

But I am using multi_gpu_model, so I don't know how to freeze the first 249 layers.

What I mean is: if I freeze the layers on the non-GPU model (as in the example) and then call parallel_model = multi_gpu_model(model, gpus=2) again so that the layers in parallel_model are frozen, then the weights of the top dense layers that were just trained and are contained in parallel_model would be overwritten, right?

On the other hand, I tried to use for layer in parallel_model.layers[:249]: layer.trainable = False directly, but when I inspected the layers of parallel_model, it showed:

for i, layer in enumerate(parallel_model.layers):
    print(i, layer.name)

(0, 'input_1')
(1, 'lambda_1')
(2, 'lambda_2')
(3, 'model_1')
(4, 'dense_3')

So what are the 'lambda_1', 'lambda_2' and 'model_1' layers, and why does parallel_model only show 5 layers?

More importantly, how can I freeze the layers in parallel_model?

1 Answer


    This example is a bit involved, because you are nesting a base model

    base_model = InceptionV3(weights='imagenet', include_top=False)
    

    inside a model that adds your own dense layers,

    model = Model(inputs=base_model.input, outputs=predictions)
    

    and then calling multi_gpu_model, which nests the model one more time: it uses Lambda layers to split each input batch across the GPUs, runs the nested model on every slice, and concatenates the outputs, so that the model is distributed over multiple GPUs.

    parallel_model = multi_gpu_model(model, gpus=2)
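
    Those 'lambda_1'/'lambda_2' layers in your printout are the per-GPU input slices, and 'model_1' is your entire original model nested as a single layer. As a minimal sketch (assuming Keras 2.x and that the nested model really is named 'model_1', as in your output), you can still reach and freeze the InceptionV3 layers through it; they are the same layer objects as base_model.layers, so this is equivalent to toggling them on base_model as the full example below does:

    inner_model = parallel_model.get_layer('model_1')    # the nested original model
    print(len(inner_model.layers))                        # all InceptionV3 layers plus the new dense top
    for layer in inner_model.layers[:249]:
        layer.trainable = False
    for layer in inner_model.layers[249:]:
        layer.trainable = True
    # recompile the parallel model afterwards so the new trainable flags take effect
    parallel_model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy')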
    

    Two things to keep in mind in this situation: change the trainable flag on the layers of base_model, and build the non-parallel (template) model on your CPU for best performance.

    Here is a complete fine-tuning example; just update train_data_dir so it points to your own data location.

    import tensorflow as tf
    from keras import Model
    from keras.applications.inception_v3 import InceptionV3, preprocess_input
    from keras.layers import Dense, GlobalAveragePooling2D
    from keras.optimizers import SGD
    from keras.preprocessing.image import ImageDataGenerator
    from keras.utils import multi_gpu_model
    
    train_data_dir = '/home/ubuntu/work/data/train'
    batch_size_per_gpu = 32
    nb_classes = 3
    my_gpus = 2
    target_size = (224, 224)
    num_epochs_to_fit_dense_layer = 2
    num_epochs_to_fit_last_two_blocks = 3
    
    batch_size = batch_size_per_gpu * my_gpus
    train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
    train_iterator = train_datagen.flow_from_directory(
        train_data_dir,
        target_size=target_size,
        batch_size=batch_size,
        class_mode='categorical',
        shuffle=True)
    
    # Check to make sure our model will match our data
    assert nb_classes == train_iterator.num_classes
    
    # Create base and template models on cpu
    with tf.device('/cpu:0'):
        base_model = InceptionV3(weights='imagenet', include_top=False)
        for layer in base_model.layers:
            layer.trainable = False
    
        # Add prediction layer to base pre-trained model
        x = base_model.output
        x = GlobalAveragePooling2D()(x)
        x = Dense(1024, activation='relu')(x)
        predictions = Dense(nb_classes, activation='softmax')(x)
    
        template_model = Model(inputs=base_model.input, outputs=predictions)
    
        # If you need to load weights from previous training, do so here:
        # template_model.load_weights('template_model.h5', by_name=True)
    
    # Create parallel model on GPUs
    parallel_model = multi_gpu_model(template_model, gpus=my_gpus)
    parallel_model.compile(optimizer='adam', loss='categorical_crossentropy')
    
    # Train parallel model.
    history = parallel_model.fit_generator(
        train_iterator,
        steps_per_epoch=train_iterator.n // batch_size,
        epochs=num_epochs_to_fit_dense_layer)
    
    # Unfreeze some layers in our model
    for layer in base_model.layers[:249]:
        layer.trainable = False
    for layer in base_model.layers[249:]:
        layer.trainable = True
    
    # Train parallel_model with more trainable layers
    parallel_model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy')
    history2 = parallel_model.fit_generator(
        train_iterator,
        steps_per_epoch=train_iterator.n // batch_size,
        epochs=num_epochs_to_fit_last_two_blocks)
    
    # Save model via the template model which shares the same weights as the parallel model.
    template_model.save('template_model.h5')
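
    As an optional follow-up sketch (not part of the example above): template_model.save() wrote an ordinary single-device model, so it can be reloaded later for inference without multi_gpu_model. The dummy batch below is a hypothetical stand-in just to illustrate the shapes:

    import numpy as np
    from keras.models import load_model

    restored = load_model('template_model.h5')                           # plain single-device model
    dummy_batch = np.zeros((4,) + target_size + (3,), dtype='float32')   # hypothetical preprocessed images
    probs = restored.predict(dummy_batch)
    print(probs.shape)                                                    # (4, nb_classes)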
    
