I actually want to get a Sequential version of the VGG16 model in Keras. The functional version can be obtained like this:
from __future__ import division, print_function
import os, json
from glob import glob
import numpy as np
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
from keras import backend as K
from keras.layers.normalization import BatchNormalization
from keras.utils.data_utils import get_file
from keras.models import Sequential
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.layers.pooling import GlobalAveragePooling2D
from keras.optimizers import SGD, RMSprop, Adam
from keras.preprocessing import image
import keras
import keras.applications.vgg16
from keras.layers import Input
input_tensor = Input(shape=(224, 224, 3))
VGG_model = keras.applications.vgg16.VGG16(weights='imagenet', include_top=True, input_tensor=input_tensor)
Its summary looks like this:
VGG_model.summary()
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 224, 224, 3) 0
____________________________________________________________________________________________________
block1_conv1 (Convolution2D) (None, 224, 224, 64) 1792 input_1[0][0]
____________________________________________________________________________________________________
block1_conv2 (Convolution2D) (None, 224, 224, 64) 36928 block1_conv1[0][0]
____________________________________________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0 block1_conv2[0][0]
____________________________________________________________________________________________________
block2_conv1 (Convolution2D) (None, 112, 112, 128) 73856 block1_pool[0][0]
____________________________________________________________________________________________________
block2_conv2 (Convolution2D) (None, 112, 112, 128) 147584 block2_conv1[0][0]
____________________________________________________________________________________________________
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0 block2_conv2[0][0]
____________________________________________________________________________________________________
block3_conv1 (Convolution2D) (None, 56, 56, 256) 295168 block2_pool[0][0]
____________________________________________________________________________________________________
block3_conv2 (Convolution2D) (None, 56, 56, 256) 590080 block3_conv1[0][0]
____________________________________________________________________________________________________
block3_conv3 (Convolution2D) (None, 56, 56, 256) 590080 block3_conv2[0][0]
____________________________________________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0 block3_conv3[0][0]
____________________________________________________________________________________________________
block4_conv1 (Convolution2D) (None, 28, 28, 512) 1180160 block3_pool[0][0]
____________________________________________________________________________________________________
block4_conv2 (Convolution2D) (None, 28, 28, 512) 2359808 block4_conv1[0][0]
____________________________________________________________________________________________________
block4_conv3 (Convolution2D) (None, 28, 28, 512) 2359808 block4_conv2[0][0]
____________________________________________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0 block4_conv3[0][0]
____________________________________________________________________________________________________
block5_conv1 (Convolution2D) (None, 14, 14, 512) 2359808 block4_pool[0][0]
____________________________________________________________________________________________________
block5_conv2 (Convolution2D) (None, 14, 14, 512) 2359808 block5_conv1[0][0]
____________________________________________________________________________________________________
block5_conv3 (Convolution2D) (None, 14, 14, 512) 2359808 block5_conv2[0][0]
____________________________________________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0 block5_conv3[0][0]
____________________________________________________________________________________________________
flatten (Flatten) (None, 25088) 0 block5_pool[0][0]
____________________________________________________________________________________________________
fc1 (Dense) (None, 4096) 102764544 flatten[0][0]
____________________________________________________________________________________________________
fc2 (Dense) (None, 4096) 16781312 fc1[0][0]
____________________________________________________________________________________________________
predictions (Dense) (None, 1000) 4097000 fc2[0][0]
====================================================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
____________________________________________________________________________________________________
According to this GitHub issue, https://github.com/fchollet/keras/issues/3190, calling
Sequential(layers=functional_model.layers)
can convert a functional model into a sequential one. However, if I do this:
model = Sequential(layers=VGG_model.layers)
model.summary()
it results in
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 224, 224, 3) 0
____________________________________________________________________________________________________
block1_conv1 (Convolution2D) (None, 224, 224, 64) 1792 input_1[0][0]
input_1[0][0]
input_1[0][0]
____________________________________________________________________________________________________
block1_conv2 (Convolution2D) (None, 224, 224, 64) 36928 block1_conv1[0][0]
block1_conv1[1][0]
block1_conv1[2][0]
____________________________________________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0 block1_conv2[0][0]
block1_conv2[1][0]
block1_conv2[2][0]
____________________________________________________________________________________________________
block2_conv1 (Convolution2D) (None, 112, 112, 128) 73856 block1_pool[0][0]
block1_pool[1][0]
block1_pool[2][0]
____________________________________________________________________________________________________
block2_conv2 (Convolution2D) (None, 112, 112, 128) 147584 block2_conv1[0][0]
block2_conv1[1][0]
block2_conv1[2][0]
____________________________________________________________________________________________________
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0 block2_conv2[0][0]
block2_conv2[1][0]
block2_conv2[2][0]
____________________________________________________________________________________________________
block3_conv1 (Convolution2D) (None, 56, 56, 256) 295168 block2_pool[0][0]
block2_pool[1][0]
block2_pool[2][0]
____________________________________________________________________________________________________
block3_conv2 (Convolution2D) (None, 56, 56, 256) 590080 block3_conv1[0][0]
block3_conv1[1][0]
block3_conv1[2][0]
____________________________________________________________________________________________________
block3_conv3 (Convolution2D) (None, 56, 56, 256) 590080 block3_conv2[0][0]
block3_conv2[1][0]
block3_conv2[2][0]
____________________________________________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0 block3_conv3[0][0]
block3_conv3[1][0]
block3_conv3[2][0]
____________________________________________________________________________________________________
block4_conv1 (Convolution2D) (None, 28, 28, 512) 1180160 block3_pool[0][0]
block3_pool[1][0]
block3_pool[2][0]
____________________________________________________________________________________________________
block4_conv2 (Convolution2D) (None, 28, 28, 512) 2359808 block4_conv1[0][0]
block4_conv1[1][0]
block4_conv1[2][0]
____________________________________________________________________________________________________
block4_conv3 (Convolution2D) (None, 28, 28, 512) 2359808 block4_conv2[0][0]
block4_conv2[1][0]
block4_conv2[2][0]
____________________________________________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0 block4_conv3[0][0]
block4_conv3[1][0]
block4_conv3[2][0]
____________________________________________________________________________________________________
block5_conv1 (Convolution2D) (None, 14, 14, 512) 2359808 block4_pool[0][0]
block4_pool[1][0]
block4_pool[2][0]
____________________________________________________________________________________________________
block5_conv2 (Convolution2D) (None, 14, 14, 512) 2359808 block5_conv1[0][0]
block5_conv1[1][0]
block5_conv1[2][0]
____________________________________________________________________________________________________
block5_conv3 (Convolution2D) (None, 14, 14, 512) 2359808 block5_conv2[0][0]
block5_conv2[1][0]
block5_conv2[2][0]
____________________________________________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0 block5_conv3[0][0]
block5_conv3[1][0]
block5_conv3[2][0]
____________________________________________________________________________________________________
flatten (Flatten) (None, 25088) 0 block5_pool[0][0]
block5_pool[1][0]
block5_pool[2][0]
____________________________________________________________________________________________________
fc1 (Dense) (None, 4096) 102764544 flatten[0][0]
flatten[1][0]
flatten[2][0]
____________________________________________________________________________________________________
fc2 (Dense) (None, 4096) 16781312 fc1[0][0]
fc1[1][0]
fc1[2][0]
____________________________________________________________________________________________________
predictions (Dense) (None, 1000) 4097000 fc2[0][0]
fc2[1][0]
fc2[2][0]
====================================================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
____________________________________________________________________________________________________
This differs from the original functional model, because each new layer is now connected to the previous layer three times. People say the functional model is more powerful, but all I want to do is pop the final prediction layer. Can the functional model not do that?
2 Answers
You can drop the last layer by defining another
Model
that takes the previous layer as its output: this model will share exactly the same weights as the original, and training will affect both models.
You can add your own layers (or even models) after poppedModel, no problem.
Importantly, though, if you intend to train the new layers in
anotherModel
without affecting the VGG weights, then before compiling anotherModel
you must set poppedModel.trainable = False
and, for each layer inside it, poppedModel.layers[i].trainable = False.
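Put together, this answer's recipe might look like the sketch below. It uses weights=None purely to skip the ImageNet download (use weights='imagenet' as in the question for the pretrained weights), and the 10-class head is a hypothetical example:

```python
from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Dense

# Full functional VGG16 (weights=None here only to avoid the download;
# the question uses weights='imagenet').
VGG_model = VGG16(weights=None, include_top=True)

# "Pop" the prediction layer by defining another Model whose output is
# the layer just before it (fc2). It shares its weights with VGG_model.
poppedModel = Model(VGG_model.input, VGG_model.layers[-2].output)

# Freeze the popped model and each of its layers, so that compiling and
# training anotherModel leaves the VGG weights untouched.
poppedModel.trainable = False
for layer in poppedModel.layers:
    layer.trainable = False

# Add a new head on top (hypothetical 10-class classifier).
new_output = Dense(10, activation='softmax', name='new_predictions')(poppedModel.output)
anotherModel = Model(poppedModel.input, new_output)
anotherModel.compile(optimizer='sgd', loss='categorical_crossentropy')
```

Because poppedModel and VGG_model share layers, any weight update to one is visible in the other, which is why the freezing step matters.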
I also struggled with this. The previous poster was almost there, but left out one particular detail that had stumped me before. You can indeed do a "pop" even on a model created with the Functional API, but it takes a little more work.
Here is my model (just plain vanilla VGG16).
Then I "popped" the last layer, but without using pop() — just the Functional API.
This creates a new model (new_model) in which the last layer is replaced and the old layers are frozen (non-trainable).
The tricky part is taking .output of the last remaining layer, because that gives you a Tensor. You then use that Tensor as the input to a new Dense layer, which becomes the final output of the new model...
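The code for this answer was not preserved, but it presumably looked something like the following sketch (again weights=None just to skip the download, and the 2-class replacement head is purely an illustration):

```python
from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Dense

# Plain vanilla VGG16 built with the Functional API.
model = VGG16(weights=None, include_top=True)

# "Pop" the last layer without pop(): take the output *tensor* of the
# second-to-last layer (fc2). Being a Tensor, it can feed a new layer.
fc2_output = model.layers[-2].output

# Freeze the old layers so only the replacement head will train.
for layer in model.layers:
    layer.trainable = False

# Use the tensor as input to a new Dense layer, which becomes the final
# output of the new model (2 classes here, purely for illustration).
new_predictions = Dense(2, activation='softmax', name='new_predictions')(fc2_output)
new_model = Model(inputs=model.input, outputs=new_predictions)
```

This sidesteps the Sequential conversion entirely: the triple connections in the question's summary come from the shared layers having been called multiple times, and rebuilding via Model avoids calling them again.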
Hope that helps...