I'm trying to build a CNN using ImageDataGenerator. It works, but I keep getting this error in between. Does anyone know how to fix this?
Error closing: 'Image' object has no attribute 'fp'
I'm using Python 3.5 and TensorFlow 1.12.0.
LOG
2018-12-07 18:50:07.930812: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-12-07 18:50:09.849317: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: name: Tesla P100-PCIE-16GB major: 6 minor: 0 memoryClockRate(GHz): 1.3285 pciBusID: 0000:84:00.0 totalMemory: 15.90GiB freeMemory: 15.61GiB
2018-12-07 18:50:09.849381: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2018-12-07 18:50:10.138046: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-12-07 18:50:10.138115: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2018-12-07 18:50:10.138123: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2018-12-07 18:50:10.138479: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15123 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:84:00.0, compute capability: 6.0)
2018-12-07 18:50:11.202683: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2018-12-07 18:50:11.202779: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-12-07 18:50:11.202816: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2018-12-07 18:50:11.202823: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2018-12-07 18:50:11.203189: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15123 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:84:00.0, compute capability: 6.0)
Error closing: 'Image' object has no attribute 'fp'
Found 12553 images belonging to 5004 classes.
Found 3144 images belonging to 5004 classes.
Epoch 1/1000
1/392 [..............................] - ETA: 20:55 - loss: 8.5183 - acc: 0.0000e+00
Error closing: 'Image' object has no attribute 'fp'
Error closing: 'Image' object has no attribute 'fp'
3/392 [..............................] - ETA: 7:05 - loss: 8.5180 - acc: 0.0000e+00
Error closing: 'Image' object has no attribute 'fp'
5/392 [..............................] - ETA: 4:18 - loss: 8.5180 - acc: 0.0000e+00
Error closing: 'Image' object has no attribute 'fp'
Error closing: 'Image' object has no attribute 'fp'
Error closing: 'Image' object has no attribute 'fp'
Error closing: 'Image' object has no attribute 'fp'
6/392 [..............................] - ETA: 3:40 - loss: 8.5177 - acc: 0.0000e+00
Error closing: 'Image' object has no attribute 'fp'
8/392 [..............................] - ETA: 2:47 - loss: 8.5183 - acc: 0.0000e+00
Error closing: 'Image' object has no attribute 'fp'
Error closing: 'Image' object has no attribute 'fp'
Error closing: 'Image' object has no attribute 'fp'
.....
Error closing: 'Image' object has no attribute 'fp'
Error closing: 'Image' object has no attribute 'fp'
9/392 [..............................] - ETA: 3:01 - loss: 8.5182 - acc: 0.0000e+00
Error closing: 'Image' object has no attribute 'fp'
CODE
import tensorflow as tf
import pandas as pd
import math

batch_size = 32

train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=5,
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    vertical_flip=True)
    # width_shift_range=0.2,
    # height_shift_range=0.2)
si = 250

train_generator = train_datagen.flow_from_dataframe(
    dataframe=df_train,
    directory="./train",
    color_mode="grayscale",
    has_ext=True,
    classes=classes,
    x_col="Image",
    y_col="Id",
    target_size=(si, si),    # all images will be resized to 250x250
    batch_size=batch_size,
    class_mode="categorical")  # one-hot labels, matching the categorical_crossentropy loss
test_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)

valid_generator = test_datagen.flow_from_dataframe(  # use the un-augmented test_datagen for validation
    dataframe=df_test,
    directory="./train",
    color_mode="grayscale",
    has_ext=True,
    classes=classes,
    x_col="Image",
    y_col="Id",
    target_size=(si, si),    # all images will be resized to 250x250
    batch_size=batch_size,
    class_mode="categorical")  # one-hot labels, matching the categorical_crossentropy loss
# learning rate schedule
def step_decay(epoch):
    initial_lrate = 4.0
    drop = 0.5
    epochs_drop = 10.0
    lrate = initial_lrate * math.pow(drop, math.floor((1 + epoch) / epochs_drop))
    return lrate
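The schedule above halves the learning rate every 10 epochs, starting from the base rate of 4.0 passed to the scheduler. A standalone sketch of the same decay, useful for sanity-checking the per-epoch values before handing it to `LearningRateScheduler`:

```python
import math

def step_decay(epoch):
    # identical to the schedule used in the training script above
    initial_lrate = 4.0
    drop = 0.5
    epochs_drop = 10.0
    return initial_lrate * math.pow(drop, math.floor((1 + epoch) / epochs_drop))

# the rate stays at 4.0 through epoch 8, then halves every 10 epochs
print([step_decay(e) for e in (0, 8, 9, 19, 29)])  # [4.0, 4.0, 2.0, 1.0, 0.5]
```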
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(16, kernel_size=(3, 3), activation='relu', input_shape=(si,si,1), kernel_initializer=tf.keras.initializers.glorot_normal(seed=None)))
model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer=tf.keras.initializers.glorot_normal(seed=None)))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Conv2D(8, (3, 3), activation='relu', kernel_initializer=tf.keras.initializers.glorot_normal(seed=None)))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(100, activation='relu', kernel_initializer=tf.keras.initializers.glorot_normal(seed=None)))
model.add(tf.keras.layers.Dropout(0.1))
model.add(tf.keras.layers.Dense(80, activation='relu', kernel_initializer=tf.keras.initializers.glorot_normal(seed=None)))
model.add(tf.keras.layers.Dense(len(classes), activation='softmax', kernel_initializer=tf.keras.initializers.glorot_normal(seed=None)))
model.compile(loss=tf.keras.losses.categorical_crossentropy,
              optimizer=tf.keras.optimizers.Adadelta(),
              metrics=['accuracy'])
STEP_SIZE_TRAIN=train_generator.n//train_generator.batch_size
STEP_SIZE_VALID=valid_generator.n//valid_generator.batch_size
print(STEP_SIZE_TRAIN)
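For reference, plugging in the image counts reported by `flow_from_dataframe` shows where the 392 steps in the progress bar come from; the floor division drops the last partial batch:

```python
# same arithmetic as STEP_SIZE_TRAIN / STEP_SIZE_VALID, with the counts from the log
batch_size = 32
print(12553 // batch_size)  # 392 training steps per epoch
print(3144 // batch_size)   # 98 validation steps per epoch
```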
lrate = tf.keras.callbacks.LearningRateScheduler(step_decay)
csv_logger = tf.keras.callbacks.CSVLogger('log.csv', append=True, separator=';')
filepath="weights-improvement-{epoch:02d}-{val_acc:.2f}.hdf5"
checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
model.fit_generator(
    generator=train_generator,
    steps_per_epoch=STEP_SIZE_TRAIN,
    epochs=1000,
    verbose=1,
    validation_data=valid_generator,
    validation_steps=STEP_SIZE_VALID,
    callbacks=[csv_logger, checkpoint, lrate])