
Error when trying to use an HDF5 dataset with Keras


I get the error below when trying to use an HDF5 dataset with Keras. It looks like Sequential.fit() runs into a slice key with no "stop" value while creating the validation-data slice. I don't know whether this is a formatting problem with my HDF5 dataset or something else. Any help would be appreciated.

    Traceback (most recent call last):
      File "autoencoder.py", line 73, in <module>
        validation_split=0.2)
      File "/home/ben/.local/lib/python2.7/site-packages/keras/models.py", line 672, in fit
        initial_epoch=initial_epoch)
      File "/home/ben/.local/lib/python2.7/site-packages/keras/engine/training.py", line 1143, in fit
        x, val_x = (slice_X(x, 0, split_at), slice_X(x, split_at))
      File "/home/ben/.local/lib/python2.7/site-packages/keras/engine/training.py", line 301, in slice_X
        return [x[start:stop] for x in X]
      File "/home/ben/.local/lib/python2.7/site-packages/keras/utils/io_utils.py", line 71, in __getitem__
        if key.stop + self.start <= self.end:
    TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
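The failure can be reproduced without Keras or HDF5 at all: slicing an object with x[split_at:] produces a slice whose stop is None, and HDF5Matrix.__getitem__ in this Keras version adds key.stop to an integer offset. A minimal sketch of that interaction (the variable names are illustrative, not from Keras):

```python
# validation_split makes Keras call slice_X(x, split_at), which evaluates
# x[split_at:]; the key HDF5Matrix.__getitem__ receives is slice(split_at, None).
key = slice(80, None)
offset = 0  # stands in for HDF5Matrix's internal self.start

try:
    key.stop + offset  # key.stop is None, so the addition raises TypeError
    raised = False
except TypeError:
    raised = True

print(raised)  # the same TypeError the traceback shows
```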

    training_input = HDF5Matrix("../../media/patches/data_rotated.h5", 'training_input_rotated')
    training_target = HDF5Matrix("../../media/patches/data_rotated.h5", 'training_target_rotated')

    # Model definition
    autoencoder = Sequential()

    autoencoder.add(Convolution2D(32, 3, 3, activation='relu', border_mode='same',input_shape=(64, 64, 3)))
    autoencoder.add(MaxPooling2D((2, 2), border_mode='same'))
    autoencoder.add(Convolution2D(64, 3, 3, activation='relu', border_mode='same'))
    autoencoder.add(MaxPooling2D((2, 2), border_mode='same'))
    autoencoder.add(Convolution2D(128, 3, 3, activation='relu', border_mode='same'))
    autoencoder.add(Deconvolution2D(64, 3, 3, activation='relu', border_mode='same',output_shape=(None, 16, 16, 64),subsample=(2, 2)))
    autoencoder.add(UpSampling2D((2, 2)))
    autoencoder.add(Deconvolution2D(32, 3, 3, activation='relu', border_mode='same',output_shape=(None, 32, 32, 32),subsample=(2, 2)))
    autoencoder.add(UpSampling2D((2, 2)))
    autoencoder.add(Deconvolution2D(3, 3, 3, activation='sigmoid', border_mode='same',output_shape=(None, 64, 64, 3),subsample=(2, 2)))
    autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
    autoencoder.summary()

    # Callback configure
    csv_logger = CSVLogger('../../runs/training_' + start_time + '.log')
    prog_logger = ProgbarLogger()
    checkpointer = ModelCheckpoint(filepath='../../runs/model_' + start_time + '.hdf5', verbose=1, save_best_only=False)

    # Training call
    history = autoencoder.fit(
                    x=training_input,
                    y=training_target,
                    batch_size=256,
                    nb_epoch=1000,
                    verbose=2,
                    callbacks=[csv_logger, prog_logger, checkpointer],
                    validation_split=0.2)

1 Answer

  • 0

    I didn't fix the error itself, but I worked around it by passing validation_data instead of validation_split in the fit call.
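    A sketch of that workaround: HDF5Matrix accepts start and end arguments, so the train/validation rows can be carved out explicitly and the validation pair passed as validation_data. The row count and the helper below are illustrative, not from the original post:

    ```python
    # Hypothetical helper: compute row ranges for a holdout split, mirroring
    # the arithmetic validation_split=0.2 would have done internally.
    def split_indices(n_rows, validation_split=0.2):
        split_at = int(n_rows * (1.0 - validation_split))
        return (0, split_at), (split_at, n_rows)

    (train_start, train_end), (val_start, val_end) = split_indices(100000)

    # With those indices, build explicit HDF5Matrix views and pass them as
    # validation_data instead of using validation_split, e.g.:
    #
    #   training_input = HDF5Matrix(path, 'training_input_rotated',
    #                               start=train_start, end=train_end)
    #   val_input = HDF5Matrix(path, 'training_input_rotated',
    #                          start=val_start, end=val_end)
    #   (same for the target datasets, then:)
    #   autoencoder.fit(training_input, training_target,
    #                   validation_data=(val_input, val_target), ...)
    ```

    This sidesteps the bug because HDF5Matrix is only ever indexed with fully specified ranges, never with an open-ended slice whose stop is None.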
