
Tensorflow: How to retrieve information from the prediction Tensor?

I found a neural network for semantic segmentation purposes. The network works fine: I feed it my training, validation, and test data and I get the output (segmented parts in different colors). So far, so good. I am using Keras with Tensorflow 1.7.0 with GPU support. The Python version is 3.5.

What I want to achieve is to get access to the pixel groups (segments) so that I can obtain the image coordinates of their boundaries, i.e. an array of the points that form the boundary of, say, segment X, shown in green in the predicted image.

How can this be done? Obviously I cannot put the whole code here, but here is the snippet that I should modify to achieve what I want:

I have the following in my evaluate function:

def evaluate(model_file):
    net = load_model(model_file, custom_objects={'iou_metric': create_iou_metric(1 + len(PART_NAMES)),
                                                 'acc_metric': create_accuracy_metric(1 + len(PART_NAMES), output_mode='pixelwise_mean')})

    img_size = net.input_shape[1]
    image_filename = lambda fp: fp + '.jpg'
    d_test_x = TensorResize((img_size, img_size))(ImageSource(TEST_DATA, image_filename=image_filename))
    d_test_x = PixelwiseSubstract([103.93, 116.78, 123.68], use_lane_names=['X'])(d_test_x)
    d_test_pred = Predict(net)(d_test_x)
    d_test_pred.metadata['properties'] = ['background'] + PART_NAMES

    d_x, d_y = process_data(VALIDATION_DATA, img_size)
    d_x = PixelwiseSubstract([103.93, 116.78, 123.68], use_lane_names=['X'])(d_x)
    d_y = AddBackgroundMap(use_lane_names=['Y'])(d_y)

    d_train = Join()([d_x, d_y])
    print('losses:', net.evaluate_generator(d_train.batch_array_tuple_generator(batch_size=3), 3))

    # the tensor which needs to be modified
    pred_y = Predict(net)(d_x)
    Visualize(('slices', 'labels'))(Join()([d_test_x, d_test_pred]))
    Visualize(('slices', 'labels', 'labels'))(Join()([d_x, pred_y, d_y]))

As for the Predict function, here is the snippet:

Alternatively, I found that the tensor can be accessed by using the following:

import numpy as np

for sample_img, in d_x.batch_array_tuple_generator(batch_size=3, n_samples=5):
    aa = net.predict(sample_img)        # shape: (batch, height, width, n_classes)
    indexes = np.argmax(aa, axis=3)     # per-pixel index of the most probable class
    print(indexes)
    import pdb
    pdb.set_trace()                     # pauses here and opens an interactive debugger prompt

But I don't know how this works, and since I have never used pdb, I don't know how to proceed from there.

In case anyone wants to see the training function, here it is:

def train(model_name='refine_res', k=3, recompute=False, img_size=224,
        epochs=10, train_decoder_only=False, augmentation_boost=2, learning_rate=0.001,
        opt='rmsprop'):

    print("Traning on: " + str(PART_NAMES))
    print("In Total: " + str(1 + len(PART_NAMES)) + " parts.")

    metrics = [create_iou_metric(1 + len(PART_NAMES)),
               create_accuracy_metric(1 + len(PART_NAMES), output_mode='pixelwise_mean')]

    if model_name == 'dummy':
        net = build_dummy((224, 224, 3), 1 + len(PART_NAMES))  # 1+ because background class
    elif model_name == 'refine_res':
        net = build_resnet50_upconv_refine((img_size, img_size, 3), 1 + len(PART_NAMES), k=k, optimizer=opt, learning_rate=learning_rate, softmax_top=True,
                                           objective_function=categorical_crossentropy,
                                           metrics=metrics, train_full=not train_decoder_only)
    elif model_name == 'vgg_upconv':
        net = build_vgg_upconv((img_size, img_size, 3), 1 + len(PART_NAMES), k=k, optimizer=opt, learning_rate=learning_rate, softmax_top=True,
                               objective_function=categorical_crossentropy,metrics=metrics, train_full=not train_decoder_only)
    else:
        net = load_model(model_name)

    d_x, d_y = process_data(TRAINING_DATA, img_size, recompute=recompute, ignore_cache=False)
    d = Join()([d_x, d_y])

    # create more samples by rotating top view images and translating
    images_to_be_rotated = {}
    factor = 5
    for root, dirs, files in os.walk(TRAINING_DATA, topdown=False):
        for name in dirs:
            format = str(name + '/' + name)  # construct the format of foldername/foldername
            images_to_be_rotated.update({format: factor})

    d_aug = ImageAugmentation(factor_per_filepath_prefix=images_to_be_rotated, rotation_variance=90, recalc_base_seed=True)(d)
    d_aug = ImageAugmentation(factor=3 * augmentation_boost, color_interval=0.03, shift_interval=0.1, contrast=0.4,  recalc_base_seed=True, use_lane_names=['X'])(d_aug)
    d_aug = ImageAugmentation(factor=2, rotation_variance=20, recalc_base_seed=True)(d_aug)
    d_aug = ImageAugmentation(factor=7 * augmentation_boost, rotation_variance=10, translation=35, mirror=True, recalc_base_seed=True)(d_aug)

    # apply augmentation on the images of the training dataset only

    d_aug = AddBackgroundMap(use_lane_names=['Y'])(d_aug)
    d_aug.metadata['properties'] = ['background'] + PART_NAMES

    # substract mean and shuffle
    d_aug = Shuffle()(d_aug)
    d_aug, d_val = RandomSplit(0.8)(d_aug)
    d_aug = PixelwiseSubstract([103.93, 116.78, 123.68], use_lane_names=['X'])(d_aug)
    d_val = PixelwiseSubstract([103.93, 116.78, 123.68], use_lane_names=['X'])(d_val)

    # Visualize()(d_aug)

    d_aug.configure()
    d_val.configure()
    print('training size:', d_aug.size())
    batch_size = 4

    callbacks = []
    #callbacks += [EarlyStopping(patience=10)]
    callbacks += [ModelCheckpoint(filepath="trained_models/"+model_name + '.hdf5', monitor='val_iou_metric', mode='max',
                                  verbose=1, save_best_only=True)]
    callbacks += [CSVLogger('logs/'+model_name + '.csv')]
    history = History()
    callbacks += [history]

    # sess = K.get_session()
    # sess.run(tf.initialize_local_variables())

    net.fit_generator(d_aug.batch_array_tuple_generator(batch_size=batch_size, shuffle_samples=True), steps_per_epoch=d_aug.size() // batch_size,
                      validation_data=d_val.batch_array_tuple_generator(batch_size=batch_size), validation_steps=d_val.size() // batch_size,
                      callbacks=callbacks, epochs=epochs)

    return {k: (max(history.history[k]), min(history.history[k])) for k in history.history.keys()}

1 Answer


For a segmentation task, given that your batch is a single image, each pixel in the image is assigned a probability of belonging to each class. Say you have 5 classes and the image has 784 pixels (28x28): from net.predict you will get an output of shape (784, 5), where each of the 784 pixels is assigned 5 probability values of belonging to those classes. When you do np.argmax(aa, axis=3) (the class axis), you get the index of the highest probability for each pixel, which you can then reshape to 28x28 with indexes.reshape(28,28) to obtain the predicted mask.
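As a minimal sketch of those two steps (the 28x28 / 5-class shapes are just the ones assumed in the example above, and the random array stands in for a real prediction):

    import numpy as np

    aa = np.random.rand(784, 5)        # stand-in for a prediction: 784 pixels x 5 class probabilities
    indexes = np.argmax(aa, axis=1)    # axis=1 here because the spatial dimensions are already flattened
    mask = indexes.reshape(28, 28)     # the predicted segmentation mask

In the question's snippet the prediction keeps its batch and spatial dimensions, which is why axis=3 is used there instead.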

Reducing the problem to a 7x7 image and 4 classes (0-3), the indexes array might look like this:

    array([[2, 1, 0, 1, 2, 3, 1],
       [3, 1, 1, 0, 3, 0, 0],
       [3, 3, 2, 2, 0, 3, 1],
       [1, 1, 0, 3, 1, 3, 1],
       [0, 0, 0, 3, 3, 1, 0],
       [1, 2, 3, 0, 1, 2, 3],
       [0, 2, 1, 1, 0, 1, 3]])
    

Say you want to extract the pixels the model predicted as class 1:

    segment_1=np.where(indexes==1)
    

Since indexes is a 2-D array, segment_1 will be a tuple of two index arrays, where the first array holds the row indices and the second the column indices:

    (array([0, 0, 0, 1, 1, 2, 3, 3, 3, 3, 4, 5, 5, 6, 6, 6]), array([1, 3, 6, 1, 2, 6, 0, 1, 4, 6, 5, 0, 4, 2, 3, 5]))
    

Looking at the first number in the first and the second array, 0 and 1, they point to the position indexes[0, 1], which indeed holds the value 1.

You can extract its values:

    indexes[segment_1]
    array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
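
As a side note (a small addition, not part of the original answer): the two arrays returned by np.where can be combined into explicit [row, col] pairs, which is closer to the array of points the question asks for:

    points = np.argwhere(indexes == 1)                  # shape (16, 2): one [row, col] pair per pixel
    # equivalent to: np.stack(np.where(indexes == 1), axis=1)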
    

Then move on to the next class you want, say class 2:

    segment_2=np.where(indexes==2)
    segment_2
    (array([0, 0, 2, 2, 5, 5, 6]), array([0, 4, 2, 3, 1, 5, 1]))
    

If you want each class on its own, you can create a copy of indexes per class, 4 copies in total. Take class_1 = indexes.copy() and set every value that is not equal to 1 to zero, class_1[class_1 != 1] = 0, which gives something like this:

    array([[0, 1, 0, 1, 0, 0, 1],
       [0, 1, 1, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 1],
       [1, 1, 0, 0, 1, 0, 1],
       [0, 0, 0, 0, 0, 1, 0],
       [1, 0, 0, 0, 1, 0, 0],
       [0, 0, 1, 1, 0, 1, 0]])
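
Equivalently, instead of copying and mutating by hand, a short sketch (assuming the 4 classes of this example) that builds every single-class map at once:

    class_maps = {c: np.where(indexes == c, c, 0) for c in range(4)}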
    

To the eye you might think there is a pattern, but from this example you can tell that there is no clear contour for each part. The only way I can think of is to loop over the image row by row and record where the values change, and to do the same column by column. Whether this is the ideal approach, I am not sure. I hope this covers at least some part of your question. pdb is just a debugging package that lets you step through your code line by line and inspect variables.
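
A minimal NumPy sketch of that row/column idea, vectorized as a 4-neighbour check (the function name and the choice of 4-connectivity are assumptions, not part of the original answer):

    import numpy as np

    def boundary_points(indexes, class_id):
        """Return the [row, col] coordinates of the boundary of one predicted segment."""
        mask = (indexes == class_id)
        # pad with False so segment pixels touching the image border count as boundary
        padded = np.pad(mask, 1, mode='constant', constant_values=False)
        # a pixel is interior when all four of its direct neighbours also belong to the segment
        interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                    padded[1:-1, :-2] & padded[1:-1, 2:])
        boundary = mask & ~interior
        return np.argwhere(boundary)   # shape: (n_boundary_pixels, 2)

If OpenCV is available, cv2.findContours on the binary mask (converted with mask.astype(np.uint8)) would return ordered contours instead of an unordered point set.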
