I'm trying to do image segmentation in Keras using a TensorFlow dataset, but when I call model.fit() I get AttributeError: 'MapDataset' object has no attribute 'ndim'. I'm using a Jupyter notebook in Google Colab. The last line of the code below produces the error.

import math
from json import load

import numpy as np
import tensorflow as tf
from tensorflow import keras

export_file = 'export.json'
IMG_WIDTH = 256
IMG_HEIGHT = 256
with open(export_file) as f:
  export_json = load(f)
legend = export_json['legend']
tfrecord_paths = export_json['tfrecord_paths']

# Fairly certain nothing wrong here
# -----------------------------------------------
def _parse_tfrecord(serialized_example):
    example = tf.parse_single_example(
        serialized_example,
        features={
            'image/encoded': tf.FixedLenFeature([], tf.string),
            'image/filename': tf.FixedLenFeature([], tf.string),
            'image/ID': tf.FixedLenFeature([], tf.string),
            'image/format': tf.FixedLenFeature([], tf.string),
            'image/height': tf.FixedLenFeature([], tf.int64),
            'image/width': tf.FixedLenFeature([], tf.int64),
            'image/channels': tf.FixedLenFeature([], tf.int64),
            'image/colorspace': tf.FixedLenFeature([], tf.string),
            'image/segmentation/class/encoded': tf.FixedLenFeature([], tf.string),
            'image/segmentation/class/format': tf.FixedLenFeature([], tf.string),
            })
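    # Decode the encoded RGB image and the single-channel segmentation mask.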
    image = tf.image.decode_image(example['image/encoded'])
    image.set_shape([IMG_WIDTH, IMG_HEIGHT, 3])
    label = tf.image.decode_image(example['image/segmentation/class/encoded'])
    label.set_shape([IMG_WIDTH, IMG_HEIGHT, 1])
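    # Cast both to float32 for training.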
    image_float = tf.to_float(image)
    label_float = tf.to_float(label)
    return (image_float, label_float)
# -----------------------------------------------
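
A quick way to sanity-check the parser on its own in TF 1.x graph mode (a sketch; it just pulls one parsed example through a one-shot iterator and prints its shapes):

check_ds = tf.data.TFRecordDataset(tfrecord_paths).map(_parse_tfrecord)
image_t, label_t = check_ds.make_one_shot_iterator().get_next()
with tf.Session() as sess:
    img, lbl = sess.run([image_t, label_t])
    print(img.shape, lbl.shape)  # expected: (256, 256, 3) (256, 256, 1)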

# Create training and testing datasets
test_set_size = math.floor(0.20 * len(tfrecord_paths))
training_dataset = tf.data.TFRecordDataset(tfrecord_paths)
training_dataset = training_dataset.skip(test_set_size)
training_dataset = training_dataset.map(_parse_tfrecord)

test_dataset = tf.data.TFRecordDataset(tfrecord_paths)
test_dataset = test_dataset.take(test_set_size)
test_dataset = test_dataset.map(_parse_tfrecord)

# Printing training_dataset yields: '<MapDataset shapes: ((256, 256, 3), (256, 256, 1)), types: (tf.float32, tf.float32)>'
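
The shapes in that printout are per-example, i.e. the dataset is unbatched. For reference, a batched variant looks like this (a sketch; the batch size of 32 is an arbitrary choice, and repeat() keeps the iterator from running out between epochs when steps_per_epoch is used):

BATCH_SIZE = 32  # arbitrary choice for illustration
training_dataset = training_dataset.batch(BATCH_SIZE).repeat()
test_dataset = test_dataset.batch(BATCH_SIZE).repeat()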

# Load Inception v3
from keras.applications.inception_v3 import InceptionV3
model = InceptionV3(input_shape=(IMG_HEIGHT, IMG_WIDTH, 3), include_top=False, weights='imagenet')
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(training_dataset, epochs=10, steps_per_epoch=30) # -=CAUSES ERROR=-

Starting from How to Properly Combine TensorFlow's Dataset API and Keras?, I tried the second solution, but I run into the same error even after changing model.fit() to accept a one_shot_iterator instead of a tf.data.Dataset. Concretely, that attempt looked roughly like the sketch below.
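
A minimal sketch of that variant (assuming the iterator is created from the already-mapped training_dataset):

iterator = training_dataset.make_one_shot_iterator()
model.fit(iterator, epochs=10, steps_per_epoch=30)  # same AttributeError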

It looks like this may be a more common problem, judging by https://github.com/tensorflow/tensorflow/issues/20698. However, every solution posted there leads to the same error I described above.
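
For completeness, here is a sketch of the same setup built from tf.keras instead of the standalone keras package, in case mixing the two packages is relevant. It assumes TF >= 1.9 (where tf.keras's fit() accepts tf.data datasets) and that the dataset has been batched as in the earlier sketch:

# Sketch: tf.keras variant, reusing `from tensorflow import keras` from the top of the file.
tfk_model = keras.applications.inception_v3.InceptionV3(
    input_shape=(IMG_HEIGHT, IMG_WIDTH, 3), include_top=False, weights='imagenet')
tfk_model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
tfk_model.fit(training_dataset, epochs=10, steps_per_epoch=30)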