I am modifying the CIFAR-10 multi-GPU TensorFlow code to read the ImageNet dataset.
The edits I made are:
cifar10.py:
1) Changed tf.app.flags.DEFINE_string('data_dir', ...)
2) Removed the second argument of data_dir = os.path.join(FLAGS.data_dir, 'cifar-10-batches-bin')
3) Removed the download part from maybe_download_and_extract()
cifar10_input.py:
1) Image size = 227
2) result.height = 256 and result.width = 256
3) Changed
filenames = [os.path.join(data_dir, 'data_batch_%d.bin' % i) for i in xrange(1, 6)]
to
filenames = [os.path.join(data_dir, i) for i in os.listdir(data_dir)]
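The effect of that last change can be checked with plain Python, no TensorFlow needed. The sketch below uses a hypothetical throwaway directory: the edited listing picks up every entry in data_dir, including non-image files, which is one way the input queue can end up empty or mis-fed:

```python
import os
import tempfile

# Build a throwaway directory that mimics a data_dir (hypothetical layout).
data_dir = tempfile.mkdtemp()
for name in ["n01440764_1.JPEG", "n01440764_2.JPEG", "labels.txt"]:
    open(os.path.join(data_dir, name), "w").close()

# The edited listing picks up *every* entry, including non-image files.
filenames = [os.path.join(data_dir, i) for i in os.listdir(data_dir)]
print(sorted(os.path.basename(f) for f in filenames))

# A safer variant filters on the extension the dataset actually uses.
jpegs = [f for f in filenames if f.endswith(".JPEG")]
print(len(jpegs))  # 2 of the 3 entries survive the filter
```

Printing the list this way immediately shows whether the reader is being fed real data files or stray entries such as label files or subdirectories.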
But this throws an ugly error:
tensorflow.python.framework.errors.OutOfRangeError: RandomShuffleQueue '_1_tower_0/shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 128, current size 0)
[[Node: tower_0/shuffle_batch = QueueDequeueMany[component_types=[DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](tower_0/shuffle_batch/random_shuffle_queue, tower_0/shuffle_batch/n/_775)]]
[[Node: tower_1/shuffle_batch/n/_664 = HostSend[T=DT_INT32, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:1", send_device_incarnation=1, tensor_name="edge_170_tower_1/shuffle_batch/n", device="/job:localhost/replica:0/task:0/gpu:1"]]
Caused by op u'tower_0/shuffle_batch', defined at:
File "lib/python2.7/site-packages/tensorflow/models/image/cifar10/cifar10_multi-gpu_train.py", line 224, in <module>
  tf.app.run()
File "/home/saoni.m/tensorflow/lib/python2.7/site-packages/tensorflow/python/platform/default/_app.py", line 30, in run
  sys.exit(main(sys.argv))
File "lib/python2.7/site-packages/tensorflow/models/image/cifar10/cifar10_multi-gpu_train.py", line 222, in main
  train()
File "lib/python2.7/site-packages/tensorflow/models/image/cifar10/cifar10_multi-gpu_train.py", line 150, in train
  loss = tower_loss(scope)
File "lib/python2.7/site-packages/tensorflow/models/image/cifar10/cifar10_multi-gpu_train.py", line 65, in tower_loss
  images, labels = cifar10.distorted_inputs()
File "/home/saoni.m/tensorflow/lib/python2.7/site-packages/tensorflow/models/image/cifar10/cifar10.py", line 119, in distorted_inputs
  batch_size=FLAGS.batch_size)
File "/home/saoni.m/tensorflow/lib/python2.7/site-packages/tensorflow/models/image/cifar10/cifar10_input.py", line 153, in distorted_inputs
  min_queue_examples, batch_size)
File "/home/saoni.m/tensorflow/lib/python2.7/site-packages/tensorflow/models/image/cifar10/cifar10_input.py", line 104, in _generate_image_and_label_batch
  min_after_dequeue=min_queue_examples)
File "/home/saoni.m/tensorflow/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 496, in shuffle_batch
  return queue.dequeue_many(batch_size, name=name)
File "/home/saoni.m/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/data_flow_ops.py", line 287, in dequeue_many
  self._queue_ref, n, self._dtypes, name=name)
File "/home/saoni.m/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 319, in _queue_dequeue_many
  timeout_ms=timeout_ms, name=name)
File "/home/saoni.m/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 664, in apply_op
  op_def=op_def)
File "/home/saoni.m/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1834, in create_op
  original_op=self._default_original_op, op_def=op_def)
File "/home/saoni.m/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1043, in __init__
  self._traceback = _extract_stack()
When I trace back to the line that calls shuffle_batch():
images, label_batch = tf.train.shuffle_batch(
[image, label],
batch_size=batch_size,
num_threads=num_preprocess_threads,
capacity=min_queue_examples + 3 * batch_size,
min_after_dequeue=min_queue_examples)
the values passed to it are: batch_size 128, num_threads 16, capacity 20384, min_after_dequeue 20000.
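Those numbers are internally consistent with how the CIFAR-10 code derives queue capacity: a few batches of headroom on top of min_after_dequeue. A quick arithmetic check (plain Python, values taken from the question) confirms the queue parameters themselves are sane and the error must come from the reader producing no examples:

```python
batch_size = 128
min_queue_examples = 20000  # this becomes min_after_dequeue

# capacity = min_queue_examples + 3 * batch_size, as in cifar10_input.py
capacity = min_queue_examples + 3 * batch_size
print(capacity)  # 20384, matching the values reported above
```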
2 Answers
It looks like you're not getting any data input from the reader.

You changed the filenames list, but what is actually in data_dir/? (Are you sure the right dirname etc. is being used?) My suggestion is to print filenames at the very start of execution; that isn't doing anything in TensorFlow, just Python, so you'll get an easy-to-read answer immediately. If it looks valid, we can work from there. :)

The second problem is that your changes are not sufficient to start working on ImageNet: the read_cifar10 function is specific to the CIFAR input format, while ImageNet data is (mostly) JPEGs, with separate files specifying the labels. You can decode the JPEGs with tf.image.decode_jpeg, but you will also need to merge in the synset labels.

I ran into a similar problem. I tried changing the Python list [os.path.join(data_dir, i) for i in os.listdir(data_dir)] to files = tf.train.match_filenames_once("/path/to/data.tfrecords-*") followed by filename_queue = tf.train.string_input_producer(files). It worked for me; you can give it a try.
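The label-merging step the first answer mentions can be sketched in plain Python, assuming the common ImageNet layout where each JPEG sits in a directory named after its synset WNID (e.g. n01440764/...); the paths below are hypothetical. The directory name is mapped to an integer class id that could then be fed alongside the decoded image:

```python
import os

def synset_label_map(filenames):
    """Map each file path to an integer class id derived from its parent
    directory name (the WNID), assuming one directory per synset."""
    wnids = sorted({os.path.basename(os.path.dirname(f)) for f in filenames})
    wnid_to_id = {w: i for i, w in enumerate(wnids)}
    return [wnid_to_id[os.path.basename(os.path.dirname(f))] for f in filenames]

# Hypothetical ImageNet-style paths: two synsets, three files.
files = [
    "train/n01440764/n01440764_18.JPEG",
    "train/n01443537/n01443537_2.JPEG",
    "train/n01440764/n01440764_36.JPEG",
]
print(synset_label_map(files))  # [0, 1, 0]
```

This only covers the label side; the image side would still need a JPEG decoding step in the input pipeline as described above.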