
Cannot run prediction because of trouble with tf.placeholder


Apologies, I am new to TensorFlow. I am working on a simple onelayer_perceptron script that just uses TensorFlow to train a neural network from the init parameters:

My interpreter complains:

You must feed a value for placeholder tensor 'input' with dtype float

The error occurs here:

input_tensor = tf.placeholder(tf.float32, [None, n_input], name="input")

Please take a look at what I have done so far:

1) I initialize my input values

n_input = 10  # Number of input neurons
n_hidden_1 = 10  # Number of neurons in the hidden layer
n_classes = 3  # Number of output classes

weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_hidden_1, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

2) Initialize the placeholders:

input_tensor = tf.placeholder(tf.float32, [None, n_input], name="input")
output_tensor = tf.placeholder(tf.float32, [None, n_classes], name="output")

3) Train the NN

# Construct model
prediction = onelayer_perceptron(input_tensor, weights, biases)

init = tf.global_variables_initializer()

4) This is my onelayer_perceptron function, which just does the typical NN computation: matmul of the layer input and the weights, adds the bias, and applies a sigmoid activation

def onelayer_perceptron(input_tensor, weights, biases):
    layer_1_multiplication = tf.matmul(input_tensor, weights['h1'])
    layer_1_addition = tf.add(layer_1_multiplication, biases['b1'])
    layer_1_activation = tf.nn.sigmoid(layer_1_addition)

    out_layer_multiplication = tf.matmul(layer_1_activation, weights['out'])
    out_layer_addition = out_layer_multiplication + biases['out']

    return out_layer_addition

5) Run my script

with tf.Session() as sess:
   sess.run(init)

   i = sess.run(input_tensor)
   print(i)

1 Answer

    You are not feeding the input to the placeholder; that is done with feed_dict.

    You should do something like:

    out = session.run(tensor(s)_you_want_to_evaluate,
                      feed_dict={input_tensor: input of shape [batch_size, n_input],
                                 output_tensor: output of shape [batch_size, n_classes]})
    
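A minimal, self-contained sketch of the feed_dict mechanism (it uses `tf.compat.v1` so it also runs on TensorFlow 2.x installs; the batch of ones and the doubling op are made-up illustrations, not part of the question's model):

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Placeholder with the same shape as the question's input_tensor
x = tf.placeholder(tf.float32, [None, 10], name="input")
doubled = x * 2.0  # any graph op that depends on the placeholder

with tf.Session() as sess:
    # Hypothetical batch of 4 samples; without feed_dict this sess.run
    # would raise the same "You must feed a value..." error
    batch = np.ones((4, 10), dtype=np.float32)
    out = sess.run(doubled, feed_dict={x: batch})
    print(out.shape)  # (4, 10)
```

Running `sess.run(input_tensor)` directly, as in step 5, fails because a placeholder holds no value of its own; every `sess.run` that touches it must supply one through `feed_dict`.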
