
A real TensorFlow example


I have written code based on this TensorFlow example. The problem is that the accuracy I get does not make any sense: it is always either 1 or 0. So my question is: what am I missing?

import tensorflow as tf
import numpy as np
import csv
import os


# defining batch function

def batch(iterable, n=1):
    l = len(iterable)
    for ndx in range(0, l, n):
        yield iterable[ndx:min(ndx + n, l)]
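# e.g. list(batch([1, 2, 3, 4, 5], 2)) -> [[1, 2], [3, 4], [5]]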


os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
Training_File = 'Training.csv'
Test_File     = 'Test.csv'
numberOFClasses = 19
batchSize = 19

# read training data
filePointer  = open(Training_File, 'r', newline='')
reader = csv.reader(filePointer)
Training_Data   = []
Training_Labels = []
row = next(reader)
len(row)
#### Getting Training_Data and labels
for  row in reader:
    Training_Data.append(row[:-2])
    Training_Labels.append(row[-1])
# close the training file, then get data and labels from the test file
len(Training_Data)
filePointer.close()

filePointer = open(Test_File, 'r', newline='')
reader = csv.reader(filePointer)
Test_Data = []
Test_Labels = []
row = next(reader)

for row in reader:
    Test_Data.append(row[:-2])
    Test_Labels.append(row[-1])
len(Test_Labels)
filePointer.close()
len(Training_Data[0])



x = tf.placeholder('float',[None,len(row[:-2])])
w = tf.Variable(tf.zeros([len(row[:-2]),numberOFClasses]))
b = tf.Variable(tf.zeros([numberOFClasses]))
model = tf.add(tf.matmul(x,w),b)
y_ = tf.placeholder(tf.float32,[None,numberOFClasses])
y =  tf.nn.softmax(model)

cross_entropy= -tf.reduce_sum(y_*tf.log(y),reduction_indices=[1])
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
index =0
batch_xs = []
batch_ys = []
batch_txs= []
batch_tys= []
# Training processing


for i in batch(Training_Data,batchSize):
    batch_xs.append(i)
for i in batch(Training_Labels,batchSize):
    batch_ys.append(i)

for i in batch(Test_Data,batchSize):
    batch_txs.append(i)
for i in batch(Test_Labels,batchSize):
    batch_tys.append(i)


#print(np.reshape(batch_ys[len],(1,batchSize)))
for i in range(len(batch_xs) -1 ):
    sess.run(train_step,feed_dict={x:batch_xs[i],y_:np.reshape(batch_ys[i],(1,batchSize))})



correct_prediction = tf.equal(tf.arg_max(y,1),tf.arg_max(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction,"float"))
for i in range(len(batch_txs) -1):
    print(sess.run(accuracy,feed_dict={x:batch_txs[i],y_:np.reshape(batch_tys[i],(1,batchSize))}))

UPDATE: I changed the batch size:

.............................................
numberOFClasses = 19

batchSize = 19 * 3
....................................
for i in range(int(len(batch_xs)/batchSize) ):
    print(sess.run(train_step,feed_dict={x:batch_xs[i],y_:np.reshape(batch_ys[i],(batchSize,numberOFClasses))}))



correct_prediction = tf.equal(tf.arg_max(y,1),tf.arg_max(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction,"float"))
for i in range(len(batch_txs) -1):
    print(sess.run(accuracy,feed_dict={x:batch_txs[i],y_:np.reshape(batch_tys[i],(1,batchSize))}))

The results are still the same, so I just don't see what I am missing here.

2nd Update

Running this part of the code:

for j in range(len(batch_xs) - 1):
    print(sess.run(train_step, feed_dict={x: batch_xs[j], y_: np.reshape(batch_ys[j], (numberOFClasses, 3))}))

gives a huge error message, but I think this is the relevant part:

InvalidArgumentError (see above for traceback): Incompatible shapes: [19,3] vs. [57,19]
 [[Node: mul = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_Placeholder_1_0, Log)]]

So, since my batch size is three times the number of classes, I should get 57 predictions -> y_.

Shaping y_ to [57, 1]:

for j in range(len(batch_xs) - 1):
    print(sess.run(train_step, feed_dict={x: batch_xs[j], y_: np.reshape(batch_ys[j], (batchSize, 1))}))

The print gives None as the return value, but no error, so I guess that part is OK.

But running the accuracy part still gives 1s and 0s, as mentioned at the beginning.

The test and training data and labels are 100% correct!

Here is part of the end of the CSV file:

[screenshot of the end of the CSV file]

2 Answers

  • 0

    It may not be the source of your problem or your error, but I think every occurrence of row[:-2] should be replaced by row[:-1] if you want all indices except the last one (Python excludes the index end in a row[begin:end] slice, so row[:-2] also drops the second-to-last column; e.g. [1, 2, 3, 4][:-1] == [1, 2, 3] while [1, 2, 3, 4][:-2] == [1, 2]).

    You should have (a one-hot sketch follows at the end of this answer):

    y_ = tf.placeholder(tf.float32,[None,numberOFClasses])
    ...
    sess.run(train_step,feed_dict={x:batch_xs[i],y_:np.reshape(batch_ys[i],(batchSize, numberOFClasses))})
    ...
    print(sess.run(accuracy,feed_dict={x:batch_txs[i],y_:np.reshape(batch_tys[i],(batchSize, numberOFClasses))}))
    

    In any case, you should definitely use batch_size != numberOFClasses, because then it will throw an error that you can use to understand what is wrong in your code. If you don't, you lose the exception message but the error is still there, just hidden (your network still does not learn what you want). When you get the error, look at the problem again and try to understand why (look at what the shapes are and what they should be).
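
    As a minimal sketch of that point (assuming the CSV labels are integer class indices 0..18; the helper name to_one_hot is made up for illustration), the label batches could be turned into (batchSize, numberOFClasses) one-hot arrays before feeding them:

    import numpy as np

    def to_one_hot(label_batch, num_classes=19):  # 19 == numberOFClasses in the question
        # hypothetical helper: turn a batch of integer labels into a
        # (len(label_batch), num_classes) one-hot float array
        labels = np.asarray(label_batch, dtype=np.int64)
        one_hot = np.zeros((len(labels), num_classes), dtype=np.float32)
        one_hot[np.arange(len(labels)), labels] = 1.0
        return one_hot

    # with this, the feed shapes match the placeholder without np.reshape, e.g.:
    # sess.run(train_step, feed_dict={x: batch_xs[i], y_: to_one_hot(batch_ys[i])})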

  • 1

    From your code sample it is impossible to tell for sure (at least batch() and batchSize were missing at first), but my guess is that you are evaluating on batches of size 1 (whether intentionally or not), so you get a one (if that sample is correctly predicted) or a zero (if it is misclassified). To get a meaningful accuracy you need to evaluate on larger batches (see the sketch below).
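
    A minimal sketch of that idea, assuming the x, y_, sess and accuracy objects from the question are in scope and reusing the hypothetical to_one_hot helper sketched in the other answer:

    import numpy as np

    # evaluate accuracy once over the whole test set instead of per tiny batch
    test_xs = np.asarray(Test_Data, dtype=np.float32)  # (num_test_samples, num_features)
    test_ys = to_one_hot(Test_Labels)                   # (num_test_samples, numberOFClasses)
    print(sess.run(accuracy, feed_dict={x: test_xs, y_: test_ys}))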
