
Python - Low accuracy and low loss with TensorFlow


I'm building a simple neural network that takes 3 input values and gives 2 outputs.

My accuracy is 67.5% and my average cost is 0.05.

I have a training dataset of 1000 examples and 500 test examples. I plan to build a larger dataset in the near future.

Not long ago I managed to get an accuracy of around 82%, sometimes a bit higher, but the cost was quite high.

I have been experimenting with adding the extra layer that is currently in the model, which is how I got the loss below 1.0.

I'm not sure what is going wrong. I'm new to TensorFlow and neural networks.

Here is my code:

import tensorflow as tf
import numpy as np
import sys
sys.path.insert(0, '.../Dataset/Testing/')
sys.path.insert(0, '.../Dataset/Training/')
#other files
from TestDataNormaliser import *
from TrainDataNormaliser import *

learning_rate = 0.01
trainingIteration = 10
batchSize = 100
displayStep = 1


x = tf.placeholder("float", [None, 3])
y = tf.placeholder("float", [None, 2])



#layer 1
w1 = tf.Variable(tf.truncated_normal([3, 4], stddev=0.1))
b1 = tf.Variable(tf.zeros([4])) 
y1 = tf.matmul(x, w1) + b1

#layer 2
w2 = tf.Variable(tf.truncated_normal([4, 4], stddev=0.1))
b2 = tf.Variable(tf.zeros([4]))
#y2 = tf.nn.sigmoid(tf.matmul(y1, w2) + b2)
y2 = tf.matmul(y1, w2) + b2

w3 = tf.Variable(tf.truncated_normal([4, 2], stddev=0.1)) 
b3 = tf.Variable(tf.zeros([2]))
y3 = tf.nn.sigmoid(tf.matmul(y2, w3) + b3) #sigmoid


#output
#wO = tf.Variable(tf.truncated_normal([2, 2], stddev=0.1))
#bO = tf.Variable(tf.zeros([2]))
a = y3 #tf.nn.softmax(tf.matmul(y2, wO) + bO) #y2
a_ = tf.placeholder("float", [None, 2])


#cost function
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y * tf.log(a)))
#cross_entropy = -tf.reduce_sum(y*tf.log(a))

optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)


#training

init = tf.global_variables_initializer() #initialises tensorflow

with tf.Session() as sess:
    sess.run(init) #runs the initialiser

    writer = tf.summary.FileWriter(".../Logs")
    writer.add_graph(sess.graph)
    merged_summary = tf.summary.merge_all()

    for iteration in range(trainingIteration):
        avg_cost = 0
        totalBatch = int(len(trainArrayValues)/batchSize) #1000/100
        #totalBatch = 10

        for i in range(batchSize):
            start = i
            end = i + batchSize #100

            xBatch = trainArrayValues[start:end]
            yBatch = trainArrayLabels[start:end]

            #feeding training data

            sess.run(optimizer, feed_dict={x: xBatch, y: yBatch})

            i += batchSize

            avg_cost += sess.run(cross_entropy, feed_dict={x: xBatch, y: yBatch})/totalBatch

            if iteration % displayStep == 0:
                print("Iteration:", '%04d' % (iteration + 1), "cost=", "{:.9f}".format(avg_cost))

        #
    print("Training complete")


    predictions = tf.equal(tf.argmax(a, 1), tf.argmax(y, 1))

    accuracy = tf.reduce_mean(tf.cast(predictions, "float"))
    print("Accuracy:", accuracy.eval({x: testArrayValues, y: testArrayLabels}))

2 Answers

  • 2

    A few important notes:

    • Without any non-linearities you are effectively training a network that is equivalent to a single-layer network, just with a lot of wasted computation. This is easily fixed by adding a simple non-linearity, e.g. tf.nn.relu, after each matmul/bias line, e.g. y2 = tf.nn.relu(y2), for all but the last layer (see the sketch after this list).

    • You are using a numerically unstable cross-entropy implementation. I'd encourage you to use tf.nn.sigmoid_cross_entropy_with_logits and remove your explicit sigmoid call (the input to your sigmoid function is what is generally referred to as the logits, or 'logistic units').

    • It doesn't look like you are shuffling your dataset. Given your choice of optimizer this could be particularly bad, which leads us to...

    • Plain stochastic gradient descent isn't great. For a boost without adding much complexity, consider using MomentumOptimizer instead. AdamOptimizer is my go-to, but play around with them.
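
    Putting the first two points and the optimizer suggestion together directly on your existing variables, a minimal sketch could look like the following (the momentum value of 0.9 is just an illustrative choice, not something from your code):

    y1 = tf.nn.relu(tf.matmul(x, w1) + b1)
    y2 = tf.nn.relu(tf.matmul(y1, w2) + b2)
    logits = tf.matmul(y2, w3) + b3   # no sigmoid here: these are the logits
    a = tf.nn.sigmoid(logits)         # only needed if you want probabilities out

    # numerically stable replacement for the hand-written cross-entropy
    cross_entropy = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))

    # optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9).minimize(cross_entropy)
    optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy)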

    When it comes to writing clean, maintainable code, I'd also encourage you to consider the following:

    • Use higher-level APIs, e.g. tf.layers. It's good to know what's going on at the variable level, but it's easy to make mistakes with all that replicated code, and the defaults that come with the layer implementations are generally pretty good.

    • Consider using the tf.data.Dataset API for your data input. It's a bit intimidating at first, but it handles a lot of things for you, like batching, shuffling and repeating epochs.

    • Consider using something like the tf.estimator.Estimator API to handle session running, summary writing and evaluation.

    With all those changes, you might end up with something like the following (I've left your original code in as comments so you can roughly see the equivalent lines).

    For graph construction:

    def get_logits(features):
        """tf.layers API is cleaner and has better default values."""
        # #layer 1
        # w1 = tf.Variable(tf.truncated_normal([3, 4], stddev=0.1))
        # b1 = tf.Variable(tf.zeros([4]))
        # y1 = tf.matmul(x, w1) + b1
        x = tf.layers.dense(features, 4, activation=tf.nn.relu)
    
        # #layer 2
        # w2 = tf.Variable(tf.truncated_normal([4, 4], stddev=0.1))
        # b2 = tf.Variable(tf.zeros([4]))
        # y2 = tf.matmul(y1, w2) + b2
        x = tf.layers.dense(x, 4, activation=tf.nn.relu)
    
        # w3 = tf.Variable(tf.truncated_normal([4, 2], stddev=0.1))
        # b3 = tf.Variable(tf.zeros([2]))
        # y3 = tf.nn.sigmoid(tf.matmul(y2, w3) + b3) #sigmoid
        # N.B Don't take a non-linearity here.
        logits = tf.layers.dense(x, 1, activation=None)
    
        # remove unnecessary final dimension, batch_size * 1 -> batch_size
        logits = tf.squeeze(logits, axis=-1)
        return logits
    
    
    def get_loss(logits, labels):
        """tf.nn.sigmoid_cross_entropy_with_logits is numerically stable."""
        # #cost function
        # cross_entropy = tf.reduce_mean(-tf.reduce_sum(y * tf.log(a)))
        # reduce to a scalar so it can be minimized and reported directly
        return tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
            logits=logits, labels=labels))
    
    
    def get_train_op(loss):
        """There are better options than standard SGD. Try the following."""
        learning_rate = 1e-3
        # optimizer = tf.train.GradientDescentOptimizer(learning_rate)
        optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9)  # momentum is a required argument
        # optimizer = tf.train.AdamOptimizer(learning_rate)
        return optimizer.minimize(loss)
    
    
    def get_inputs(feature_data, label_data, batch_size, n_epochs=None,
                   shuffle=True):
        """
        Get features and labels for training/evaluation.
    
        Args:
            feature_data: numpy array of feature data.
            label_data: numpy array of label data
            batch_size: size of batch to be returned
            n_epochs: number of epochs to train for. None will result in repeating
                forever/until stopped
            shuffle: bool flag indicating whether or not to shuffle.
        """
        dataset = tf.data.Dataset.from_tensor_slices(
            (feature_data, label_data))
    
        dataset = dataset.repeat(n_epochs)
        if shuffle:
            dataset = dataset.shuffle(len(feature_data))
        dataset = dataset.batch(batch_size)
        features, labels = dataset.make_one_shot_iterator().get_next()
        return features, labels
    

    For session running you could use it like you have been (what I'd call 'the hard way')...

    features, labels = get_inputs(
        trainArrayValues, trainArrayLabels, batchSize, n_epochs, shuffle=True)
    logits = get_logits(features)
    loss = get_loss(logits, labels)
    train_op = get_train_op(loss)
    init = tf.global_variables_initializer()
    # monitored sessions have the `should_stop` method, which works with datasets
    with tf.train.MonitoredSession() as sess:
        sess.run(init)
        while not sess.should_stop():
            # get both loss and optimizer step in the same session run
            loss_val, _ = sess.run([loss, train_op])
            print(loss_val)
        # save variables etc, do evaluation in another graph with different inputs?
    

    But I think you're better off using a tf.estimator.Estimator, though some people prefer tf.keras.Models.

    def model_fn(features, labels, mode):
        logits = get_logits(features)
        loss = get_loss(logits, labels)
        train_op = get_train_op(loss)
        predictions = tf.greater(logits, 0)
        accuracy = tf.metrics.accuracy(labels, predictions)
        return tf.estimator.EstimatorSpec(
            mode=mode, loss=loss, train_op=train_op,
            eval_metric_ops={'accuracy': accuracy}, predictions=predictions)
    
    
    def train_input_fn():
        return get_inputs(trainArrayValues, trainArrayLabels, batchSize)
    
    
    def eval_input_fn():
        return get_inputs(
            testArrayValues, testArrayLabels, batchSize, n_epochs=1, shuffle=False)
    
    
    # Where variables and summaries will be saved to
    model_dir = './model'
    
    estimator = tf.estimator.Estimator(model_fn, model_dir)
    estimator.train(train_input_fn, max_steps=max_steps)
    
    estimator.evaluate(eval_input_fn)
    

    Note that if you use an estimator, the variables will be saved after training, so you won't need to retrain each time. If you want to reset, just delete model_dir.
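
    As a minimal illustration of that reset, deleting model_dir from Python could be as simple as:

    import shutil
    shutil.rmtree(model_dir, ignore_errors=True)  # wipes the saved checkpoints and summaries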

  • 0

    I see that you are using a softmax-style loss with sigmoid activation functions in the last layer. Let me explain the difference between softmax activations and sigmoids.

    You are currently allowing the output of the network to be y=(0,1), y=(1,0), y=(0,0) and y=(1,1). This is because your sigmoid activations "squash" each element of y between 0 and 1 independently. Your loss function, however, assumes that your y vector sums to one.

    What you need to do here is either penalise the sigmoid cross-entropy function, which looks like this:

    -tf.reduce_sum(y*tf.log(a))-tf.reduce_sum((1-y)*tf.log(1-a))
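
    For reference, the numerically stable built-in mentioned in the other answer computes exactly this penalised quantity; a minimal sketch, assuming logits is the pre-sigmoid output of your last layer:

    logits = tf.matmul(y2, w3) + b3   # last layer without the sigmoid
    cross_entropy = tf.reduce_sum(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))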
    

    Or, if you want a to sum to one, you need to use a softmax activation in your final layer (to get your a) instead of sigmoids, which is implemented like this:

    exp_out = tf.exp(y3)  # y3 here should be the raw last-layer output, without the sigmoid
    a = exp_out / tf.reduce_sum(exp_out, axis=1, keepdims=True)
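
    In practice tf.nn.softmax does the same normalisation (and subtracts the per-row maximum first for numerical stability), so an equivalent one-liner would be:

    a = tf.nn.softmax(y3)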
    

    PS. I'm on my phone on a train, please excuse the typos.
