
Tensorflow Sampled Softmax Loss Correct Usage


For classification problems with a large number of classes, the tensorflow docs suggest using sampled_softmax_loss over a plain softmax to reduce training runtime.

According to the docs and source (line 1180), the call pattern for sampled_softmax_loss is:

tf.nn.sampled_softmax_loss(weights, # Shape (num_classes, dim)     - floatXX
                     biases,        # Shape (num_classes)          - floatXX 
                     labels,        # Shape (batch_size, num_true) - int64
                     inputs,        # Shape (batch_size, dim)      - floatXX  
                     num_sampled,   # - int
                     num_classes,   # - int
                     num_true=1,  
                     sampled_values=None,
                     remove_accidental_hits=True,
                     partition_strategy="mod",
                     name="sampled_softmax_loss")

It's unclear (at least to me) how to map a real-world problem onto the shapes this loss function requires. I think the 'inputs' field is the problem.

Here is a copy-paste-ready minimal working example that throws a matrix multiplication shape error when the loss function is called.

import tensorflow as tf

# Network Parameters
n_hidden_1 = 256  # 1st layer number of features
n_input = 784     # MNIST data input (img shape: 28*28)
n_classes = 10    # MNIST total classes (0-9 digits)    

# Dependent & Independent Variable Placeholders
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])  # one-hot labels

# Weights and Biases
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_hidden_1, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

# Super simple model builder
def tiny_perceptron(x, weights, biases):
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    out_layer = tf.matmul(layer_1, weights['out']) + biases['out']
    return out_layer   

# Create the model
pred = tiny_perceptron(x, weights, biases)    

# Set up loss function inputs and inspect their shapes
w = tf.transpose(weights['out'])
b = biases['out']
labels = tf.reshape(tf.argmax(y, 1), [-1,1])
inputs = pred
num_sampled = 3
num_true = 1
num_classes = n_classes

print('Shapes\n------\nw:\t%s\nb:\t%s\nlabels:\t%s\ninputs:\t%s' % (w.shape, b.shape, labels.shape, inputs.shape))
# Shapes
# ------
# w:      (10, 256)  # Requires (num_classes, dim)     - CORRECT
# b:      (10,)      # Requires (num_classes)          - CORRECT
# labels: (?, 1)     # Requires (batch_size, num_true) - CORRECT
# inputs: (?, 10)    # Requires (batch_size, dim)      - Not sure

loss_function = tf.reduce_mean(tf.nn.sampled_softmax_loss(
                     weights=w,
                     biases=b,
                     labels=labels,
                     inputs=inputs,
                     num_sampled=num_sampled,
                     num_true=num_true,
                     num_classes=num_classes))

The last line triggers a ValueError, stating that you cannot multiply tensors of shape (?,10) and (?,256). As a general rule, I agree with that statement. The full error is shown below:

ValueError: Dimensions must be equal, but are 10 and 256 for 'sampled_softmax_loss_2/MatMul_1' (op: 'MatMul') with input shapes: [?,10], [?,256].

If the 'dim' value in the tensorflow docs is meant to be constant throughout, then either the 'weights' or the 'inputs' variable going into the loss function is incorrect: since weights is (10, 256), dim would be 256, which would require inputs of shape (batch_size, 256) rather than (batch_size, 10).
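
Working through the shapes, my best guess at what happens internally (this is just the shape logic implied by the documented arguments, not the actual TF implementation) is that sampled_softmax_loss gathers num_sampled rows of weights and multiplies them against inputs, so the second dimension of inputs has to equal dim = weights.shape[1]:

# A sketch of the assumed internal shape logic (not actual TF code);
# reuses `tf` from the import above.
batch_size, dim, num_sampled = 32, 256, 3
ok_inputs = tf.zeros([batch_size, dim])    # (batch_size, dim) matches weights' dim
sampled_w = tf.zeros([num_sampled, dim])   # num_sampled rows gathered from weights
ok_logits = tf.matmul(ok_inputs, sampled_w, transpose_b=True)  # (32, 3): works
# With inputs of shape (batch_size, 10), the same matmul has inner
# dimensions 10 vs 256 -- exactly the (?, 10) x (?, 256) error above.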

Any ideas would be great; I'm totally stumped on how to use this loss function correctly, and it would have a huge impact on training time for the model we're using it with (500k classes). Thanks!

---EDIT---

The example shown above can be made to run without errors by playing with the parameters and ignoring the expected inputs of the sampled_softmax_loss call pattern. If you do that, it yields a trainable model, but one whose prediction accuracy is affected (as you would expect).

2 Answers

  • 0

    In the sampled softmax call you are multiplying the network's predictions, which have dimension (num_classes,), by the w matrix, which has dimension (num_classes, num_hidden_1), so you end up trying to compare target labels of size (num_classes,) against something that is now of size (num_hidden_1,). Change your tiny perceptron to output layer_1 instead, then change the definition of your cost. The code below should do the trick.

    def tiny_perceptron(x, weights, biases):
        layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
        layer_1 = tf.nn.relu(layer_1)
        return layer_1
    
    layer_1 = tiny_perceptron(x, weights, biases)
    loss_function = tf.reduce_mean(tf.nn.sampled_softmax_loss(
                         weights=tf.transpose(weights['out']),  # (num_classes, dim) = (10, 256)
                         biases=biases['out'],                  # (num_classes,)     = (10,)
                         labels=labels,                         # class indices, shape (batch_size, 1)
                         inputs=layer_1,                        # (batch_size, dim)  = (?, 256)
                         num_sampled=num_sampled,
                         num_true=num_true,
                         num_classes=num_classes))
    
    

    When you train the network with some optimizer, you will tell it to minimize loss_function, which means it will adjust both sets of weights and biases.
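
    For completeness, a minimal sketch of that training setup (a sketch only: the GradientDescentOptimizer and 0.01 learning rate are arbitrary choices, and next_training_batch is a hypothetical stand-in for whatever feeds your (images, one-hot labels) batches). Since sampled_softmax_loss is a training-time approximation, evaluation scores all classes with the full output layer:

    optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(loss_function)

    # Full softmax logits for evaluation: score every class.
    full_logits = tf.matmul(layer_1, weights['out']) + biases['out']
    correct = tf.equal(tf.argmax(full_logits, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for step in range(1000):
            batch_x, batch_y = next_training_batch(128)  # hypothetical batch source
            sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})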

  • 1

    The key is to pass the correct shapes for the weights, biases, inputs and labels. The weights passed to sampled_softmax have a different shape than in the general case. For example, with logits = xw + b, call sampled_softmax like this: sampled_softmax(weight=tf.transpose(w), bias=b, inputs=x), not sampled_softmax(weight=w, bias=b, inputs=logits)!! Also, the labels are not one-hot encoded; if your labels are one-hot encoded, pass labels=tf.reshape(tf.argmax(labels_one_hot, 1), [-1,1]). A sketch applying this to the question's model follows below.
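
    Applied to the model in the question, that recipe would look something like this (a sketch reusing the variables defined there):

    layer_1 = tf.nn.relu(tf.add(tf.matmul(x, weights['h1']), biases['b1']))
    train_loss = tf.reduce_mean(tf.nn.sampled_softmax_loss(
        weights=tf.transpose(weights['out']),         # (num_classes, dim) = (10, 256)
        biases=biases['out'],                         # (num_classes,)
        labels=tf.reshape(tf.argmax(y, 1), [-1, 1]),  # class ids, not one-hot
        inputs=layer_1,                               # the layer *before* the logits
        num_sampled=3,
        num_true=1,
        num_classes=n_classes))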
