
TensorFlow: performing this loss computation


My problem and my question are stated below the two code blocks.


Loss function

import numpy as np


def loss(labels, logits, sequence_lengths, label_lengths, logit_lengths):
    scores = []
    for i in xrange(runner.batch_size):  # runner.batch_size comes from the surrounding script
        sequence_length = sequence_lengths[i]
        for j in xrange(sequence_length):
            label_length = label_lengths[i, j]
            logit_length = logit_lengths[i, j]

            # get top k indices <==> argmax_k(labels[i, j, 0, :], label_length)
            top_labels = np.argpartition(labels[i, j, 0, :], -label_length)[-label_length:]
            top_logits = np.argpartition(logits[i, j, 0, :], -logit_length)[-logit_length:]

            scores.append(edit_distance(top_labels, top_logits))

    return np.mean(scores)

# Levenshtein distance
def edit_distance(s, t):
    n = s.size
    m = t.size
    d = np.zeros((n+1, m+1))
    d[:, 0] = np.arange(n+1)
    d[0, :] = np.arange(m+1)

    for j in xrange(1, m+1):
        for i in xrange(1, n+1):
            if s[i-1] == t[j-1]:
                d[i, j] = d[i-1, j-1]
            else:
                d[i, j] = min(d[i-1, j] + 1,
                              d[i, j-1] + 1,
                              d[i-1, j-1] + 1)

    return d[n, m]
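
A quick sanity check of the helper above (illustrative values): turning [3, 1, 4] into [3, 2, 4, 5] takes one substitution and one insertion, so the expected distance is 2.

print(edit_distance(np.array([3, 1, 4]), np.array([3, 2, 4, 5])))  # -> 2.0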

In use

I've tried to flatten my code so that everything happens in one place. Let me know if there are typos or points of confusion.

sequence_lengths_placeholder = tf.placeholder(tf.int64, shape=(batch_size))
labels_placeholder = tf.placeholder(tf.float32, shape=(batch_size, max_feature_length, label_size))
label_lengths_placeholder = tf.placeholder(tf.int64, shape=(batch_size, max_feature_length))
loss_placeholder = tf.placeholder(tf.float32, shape=(1))

logit_W = tf.Variable(tf.zeros([lstm_units, label_size]))
logit_b = tf.Variable(tf.zeros([label_size]))

length_W = tf.Variable(tf.zeros([lstm_units, max_length]))
length_b = tf.Variable(tf.zeros([max_length]))

lstm = rnn_cell.BasicLSTMCell(lstm_units)
stacked_lstm = rnn_cell.MultiRNNCell([lstm] * layer_count)

rnn_out, state = rnn.rnn(stacked_lstm, features, dtype=tf.float32, sequence_length=sequence_lengths_placeholder)

logits = tf.concat(1, [tf.reshape(tf.matmul(t, logit_W) + logit_b, [batch_size, 1, 2, label_size]) for t in rnn_out])

logit_lengths = tf.concat(1, [tf.reshape(tf.matmul(t, length_W) + length_b, [batch_size, 1, max_length]) for t in rnn_out])

optimizer = tf.train.AdamOptimizer(learning_rate)
global_step = tf.Variable(0, name='global_step', trainable=False)
train_op = optimizer.minimize(loss_placeholder, global_step=global_step)

...
...
# Inside training loop

np_labels, np_logits, sequence_lengths, label_lengths, logit_lengths = sess.run([labels_placeholder, logits, sequence_lengths_placeholder, label_lengths_placeholder, logit_lengths], feed_dict=feed_dict)
loss = loss(np_labels, np_logits, sequence_lengths, label_lengths, logit_lengths)
_ = sess.run([train_op], feed_dict={loss_placeholder: loss})

My problem

The problem is that this returns the error:

File "runner.py", line 63, in <module>
    train_op = optimizer.minimize(loss_placeholder, global_step=global_step)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 188, in minimize
    name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 277, in apply_gradients
    (grads_and_vars,))

  ValueError: No gradients provided for any variable: <all my variables>

So I assume this is TensorFlow complaining that it can't compute the gradients of my loss, since the loss is computed in numpy, outside the scope of TF.
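
For contrast, here is a minimal self-contained sketch (hypothetical names and shapes, not my actual model) of the pattern minimize() expects, where the loss is a tensor that depends on the graph's Variables:

import tensorflow as tf

# Toy example only: the loss is built from a Variable, so TensorFlow can
# differentiate it and minimize() has gradients to work with.
x = tf.placeholder(tf.float32, shape=(None, 4))
y = tf.placeholder(tf.float32, shape=(None, 1))
w = tf.Variable(tf.zeros([4, 1]))
pred = tf.matmul(x, w)
loss_op = tf.reduce_mean(tf.square(pred - y))  # differentiable w.r.t. w
train_op = tf.train.AdamOptimizer(0.01).minimize(loss_op)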

So naturally, to fix that, I would try to implement it in TensorFlow. The issue is that my logit_lengths and label_lengths are both tensors, so when I try to access a single element, I get back a Tensor of shape []. That is a problem when I try to use tf.nn.top_k(), which takes an Int for its k parameter.
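
To illustrate what I mean (a standalone snippet with made-up shapes, not my real graph):

lengths = tf.placeholder(tf.int64, shape=(4, 10))
k = lengths[0, 0]            # a Tensor of shape (), not a Python int
# tf.nn.top_k(values, k=k)   # problematic here, since top_k wants an Int for k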

Another issue is that my label_lengths is a placeholder, and since my loss value needs to be defined before the optimizer.minimize(loss) call, I also get an error saying that a value needs to be fed for the placeholder.

I'm just wondering how I could go about implementing this loss function. Or whether I'm missing something obvious.


Edit: After some further reading, I see that losses like the one I describe are usually used for validation, and in training a surrogate loss that is minimized in the same spot as the true loss is used instead. Does anyone know what surrogate loss is used for an edit-distance-based scenario like mine?

1 Answer


The first thing I would do is compute the loss using TensorFlow instead of numpy. That will allow TensorFlow to compute the gradients for you, so you will be able to back-propagate, meaning you can minimize the loss.

There is a tf.edit_distance (https://www.tensorflow.org/api_docs/python/tf/edit_distance) function in the core library.
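
For example, a minimal sketch along those lines (made-up values, following the linked API, which takes SparseTensor inputs) might look like:

import tensorflow as tf

# Hypothesis sequence [1, 2] vs. truth sequence [1, 3, 2], batch of one.
hypothesis = tf.SparseTensor(indices=[[0, 0], [0, 1]],
                             values=[1, 2],
                             dense_shape=[1, 2])
truth = tf.SparseTensor(indices=[[0, 0], [0, 1], [0, 2]],
                        values=[1, 3, 2],
                        dense_shape=[1, 3])

# normalize=False returns the raw edit distance; True divides by the truth length.
distance = tf.edit_distance(hypothesis, truth, normalize=False)

with tf.Session() as sess:
    print(sess.run(distance))  # [1.]: one insertion turns [1, 2] into [1, 3, 2]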

    "So naturally, to fix that, I would try to implement it in TensorFlow. The issue is that my logit_lengths and label_lengths are both tensors, so when I try to access a single element, I get back a Tensor of shape []. That is a problem when I try to use tf.nn.top_k(), which takes an Int as its k parameter."

Could you provide some more details about your problem?
