
TensorFlow: attention output concatenated with the next decoder input causes a dimension mismatch in a seq2seq model


[TF 1.8] I am trying to build a seq2seq model for a toy chatbot in order to learn TensorFlow and deep learning. I was able to train and run the model with sampled softmax and beam search, but then I tried to apply tf.contrib.seq2seq.LuongAttention with tf.contrib.seq2seq.AttentionWrapper, and I hit the following error while building the graph:

ValueError: Dimensions must be equal, but are 384 and 256 for 'rnn/while/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/MatMul_2' (op: 'MatMul') with input shapes: [64,384], [256,512].

Here is my model:

class ChatBotModel:

    def __init__(self, inferring=False, batch_size=1, use_sample_sofmax=True):
        """forward_only: if set, we do not construct the backward pass in the model.
        """
        print('Initialize new model')
        self.inferring = inferring
        self.batch_size = batch_size
        self.use_sample_sofmax = use_sample_sofmax


    def build_graph(self):
        # INPUTS
        self.X = tf.placeholder(tf.int32, [None, None])
        self.Y = tf.placeholder(tf.int32, [None, None])
        self.X_seq_len = tf.placeholder(tf.int32, [None])
        self.Y_seq_len = tf.placeholder(tf.int32, [None])


        self.gl_step = tf.Variable(
                      0, dtype=tf.int32, trainable=False, name='global_step')

        single_cell = tf.nn.rnn_cell.BasicLSTMCell(128)
        keep_prob = tf.cond(tf.convert_to_tensor(self.inferring, tf.bool), lambda: tf.constant(
            1.0), lambda: tf.constant(0.8))
        single_cell = tf.contrib.rnn.DropoutWrapper(
            single_cell, output_keep_prob=keep_prob)
        encoder_cell = tf.contrib.rnn.MultiRNNCell([single_cell for _ in range(2)])

        # ENCODER         
        encoder_out, encoder_state = tf.nn.dynamic_rnn(
            cell = encoder_cell, 
            inputs = tf.contrib.layers.embed_sequence(self.X, 10000, 128),
            sequence_length = self.X_seq_len,
            dtype = tf.float32)
        # encoder_state is ((cell0_c, cell0_h), (cell1_c, cell1_h))

        # DECODER INPUTS
        after_slice = tf.strided_slice(self.Y, [0, 0], [self.batch_size, -1], [1, 1])
        decoder_inputs = tf.concat( [tf.fill([self.batch_size, 1], 2), after_slice], 1)

        # ATTENTION
        attention_mechanism = tf.contrib.seq2seq.LuongAttention(
            num_units = 128, 
            memory = encoder_out,
            memory_sequence_length = self.X_seq_len)

        # DECODER COMPONENTS
        Y_vocab_size = 10000
        decoder_cell = tf.contrib.rnn.MultiRNNCell([single_cell for _ in range(2)])
        decoder_cell = tf.contrib.seq2seq.AttentionWrapper(
            cell = decoder_cell,
            attention_mechanism = attention_mechanism,
            attention_layer_size=128)
        decoder_embedding = tf.Variable(tf.random_uniform([Y_vocab_size, 128], -1.0, 1.0))
        projection_layer = CustomDense(Y_vocab_size)
        if self.use_sample_sofmax:
            softmax_weight = projection_layer.kernel
            softmax_biases = projection_layer.bias

        if not self.inferring:
            # TRAINING DECODER
            training_helper = tf.contrib.seq2seq.TrainingHelper(
                inputs = tf.nn.embedding_lookup(decoder_embedding, decoder_inputs),
                sequence_length = self.Y_seq_len,
                time_major = False)

            decoder_initial_state = decoder_cell.zero_state(self.batch_size, dtype=tf.float32).clone(
                cell_state=encoder_state)

            training_decoder = tf.contrib.seq2seq.BasicDecoder(
                cell = decoder_cell,
                helper = training_helper,
                initial_state = decoder_initial_state,
                output_layer = projection_layer
            )
            training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
                decoder = training_decoder,
                impute_finished = True,
                maximum_iterations = tf.reduce_max(self.Y_seq_len))
            training_logits = training_decoder_output.rnn_output

            # LOSS
            softmax_loss_function = None
            if self.use_sample_sofmax:
                def sampled_loss(labels, logits):
                    labels = tf.reshape(labels, [-1, 1])
                    return tf.nn.sampled_softmax_loss(weights=softmax_weight,
                                                      biases=softmax_biases,
                                                      labels=labels,
                                                      inputs=logits,
                                                      num_sampled=64,
                                                      num_classes=10000)
                softmax_loss_function = sampled_loss

            masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)
            self.loss = tf.contrib.seq2seq.sequence_loss(logits = training_logits, targets = self.Y, weights = masks, softmax_loss_function=softmax_loss_function)

            # BACKWARD
            params = tf.trainable_variables()
            gradients = tf.gradients(self.loss, params)
            clipped_gradients, _ = tf.clip_by_global_norm(gradients, 5.0)
            self.train_op = tf.train.AdamOptimizer().apply_gradients(zip(clipped_gradients, params), global_step=self.gl_step)
        else:
            encoder_states = []
            for i in range(2):
                if isinstance(encoder_state[i],tf.contrib.rnn.LSTMStateTuple):
                    encoder_state_c = tf.contrib.seq2seq.tile_batch(encoder_state[i].c, multiplier=2)
                    encoder_state_h = tf.contrib.seq2seq.tile_batch(encoder_state[i].h, multiplier=2)
                    encoder_state = tf.contrib.rnn.LSTMStateTuple(c=encoder_state_c, h=encoder_state_h)
                encoder_states.append(encoder_state)
            encoder_states = tuple(encoder_states)

            predicting_decoder = tf.contrib.seq2seq.BeamSearchDecoder(
                cell = decoder_cell,
                embedding = decoder_embedding,
                start_tokens = tf.tile(tf.constant([2], dtype=tf.int32), [self.batch_size]),
                end_token = 3,
                initial_state = decoder_initial_state,
                beam_width = 2,
                output_layer = projection_layer,
                length_penalty_weight = 0.0)
            predicting_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
                decoder = predicting_decoder,
                impute_finished = False,
                maximum_iterations = 4 * tf.reduce_max(self.Y_seq_len))
            self.predicting_logits = predicting_decoder_output.predicted_ids

Tracing back a few lines of the log, I found that the error occurs here:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/rnn_cell_impl.py in call(self, inputs, state)
    636 
    637     gate_inputs = math_ops.matmul(
--> 638         array_ops.concat([inputs, h], 1), self._kernel)
    639     gate_inputs = nn_ops.bias_add(gate_inputs, self._bias)

I checked the 'h' tensor of the LSTM cell and it has shape [batch_size, 128], so my guess is that the attention output from the previous decoding step is concatenated with the current decoder input, making 'inputs' have shape [batch_size, 256]; that is then concatenated with the 'h' tensor into a [batch_size, 384] tensor, which causes this error.
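To make that guess concrete, here is a rough sketch of the shape bookkeeping (assuming a batch size of 64 as in the error message; all the 128s are the num_units, embedding size, and attention_layer_size from the code above):

    batch_size = 64             # assumed, matching the error message
    embedding_size = 128        # decoder input embedding size
    attention_layer_size = 128  # attention output fed back by AttentionWrapper
    num_units = 128             # BasicLSTMCell hidden size

    # The encoder's BasicLSTMCell(128) builds its kernel for 128-dim inputs:
    encoder_kernel_shape = (embedding_size + num_units, 4 * num_units)   # (256, 512)

    # With AttentionWrapper, each decoder step's input is the embedded token
    # concatenated with the previous attention output:
    decoder_input_depth = embedding_size + attention_layer_size          # 256

    # Inside BasicLSTMCell.call: concat([inputs, h], 1) has depth 256 + 128 = 384,
    # so the matmul is [64, 384] x [256, 512] -- exactly the reported mismatch.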

My question is: is the attention output supposed to be concatenated with the next decoder input, or am I missing something? And how can I fix this error?

1 Answer


    You have probably found the answer already, but for other folks who hit this error (like me): focus on the second shape. It says [256, 512]. Now open "rnn_cell_impl.py" and go to the line where the concat op takes place. You will notice that the kernel shape being reported is the one out of sync with your decoder input (which has num_units plus attention_layer_size as dimension 1, and your batch_size as dimension 0).

    Basically, you are reusing in your decoder the same cell you created for the encoder (it is a 2-layer LSTM with 128 units, right?), which is why the kernel size shows up as 256, 512. To fix this, add the following in between these two lines:

    Y_vocab_size = 10000
    ## create a new base rnn cell for the decoder
    decode_op_cell = tf.nn.rnn_cell.BasicLSTMCell(128)
    ## stack the new cell into the decoder's MultiRNNCell
    decoder_cell = tf.contrib.rnn.MultiRNNCell([decode_op_cell for _ in range(2)])
    

    Now, if you visualize the code at the same line that gave the error, you will see [64, 384] and [384, 512] (which is a legal matmul and should fix your error). Of course, whatever dropout etc. you want to add, feel free to add it to this decode_op_cell as well; a sketch is shown below.
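    For example, a minimal sketch of a dropout-wrapped decoder stack plus the attention wrapper, reusing keep_prob and attention_mechanism from the question's build_graph. Note that it builds one fresh cell per layer (rather than reusing one object), so each layer's kernel is created for its own input depth:

    ## fresh base cells for the decoder, one per layer, each with dropout
    decoder_cells = []
    for _ in range(2):
        cell = tf.nn.rnn_cell.BasicLSTMCell(128)
        cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
        decoder_cells.append(cell)
    decoder_cell = tf.contrib.rnn.MultiRNNCell(decoder_cells)

    ## wrap with attention as before; layer 0 now builds its kernel for the
    ## 256-dim (embedding + attention) inputs instead of reusing the encoder's
    decoder_cell = tf.contrib.seq2seq.AttentionWrapper(
        cell=decoder_cell,
        attention_mechanism=attention_mechanism,
        attention_layer_size=128)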
