I have some code here that performs a simple encoder/decoder operation, using rnn(BasicLSTMCell, inputs) as the encoder and rnn_decoder(decoder_inputs, states, BasicLSTMCell) as the decoder. I have a get_batch function that advances to the next batch and returns a numpy matrix of size [batch_size, LSTMSize]. Unfortunately, because I loop over the data to fetch each batch, I end up reusing some of the variables inside the "loss_optimize" scope. I have already tried setting reuse=True on the scope where the variables are created, but it had no effect.
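
For context, get_batch is essentially a sequential slicer over the training arrays. A simplified sketch (the exact slicing logic here is an assumption; the real version has more bookkeeping):

    import numpy as np

    def get_batch(self, features, targets):
        # Take the next batch_size rows; self.step is advanced by the
        # training loop below, and we wrap around at the end of the data.
        start = (self.step * self.params["batch_size"]) % len(features)
        end = start + self.params["batch_size"]
        input_batch = np.asarray(features[start:end], dtype=np.float32)
        output_batch = np.asarray(targets[start:end], dtype=np.float32)
        return input_batch, output_batch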

    # (Uses: import tensorflow as tf;
    #  from tensorflow.python.ops import rnn, seq2seq, variable_scope)
    # Placeholders for one batch of encoder inputs and decoder targets.
    encoder_batch = tf.placeholder(
        tf.float32, shape=[None, self.params["time_steps"]])
    decoder_batch = tf.placeholder(
        tf.float32, shape=[None, self.params["time_steps"]])

    with variable_scope.variable_scope("simple_seq2seq"):
        # Encoder: run the input batch through the encoder LSTM.
        with variable_scope.variable_scope("encoder_scope"):
            encoder_output, encoder_state = rnn.rnn(self.cell["encoder"],
                                                    [encoder_batch], dtype=tf.float32)
        # Decoder: seeded with the encoder's final state.
        with variable_scope.variable_scope("decoder_scope"):
            decoder_output, decoder_state = seq2seq.rnn_decoder([decoder_batch],
                                                                encoder_state,
                                                                self.cell["decoder"])

        with variable_scope.variable_scope("loss_optimize") as scope:
            loss = tf.get_variable(
                "loss", [self.params["batch_size"], self.params["time_steps"]],
                dtype=self.dtype)
            # Element-wise error between decoder output and targets,
            # reduced to a scalar for the optimizer.
            loss = tf.subtract(decoder_output, decoder_batch)
            mean_loss = tf.reduce_mean(loss)
            optimize_loss = tf.train.AdagradOptimizer(
                self.params['learning_rate']).minimize(mean_loss)

        init_op = tf.global_variables_initializer()

    with tf.Session() as sess:
        sess.run(init_op)
        # previous_losses, predictions = [], []
        self.step = 0
        while self.step * self.params['batch_size'] < self.params['training_steps']:
            input_batch, output_batch = self.get_batch(train_data["features"],
                                                       train_data["targets"])
            if self.step > 0:
                # After the first pass, mark the scope's variables as reusable.
                scope.reuse_variables()
            ol = sess.run(optimize_loss, feed_dict={encoder_batch: input_batch,
                                                    decoder_batch: output_batch})
            # sess.run()
            # previous_losses.append(loss.eval())
            # predictions.append(decoder_output.eval())
            if self.step % self.params["print_steps"] == 0:
                print("loss: " + repr(ol) + " step: " +
                      repr(self.step * self.params["batch_size"]))
            self.step += 1
    self.initial_states = {"encoder": encoder_state,
                           "decoder": decoder_state}

On the variable reuse, I get the following error:

    ValueError: Variable simple_seq2seq/encoder_scope/RNN/BasicLSTMCell/Linear/Matrix already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

      File "/home/ophamster/src/python/seq2seq_model/seq2seq_model.py", line 164, in train
        [encoder_batch], dtype=tf.float32)
      File "lstm_seq2seq.py", line 90, in <module>
        m.train(train_data)
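
The reuse=True attempt I mentioned looked roughly like this (a sketch of the pattern, not my exact code; I re-entered the same scope with reuse enabled before building that part of the graph again):

    # What I tried: re-entering the scope with reuse=True so that
    # get_variable() should hand back the existing variables instead
    # of raising. It made no difference to the error above.
    with variable_scope.variable_scope("simple_seq2seq", reuse=True):
        with variable_scope.variable_scope("loss_optimize"):
            loss = tf.get_variable(
                "loss", [self.params["batch_size"], self.params["time_steps"]],
                dtype=self.dtype)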