
Keras LSTM states


I want to run an LSTM in Keras and get both the outputs and the states. In TF there is something like this:

# outputs = [] and state = cell.zero_state(...) are set up before this loop
with tf.variable_scope("RNN"):
    for time_step in range(num_steps):
        if time_step > 0: tf.get_variable_scope().reuse_variables()
        (cell_output, state) = cell(inputs[:, time_step, :], state)
        outputs.append(cell_output)

Is there a way to do this in Keras, so that I can get the last state and feed it in as the initial state for new input when the sequence is very long? I know about stateful=True, but I want access to the states while I am training. I know Keras uses scan rather than a for loop, but basically I want to save the states and then, on the next run, make them the starting states of the LSTM. In short: get both the outputs and the states.
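To make the goal concrete, here is the pattern I am after, as a pure-Python toy (the `rnn_step` cell and its constants are made up for illustration; a real LSTM has gates and a cell state):

```python
import math

def rnn_step(x, h):
    # toy recurrent cell: new hidden state from input and previous state
    # (0.5 and 0.8 are made-up weights, just for illustration)
    return math.tanh(0.5 * x + 0.8 * h)

def run_chunk(xs, h0):
    """Run the cell over one chunk; return all outputs AND the final state."""
    h = h0
    outputs = []
    for x in xs:
        h = rnn_step(x, h)
        outputs.append(h)
    return outputs, h

# carry the state across chunks of a long sequence
sequence = [0.1, 0.2, 0.3, 0.4]
out_a, state = run_chunk(sequence[:2], h0=0.0)
out_b, state = run_chunk(sequence[2:], h0=state)

# this is equivalent to running the whole sequence in one go
out_full, _ = run_chunk(sequence, h0=0.0)
assert out_a + out_b == out_full
```

The point is that each call has to return the final state, not just the outputs, so the next call can resume from it.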

1 Answer


    Since LSTM is a layer, and a layer in Keras can only have one output (correct me if I am wrong), you **cannot** get both outputs at the same time without modifying the source code.

    Recently I have been hacking Keras to implement some advanced structures, and some ideas you might not like do work. What I do is **override the Keras layer** so that we can access the tensor that represents the hidden states.

    First, you can check the `call()` function in keras/layers/recurrent.py to see how Keras works:

    def call(self, x, mask=None):
        # input shape: (nb_samples, time (padded with zeros), input_dim)
        # note that the .build() method of subclasses MUST define
        # self.input_spec with a complete input shape.
        input_shape = self.input_spec[0].shape
        if K._BACKEND == 'tensorflow':
            if not input_shape[1]:
                raise Exception('When using TensorFlow, you should define '
                                'explicitly the number of timesteps of '
                                'your sequences.\n'
                                'If your first layer is an Embedding, '
                                'make sure to pass it an "input_length" '
                                'argument. Otherwise, make sure '
                                'the first layer has '
                                'an "input_shape" or "batch_input_shape" '
                                'argument, including the time axis. '
                                'Found input shape at layer ' + self.name +
                                ': ' + str(input_shape))
        if self.stateful:
            initial_states = self.states
        else:
            initial_states = self.get_initial_states(x)
        constants = self.get_constants(x)
        preprocessed_input = self.preprocess_input(x)
    
        last_output, outputs, states = K.rnn(self.step, preprocessed_input,
                                             initial_states,
                                             go_backwards=self.go_backwards,
                                             mask=mask,
                                             constants=constants,
                                             unroll=self.unroll,
                                             input_length=input_shape[1])
        if self.stateful:
            self.updates = []
            for i in range(len(states)):
                self.updates.append((self.states[i], states[i]))
    
        if self.return_sequences:
            return outputs
        else:
            return last_output
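    Notice what `K.rnn` returns: `(last_output, outputs, states)`. The final states are already computed; the stock layer simply discards them unless `stateful=True`. Its contract can be sketched in plain Python (a toy scan, not the actual backend implementation):

```python
def toy_rnn(step_fn, inputs, initial_states):
    """Minimal sketch of the K.rnn contract: scan step_fn over the time
    axis and return (last_output, all_outputs, final_states)."""
    states = list(initial_states)
    outputs = []
    for x_t in inputs:                      # iterate over time steps
        output, states = step_fn(x_t, states)
        outputs.append(output)
    return outputs[-1], outputs, states

# toy step function: the output is a running sum carried in the state
step = lambda x, states: (states[0] + x, [states[0] + x])
last, outs, final = toy_rnn(step, [1, 2, 3], [0])
assert (last, outs, final) == (6, [1, 3, 6], [6])
```

So the hack below is just about exposing `states` instead of throwing it away.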
    

    Second, we override the layer. Here is a simple script:

    import keras.backend as K
    from keras.layers import Input, LSTM

    class MyLSTM(LSTM):
        def call(self, x, mask=None):
            # you should copy **exactly the same code** from keras.layers.recurrent
            # .... blablabla, right before the return

            # we add this line to get access to the states
            self.extra_output = states

            if self.return_sequences:
                # .... blablabla, to the end
                pass

    I = Input(shape=(...))
    lstm = MyLSTM(20)
    output = lstm(I)  # by calling, we actually run `call()` and create `lstm.extra_output`
    extra_output = lstm.extra_output  # refer to the target

    # use a backend function to calculate them **simultaneously**
    calculate_function = K.function(inputs=[I], outputs=extra_output + [output])
    
