
Input shape for stacked RNNs


I am trying to chain several RNNs together using Keras with the TensorFlow backend. I can create a model with a single SimpleRNN layer, but when I try to add a second SimpleRNN layer, I cannot work out what the appropriate input size should be.

from keras import models
from keras.layers.recurrent import SimpleRNN
from keras.layers import Activation


model = models.Sequential()

hidden_units = 256
skeleton_dimensions = 3 * 16  # 3 dimensions for 16 joints
input_temporal_length = 7

in_shape = (input_temporal_length, skeleton_dimensions,)

# three hidden layers of 256 each
model.add(SimpleRNN(hidden_units, input_shape=in_shape,
                    activation='relu', use_bias=True,))
# what input shape is this supposed to have?
model.add(SimpleRNN(hidden_units, input_shape=(1, skeleton_dimensions,),
                    activation='relu', use_bias=True,))

What input shape is my second SimpleRNN supposed to have?

The documentation for Recurrent Layers seems to imply:

Output shape: if return_sequences: 3D tensor with shape (batch_size, timesteps, units). Otherwise, 2D tensor with shape (batch_size, units).

Given that return_sequences defaults to False, I tried to set the next layer's input_shape accordingly, but I get this error:

Using TensorFlow backend.
Traceback (most recent call last):
  File "rnn_agony.py", line 19, in <module>
    activation='relu', use_bias=True,))
  File "/usr/local/lib/python3.5/dist-packages/keras/models.py", line 455, in add
    output_tensor = layer(self.outputs[0])
  File "/usr/local/lib/python3.5/dist-packages/keras/layers/recurrent.py", line 252, in __call__
    return super(Recurrent, self).__call__(inputs, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py", line 511, in __call__
    self.assert_input_compatibility(inputs)
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py", line 413, in assert_input_compatibility
    str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer simple_rnn_2: expected ndim=3, found ndim=2
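
For reference, the mismatch is visible from the first layer's output shape alone. A minimal check (a sketch, assuming Keras 2.x with the TensorFlow backend and the same sizes as in the model above):

from keras import models
from keras.layers.recurrent import SimpleRNN

probe = models.Sequential()
# Default return_sequences=False: the layer returns only the last output,
# so the output is 2D (batch_size, units) rather than 3D.
probe.add(SimpleRNN(256, input_shape=(7, 3 * 16), activation='relu'))
print(probe.output_shape)  # (None, 256) -> ndim=2, but the next RNN expects ndim=3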

1 Answer

  • 1

    If you want to stack RNNs, you need to set return_sequences=True, and you will no longer need to set input_shape on the second layer. This makes intuitive sense, because an RNN expects a sequence as input; see the sketch below.
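
    A minimal sketch of the stacked model following this advice (assuming the same Keras 2.x / TensorFlow setup and the layer sizes from the question):

    from keras import models
    from keras.layers.recurrent import SimpleRNN

    hidden_units = 256
    skeleton_dimensions = 3 * 16  # 3 dimensions for 16 joints
    input_temporal_length = 7

    model = models.Sequential()

    # First layer: return_sequences=True keeps the output 3D
    # (batch_size, timesteps, units), so the next RNN can consume it.
    model.add(SimpleRNN(hidden_units,
                        input_shape=(input_temporal_length, skeleton_dimensions),
                        return_sequences=True,
                        activation='relu', use_bias=True))

    # Second layer: no input_shape needed; Keras infers it from the previous layer.
    model.add(SimpleRNN(hidden_units,
                        activation='relu', use_bias=True))

    model.summary()

    Only the last RNN in the stack keeps the default return_sequences=False, since the layers after it (e.g. a Dense output) typically expect a 2D tensor.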
