
Keras: multiple inputs and one output problem

self.embed = Sequential([Embedding(9488, output_dim=512, input_length=14),
                         Activation('relu'),
                         Dropout(0.5)], name='embed.0')

self.fc_embed = Sequential([Dense(512, input_shape=(10, 2048)),
                            Activation('relu'),
                            Dropout(0.5)], name='fc_embed.0')

inputs_bedding = Input(shape=(10,))
xt = self.embed(inputs_bedding)

input_feats = Input(shape=(10, 2048))
fc_feats = self.fc_embed(input_feats)

fc_feats_new = K.reshape(fc_feats, [fc_feats.shape[1], fc_feats.shape[2]])
xt_new = K.reshape(xt, [xt.shape[1], xt.shape[2]])

prev_h = state[0][-1]  # shape is (10, 512)
att_lstm_input = Concatenate([prev_h, fc_feats_new, xt_new], axis=1)
lstm, h_att, c_att = LSTM(units=512, name='core.att_lstm', return_state=True)(att_lstm_input)
model = Model([input_feats, inputs_att, inputs_bedding], lstm)
model.summary()

This is the error I get:

File "copy_eval.py", line 165, in <module>
  model1 = TopDownModel.forward(fc_feats, att_feats, seq, att_masks)
File "/home/ubuntu/misc/customize_keras.py", line 127, in forward
  lstm, h_att, c_att = LSTM(units=512, name='core.att_lstm', return_state=True)(att_lstm_input)
File "/usr/local/lib/python2.7/dist-packages/keras/layers/recurrent.py", line 500, in __call__
  return super(RNN, self).__call__(inputs, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 575, in __call__
  self.assert_input_compatibility(inputs)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 448, in assert_input_compatibility
  str(inputs) + '. All inputs to the layer '
ValueError: Layer core.att_lstm was called with an input that isn't a symbolic tensor. Received type: . Full input: []. All inputs to the layer should be tensors.

With multiple inputs like this, how do I merge them into one output?

1 Answer

  • 0

    Concatenate should be used as a layer, like this:

    att_lstm_input = Concatenate(axis=1)([prev_h, fc_feats_new, xt_new])
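As a quick shape sanity check, here is a sketch with plain NumPy stand-ins for the three symbolic tensors (shapes `(10, 512)` are taken from the question's reshapes; no Keras needed to see the effect). Joining along `axis=1`, as `Concatenate(axis=1)([...])` does, yields one `(10, 1536)` tensor, which is a single valid input for the LSTM layer:

```python
import numpy as np

# NumPy stand-ins for the three tensors in the question,
# each of shape (10, 512) per the question's reshapes.
prev_h = np.zeros((10, 512))
fc_feats_new = np.zeros((10, 512))
xt_new = np.zeros((10, 512))

# np.concatenate along axis 1 mirrors what the Concatenate(axis=1)
# layer produces: one tensor joined on the feature axis.
att_lstm_input = np.concatenate([prev_h, fc_feats_new, xt_new], axis=1)
print(att_lstm_input.shape)  # (10, 1536)
```

The original call `Concatenate([prev_h, fc_feats_new, xt_new], axis=1)` passed the tensor list to the layer's constructor instead of calling the layer on them, so the LSTM received a layer object rather than a tensor, which is exactly what the ValueError complains about.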
    
