I am trying to build an LSTM encoder-decoder, and my main goal is for the decoder's initial state to be the same as the encoder's. I found the code below from here and tried to adapt it to my case. I have data of shape (10000, 20, 1). I want the encoder-decoder to reproduce my input at the output. I don't know how to correct the code so that it runs, even though I understand the error. When I try to run it, I get the following error:
The model expects 2 input arrays, but only received one array. Found:
array with shape (10000, 20, 1)
from keras.models import Model
from keras.layers import Input
from keras.layers import LSTM
from keras.layers import Dense
from keras.models import Sequential
latent_dim = 128
encoder_inputs = Input(shape=(20,1))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(20, 1))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = Dense(1, activation='tanh')
decoder_outputs = decoder_dense(decoder_outputs)
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='adam', loss='mse', metrics=['acc', 'mae'])
history=model.fit(xtrain, xtrain, epochs=200, verbose=2, shuffle=False)
I also have this model, but I don't know how to make the decoder's initial state the same as the encoder's state here. Is RepeatVector what does this?
# define model
from keras.layers import RepeatVector, TimeDistributed  # imports this snippet needs
model = Sequential()
model.add(LSTM(100, input_shape=(n_timesteps_in, n_features)))
model.add(RepeatVector(n_timesteps_in))
model.add(LSTM(100, return_sequences=True))
model.add(TimeDistributed(Dense(n_features, activation='tanh')))
model.compile(loss='mse', optimizer='adam', metrics=['mae'])
history=model.fit(train, train, epochs=epochs, verbose=2, shuffle=False)
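(For reference, a small check of what RepeatVector actually does: it does not carry the LSTM's hidden/cell states into the next layer; it only tiles the encoder's final output vector along the time axis. A minimal sketch, with made-up shapes:)

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Input, RepeatVector

# RepeatVector(3) turns a (batch, 4) tensor into (batch, 3, 4)
# by copying the same vector at every timestep.
model = Sequential([Input(shape=(4,)), RepeatVector(3)])

x = np.arange(8, dtype='float32').reshape(2, 4)
out = model.predict(x, verbose=0)
print(out.shape)  # (2, 3, 4)
```

Every timestep of `out` is an identical copy of the input row, so the decoder LSTM receives the encoder's output as its *input* at each step, not as its initial state.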
1 Answer
You are building a model with 2 inputs, namely encoder_inputs and decoder_inputs, but you are providing only one input in .fit(xtrain, xtrain, ...); the second argument there is the output. You need to provide the decoder's input as well, in the form .fit([xtrain, the_inputs_for_decoder], xtrain, ...)
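Concretely, here is a minimal runnable sketch of the fix, assuming you want a plain reconstruction autoencoder so the same array serves as both encoder and decoder input (in a true seq2seq setup the decoder input would instead be the target sequence shifted by one step). The small latent size and dummy data are just for illustration:

```python
import numpy as np
from keras.models import Model
from keras.layers import Input, LSTM, Dense

latent_dim = 16                       # small for the sketch; yours is 128
xtrain = np.random.rand(100, 20, 1)   # dummy data standing in for your array

# Encoder: keep only the final hidden and cell states.
encoder_inputs = Input(shape=(20, 1))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: start from the encoder's states via initial_state.
decoder_inputs = Input(shape=(20, 1))
decoder_outputs, _, _ = LSTM(latent_dim, return_sequences=True,
                             return_state=True)(decoder_inputs,
                                                initial_state=[state_h, state_c])
decoder_outputs = Dense(1, activation='tanh')(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='adam', loss='mse')

# The fix: pass BOTH input arrays as a list; the target stays the second argument.
model.fit([xtrain, xtrain], xtrain, epochs=1, verbose=0)
pred = model.predict([xtrain, xtrain], verbose=0)
print(pred.shape)  # (100, 20, 1)
```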