Keras seq2seq - word embeddings

I am building a generative chatbot based on Keras seq2seq. I used the code from this site: https://machinelearningmastery.com/develop-encoder-decoder-model-sequence-sequence-prediction-keras/

My model looks like this:

from keras.models import Model
from keras.layers import Input, LSTM, Dense

# define training encoder
encoder_inputs = Input(shape=(None, n_input))
encoder = LSTM(n_units, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
encoder_states = [state_h, state_c]

# define training decoder
decoder_inputs = Input(shape=(None, n_output))
decoder_lstm = LSTM(n_units, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = Dense(n_output, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# define inference encoder
encoder_model = Model(encoder_inputs, encoder_states)

# define inference decoder
decoder_state_input_h = Input(shape=(n_units,))
decoder_state_input_c = Input(shape=(n_units,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states)
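
Training this model would then look roughly like the sketch below; X1, X2 and y are hypothetical one-hot arrays (encoder input, decoder input, and the decoder target shifted one step), and the optimizer and epoch settings are placeholders, not part of the tutorial code above.

# A minimal sketch, assuming hypothetical arrays X1, X2, y of shape
# (samples, timesteps, n_input) / (samples, timesteps, n_output).
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit([X1, X2], y, epochs=30, batch_size=64)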

This neural network is designed to work with one-hot encoded vectors, and the input to the network looks like this:

[[[0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0.]
  [0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0.]
  [0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0.]]
  [[0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0.]
  [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.
   0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0.]
  [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.
   0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 0.]]]
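
One-hot input like this can be produced from integer word IDs with keras.utils.to_categorical; here is a minimal sketch, where the IDs and the vocabulary size of 50 are assumptions chosen to match the vectors above:

import numpy as np
from keras.utils import to_categorical

# Two integer-encoded sentences of three words each (hypothetical IDs).
sequences = np.array([[4, 7, 2], [4, 19, 19]])
# One-hot encode: shape (2, 3, 50), one 50-dimensional vector per word.
one_hot = to_categorical(sequences, num_classes=50)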

How can I rebuild these models to work with words? I would like to use a word embedding layer, but I don't know how to connect the embedding layer to these models.

My input should look like [[1,5,6,7,4], [4,5,7,5,4], [7,5,4,2,1]], where the integers represent words.
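
Integer sequences like this are typically produced with a tokenizer. A minimal sketch using keras.preprocessing, with made-up example sentences:

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

texts = ["hello how are you", "i am fine", "see you later"]
tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)
# Map each word to an integer ID (frequent words get the smallest IDs).
sequences = tokenizer.texts_to_sequences(texts)
# Pad to a fixed length so the sequences can be batched together.
padded = pad_sequences(sequences, maxlen=5, padding='post')
vocab_size = len(tokenizer.word_index) + 1  # +1 for the padding index 0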

I have tried everything, but I still get errors. Can you help me?

2 Answers

  • 0

I finally got it working. Here is the code:

    from keras.models import Model
    from keras.layers import Input, LSTM, Dense, Embedding

    # training encoder, with a shared embedding layer that turns
    # integer word IDs into dense vectors
    encoder_inputs = Input(shape=(sentenceLength,), name="Encoder_input")
    encoder = LSTM(n_units, return_state=True, name='Encoder_lstm')
    Shared_Embedding = Embedding(output_dim=embedding, input_dim=vocab_size, name="Embedding")
    word_embedding_context = Shared_Embedding(encoder_inputs)
    encoder_outputs, state_h, state_c = encoder(word_embedding_context)
    encoder_states = [state_h, state_c]

    # training decoder, reusing the same embedding layer
    decoder_inputs = Input(shape=(sentenceLength,), name="Decoder_input")
    decoder_lstm = LSTM(n_units, return_sequences=True, return_state=True, name="Decoder_lstm")
    word_embedding_answer = Shared_Embedding(decoder_inputs)
    decoder_outputs, _, _ = decoder_lstm(word_embedding_answer, initial_state=encoder_states)
    decoder_dense = Dense(vocab_size, activation='softmax', name="Dense_layer")
    decoder_outputs = decoder_dense(decoder_outputs)
    model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

    # inference encoder
    encoder_model = Model(encoder_inputs, encoder_states)

    # inference decoder
    decoder_state_input_h = Input(shape=(n_units,), name="H_state_input")
    decoder_state_input_c = Input(shape=(n_units,), name="C_state_input")
    decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
    decoder_outputs, state_h, state_c = decoder_lstm(word_embedding_answer, initial_state=decoder_states_inputs)
    decoder_states = [state_h, state_c]
    decoder_outputs = decoder_dense(decoder_outputs)
    decoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states)
    

    "model" is the training model; "encoder_model" and "decoder_model" are the inference models.
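
    For reference, a greedy decoding loop over the two inference models might look like the sketch below. The start_token/stop_token IDs and max_length are assumptions, and stepping one word at a time assumes the decoder input accepts variable-length sequences (e.g. declared with shape=(None,) rather than the fixed sentenceLength above).

    import numpy as np

    # A minimal sketch, assuming hypothetical start/stop token IDs and a
    # decoder input that accepts one token per step.
    def decode_sequence(input_seq, start_token, stop_token, max_length=20):
        # Encode the input sentence into the initial LSTM states.
        states = encoder_model.predict(input_seq)
        # Begin with the START token and feed each prediction back in.
        target_seq = np.array([[start_token]])
        decoded = []
        for _ in range(max_length):
            output, h, c = decoder_model.predict([target_seq] + states)
            word_id = int(np.argmax(output[0, -1, :]))
            if word_id == stop_token:
                break
            decoded.append(word_id)
            target_seq = np.array([[word_id]])
            states = [h, c]
        return decoded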

  • 3

    In the FAQ section of this example, they provide an example of how to use embeddings with seq2seq. When I get it working, I'll post it here. https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html
