My whole network architecture is a bidirectional LSTM followed by an MLP layer.

The input to the MLP layer is the concatenation of four of the LSTM outputs. Pseudocode for my network is below:

import numpy as np
import tensorflow as tf

lstm_hidden = 125        # hyperparameter defined elsewhere; 125 assumed here so 8 * lstm_hidden matches the 1000-wide MLP placeholder
mlp_hidden_num = 1000    # hyperparameter defined elsewhere; 1000 assumed here

X = tf.placeholder(tf.float64, [1, 95, 200])      # batch of 1 sentence, 95 steps, 200-dim inputs
Y = tf.placeholder(tf.float64, [95, 2])           # per-step binary labels
MLP = tf.placeholder(tf.float64, [95, 1000])      # meant to receive the concatenated LSTM encodings
sen_seq_len = tf.placeholder(tf.int32, [1])       # actual sequence length
weights = {
    'MLP_W1': tf.Variable(tf.random_uniform([8 * lstm_hidden, mlp_hidden_num], -0.01, 0.01, dtype=tf.float64)),
    'MLP_W2': tf.Variable(tf.random_uniform([mlp_hidden_num, 2], -0.01, 0.01, dtype=tf.float64))
}

biases = {
    'MLP_W1': tf.Variable(tf.random_normal([mlp_hidden_num], -0.01, 0.01, dtype=tf.float64)),
    'MLP_W2': tf.Variable(tf.random_normal([2], -0.01, 0.01, dtype=tf.float64))
}

fw_cell = tf.nn.rnn_cell.LSTMCell(lstm_hidden)
bw_cell = tf.nn.rnn_cell.LSTMCell(lstm_hidden)

outputs, states = tf.nn.bidirectional_dynamic_rnn(fw_cell, bw_cell, inputs=X, sequence_length=sen_seq_len,
                                                  dtype=tf.float64)
outputs = tf.concat(axis=2, values=outputs)    # concat forward/backward outputs: shape [1, 95, 2 * lstm_hidden]

# the four encodings to pick differ at each step; indices 0, 1, 4, 6 are just one example
MLP_input = tf.reshape(tf.gather(outputs[0], [0, 1, 4, 6]), [1, 8 * lstm_hidden])    # [1, 8 * lstm_hidden]

# two-layer MLP: ReLU hidden layer followed by a linear output layer
Z1 = tf.matmul(MLP_input, weights['MLP_W1']) + biases['MLP_W1']
A1 = tf.nn.relu(Z1)
Z2 = tf.matmul(A1, weights['MLP_W2']) + biases['MLP_W2']
yhat = Z2

When I pass the concatenated LSTM encodings through feed_dict, I get this error:

TypeError: The value of a feed cannot be a tf.Tensor object.
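
As far as I understand, feed_dict values have to be concrete Python/NumPy data rather than symbolic tensors, i.e. (using the names from the pseudocode above):

ok_feed  = {MLP: np.zeros((95, 1000))}   # NumPy array: an acceptable feed value
bad_feed = {MLP: MLP_input}              # MLP_input is a tf.Tensor, which triggers the TypeError above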

Am I misunderstanding what the LSTM cells output?

How can I feed the LSTM output of each step through feed_dict?

Or is there another framework in which I could implement my network?
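
One alternative I am considering is to keep the whole computation inside the graph and feed only the step indices instead of the encodings themselves, roughly like the sketch below. The pick_idx placeholder, the hyperparameter values, and the zero-filled feed are my own illustration, not part of my current code. Would something along these lines be the right approach?

import numpy as np
import tensorflow as tf

lstm_hidden = 125       # assumed: 8 * 125 = 1000, the width of the MLP input
mlp_hidden_num = 1000   # assumed MLP hidden size
seq_len = 95

X = tf.placeholder(tf.float64, [1, seq_len, 200])
sen_seq_len = tf.placeholder(tf.int32, [1])
pick_idx = tf.placeholder(tf.int32, [seq_len, 4])   # which 4 encodings feed the MLP at each step

fw_cell = tf.nn.rnn_cell.LSTMCell(lstm_hidden)
bw_cell = tf.nn.rnn_cell.LSTMCell(lstm_hidden)
outputs, _ = tf.nn.bidirectional_dynamic_rnn(fw_cell, bw_cell, inputs=X,
                                             sequence_length=sen_seq_len, dtype=tf.float64)
encodings = tf.concat(outputs, axis=2)[0]            # [95, 2 * lstm_hidden]

picked = tf.gather(encodings, pick_idx)              # [95, 4, 2 * lstm_hidden]
MLP_input = tf.reshape(picked, [seq_len, 8 * lstm_hidden])

W1 = tf.Variable(tf.random_uniform([8 * lstm_hidden, mlp_hidden_num], -0.01, 0.01, dtype=tf.float64))
b1 = tf.Variable(tf.zeros([mlp_hidden_num], dtype=tf.float64))
W2 = tf.Variable(tf.random_uniform([mlp_hidden_num, 2], -0.01, 0.01, dtype=tf.float64))
b2 = tf.Variable(tf.zeros([2], dtype=tf.float64))
yhat = tf.matmul(tf.nn.relu(tf.matmul(MLP_input, W1) + b1), W2) + b2   # [95, 2]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(yhat, feed_dict={X: np.zeros((1, seq_len, 200)),
                                    sen_seq_len: [seq_len],
                                    pick_idx: np.zeros((seq_len, 4), dtype=np.int32)})

With this wiring, only NumPy arrays ever go through feed_dict, and the per-step selection of encodings happens with tf.gather inside the graph.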

My main source code is here:

https://github.com/taebinalive/dp-bilstm/blob/master/main.py