
ValueError when creating a Siamese network with TensorFlow


I am trying to use a Siamese network to determine whether two inputs are the same. Here is a brief summary of Siamese networks:

A Siamese network is composed of two identical neural networks with tied weights (both networks share the same weights). Given two inputs X_1 and X_2, X_1 is fed to the first network and X_2 is fed to the second. The outputs of the two networks are then combined to answer a single question: are the two inputs similar or different?
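As a minimal illustration of the idea (plain Python with a hypothetical toy `branch` function, not the TensorFlow model below), tied weights simply mean both branches apply the same parameters, and only the outputs differ:

```python
# Toy sketch of a siamese setup with tied weights (hypothetical
# example, not the TensorFlow model in the question): both branches
# apply the SAME weight vector, and the two outputs are combined
# into a similarity score.

def branch(weights, x):
    # Hypothetical one-layer "network": just a dot product.
    return sum(w * xi for w, xi in zip(weights, x))

def siamese(weights, x1, x2):
    # Both inputs go through the same weights (tied weights),
    # then the outputs are combined -- here via absolute difference.
    return abs(branch(weights, x1) - branch(weights, x2))

w = [0.5, -1.0, 2.0]
print(siamese(w, [1, 2, 3], [1, 2, 3]))  # identical inputs -> 0.0
print(siamese(w, [1, 2, 3], [3, 2, 1]))  # different inputs -> 3.0
```

The key point is that `w` is one object used by both calls, exactly as the two LSTM branches below are meant to look up one shared set of variables.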

I built the following network with TensorFlow, but I am running into an error.

graph = tf.Graph()

# Add nodes to the graph
with graph.as_default():
    with tf.variable_scope('siamese_network') as scope:
        labels = tf.placeholder(tf.int32, [None, None], name='labels')
        keep_prob = tf.placeholder(tf.float32, name='question1_keep_prob')

        question1_inputs = tf.placeholder(tf.int32, [None, None], name='question1_inputs')

        question1_embedding = tf.get_variable(name='embedding', initializer=tf.random_uniform((n_words, embed_size), -1, 1))
        question1_embed = tf.nn.embedding_lookup(question1_embedding, question1_inputs)

        question1_lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        question1_drop = tf.contrib.rnn.DropoutWrapper(question1_lstm, output_keep_prob=keep_prob)
        question1_multi_lstm = tf.contrib.rnn.MultiRNNCell([question1_drop] * lstm_layers)

        initial_state = question1_multi_lstm.zero_state(batch_size, tf.float32)

        question1_outputs, question1_final_state = tf.nn.dynamic_rnn(question1_multi_lstm, question1_embed, initial_state=initial_state, scope='question1_siamese')
        question1_predictions = tf.contrib.layers.fully_connected(question1_outputs[:, -1], 1, activation_fn=tf.sigmoid)

        scope.reuse_variables()

        question2_inputs = tf.placeholder(tf.int32, [None, None], name='question2_inputs')

        question2_embedding = tf.get_variable(name='embedding', initializer=tf.random_uniform((n_words, embed_size), -1, 1))
        question2_embed = tf.nn.embedding_lookup(question2_embedding, question2_inputs)

        question2_lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        question2_drop = tf.contrib.rnn.DropoutWrapper(question2_lstm, output_keep_prob=keep_prob)
        question2_multi_lstm = tf.contrib.rnn.MultiRNNCell([question2_drop] * lstm_layers)

        question2_outputs, question2_final_state = tf.nn.dynamic_rnn(question2_multi_lstm, question2_embed, initial_state=initial_state)
        question2_predictions = tf.contrib.layers.fully_connected(question2_outputs[:, -1], 1, activation_fn=tf.sigmoid)

I get the error on the following line:

question2_outputs, question2_final_state = tf.nn.dynamic_rnn(question2_multi_lstm, question2_embed, initial_state=initial_state)

This is the error:

ValueError: Variable siamese_network/rnn/multi_rnn_cell/cell_0/basic_lstm_cell/weights does not exist, 
or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?

SOLUTION

The problem was in the following line:

question1_outputs, question1_final_state = tf.nn.dynamic_rnn(question1_multi_lstm, question1_embed, initial_state=initial_state, scope='question1_siamese')

I had to remove the scope argument, and then it worked fine. With scope='question1_siamese', the first dynamic_rnn created its weights under that scope, so the second call, which uses the default rnn scope, could not find any variables to reuse there.

1 Answer


    When you call

    scope.reuse_variables()
    

    you tell TensorFlow that the variables used from that point on have already been declared and should be reused. However, your Siamese network shares some but not all of its variables; more precisely, question2_outputs, question2_final_state and question2_predictions are unique to your second network and do not reuse weights.

    In your current code, since everything is flat, you don't actually need to call reuse_variables; you could simply write

    question2_embedding = question1_embedding
    

    and you would be fine. reuse_variables comes in handy once you start encapsulating the common network in a function. You could then write something like

    with tf.variable_scope('siamese_common') as scope:
      net1 = siamese_common(question1_input)
      scope.reuse_variables()
      net2 = siamese_common(question2_input)
    

    to obtain the corresponding outputs of the common part fed with the first and second inputs.
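To see why the stray scope argument triggers the ValueError, here is a plain-Python mimic (a hypothetical `VarStore` class, NOT the real TensorFlow API) of the get_variable reuse semantics: once reuse is switched on, looking up a name that was never created raises an error, which is what happens when the second dynamic_rnn searches under the default `rnn` scope while the first call created its weights under `question1_siamese`:

```python
# Toy mimic (NOT the real TF API) of variable_scope/get_variable
# reuse semantics, to illustrate why the error occurs.

class VarStore:
    def __init__(self):
        self.vars = {}
        self.reuse = False

    def get_variable(self, name, value=None):
        if self.reuse:
            if name not in self.vars:
                # Mirrors: "Variable ... does not exist, or was not
                # created with tf.get_variable()"
                raise ValueError(f"Variable {name} does not exist")
            return self.vars[name]
        self.vars[name] = value
        return value

store = VarStore()
# The first dynamic_rnn, called with scope='question1_siamese',
# creates its weights under that scope name:
store.get_variable("siamese_network/question1_siamese/weights", 1.0)
store.reuse = True
# The second dynamic_rnn uses the default 'rnn' scope, whose
# variables were never created, so the reused lookup fails:
try:
    store.get_variable("siamese_network/rnn/weights")
except ValueError as e:
    print(e)  # prints: Variable siamese_network/rnn/weights does not exist
```

Dropping the scope argument makes both dynamic_rnn calls resolve to the same scope name, so the second (reused) lookup finds the variables the first call created.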
