
Output tensors of a Model must be Keras tensors


I am trying to train a model on the difference between the outputs of two models, so I wrote the code below. But it raises the following error:

TypeError: Output tensors to a Model must be Keras tensors. Found: Tensor("sub:0", shape=(?, 10), dtype=float32)

I found related answers involving Lambda, but I couldn't solve the problem with them. Does anyone know about this issue? It probably comes down to converting the tensor into a Keras tensor.

Thanks in advance.

from keras.layers import Dense
from keras.models import Model
from keras.models import Sequential

left_branch = Sequential()
left_branch.add(Dense(10, input_dim=784))

right_branch = Sequential()
right_branch.add(Dense(10, input_dim=784))

diff = left_branch.output - right_branch.output

model = Model(inputs=[left_branch.input, right_branch.input], outputs=[diff])
model.compile(optimizer='rmsprop', loss='binary_crossentropy', loss_weights=[1.])

model.summary(line_length=150)

2 Answers

  • 3

    It's better to keep all operations inside a layer; don't subtract outputs like that (I wouldn't risk hidden errors from doing things differently from what the documentation expects):

    from keras.layers import *
    
    def negativeActivation(x):
        return -x
    
    left_branch = Sequential()
    left_branch.add(Dense(10, input_dim=784))
    
    right_branch = Sequential()
    right_branch.add(Dense(10, input_dim=784))
    
    negativeRight = Activation(negativeActivation)(right_branch.output) 
    diff = Add()([left_branch.output,negativeRight])
    
    model = Model(inputs=[left_branch.input, right_branch.input], outputs=diff)
    model.compile(optimizer='rmsprop', loss='binary_crossentropy', loss_weights=[1.])
    

    When joining models like this, I prefer the functional Model approach over Sequential:

    def negativeActivation(x):
        return -x
    
    leftInput = Input((784,))
    rightInput = Input((784,))
    
    left_branch = Dense(10)(leftInput) #Dense(10) creates a layer
    right_branch = Dense(10)(rightInput) #passing the input creates the output
    
    negativeRight = Activation(negativeActivation)(right_branch) 
    diff = Add()([left_branch,negativeRight])
    
    model = Model(inputs=[leftInput, rightInput], outputs=diff)
    model.compile(optimizer='rmsprop', loss='binary_crossentropy', loss_weights=[1.])
    

    With this, you can create other models from the same layers, and they will share the same weights:

    leftModel = Model(leftInput,left_branch)
    rightModel = Model(rightInput,right_branch)
    fullModel = Model([leftInput,rightInput],diff)
    

    Since they share the same layers, training one of them affects the others. For instance, you can set left_branch.trainable = False before compiling (and compile again later to make it trainable), and then train only the right part within the full model.
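As a side note, newer Keras versions also ship a built-in Subtract merge layer, which avoids the negate-then-add trick entirely. A minimal sketch along the same lines (assuming a Keras version that provides Subtract):

```python
from keras.layers import Input, Dense, Subtract
from keras.models import Model

left_input = Input((784,))
right_input = Input((784,))

left_branch = Dense(10)(left_input)
right_branch = Dense(10)(right_input)

# Subtract is a proper Keras layer, so its output carries the
# Keras metadata that a raw tensor subtraction lacks
diff = Subtract()([left_branch, right_branch])

model = Model(inputs=[left_input, right_input], outputs=diff)
model.compile(optimizer='rmsprop', loss='binary_crossentropy')
```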

  • 0

    I think I solved the problem, though it may not be the proper solution. I added some code as follows:

    diff = left_branch.output - right_branch.output
    setattr(diff, '_keras_history', getattr(right_branch.output, '_keras_history'))
    setattr(diff, '_keras_shape', getattr(right_branch.output, '_keras_shape'))
    setattr(diff, '_uses_learning_phase', getattr(right_branch.output, '_uses_learning_phase'))
    

    The error occurs because the diff tensor has no attribute named _keras_history, and so on. So deliberately adding those attributes to the diff tensor prevents the error above. I checked against the original code, and it does train.
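A safer alternative to patching private attributes is to wrap the subtraction in a Lambda layer, which the question already hinted at. A minimal sketch, keeping the original Sequential branches:

```python
from keras.layers import Dense, Lambda
from keras.models import Model, Sequential

left_branch = Sequential()
left_branch.add(Dense(10, input_dim=784))

right_branch = Sequential()
right_branch.add(Dense(10, input_dim=784))

# Lambda wraps the raw subtraction in a Keras layer, so the
# resulting tensor carries the required Keras metadata
diff = Lambda(lambda t: t[0] - t[1])([left_branch.output, right_branch.output])

model = Model(inputs=[left_branch.input, right_branch.input], outputs=diff)
model.compile(optimizer='rmsprop', loss='binary_crossentropy')
```

This relies only on the public API, so it keeps working when Keras internals such as _keras_history change.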
