I'm trying to follow the Deep Autoencoder Keras example. I'm getting a dimension mismatch exception, but for the life of me, I can't figure out why. It works when I use only one encoded dimension, but not when I stack them.
Exception: Input 0 is incompatible with layer dense_18: expected shape=(None, 128), found shape=(None, 32)
The error occurs on the line decoder = Model(input=encoded_input, output=decoder_layer(encoded_input))
from keras.layers import Dense,Input
from keras.models import Model
import numpy as np
# this is the size of the encoded representations
encoding_dim = 32
#INPUT LAYER
input_img = Input(shape=(784,))
#ENCODE LAYER
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim*4, activation='relu')(input_img)
encoded = Dense(encoding_dim*2, activation='relu')(encoded)
encoded = Dense(encoding_dim, activation='relu')(encoded)
#DECODE LAYER
# "decoded" is the lossy reconstruction of the input
decoded = Dense(encoding_dim*2, activation='relu')(encoded)
decoded = Dense(encoding_dim*4, activation='relu')(decoded)
decoded = Dense(784, activation='sigmoid')(decoded)
#MODEL
autoencoder = Model(input=input_img, output=decoded)
#SEPARATE ENCODER MODEL
encoder = Model(input=input_img, output=encoded)
# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(input=encoded_input, output=decoder_layer(encoded_input))
#COMPILER
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
3 Answers
Thanks to Marcin for the hint. It turns out all of the decoder layers need to be unrolled for this to work.
The problem was:
In your previous model the last layer was the only decoder layer, so its input was also the decoder's input. But now you have 3 decoder layers, so you have to walk back to the first of them to get the decoder's first layer. Changing that line accordingly should do the trick.
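The corrected line itself was not included above; here is a minimal sketch of the unrolled fix, rebuilding the question's model with the current inputs=/outputs= keyword arguments (the names decoder_layer1..3 are my own):

```python
from keras.layers import Dense, Input
from keras.models import Model

encoding_dim = 32

# Rebuild the autoencoder from the question.
input_img = Input(shape=(784,))
encoded = Dense(encoding_dim * 4, activation='relu')(input_img)
encoded = Dense(encoding_dim * 2, activation='relu')(encoded)
encoded = Dense(encoding_dim, activation='relu')(encoded)
decoded = Dense(encoding_dim * 2, activation='relu')(encoded)
decoded = Dense(encoding_dim * 4, activation='relu')(decoded)
decoded = Dense(784, activation='sigmoid')(decoded)
autoencoder = Model(inputs=input_img, outputs=decoded)

# The decoder is the last THREE layers of the autoencoder, so walk
# back to layers[-3] and chain all of them, not just layers[-1].
encoded_input = Input(shape=(encoding_dim,))
decoder_layer1 = autoencoder.layers[-3]   # Dense(64)
decoder_layer2 = autoencoder.layers[-2]   # Dense(128)
decoder_layer3 = autoencoder.layers[-1]   # Dense(784)
decoder = Model(inputs=encoded_input,
                outputs=decoder_layer3(decoder_layer2(decoder_layer1(encoded_input))))
```

With only layers[-1], the Dense(784) layer receives a 32-dimensional input while expecting 128, which is exactly the reported exception.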
You need to apply each decoder layer's transformation to the output of the one before it. You can unroll and hard-code these by hand as in the accepted answer, or a loop over the decoder layers will handle it.
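A sketch of such a loop, assuming the same three-layer decoder as in the question (the loop body was not part of the original answer, and n_decoder_layers is my own name):

```python
from keras.layers import Dense, Input
from keras.models import Model

encoding_dim = 32
n_decoder_layers = 3  # Dense(64) -> Dense(128) -> Dense(784)

# Rebuild the autoencoder from the question.
input_img = Input(shape=(784,))
x = input_img
for units in (encoding_dim * 4, encoding_dim * 2, encoding_dim):
    x = Dense(units, activation='relu')(x)      # encoder half
for units in (encoding_dim * 2, encoding_dim * 4):
    x = Dense(units, activation='relu')(x)      # decoder hidden layers
decoded = Dense(784, activation='sigmoid')(x)
autoencoder = Model(inputs=input_img, outputs=decoded)

# Feed the encoded input through every decoder layer in order.
encoded_input = Input(shape=(encoding_dim,))
decoded_output = encoded_input
for layer in autoencoder.layers[-n_decoder_layers:]:
    decoded_output = layer(decoded_output)
decoder = Model(inputs=encoded_input, outputs=decoded_output)
```

The loop generalizes to any decoder depth: only n_decoder_layers has to change if you add or remove layers.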