I finally got the data-compression code running in a Keras autoencoder. My questions now are: (1) how can I save the bottleneck features to my computer as a CSV file? (2) How can I tune my parameters/features to get better results? The loss and accuracy are terrible. Please see my results immediately after the code.

from keras.layers import Input, Dense, Dropout
from keras.models import Model

# Input layer: each sample has 37,499 features
X = Input(shape=(37499,))

# Encoder: 37499 -> 500 -> 100 (bottleneck)
encoded = Dense(500, activation='tanh')(X)
encoded = Dense(100, activation='tanh')(encoded)
encoded = Dropout(0.5)(encoded)

# Decoder: 100 -> 500 -> 37499 (reconstruction)
decoded = Dense(500, activation='tanh')(encoded)
decoded = Dense(37499, activation='tanh')(decoded)

# Full autoencoder: input -> reconstruction
autoencoder = Model(X, decoded)
autoencoder.compile(optimizer='SGD',
                    loss='mean_squared_error',
                    metrics=['accuracy'])

autoencoder.fit(X_train, X_train,
                epochs=10,
                batch_size=10,
                shuffle=True,
                validation_data=(X_test, X_test))

# Encoder-only model: input -> bottleneck features
model = Model(X, encoded)
model.compile(optimizer='SGD',
              loss='mean_squared_error',
              metrics=['accuracy'])

# Train the autoencoder for 5 more epochs
autoencoder.fit(X_train, X_train,
                epochs=5,
                batch_size=10,
                shuffle=True,
                validation_data=(X_test, X_test))

# Rebuild the encoder model (redundant; identical to the one above)
model = Model(X, encoded)
model.compile(optimizer='SGD',
              loss='mean_squared_error',
              metrics=['accuracy'])

# Extract the bottleneck (encoded) features for the training set
encoded_train = model.predict(X_train)
print(encoded_train)

Train on 65 samples, validate on 17 samples
Epoch 1/10
65/65 [==============================] - 3s 46ms/step - loss: 230546246.1538 - acc: 0.1538 - val_loss: 300820875.2941 - val_acc: 0.0588
Epoch 2/10
65/65 [==============================] - 2s 24ms/step - loss: 230545668.9231 - acc: 0.1077 - val_loss: 300820783.0588 - val_acc: 0.0588
Epoch 3/10
65/65 [==============================] - 2s 25ms/step - loss: 230545597.5385 - acc: 0.1077 - val_loss: 300820773.6471 - val_acc: 0.0588
Epoch 4/10
65/65 [==============================] - 2s 24ms/step - loss: 230545581.5385 - acc: 0.0923 - val_loss: 300820719.0588 - val_acc: 0.0588
Epoch 5/10
65/65 [==============================] - 2s 25ms/step - loss: 230545553.2308 - acc: 0.0923 - val_loss: 300820719.0588 - val_acc: 0.0588
Epoch 6/10
65/65 [==============================] - 2s 24ms/step - loss: 230545560.6154 - acc: 0.1077 - val_loss: 300820719.0588 - val_acc: 0.0000e+00
Epoch 7/10
65/65 [==============================] - 2s 24ms/step - loss: 230545549.5385 - acc: 0.0923 - val_loss: 300820719.0588 - val_acc: 0.0588
Epoch 8/10
65/65 [==============================] - 2s 24ms/step - loss: 230545558.1538 - acc: 0.1231 - val_loss: 300820705.8824 - val_acc: 0.0588
Epoch 9/10
65/65 [==============================] - 2s 24ms/step - loss: 230545554.4615 - acc: 0.1077 - val_loss: 300820705.8824 - val_acc: 0.0588
Epoch 10/10
65/65 [==============================] - 2s 24ms/step - loss: 230545556.9231 - acc: 0.1077 - val_loss: 300820705.8824 - val_acc: 0.0588
Train on 65 samples, validate on 17 samples
Epoch 1/5
65/65 [==============================] - 2s 28ms/step - loss: 230545555.6923 - acc: 0.1077 - val_loss: 300820705.8824 - val_acc: 0.0588
Epoch 2/5
65/65 [==============================] - 2s 24ms/step - loss: 230545555.6923 - acc: 0.1231 - val_loss: 300820705.8824 - val_acc: 0.0588
Epoch 3/5
65/65 [==============================] - 2s 24ms/step - loss: 230545550.7692 - acc: 0.1077 - val_loss: 300820705.8824 - val_acc: 0.0588
Epoch 4/5
65/65 [==============================] - 2s 25ms/step - loss: 230545549.5385 - acc: 0.1077 - val_loss: 300820705.8824 - val_acc: 0.0588
Epoch 5/5
65/65 [==============================] - 2s 24ms/step - loss: 230545548.3077 - acc: 0.0923 - val_loss: 300820705.8824 - val_acc: 0.0588
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
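For question (1), here is a minimal sketch of what I think would work, assuming encoded_train is the 2-D NumPy array returned by model.predict(X_train) above (the filename encoded_train.csv is just an example):

import numpy as np

# Write the bottleneck features (one row per sample, 100 columns) to a CSV file
np.savetxt('encoded_train.csv', encoded_train, delimiter=',')

Is this the right approach, or is there a more standard way to export the bottleneck features?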