I trained a Keras NN model and got the following output. I added confusion-matrix output to gauge model quality.

...

Epoch 09998: val_acc did not improve from 0.83780
Epoch 9999/10000
12232/12232 [==============================] - 1s 49us/step - loss: 0.2656 - acc: 0.9133 - val_loss: 0.6134 - val_acc: 0.8051

Epoch 09999: val_acc did not improve from 0.83780
Epoch 10000/10000
12232/12232 [==============================] - 1s 48us/step - loss: 0.2655 - acc: 0.9124 - val_loss: 0.5918 - val_acc: 0.8283

Epoch 10000: val_acc did not improve from 0.83780
3058/3058 [==============================] - 0s 46us/step

acc: 82.83%
Model quality: Tn: 806  Tp: 1727  Fp: 262  Fn: 263
Precision: 0.8683  Recall: 0.8678  Accuracy: 0.8283  F-score: 0.8681
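As a sanity check, the quality numbers above are consistent with the standard definitions applied to the confusion-matrix counts (the variable names below are mine, not from my actual script):

```python
# Confusion-matrix counts reported by the run above
tn, tp, fp, fn = 806, 1727, 262, 263

precision = tp / (tp + fp)                    # correct positives among predicted positives
recall = tp / (tp + fn)                       # correct positives among actual positives
accuracy = (tp + tn) / (tp + tn + fp + fn)    # all correct among all samples
f_score = 2 * precision * recall / (precision + recall)  # harmonic mean of P and R

print(round(precision, 4))  # 0.8683
print(round(recall, 4))     # 0.8678
print(round(accuracy, 4))   # 0.8283
print(round(f_score, 4))    # 0.8681
```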

Then I change the hyperparameters, reload everything, and recompile using the following code:

from keras.callbacks import ModelCheckpoint
from keras import optimizers

# prep checkpointing
model_file = output_file.rsplit('.', 1)[0] + '_model.h5'
checkpoint = ModelCheckpoint(model_file, monitor='val_acc', verbose=1,
                             save_best_only=True, save_weights_only=True, mode='max')
callbacks_list = [checkpoint]
model.load_weights(model_file)  # commented out on the first run; reloads the saved weights on later runs

adam = optimizers.Adam(lr=l_rate, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
model.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy'])


model.fit(X_train, Y_train, validation_data=(model_test_X, model_test_y),
          batch_size = batch_s, epochs=num_epochs, 
          callbacks=callbacks_list, verbose=1)

On restart, I expected the best accuracy to be the same as above, in this case val_acc = 0.83780.

However, after the first few epochs I got this output:

Epoch 1/10000
12232/12232 [==============================] - 1s 99us/step - loss: 0.2747 - acc: 0.9097 - val_loss: 0.6191 - val_acc: 0.8143

Epoch 00001: val_acc improved from -inf to 0.81426, saving model to Data_model.h5

Epoch 2/10000
12232/12232 [==============================] - 1s 42us/step - loss: 0.2591 - acc: 0.9168 - val_loss: 0.6367 - val_acc: 0.8322

Epoch 00002: val_acc improved from 0.81426 to 0.83224, saving model to Data_model.h5

Epoch 3/10000
12232/12232 [==============================] - 1s 44us/step - loss: 0.2699 - acc: 0.9140 - val_loss: 0.6157 - val_acc: 0.8313

Epoch 00003: val_acc did not improve from 0.83224 ...

While I understand the model may start from a slightly different place, my assumption was that the best accuracy (val_acc) would carry over from the previous run.
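To make my assumption concrete: the "improved from -inf" message makes it look as if the callback's best-so-far value is re-initialized on every run, so a fresh run can overwrite the saved file with a worse model. Here is a toy sketch of that bookkeeping (pure Python; `SimpleCheckpoint` is a hypothetical stand-in I wrote to illustrate the behavior, not Keras code):

```python
import math

class SimpleCheckpoint:
    """Toy stand-in mimicking a save-best-only checkpoint's bookkeeping."""
    def __init__(self):
        # Re-created on every run, so the best score starts at -inf again.
        self.best = -math.inf
        self.saves = []

    def on_epoch_end(self, epoch, val_acc):
        if val_acc > self.best:
            self.saves.append((epoch, val_acc))  # would overwrite the .h5 file here
            self.best = val_acc

# First run reaches 0.8378 ...
run1 = SimpleCheckpoint()
for epoch, acc in enumerate([0.8051, 0.8378, 0.8283], 1):
    run1.on_epoch_end(epoch, acc)

# ... but the second run starts from -inf again, so epoch 1's
# worse 0.8143 model is "an improvement" and gets saved.
run2 = SimpleCheckpoint()
for epoch, acc in enumerate([0.8143, 0.8322, 0.8313], 1):
    run2.on_epoch_end(epoch, acc)

print(run1.best)  # 0.8378
print(run2.best)  # 0.8322 -- below the first run's best
```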

My question is: what am I missing?