I modified the Keras OCR example (https://github.com/keras-team/keras/blob/master/examples/image_ocr.py) to perform online handwriting recognition. I am feeding in 1000 stroke sequences in batches of 10. With a 3-layer LSTM network and the RMSprop optimizer at default settings, the CTC loss drops quickly at first, but after some number of epochs it suddenly turns to NaN. The exact epoch at which it becomes NaN varies with the learning rate and the number of epochs. If I reduce the number of epochs, I can reach a minimum loss value of 12 before the loss goes NaN, and at that loss value I get 35% accuracy. Can someone help me understand why the loss does not drop below a certain point and instead becomes NaN?

Here is the code segment; the full code is uploaded here: https://github.com/aayushee/HWR/blob/master/CTC4D.py

inputs = Input(name='the_input', shape=x_train.shape[1:], dtype='float32')
rnn_encoded = Bidirectional(LSTM(64, return_sequences=True,kernel_initializer=init,bias_initializer=bias),name='bidirectional_1',merge_mode='concat',trainable=trainable)(inputs)
birnn_encoded = Bidirectional(LSTM(32, return_sequences=True,kernel_initializer=init,bias_initializer=bias),name='bidirectional_2',merge_mode='concat',trainable=trainable)(rnn_encoded)
trirnn_encoded = Bidirectional(LSTM(16, return_sequences=True, kernel_initializer=init, bias_initializer=bias), name='bidirectional_3', merge_mode='concat', trainable=trainable)(birnn_encoded)
output = TimeDistributed(Dense(28, name='dense', kernel_initializer=init, bias_initializer=bias))(trirnn_encoded)
y_pred = Activation('softmax', name='softmax')(output)
model=Model(inputs=inputs,outputs=y_pred)
labels = Input(name='the_labels', shape=[max_len], dtype='int32') 
input_length = Input(name='input_length', shape=[1], dtype='int64')
label_length = Input(name='label_length', shape=[1], dtype='int64')
loss_out = Lambda(ctc_lambda_func, output_shape=(1,), name='ctc')([y_pred,labels, input_length, label_length])
model = Model(inputs=[inputs, labels, input_length, label_length], outputs=loss_out)
opt=RMSprop(lr=0.001,clipnorm=1.)
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=opt)
my_generator = generator(x_train,y_train,batch_size)
hist= model.fit_generator(my_generator,epochs=80,steps_per_epoch=100,shuffle=True,use_multiprocessing=False,workers=1)
test_func = K.function([inputs], [y_pred])

Here is the plot of loss vs. epochs for 75 epochs of training: Loss vs Epochs plot
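My current understanding of how the divergence can happen, independent of my model: CTC loss is the negative log-likelihood of the label sequence, so if the softmax probability the network assigns to a required label ever underflows to zero, the loss jumps to infinity and the next gradient update turns the weights to NaN. A toy NumPy illustration of just that arithmetic (not my actual model):

```python
import numpy as np

# Probability the model assigns to the correct label at each of 3 timesteps.
healthy = np.array([0.9, 0.8, 0.7])
print(-np.sum(np.log(healthy)))  # finite negative log-likelihood

# If one timestep's probability underflows to exactly zero, the
# negative log-likelihood becomes infinite; any gradient computed
# through it then poisons the weights on the next update.
diverged = np.array([0.9, 0.0, 0.7])
with np.errstate(divide='ignore'):
    loss = -np.sum(np.log(diverged))
print(loss)  # inf
```

This is why I added `clipnorm=1.` to RMSprop, but clipping the gradient norm does not help once the loss itself is already infinite.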