I am working on a project where a CNN model is supposed to predict the mean squared error of image patches. The images are video frames. I have split the dataset (80% / 20%) into (X_train, y_train) and (X_test, y_test). X_* contains 48x48 patches and y_* contains the ground-truth quality scores.
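For reference, this is roughly what the arrays look like (the shapes below are illustrative and the 3-channel assumption may not match the real data):

# Illustrative shapes only -- the channel count is an assumption.
# X_train: (195840, 48, 48, 3)  float32 pixel patches
# y_train: (195840,)            continuous ground-truth quality scores
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)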

The CNN model is as follows:

print('CNN Model loading...')
from keras import optimizers
from keras.models import Sequential
from keras.layers.convolutional import Conv2D, MaxPooling2D, AveragePooling2D
from keras.layers.advanced_activations import LeakyReLU
from keras.layers import Activation, Dense, Dropout, Flatten

from keras import backend as K
from keras.callbacks import LearningRateScheduler

print('CNN Model working...')
model3=Sequential()
model3.add(Conv2D(filters=32, kernel_size=(3, 3), padding="same", 
          input_shape=X_train.shape[1:], activation='relu'))
model3.add(MaxPooling2D(pool_size=(2, 2)))

model3.add(Conv2D(filters=64, kernel_size=(3, 3), padding="same", activation='relu'))
model3.add(MaxPooling2D(pool_size=(2, 2)))

model3.add(Conv2D(filters=128, kernel_size=(3, 3), padding="same", activation='relu'))
model3.add(MaxPooling2D(pool_size=(2, 2)))

model3.add(Flatten())

model3.add(Dense(512, activation='relu'))
#model3.add(Dense(512, activation='relu'))

model3.add(Dense(1, activation='relu'))

sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model3.compile(loss='mse', optimizer=sgd, metrics=['accuracy'])

def scheduler(epoch):
    if epoch%2==0 and epoch!=0:
        lr = K.get_value(model3.optimizer.lr)
        K.set_value(model3.optimizer.lr, lr*.9)
        print("lr changed to {}".format(lr*.9))
    return K.get_value(model3.optimizer.lr)

lr_decay = LearningRateScheduler(scheduler)

model3_fit=model3.fit(X_train, y_train, validation_data = (X_test, y_test), 
                      epochs=10, verbose=1, batch_size=100,
                      callbacks=[lr_decay])

print('CNN Model done...')

The problem is that this network shows accuracy = 0.0000e+00 and loss = 2753.something, which is about as bad as it gets. I have also tried different CNN parameters, but it keeps giving me this worst-case performance.
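For completeness, this is the kind of sanity check I can run on the inputs and targets (a hypothetical diagnostic snippet, not part of the run above); an MSE loss stuck around 2.5e3 may simply reflect targets on a large, unnormalised scale:

import numpy as np

# Hypothetical check: print the target range and the pixel range of the patches.
print('y_train range:', np.min(y_train), np.max(y_train), 'mean:', np.mean(y_train))
print('X_train dtype/range:', X_train.dtype, np.min(X_train), np.max(X_train))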

My goal is to obtain the quality scores with VGG16. Its initial performance was poor, which is why I first want to get good results with a simple CNN model.
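For context, the VGG16 setup I eventually want to move to looks roughly like this (a minimal sketch; include_top=False, the Flatten/Dense head, the optimizer and the 3-channel input shape are my assumptions, not a tested configuration):

from keras.applications import VGG16
from keras.models import Model
from keras.layers import Dense, Flatten

# Hypothetical sketch of a VGG16-based quality-score regressor.
# input_shape=(48, 48, 3) assumes RGB patches; adjust to the real data.
base = VGG16(weights='imagenet', include_top=False, input_shape=(48, 48, 3))
x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
out = Dense(1, activation='linear')(x)   # single continuous quality score
vgg_model = Model(inputs=base.input, outputs=out)
vgg_model.compile(loss='mse', optimizer='adam')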

I would appreciate your help.

Here is the output over 10 epochs:

CNN Model loading...
CNN Model working...
Train on 195840 samples, validate on 48960 samples
Epoch 1/10
195840/195840 [==============================] - 146s 746us/step - loss: 2652.0550 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00
Epoch 2/10
195840/195840 [==============================] - 146s 745us/step - loss: 2589.2724 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00
Epoch 3/10
lr changed to 0.008999999798834325
195840/195840 [==============================] - 146s 745us/step - loss: 2589.2724 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00
Epoch 4/10
195840/195840 [==============================] - 146s 745us/step - loss: 2589.2724 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00
Epoch 5/10
lr changed to 0.008099999651312828
195840/195840 [==============================] - 145s 742us/step - loss: 2589.2724 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00
Epoch 6/10
195840/195840 [==============================] - 146s 746us/step - loss: 2589.2724 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00
Epoch 7/10
lr changed to 0.007289999350905419
195840/195840 [==============================] - 146s 745us/step - loss: 2589.2725 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00
Epoch 8/10
195840/195840 [==============================] - 146s 747us/step - loss: 2589.2724 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00
Epoch 9/10
lr changed to 0.006560999248176813
195840/195840 [==============================] - 148s 757us/step - loss: 2589.2724 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00
Epoch 10/10
195840/195840 [==============================] - 149s 759us/step - loss: 2589.2724 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00

I also looked at the results after changing the 'lr' value in the optimizer, but it did not make much difference to the performance.