My current problem is that every CNN regression model I try returns the same (or very similar) value, and I'm trying to figure out why. That said, I'm open to suggestions of any kind.
My dataset looks like this:

- x: 64x64 grayscale images arranged as a 64 x 64 x n ndarray
- y: a value between 0 and 1 for each image (think of it as some kind of ratio)
- weather: 4 weather readings taken when each image was captured (ambient temperature, humidity, dew point, air pressure)
The goal is to use the images and the weather data to predict y. Since I'm working with images, I figured a CNN was appropriate (if there's a better strategy here, please let me know).

From what I understand, CNNs are most often used for classification tasks; using them for regression isn't all that different: I just need to change the loss function to MSE/RMSE and the final activation to linear (although a sigmoid might be more appropriate, since y is between 0 and 1).
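That classification-to-regression swap can be illustrated with a small numpy sketch (the logits and targets below are made up, just to show how the pieces fit together):

```python
import numpy as np

# Regression head instead of a classification head: a single output
# unit squashed by a sigmoid (keeps predictions in (0, 1), matching
# the range of y), scored with mean squared error.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

logits = np.array([-2.0, 0.0, 1.5, 3.0])   # raw outputs for a batch of 4
preds = sigmoid(logits)                     # every prediction lands in (0, 1)
targets = np.array([0.1, 0.5, 0.8, 0.9])   # the true per-image ratios
loss = mse(targets, preds)                  # the quantity the optimizer minimizes
```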
The first hurdle I ran into was figuring out how to incorporate the weather data, and the natural choice was to merge it into the first fully connected layer. I found an example here: How to train mix of image and data in CNN using ImageAugmentation in TFlearn
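A minimal numpy sketch of that merge (the batch size and feature width here are made up; in the real network the wide vector would come out of the conv branch):

```python
import numpy as np

# The image branch ends in a flattened feature vector; the 4 weather
# readings are concatenated onto it so the first fully connected layer
# sees both inputs side by side.
n = 8                                     # hypothetical batch size
conv_features = np.random.rand(n, 4096)   # stand-in for the conv branch output
weather = np.random.rand(n, 4)            # temp, humidity, dew point, pressure

merged = np.concatenate([conv_features, weather], axis=1)
# the next dense layer now receives image features plus weather features
```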
The second hurdle was settling on an architecture. Normally I'd pick a paper and copy its architecture, but I couldn't find anything on CNN image regression. So I tried a (fairly simple) network with 3 convolutional layers and 2 fully connected layers, and then I tried the VGGNet and AlexNet architectures from https://github.com/tflearn/tflearn/tree/master/examples
The problem I'm now having is that all of the models I try output the same value, namely the mean of y over the training set. Watching TensorBoard, the loss function flattens out fairly quickly (after about 25 epochs). Do you know what's going on here? While I understand the basics of what each layer does, I have no intuition for what makes a good architecture for a particular dataset or task.
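For what it's worth, "predicts the mean" is exactly what a constant predictor under MSE converges to, which a quick numpy check confirms (with made-up stand-in targets):

```python
import numpy as np

# If the network collapses to a constant output c, the MSE it sees is
# var(y) + (c - mean(y))**2, which is minimized at c = mean(y). So
# "every prediction equals the training-set mean of y" is the signature
# of a model that has learned nothing beyond that constant.
rng = np.random.default_rng(0)
y = rng.uniform(0, 1, size=1000)          # stand-in for y_train

candidates = np.linspace(0, 1, 501)       # constant predictors to try
losses = [np.mean((y - c) ** 2) for c in candidates]
best_c = candidates[np.argmin(losses)]    # best constant sits at mean(y)
```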
Here's an example. I'm using the VGGNet from the tflearn examples page:
import tensorflow as tf
import tflearn
from tflearn.layers.core import input_data, fully_connected, dropout
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.merge_ops import merge
from tflearn.layers.estimator import regression
from tflearn.data_augmentation import ImageAugmentation

tf.reset_default_graph()

# Augmentation is applied to the image input only
img_aug = ImageAugmentation()
img_aug.add_random_flip_leftright()
img_aug.add_random_flip_updown()
img_aug.add_random_90degrees_rotation(rotations=[0, 1, 2, 3])

convnet = input_data(shape=[None, size, size, 1],
                     data_augmentation=img_aug,
                     name='hive')
weathernet = input_data(shape=[None, 4], name='weather')

convnet = conv_2d(convnet, 64, 3, activation='relu', scope='conv1_1')
convnet = conv_2d(convnet, 64, 3, activation='relu', scope='conv1_2')
convnet = max_pool_2d(convnet, 2, strides=2, name='maxpool1')
convnet = conv_2d(convnet, 128, 3, activation='relu', scope='conv2_1')
convnet = conv_2d(convnet, 128, 3, activation='relu', scope='conv2_2')
convnet = max_pool_2d(convnet, 2, strides=2, name='maxpool2')
convnet = conv_2d(convnet, 256, 3, activation='relu', scope='conv3_1')
convnet = conv_2d(convnet, 256, 3, activation='relu', scope='conv3_2')
convnet = conv_2d(convnet, 256, 3, activation='relu', scope='conv3_3')
convnet = max_pool_2d(convnet, 2, strides=2, name='maxpool3')
convnet = conv_2d(convnet, 512, 3, activation='relu', scope='conv4_1')
convnet = conv_2d(convnet, 512, 3, activation='relu', scope='conv4_2')
convnet = conv_2d(convnet, 512, 3, activation='relu', scope='conv4_3')
convnet = max_pool_2d(convnet, 2, strides=2, name='maxpool4')
convnet = conv_2d(convnet, 512, 3, activation='relu', scope='conv5_1')
convnet = conv_2d(convnet, 512, 3, activation='relu', scope='conv5_2')
convnet = conv_2d(convnet, 512, 3, activation='relu', scope='conv5_3')
convnet = max_pool_2d(convnet, 2, strides=2, name='maxpool5')

convnet = fully_connected(convnet, 4096, activation='relu', scope='fc6')
convnet = merge([convnet, weathernet], 'concat')  # weather joins at the first FC layer
convnet = dropout(convnet, .75, name='dropout1')
convnet = fully_connected(convnet, 4096, activation='relu', scope='fc7')
convnet = dropout(convnet, .75, name='dropout2')
convnet = fully_connected(convnet, 1, activation='sigmoid', scope='fc8')

convnet = regression(convnet,
                     optimizer='adam',
                     learning_rate=learning_rate,
                     loss='mean_square',
                     name='targets')

model = tflearn.DNN(convnet,
                    tensorboard_dir='log',
                    tensorboard_verbose=0)

model.fit({'hive': x_train, 'weather': weather_train},
          {'targets': y_train},
          n_epoch=1000,
          batch_size=batch_size,
          validation_set=({'hive': x_val, 'weather': weather_val},
                          {'targets': y_val}),
          show_metric=False,
          shuffle=True,
          run_id='poop')
To clarify what my objects are:

- x_train is an ndarray of shape (n, 64, 64, 1)
- weather_train is an ndarray of shape (n, 4)
- y_train is an ndarray of shape (n, 1)
Overfitting is a separate concern, but given how poorly the model performs even on the training set, I figure I can worry about that later.