
Color shift during super-resolution


I am working on a CNN for super-resolution. Training goes well, with no overfitting, but when I run the trained network on low-resolution images, the colors of the output image are shifted:


(Image: the input image)

(Image: the original image)

(Image: the output image)


Even with longer training, the result stays the same. Has anyone run into a similar problem?

My first thought was to change the output activation from ReLU to one bounded between 0 and 1 (sigmoid), but that brought no improvement.

Here is the network I implemented in Keras:

from keras.layers import Input, Convolution2D, Conv2DTranspose, Activation, BatchNormalization, add
from keras.models import Model

input_img = Input(shape=(None, None, 3))

# Encoder: three "valid" convolutions, then a strided convolution that halves the resolution
c1 = Convolution2D(64, (3, 3))(input_img)
a1 = Activation('relu')(c1)

c2 = Convolution2D(64, (3, 3))(a1)
a2 = Activation('relu')(c2)
b2 = BatchNormalization()(a2)

c3 = Convolution2D(64, (3, 3))(b2)
a3 = Activation('relu')(c3)
b3 = BatchNormalization()(a3)

c4 = Convolution2D(64, (3, 3), strides=(2, 2), padding='same')(b3)
a4 = Activation('relu')(c4)
b4 = BatchNormalization()(a4)

c5 = Convolution2D(64, (3, 3))(b4)
a5 = Activation('relu')(c5)
b5 = BatchNormalization()(a5)

# Decoder: transposed convolutions mirror the encoder
d1 = Conv2DTranspose(64, (3, 3))(b5)
a6 = Activation('relu')(d1)
b6 = BatchNormalization()(a6)

m1 = add([a4, b6])  # skip connection from the strided encoder stage
a7 = Activation('relu')(m1)

d2 = Conv2DTranspose(64, (3, 3), strides=(2, 2), padding='same')(a7)
a8 = Activation('relu')(d2)
b8 = BatchNormalization()(a8)

d3 = Conv2DTranspose(64, (3, 3))(b8)
a9 = Activation('relu')(d3)
b9 = BatchNormalization()(a9)

m2 = add([a2, b9])  # skip connection from the second encoder stage
a10 = Activation('relu')(m2)

d4 = Conv2DTranspose(64, (3, 3))(a10)
a11 = Activation('relu')(d4)
b10 = BatchNormalization()(a11)

d5 = Conv2DTranspose(3, (3, 3))(b10)
a12 = Activation('relu')(d5)
b11 = BatchNormalization()(a12)

m3 = add([input_img, b11])  # global residual connection back to the input
a13 = Activation('relu')(m3)

out = Convolution2D(3, (5, 5), activation='sigmoid', padding='same')(a13)

model = Model(input_img, out)
model.compile(optimizer='adam', loss='mae')
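
For reference, a minimal inference sketch, under two assumptions not stated above: the training images were scaled to [0, 1] (to match the sigmoid output range), and the test image ('lr.png' is a hypothetical filename) has even height and width so the strided downsampling and upsampling paths produce matching shapes for the skip connections.

import numpy as np
from PIL import Image

# Load a low-resolution image and scale it to [0, 1], matching the assumed training range
lr = np.asarray(Image.open('lr.png').convert('RGB'), dtype=np.float32) / 255.0

# Add a batch axis, predict, then drop the batch axis again
sr = model.predict(lr[np.newaxis, ...])[0]

# Map back to 8-bit and save
Image.fromarray((np.clip(sr, 0.0, 1.0) * 255).astype(np.uint8)).save('sr.png')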

1 Answer


In most papers, the image is first converted to YCrCb, and the CNN processes only the Y channel (which carries essentially all of the relevant image detail). The reconstructed Y channel is then recombined with the Cr and Cb channels, which are upsampled with plain bicubic interpolation (or the interpolation of your choice), and the result is converted back to RGB. Try this; it will likely work better.
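
For illustration, a minimal sketch of that pipeline using OpenCV. Here `y_model` is a hypothetical variant of the network above trained on the Y channel alone (i.e., with Input(shape=(None, None, 1)) and a single output channel); the chroma channels are simply resized to match whatever resolution the model produces.

import cv2
import numpy as np

def super_resolve_y_only(lr_bgr, y_model):
    # Convert to YCrCb and split the channels
    ycrcb = cv2.cvtColor(lr_bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)

    # Run the CNN on the Y channel only, normalized to [0, 1]
    y_in = y.astype(np.float32) / 255.0
    y_sr = y_model.predict(y_in[np.newaxis, ..., np.newaxis])[0, ..., 0]
    y_sr = np.clip(y_sr * 255.0, 0, 255).astype(np.uint8)

    # Upsample Cr and Cb to the output size with plain bicubic interpolation
    h, w = y_sr.shape
    cr_up = cv2.resize(cr, (w, h), interpolation=cv2.INTER_CUBIC)
    cb_up = cv2.resize(cb, (w, h), interpolation=cv2.INTER_CUBIC)

    # Merge the channels and convert back to BGR
    sr_ycrcb = cv2.merge([y_sr, cr_up, cb_up])
    return cv2.cvtColor(sr_ycrcb, cv2.COLOR_YCrCb2BGR)

Because Cr and Cb are smooth compared to Y, bicubic upsampling loses little, and the network never gets a chance to distort the colors.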
