
Shape mismatch between logits and labels


I am feeding images of shape 160 * 60 * 1 into my network.

My CNN structure is as follows:

import tensorflow as tf
import image_reader  # provides NUM_CLASSES used below

X_input = tf.placeholder(tf.float32, [None, 160 * 60])
X = tf.reshape(X_input, shape=[-1, 160, 60, 1])
Y_ = tf.placeholder(tf.float32, [None, 5 * image_reader.NUM_CLASSES])

learning_rate = 0.001


def create_fully_connected_weight(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))


def create_conv_weight(patch_height, patch_width, input_channel, output_channel):
    initial = tf.truncated_normal(shape=[patch_height, patch_width, input_channel, output_channel], stddev=0.1)
    return tf.Variable(initial)


def create_bias(shape):
    initial = 0.1 * tf.random_normal(shape=shape)
    return tf.Variable(initial)


def create_strides(batch_step, height_step, width_step, channel_step):
    return [batch_step, height_step, width_step, channel_step]


def create_conv_layer(input, W, strides, padding='SAME'):
    return tf.nn.conv2d(input, W, strides, padding)


def apply_max_pool(x, ksize, strides, padding='SAME'):
    return tf.nn.max_pool(x, ksize, strides, padding)


keep_prob = tf.placeholder(tf.float32)

# conv block 1: 5x5 kernels, 1 -> 32 channels, followed by 2x2 max pool and dropout
W1 = create_conv_weight(5, 5, 1, 32)
print("W1 shape:", W1.get_shape())
B1 = create_bias([32])
strides1 = create_strides(1, 1, 1, 1)
Y1 = tf.nn.relu(create_conv_layer(X, W1, strides1, padding="SAME") + B1)
Y1 = apply_max_pool(Y1, [1, 2, 2, 1], [1, 2, 2, 1])
Y1 = tf.nn.dropout(Y1, keep_prob=keep_prob)
print(Y1)

# conv block 2: 5x5 kernels, 32 -> 64 channels, followed by 2x2 max pool and dropout
W2 = create_conv_weight(5, 5, 32, 64)
print("W2 shape:", W2.get_shape())
B2 = create_bias([64])
strides2 = create_strides(1, 1, 1, 1)
Y2 = tf.nn.relu(create_conv_layer(Y1, W2, strides2, padding="SAME") + B2)
Y2 = apply_max_pool(Y2, [1, 2, 2, 1], [1, 2, 2, 1])
Y2 = tf.nn.dropout(Y2, keep_prob=keep_prob)
print(Y2)

# conv block 3: 5x5 kernels, 64 -> 128 channels, followed by 2x2 max pool and dropout
W3 = create_conv_weight(5, 5, 64, 128)
print("W3 shape:", W3.get_shape())
B3 = create_bias([128])
strides3 = create_strides(1, 1, 1, 1)
Y3 = tf.nn.relu(create_conv_layer(Y2, W3, strides3, padding="SAME") + B3)
Y3 = apply_max_pool(Y3, [1, 2, 2, 1], [1, 2, 2, 1])
Y3 = tf.nn.dropout(Y3, keep_prob=keep_prob)
print(Y3)

# keep_prob = tf.placeholder(tf.float32)

# flatten the conv output for the fully connected layers
Y3 = tf.reshape(Y3, [-1, 20 * 10 * 128])

# fully connected layer: flattened features -> 1024 units
W4 = create_fully_connected_weight([20 * 10 * 128, 1024])
print("W4 shape:", W4.get_shape())
B4 = create_bias([1024])
Y4 = tf.nn.relu(tf.matmul(Y3, W4) + B4)
Y4 = tf.nn.dropout(Y4, keep_prob=keep_prob)
print(Y4)

# output layer: 1024 -> 5 * image_reader.NUM_CLASSES logits
W5 = create_fully_connected_weight([1024, 5 * image_reader.NUM_CLASSES])
print("W5 shape:", W5.get_shape())
B5 = create_bias([5 * image_reader.NUM_CLASSES])
Ylogits = tf.matmul(Y4, W5) + B5

However, when I try to compute the loss

cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(logits=Ylogits, labels=Y_)
loss = tf.reduce_mean(cross_entropy)

I am getting an error: Incompatible shapes: [200,55] vs. [250,55]. I think I have a problem with one of the layers, but I haven't been able to find it for hours.

1 Answer


    if padding == 'SAME': output_shape[i] = ceil(input_spatial_shape[i] / strides[i])

    Y1 - 160, 60
    Y1 - 80, 30   <= 160/2, 60/2
    Y2 - 80, 30
    Y2 - 40, 15   <= 80/2, 30/2
    Y3 - 40, 15
    Y3 - 20, 8    <= 40/2, ceil(15/2)
    

    I'm confused how you end up with 20, 10 before the fully connected layer.
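    A quick sketch of that arithmetic (plain Python, just the ceil rule quoted above applied to the three 2x2 max pools):

    import math

    def same_pool_dim(size, stride=2):
        # SAME padding: output = ceil(input / stride)
        return int(math.ceil(size / float(stride)))

    h, w = 160, 60
    for i in range(3):
        h, w = same_pool_dim(h), same_pool_dim(w)
        print("after pool %d: %d x %d" % (i + 1, h, w))
    # after pool 1: 80 x 30
    # after pool 2: 40 x 15
    # after pool 3: 20 x 8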

    Also, the error you're getting looks like a wrong batch_size (250 vs 200)?
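    One hedged guess at how those two observations connect (assuming the fed batch really is 250, as the label shape in the error suggests): flattening with 20 * 10 * 128 instead of 20 * 8 * 128 lets the -1 in tf.reshape absorb the mismatch, so the batch dimension of the logits shrinks:

    # hypothetical numbers, taken from the error message rather than from the actual script
    batch = 250
    elements_in_Y3 = batch * 20 * 8 * 128     # what Y3 actually holds after three SAME pools
    wrong_flat_size = 20 * 10 * 128           # the size used in tf.reshape(Y3, [-1, 20 * 10 * 128])
    print(elements_in_Y3 // wrong_flat_size)  # 200 -> logits get batch 200 while the labels keep 250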

    Could you put together a runnable script that reproduces the error, so that others can help?
