I'm following a Keras tutorial and want to reproduce it in PyTorch, so I'm translating it. I'm not very familiar with either framework, particularly with the input-size arguments, and especially for the last layer: do I need another linear layer? Could anyone translate the following into a PyTorch Sequential definition?

visible = Input(shape=(64,64,1))
conv1 = Conv2D(32, kernel_size=4, activation='relu')(visible)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = Conv2D(16, kernel_size=4, activation='relu')(pool1)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
hidden1 = Dense(10, activation='relu')(pool2)
output = Dense(1, activation='sigmoid')(hidden1)
model = Model(inputs=visible, outputs=output)

Here is the model summary:

Layer (type)                 Output Shape              Param #   
_________________________________________________________________
input_1 (InputLayer)         (None, 64, 64, 1)         0         
conv2d_1 (Conv2D)            (None, 61, 61, 32)        544       
max_pooling2d_1 (MaxPooling2 (None, 30, 30, 32)        0         
conv2d_2 (Conv2D)            (None, 27, 27, 16)        8208      
max_pooling2d_2 (MaxPooling2 (None, 13, 13, 16)        0         
dense_1 (Dense)              (None, 13, 13, 10)        170       
dense_2 (Dense)              (None, 13, 13, 1)         11        

Total params: 8,933
Trainable params: 8,933
Non-trainable params: 0

The result I came up with lacks any specification of the input shape, and I'm also a bit confused about how to translate the strides from the Keras model: it uses stride 2 in MaxPooling2D but doesn't specify a stride anywhere else. Perhaps it's just a toy example.

model = nn.Sequential(
      nn.Conv2d(1, 32, 4),
      nn.ReLU(),
      nn.MaxPool2d(2, 2),
      nn.Conv2d(1, 16, 4),
      nn.ReLU(),
      nn.MaxPool2d(2, 2),
      nn.Linear(10, 1),
      nn.Sigmoid(),
      )
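For comparison, here is a hedged sketch of what a faithful translation might look like. Two assumptions are worth stating: first, in both frameworks the pooling stride defaults to the window size, so `MaxPooling2D(pool_size=(2, 2))` and `nn.MaxPool2d(2)` describe the same operation; second, Keras applies `Dense` to the *last* axis of a 4-D tensor (mapping 16 → 10 at every spatial position, which is why the summary shows only 170 parameters), and in PyTorch's channels-first layout a 1×1 convolution performs the same per-position linear map. PyTorch layers don't take an input shape; only the in-channels of each `Conv2d` must match the out-channels of the previous one.

```python
import torch
import torch.nn as nn

# Sketch of a Sequential translation under the assumptions above.
# Shape comments track a (N, 1, 64, 64) input.
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=4),   # -> (N, 32, 61, 61), 544 params
    nn.ReLU(),
    nn.MaxPool2d(2),                   # stride defaults to 2 -> (N, 32, 30, 30)
    nn.Conv2d(32, 16, kernel_size=4),  # -> (N, 16, 27, 27), 8208 params
    nn.ReLU(),
    nn.MaxPool2d(2),                   # -> (N, 16, 13, 13)
    # Keras Dense(10) on a 4-D tensor == per-position linear map 16 -> 10:
    nn.Conv2d(16, 10, kernel_size=1),  # -> (N, 10, 13, 13), 170 params
    nn.ReLU(),
    nn.Conv2d(10, 1, kernel_size=1),   # -> (N, 1, 13, 13), 11 params
    nn.Sigmoid(),
)

x = torch.randn(2, 1, 64, 64)
print(model(x).shape)                                  # torch.Size([2, 1, 13, 13])
print(sum(p.numel() for p in model.parameters()))      # 8933, matching the Keras summary
```

If the intent was instead a single scalar output per image (which is what the extra `Dense` layers usually suggest), you would insert `nn.Flatten()` after the last pooling layer and use `nn.Linear(16 * 13 * 13, 10)` followed by `nn.Linear(10, 1)`; that gives a `(N, 1)` output but a different parameter count than the summary above.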